diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Cyberflex E-gate Driver Download Win7 REPACK.md b/spaces/1gistliPinn/ChatGPT4/Examples/Cyberflex E-gate Driver Download Win7 REPACK.md deleted file mode 100644 index b467609ea445856fac129cfd99c7e25da86524c4..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Cyberflex E-gate Driver Download Win7 REPACK.md +++ /dev/null @@ -1,44 +0,0 @@ -

cyberflex e-gate driver download win7


Download Zip ••• https://imgfil.com/2uy16N



-
-8 bit not working - - when i try to update my ubuntu it says operation not possible - - s0u][ight: Please pastebin the full output of "sudo apt-get update". - - paste? - -!pastebin | s0u][ight - - s0u][ight: For posting multi-line texts into the channel, please use | To post!screenshots use |!pastebinit to paste directly from command line | Make sure you give us the URL for your paste - see also the channel topic. - - ok - - - - i did an apt-get update and it started saying that - - s0u][ight: Please remove all "ppa.launchpad.net/n/n/" from your sources.list. - - s0u][ight: (Any instances of "ppa.launchpad.net/n/n/" in sources.list.d will also need to be removed). - - s0u][ight, ppa's must be disabled in the /etc/apt/sources.list.d file - - BluesKaj: That is incorrect. ppas are currently permitted for apt-get, and apt-get update is not even intended to read that file. - - Jordan_U, nevermind - - BluesKaj: s0u][ight: Please pastebin the entire contents of /etc/apt/sources.list and /etc/apt/sources.list.d/. - - did that - - removed all ppa lines - - now apt-get update - - Jordan_U, well, my mistake on ppa, it won't hurt anything at this time - - BluesKaj: It's not a mistake. s0u][ight: 4fefd39f24
-
-
-

diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Download Jumpstart For Wireless Windows 7.md b/spaces/1gistliPinn/ChatGPT4/Examples/Download Jumpstart For Wireless Windows 7.md deleted file mode 100644 index bcff56656299db69322f8a16fc224c48baa08d09..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Download Jumpstart For Wireless Windows 7.md +++ /dev/null @@ -1,6 +0,0 @@ -

download jumpstart for wireless windows 7


Download File https://imgfil.com/2uy11z



- -2K, XP, 2K3, VISTA, WIN7/32bits. Download. Jumpstart Wireless Intermediate Driver. Others. TL-WN7200ND_V1_Utility.zip. 1.0.0.46. 2008-09-25. 22.51 MB. 4d29de3e1b
-
-
-

diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/4k video downloader review is it safe and reliable? - Reddit.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/4k video downloader review is it safe and reliable? - Reddit.md deleted file mode 100644 index fadb0a6a8c358e536eb87860a9eecdd0e5dd5585..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/4k video downloader review is it safe and reliable? - Reddit.md +++ /dev/null @@ -1,117 +0,0 @@ - -

How to Download 4K Videos from YouTube and Reddit

-

If you are a fan of high-quality videos, you might have noticed that many YouTube and Reddit videos are available in 4K resolution. 4K videos offer stunning clarity, detail, and color that can enhance your viewing experience. However, watching 4K videos online requires a fast and stable internet connection, which is not always available. Moreover, you might want to save some 4K videos offline for personal use, such as editing, sharing, or watching later. In this article, we will show you how to download 4K videos from YouTube and Reddit using some of the best 4K video downloader software for PC and Mac.

-

4k download youtube reddit


Download https://urlin.us/2uSS9u



-

What is 4K Video and Why You Need It

-

4K video is a video format with a resolution of 3840 x 2160 pixels, four times as many pixels as the standard Full HD resolution of 1920 x 1080 pixels. Packing that many pixels into each frame produces sharper and clearer images, so you can see more detail, texture, and color in 4K video than in HD video.
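In raw numbers, 3840 x 2160 works out to 8,294,400 pixels per frame, against 2,073,600 pixels for a 1920 x 1080 frame, i.e. exactly four times as many.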

-

The Benefits of 4K Video Quality

-

There are many benefits of watching and downloading 4K videos, such as:

- -

The Challenges of 4K Video Downloading

-

However, there are also some challenges that you might face when downloading 4K videos from YouTube and Reddit, such as:

- -

The Best 4K Video Downloader Software for PC and Mac

-

To overcome these challenges, you need a reliable and powerful 4K video downloader software that can help you download YouTube and Reddit videos in 4K resolution with ease. Here are some of the best options that we recommend:

-

4k video downloader safe reddit
-4k video downloader error reddit
-4k video downloader proxy reddit
-4k video downloader alternative reddit
-4k video downloader license key reddit
-4k video downloader activation key reddit
-4k video downloader crack reddit
-4k video downloader premium reddit
-4k video downloader review reddit
-4k video downloader youtube-dl reddit
-best way to download 4k youtube videos reddit
-how to download 4k youtube videos reddit
-best 4k youtube downloader reddit
-free 4k youtube downloader reddit
-youtube-dl 4k reddit
-youtube-dl download 4k reddit
-youtube-dl best quality reddit
-youtube-dl mkv reddit
-youtube-dl proxy reddit
-youtube-dl alternative reddit
-download blocked youtube videos reddit
-download youtube videos in full hd reddit
-download youtube videos in 1080p reddit
-download youtube videos in mkv format reddit
-download youtube videos with subtitles reddit
-download youtube playlists in 4k reddit
-download youtube screensavers in 4k reddit
-download youtube stock footage in 4k reddit
-download youtube backgrounds in 4k reddit
-download youtube music videos in 4k reddit
-download youtube documentaries in 4k reddit
-download youtube movies in 4k reddit
-download youtube trailers in 4k reddit
-download youtube live streams in 4k reddit
-download youtube vr videos in 4k reddit
-download youtube hdr videos in 4k reddit
-download youtube 360 videos in 4k reddit
-download youtube slow motion videos in 4k reddit
-download youtube timelapse videos in 4k reddit
-download youtube hyperlapse videos in 4k reddit

-

4K Video Downloader

-

[10](https://www.4kdownload.com/)

-

As the name suggests, this software is designed to download 4K videos from YouTube and other popular video sites. It has a simple and user-friendly interface that allows you to download videos, playlists, channels, and subtitles in various formats and quality options. It also supports 3D and 360-degree videos, smart mode, and cross-platform compatibility. You can download up to 24 videos in a batch with the free version, or unlimited videos with the premium version.

-

Fucosoft Video Converter

-

[9](https://www.fucosoft.com/video-converter.html)

-

This software is not only a video converter, but also a video downloader that can download 4K videos from YouTube, Reddit, Facebook, Instagram, Twitter, and more. It can also convert downloaded videos to various formats, such as MP4, MOV, AVI, MKV, etc., and edit them with built-in tools. You can download multiple videos at once with high speed and quality.

-

Wondershare AllMyTube

-

[8](https://videoconverter.wondershare.com/allmytube-video-downloader.html)

-

This software is another versatile video downloader that can download 4K videos from YouTube and over 10,000 other video sites. It can also convert downloaded videos to different formats and devices, such as iPhone, iPad, Android, etc. It has a one-click download mode, a browser extension, a video player, and a video library. You can download up to 10 videos at the same time with the free trial version, or unlimited videos with the full version.

-

Gihosoft TubeGet

-

[7](https://www.gihosoft.com/free-youtube-downloader.html)

-

This software is a simple and effective video downloader that can download 4K videos from YouTube and other sites. It can also extract audio from videos and save them as MP3 files. It has a clean and intuitive interface that lets you download videos by copying and pasting URLs or using the browser add-on. You can download up to five videos per day with the free version, or unlimited videos with the pro version.

-

Leawo CleverGet

-

[6](https://www.leawo.org/cleverget/)

-

This software is a smart and powerful video downloader that can download 4K videos from YouTube and more than 1000 other sites. It can also convert downloaded videos to various formats and resolutions, such as 1080P, 720P, etc. It has a multi-threading technology that accelerates the download speed and ensures the video quality. You can download unlimited videos with the free version, but you need to upgrade to the premium version to remove the watermark.

-

AmoyShare AnyUTube

-

[5](https://www.amoyshare.com/anyutube/)

-

This software is a dedicated YouTube video downloader that can download 4K videos from YouTube in MP4 or MP3 format. It can also download YouTube playlists, channels, subtitles, and lyrics. It has a user-friendly interface that allows you to search for videos by keywords or URLs. You can download up to 14 videos per day with the free version, or unlimited videos with the paid version.

-

How to Use 4K Video Downloader Software to Download YouTube and Reddit Videos

-

Although each 4K video downloader software has its own features and functions, they generally share the same steps for downloading YouTube and Reddit videos in 4K resolution. Here are the common steps that you need to follow:

-

Step 1: Copy the Video URL

-

The first step is to copy the URL of the video that you want to download from YouTube or Reddit. You can do this by right-clicking on the video and selecting "Copy video URL" or "Copy link address". Alternatively, you can copy the URL from the address bar of your browser.

-

Step 2: Paste the URL into the Software

-

The next step is to paste the URL into the software that you have installed on your PC or Mac. You can do this by clicking on the "Paste URL" button or using the keyboard shortcut Ctrl+V (Windows) or Command+V (Mac). The software will automatically analyze the URL and show you the available download options.

-

Step 3: Choose the Output Format and Quality

-

The third step is to choose the output format and quality that you want for your downloaded video. You can choose from different formats, such as MP4, MKV, AVI, etc., depending on your preference and device compatibility. You can also choose the quality level, such as 4K, 1080P, 720P, etc., depending on your internet speed and storage space. You can also choose to download the audio only or the subtitles if available.

-

Step 4: Start the Download Process

-

The final step is to start the download process by clicking on the "Download" button or using the keyboard shortcut Enter. The software will begin to download the video in the background and show you the progress and speed. You can pause, resume, or cancel the download at any time. Once the download is completed, you can find the video in the output folder or the video library of the software.
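If you prefer a scriptable route to the same four steps, here is a minimal sketch using the open-source yt-dlp Python package (a maintained fork of the youtube-dl tool that appears in the keyword list above). It is an illustration rather than one of the reviewed programs: the URL and output path are placeholders, and merging separate 4K video and audio streams requires FFmpeg to be installed on your system.

```python
# Minimal sketch: download a video at up to 4K with yt-dlp (pip install yt-dlp).
# The URL and output template are placeholders, not real links from this article.
from yt_dlp import YoutubeDL

options = {
    # Prefer a video stream up to 2160p (4K) plus the best audio, fall back to the best single file.
    "format": "bestvideo[height<=2160]+bestaudio/best",
    # Remux the merged result into an MP4 container (needs FFmpeg available on the system).
    "merge_output_format": "mp4",
    # Save as downloads/<video title>.<extension>.
    "outtmpl": "downloads/%(title)s.%(ext)s",
}

with YoutubeDL(options) as ydl:
    ydl.download(["https://www.youtube.com/watch?v=PLACEHOLDER"])
```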

-

Tips and Tricks for 4K Video Downloading and Playback

-

To make the most of your 4K video downloading and playback experience, here are some tips and tricks that you should keep in mind:

-

Check the Internet Speed and Bandwidth

-

Downloading 4K videos requires a fast and stable internet connection, as they are much larger than HD videos. You should check your internet speed and bandwidth before downloading 4K videos, and avoid downloading multiple videos at the same time or using other applications that consume internet resources. You can use online tools such as [9](https://www.speedtest.net/) to test your internet speed and performance.

-

Adjust the Encoder Settings and Bitrates

-

Downloading 4K videos also requires a lot of storage space, as they are much larger than HD videos. You should check your free storage space before downloading 4K videos, and delete or transfer some files if necessary. You can also adjust the encoder settings and bitrates of your downloaded videos to reduce their file size with little visible loss in quality. You can use online tools such as [8](https://www.onlineconverter.com/video) to compress and convert your videos online.
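If you would rather re-encode locally than upload a large clip to an online converter, a rough sketch with FFmpeg (installed separately) might look like the following; the file names and the 10 Mbit/s target bitrate are placeholder values to tune for your own footage.

```python
# Rough sketch: shrink a 4K file by re-encoding the video track with FFmpeg via subprocess.
# File names and the target bitrate are placeholders; lower bitrates give smaller files but more visible loss.
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-i", "input_4k.mp4",   # source clip
        "-c:v", "libx265",      # HEVC generally compresses 4K better than H.264
        "-b:v", "10M",          # target average video bitrate
        "-c:a", "copy",         # keep the original audio track unchanged
        "output_4k_small.mp4",
    ],
    check=True,
)
```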

-

Use a Compatible Device and Player

-

Playing 4K videos requires a compatible device, player, and monitor that can support the high resolution. You should check your device specifications before playing 4K videos, and upgrade your hardware or software if needed. You can also use a dedicated 4K video player that can optimize the playback quality and performance of your videos. Some of the best 4K video players are [7](https://www.vlc.de/en/), [6](https://www.kmplayer.com/), [5](https://www.5kplayer.com/), [4](https://potplayer.daum.net/), and [3](https://mpv.io/).

-

Respect the Copyright Laws and Fair Use Policy

-

Downloading 4K videos from YouTube and Reddit is not necessarily illegal, as long as you follow the fair use policy and respect the rights of the original creators. You should not download or use videos that are protected by copyright or have restricted permissions, use them for commercial purposes or without giving proper credit to the sources, or download content that is illegal, harmful, or offensive. You should always respect the terms and conditions of YouTube and Reddit, and the laws and regulations of your country.

-

Conclusion

-

Downloading 4K videos from YouTube and Reddit can be a great way to enjoy high-quality videos offline or for personal use. However, you need to have the right tools and techniques to download and play 4K videos smoothly and safely. In this article, we have introduced some of the best 4K video downloader software for PC and Mac, and how to use them to download YouTube and Reddit videos in 4K resolution. We have also shared some tips and tricks for 4K video downloading and playback, and how to respect the copyright laws and fair use policy. We hope that this article has helped you learn more about 4K video downloading and how to do it properly.

-

FAQs

-

Here are some of the frequently asked questions about 4K video downloading:

-

Q: What is the difference between 4K and UHD?

-

A: 4K and UHD are often used interchangeably, but they are not exactly the same. 4K (DCI 4K) refers to the resolution of 4096 x 2160 pixels, which is used in digital cinema and professional video production. UHD refers to the resolution of 3840 x 2160 pixels, which is used in consumer TVs and monitors. UHD keeps the familiar 16:9 aspect ratio, while DCI 4K is slightly wider at roughly 1.9:1, and 4K has slightly more pixels than UHD.

-

Q: How much storage space does a 4K video take?

-

A: The storage space of a 4K video depends on several factors, such as the length, format, codec, bitrate, and compression of the video. However, a rough estimate is that a one-minute 4K video can take up to 375 MB of storage space. Therefore, a one-hour 4K video can take up to 22.5 GB of storage space.
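For context, 375 MB per minute corresponds to an average bitrate of roughly 50 Mbit/s, a plausible (assumed) figure for 4K footage; a quick back-of-the-envelope check:

```python
# Back-of-the-envelope file size estimate from an assumed average bitrate.
def video_size_mb(bitrate_mbps: float, seconds: float) -> float:
    """Approximate size in megabytes for a clip of the given bitrate and duration."""
    return bitrate_mbps * seconds / 8  # megabits -> megabytes

print(video_size_mb(50, 60))           # one minute at 50 Mbit/s -> 375.0 MB
print(video_size_mb(50, 3600) / 1000)  # one hour at 50 Mbit/s   -> 22.5 GB
```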

-

Q: How fast is the internet speed required for downloading or streaming 4K videos?

-

A: The internet speed required for downloading or streaming 4K videos also depends on several factors, such as the source, quality, platform, and network of the video. However, a general recommendation is that you need at least 25 Mbps of internet speed for downloading or streaming 4K videos smoothly.
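To put that in perspective, downloading the one-hour, 22.5 GB file from the previous answer over a steady 25 Mbps connection would take about 22,500 MB x 8 / 25 Mbit/s = 7,200 seconds, or roughly two hours.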

-

Q: Can I download 4K videos from YouTube or Reddit on my mobile device?

-

A: Yes, you can download 4K videos from YouTube or Reddit on your mobile device, but you need to use a third-party app or website that can support 4K video downloading. Some of the popular apps or websites are [2](https://www.videoder.com/), [1](https://ytmp3.cc/en13/), [0](https://www.tubemate.net/), etc. However, you should be careful about the security and legality of these apps or websites, as they might contain malware or violate the terms and conditions of YouTube or Reddit.

-

Q: How can I play 4K videos on my PC or Mac?

-

A: To play 4K videos on your PC or Mac, you need to have a compatible device, player, and monitor that can support the high resolution. You can check your device specifications and upgrade your hardware or software if needed. You can also use a dedicated 4K video player that can optimize the playback quality and performance of your videos. Some of the best 4K video players are [7](https://www.vlc.de/en/), [6](https://www.kmplayer.com/), [5](https://www.5kplayer.com/), [4](https://potplayer.daum.net/), and [3](https://mpv.io/).

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Become a Pilot in the Indian Air Force A Cut Above Game for PC and Mobile.md b/spaces/1phancelerku/anime-remove-background/Become a Pilot in the Indian Air Force A Cut Above Game for PC and Mobile.md deleted file mode 100644 index 803fa26365d9a65ac2765c5f7d7ab74c8d35c5a9..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Become a Pilot in the Indian Air Force A Cut Above Game for PC and Mobile.md +++ /dev/null @@ -1,86 +0,0 @@ -
-

Indian Air Force: A Cut Above Game Download

-

If you have ever dreamed of becoming a fighter pilot in the Indian Air Force, then you might want to check out this game. Indian Air Force: A Cut Above is a free air combat mobile game that lets you experience the thrill and challenge of flying various IAF aircraft and using their weapons. The game was developed by Threye Interactive, a Delhi-based game studio, in collaboration with the Indian Air Force. It was officially launched on July 31, 2019 by Air Chief Marshal B.S. Dhanoa for Android and iOS devices. The game aims to attract and inspire the youth to join the IAF and serve the nation.

-

Features of the game

-

Indian Air Force: A Cut Above has three game modes: Training, Single Player, and Free Flight. In Training mode, you can learn the basics of flying, landing, combat, and rescue operations. In Single Player mode, you can take on various missions that test your skills and courage. You can also customize your pilot avatar and choose from different IAF aircraft such as MiG-21, Mirage-2000, Su-30MKI, Rafale, Tejas, Apache, and Chinook. In Free Flight mode, you can explore the skies in any aircraft you want.

-

indian air force a cut above game download


Download File https://jinyurl.com/2uNQQj



-

The game features realistic and engaging gameplay and graphics that simulate the actual scenarios faced by IAF pilots. You can experience flying in different weather conditions, terrains, and altitudes. You can also use different weapons such as guns, missiles, rockets, and bombs to destroy enemy targets. The game also has a character modelled after Wing Commander Abhinandan Varthaman, who shot down a Pakistani F-16 in February 2019. You can even sport his signature gunslinger moustache in the game.

-

Review of the game

-

Indian Air Force: A Cut Above is a fun and exciting game that gives you a glimpse of what it takes to be an IAF warrior. The game has many pros such as:

- -

However, the game also has some cons such as:

-

How to play Indian Air Force: A Cut Above on PC
-Indian Air Force: A Cut Above app store review
-Indian Air Force: A Cut Above gameplay and features
-Indian Air Force: A Cut Above apk file download
-Indian Air Force: A Cut Above best fighter jets and weapons
-Indian Air Force: A Cut Above solo and PvP missions
-Indian Air Force: A Cut Above tips and tricks for beginners
-Indian Air Force: A Cut Above official trailer and launch date
-Indian Air Force: A Cut Above system requirements and compatibility
-Indian Air Force: A Cut Above mod apk unlimited money
-Indian Air Force: A Cut Above online test and recruitment
-Indian Air Force: A Cut Above digital india initiative
-Indian Air Force: A Cut Above latest updates and news
-Indian Air Force: A Cut Above cheats and hacks
-Indian Air Force: A Cut Above bluestacks emulator download
-Indian Air Force: A Cut Above ios and android devices
-Indian Air Force: A Cut Above customer support and feedback
-Indian Air Force: A Cut Above ratings and rankings
-Indian Air Force: A Cut Above achievements and rewards
-Indian Air Force: A Cut Above realistic graphics and sound effects
-Indian Air Force: A Cut Above history and background
-Indian Air Force: A Cut Above offline mode and data usage
-Indian Air Force: A Cut Above installation and setup guide
-Indian Air Force: A Cut Above comparison with other air combat games
-Indian Air Force: A Cut Above pros and cons of playing
-Indian Air Force: A Cut Above free download link and size
-Indian Air Force: A Cut Above fun facts and trivia
-Indian Air Force: A Cut Above user reviews and testimonials
-Indian Air Force: A Cut Above challenges and difficulty levels
-Indian Air Force: A Cut Above screenshots and videos
-Indian Air Force: A Cut Above frequently asked questions (FAQs)
-Indian Air Force: A Cut Above community and forums
-Indian Air Force: A Cut Above bugs and issues report
-Indian Air Force: A Cut Above fan art and memes
-Indian Air Force: A Cut Above developer interview and insights

- -

Compared to other similar games such as Ace Combat or H.A.W.X., Indian Air Force: A Cut Above is more arcade-style than simulation-style. It does not have complex controls or physics, but it does have more variety and authenticity in terms of IAF scenarios and equipment.

-

Some tips and tricks to play the game better are:

- -

Conclusion

-

Indian Air Force: A Cut Above is a great game for anyone who loves air combat and adventure. It is a unique and innovative game that showcases the glory and valor of the Indian Air Force. It is also a game that can inspire and educate the youth about the IAF and its role in defending the nation.

-

If you want to download and play the game, you can visit the official website of the Indian Air Force, the Google Play Store, or the App Store. The game is compatible with devices running Android 5.0 or later and iOS 9.0 or later. The game is about 300 MB in size and requires about 1 GB of free space on your device.

-

So, what are you waiting for? Download Indian Air Force: A Cut Above today and experience the thrill of flying in the sky. You will not regret it!

-

FAQs

-

Here are some frequently asked questions about the game:

-
1. Is Indian Air Force: A Cut Above an official game of the IAF?

   Yes, it is an official game of the IAF that was developed in collaboration with Threye Interactive, a Delhi-based game studio.

2. Is Indian Air Force: A Cut Above a free game?

   Yes, it is a free game that does not have any in-app purchases or ads. However, it does require an internet connection and permissions to access your photos, files, and phone calls.

3. Is Indian Air Force: A Cut Above a realistic game?

   It is a realistic game in terms of the scenarios, aircraft, and weapons that are used by the IAF. However, it is not a simulation game that has complex controls or physics. It is more of an arcade-style game that is easy to play and enjoy.

4. Is Indian Air Force: A Cut Above a multiplayer game?

   No, it is not a multiplayer game yet. It only has single-player modes such as Training, Single Player, and Free Flight. However, the developers have said that they are working on adding multiplayer and online features in the future.

5. How can I contact the developers of Indian Air Force: A Cut Above?

   You can contact the developers of Indian Air Force: A Cut Above by visiting their website www.threye.com or by emailing them at support@threye.com. You can also follow them on Facebook, Twitter, Instagram, and YouTube for updates and news.

401be4b1e0
-
-
\ No newline at end of file diff --git a/spaces/1toTree/lora_test/ppdiffusers/pipelines/stable_diffusion/pipeline_fastdeploy_stable_diffusion_inpaint_legacy.py b/spaces/1toTree/lora_test/ppdiffusers/pipelines/stable_diffusion/pipeline_fastdeploy_stable_diffusion_inpaint_legacy.py deleted file mode 100644 index eeec062eaa837626f5b4ec59014f9b3c33bd0486..0000000000000000000000000000000000000000 --- a/spaces/1toTree/lora_test/ppdiffusers/pipelines/stable_diffusion/pipeline_fastdeploy_stable_diffusion_inpaint_legacy.py +++ /dev/null @@ -1,477 +0,0 @@ -# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. -# Copyright 2022 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import inspect -from typing import Callable, List, Optional, Union - -import numpy as np -import paddle -import PIL - -from paddlenlp.transformers import CLIPFeatureExtractor, CLIPTokenizer - -from ...fastdeploy_utils import FastDeployRuntimeModel -from ...pipeline_utils import DiffusionPipeline -from ...schedulers import ( - DDIMScheduler, - DPMSolverMultistepScheduler, - EulerAncestralDiscreteScheduler, - EulerDiscreteScheduler, - LMSDiscreteScheduler, - PNDMScheduler, -) -from ...utils import PIL_INTERPOLATION, logging -from . import StableDiffusionPipelineOutput - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - - -def preprocess_image(image): - w, h = image.size - w, h = map(lambda x: x - x % 32, (w, h)) # resize to integer multiple of 32 - image = image.resize((w, h), resample=PIL_INTERPOLATION["lanczos"]) - image = np.array(image).astype(np.float32) / 255.0 - image = image[None].transpose(0, 3, 1, 2) - return 2.0 * image - 1.0 - - -def preprocess_mask(mask, scale_factor=8): - mask = mask.convert("L") - w, h = mask.size - w, h = map(lambda x: x - x % 32, (w, h)) # resize to integer multiple of 32 - mask = mask.resize((w // scale_factor, h // scale_factor), resample=PIL_INTERPOLATION["nearest"]) - mask = np.array(mask).astype(np.float32) / 255.0 - mask = np.tile(mask, (4, 1, 1)) - mask = mask[None].transpose(0, 1, 2, 3) # what does this step do? - mask = 1 - mask # repaint white, keep black - return mask - - -class FastDeployStableDiffusionInpaintPipelineLegacy(DiffusionPipeline): - r""" - Pipeline for text-guided image inpainting legacy using Stable Diffusion. - - This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the - library implements for all the pipelines (such as downloading or saving etc.) - - Args: - vae_encoder ([`FastDeployRuntimeModel`]): - Variational Auto-Encoder (VAE) Model to encode images to latent representations. - vae_decoder ([`FastDeployRuntimeModel`]): - Variational Auto-Encoder (VAE) Model to decode images from latent representations. - text_encoder ([`FastDeployRuntimeModel`]): - Frozen text-encoder. 
Stable Diffusion uses the text portion of - [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically - the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant. - tokenizer (`CLIPTokenizer`): - Tokenizer of class - [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer). - unet ([`FastDeployRuntimeModel`]): Conditional U-Net architecture to denoise the encoded image latents. - scheduler ([`SchedulerMixin`]): - A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of - [`DDIMScheduler`], [`LMSDiscreteScheduler`], [`PNDMScheduler`], [`EulerDiscreteScheduler`], [`EulerAncestralDiscreteScheduler`] - or [`DPMSolverMultistepScheduler`]. - safety_checker ([`FastDeployRuntimeModel`]): - Classification module that estimates whether generated images could be considered offensive or harmful. - Please, refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for details. - feature_extractor ([`CLIPFeatureExtractor`]): - Model that extracts features from generated images to be used as inputs for the `safety_checker`. - """ - _optional_components = ["safety_checker", "feature_extractor"] - - def __init__( - self, - vae_encoder: FastDeployRuntimeModel, - vae_decoder: FastDeployRuntimeModel, - text_encoder: FastDeployRuntimeModel, - tokenizer: CLIPTokenizer, - unet: FastDeployRuntimeModel, - scheduler: Union[ - DDIMScheduler, - PNDMScheduler, - LMSDiscreteScheduler, - EulerDiscreteScheduler, - EulerAncestralDiscreteScheduler, - DPMSolverMultistepScheduler, - ], - safety_checker: FastDeployRuntimeModel, - feature_extractor: CLIPFeatureExtractor, - requires_safety_checker: bool = True, - ): - super().__init__() - - if safety_checker is None and requires_safety_checker: - logger.warning( - f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure" - " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered" - " results in services or applications open to the public. PaddleNLP team, diffusers team and Hugging Face" - " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling" - " it only for use-cases that involve analyzing network behavior or auditing its results. For more" - " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ." - ) - if safety_checker is not None and feature_extractor is None: - raise ValueError( - "Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety" - " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead." - ) - - self.register_modules( - vae_encoder=vae_encoder, - vae_decoder=vae_decoder, - text_encoder=text_encoder, - tokenizer=tokenizer, - unet=unet, - scheduler=scheduler, - safety_checker=safety_checker, - feature_extractor=feature_extractor, - ) - - self.register_to_config(requires_safety_checker=requires_safety_checker) - - def _encode_prompt(self, prompt, num_images_per_prompt, do_classifier_free_guidance, negative_prompt): - r""" - Encodes the prompt into text encoder hidden states. 
- - Args: - prompt (`str` or `list(int)`): - prompt to be encoded - num_images_per_prompt (`int`): - number of images that should be generated per prompt - do_classifier_free_guidance (`bool`): - whether to use classifier free guidance or not - negative_prompt (`str` or `List[str]`): - The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored - if `guidance_scale` is less than `1`). - """ - batch_size = len(prompt) if isinstance(prompt, list) else 1 - - # get prompt text embeddings - text_inputs = self.tokenizer( - prompt, - padding="max_length", - max_length=self.tokenizer.model_max_length, - truncation=True, - return_tensors="np", - ) - text_input_ids = text_inputs.input_ids - untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="np").input_ids - - if not np.array_equal(text_input_ids, untruncated_ids): - removed_text = self.tokenizer.batch_decode(untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1]) - logger.warning( - "The following part of your input was truncated because CLIP can only handle sequences up to" - f" {self.tokenizer.model_max_length} tokens: {removed_text}" - ) - - text_embeddings = self.text_encoder(input_ids=text_input_ids.astype(np.int64))[0] - text_embeddings = np.repeat(text_embeddings, num_images_per_prompt, axis=0) - - # get unconditional embeddings for classifier free guidance - if do_classifier_free_guidance: - uncond_tokens: List[str] - if negative_prompt is None: - uncond_tokens = [""] * batch_size - elif type(prompt) is not type(negative_prompt): - raise TypeError( - f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !=" - f" {type(prompt)}." - ) - elif isinstance(negative_prompt, str): - uncond_tokens = [negative_prompt] * batch_size - elif batch_size != len(negative_prompt): - raise ValueError( - f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:" - f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches" - " the batch size of `prompt`." - ) - else: - uncond_tokens = negative_prompt - - max_length = text_input_ids.shape[-1] - uncond_input = self.tokenizer( - uncond_tokens, - padding="max_length", - max_length=max_length, - truncation=True, - return_tensors="np", - ) - uncond_embeddings = self.text_encoder(input_ids=uncond_input.input_ids.astype(np.int64))[0] - uncond_embeddings = np.repeat(uncond_embeddings, num_images_per_prompt, axis=0) - - # For classifier free guidance, we need to do two forward passes. 
- # Here we concatenate the unconditional and text embeddings into a single batch - # to avoid doing two forward passes - text_embeddings = np.concatenate([uncond_embeddings, text_embeddings]) - - return text_embeddings - - def run_safety_checker(self, image, dtype): - if self.safety_checker is not None: - safety_checker_input = self.feature_extractor( - self.numpy_to_pil(image), return_tensors="np" - ).pixel_values.astype(dtype) - # There will throw an error if use safety_checker batchsize>1 - images, has_nsfw_concept = [], [] - for i in range(image.shape[0]): - image_i, has_nsfw_concept_i = self.safety_checker( - clip_input=safety_checker_input[i : i + 1], images=image[i : i + 1] - ) - images.append(image_i) - has_nsfw_concept.append(has_nsfw_concept_i[0]) - image = np.concatenate(images) - else: - has_nsfw_concept = None - return image, has_nsfw_concept - - def decode_latents(self, latents): - latents = 1 / 0.18215 * latents - image = np.concatenate( - [self.vae_decoder(latent_sample=latents[i : i + 1])[0] for i in range(latents.shape[0])] - ) - image = np.clip(image / 2 + 0.5, 0, 1) - image = image.transpose([0, 2, 3, 1]) - return image - - def prepare_extra_step_kwargs(self, eta): - # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature - # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. - # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502 - # and should be between [0, 1] - - accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) - extra_step_kwargs = {} - if accepts_eta: - extra_step_kwargs["eta"] = eta - - return extra_step_kwargs - - def check_inputs(self, prompt, strength, callback_steps): - if not isinstance(prompt, str) and not isinstance(prompt, list): - raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") - - if strength < 0 or strength > 1: - raise ValueError(f"The value of strength should in [1.0, 1.0] but is {strength}") - - if (callback_steps is None) or ( - callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0) - ): - raise ValueError( - f"`callback_steps` has to be a positive integer but is {callback_steps} of type" - f" {type(callback_steps)}." 
- ) - - def get_timesteps(self, num_inference_steps, strength): - # get the original timestep using init_timestep - offset = self.scheduler.config.get("steps_offset", 0) - init_timestep = int(num_inference_steps * strength) + offset - init_timestep = min(init_timestep, num_inference_steps) - - t_start = max(num_inference_steps - init_timestep + offset, 0) - timesteps = self.scheduler.timesteps[t_start:] - - return timesteps, num_inference_steps - t_start - - def prepare_latents(self, image, timestep, batch_size, num_images_per_prompt, dtype, generator=None, noise=None): - if generator is None: - generator = np.random - - image = image.astype(dtype) - init_latents = self.vae_encoder(sample=image)[0] - init_latents = 0.18215 * init_latents - init_latents = paddle.to_tensor(init_latents) - - # Expand init_latents for batch_size and num_images_per_prompt - init_latents = paddle.concat([init_latents] * batch_size * num_images_per_prompt, axis=0) - init_latents_orig = paddle.to_tensor(init_latents) - - # add noise to latents using the timesteps - if noise is None: - noise = paddle.to_tensor(generator.randn(*init_latents.shape).astype(dtype)) - elif list(noise.shape) != list(init_latents.shape): - raise ValueError(f"Unexpected noise shape, got {noise.shape}, expected {init_latents.shape}") - elif isinstance(noise, np.ndarray): - noise = paddle.to_tensor(noise, dtype=dtype) - - init_latents = self.scheduler.add_noise(init_latents, noise, timestep) - latents = init_latents - return latents, init_latents_orig, noise - - def __call__( - self, - prompt: Union[str, List[str]], - image: Union[np.ndarray, PIL.Image.Image] = None, - mask_image: Union[np.ndarray, PIL.Image.Image] = None, - strength: float = 0.8, - num_inference_steps: Optional[int] = 50, - guidance_scale: Optional[float] = 7.5, - negative_prompt: Optional[Union[str, List[str]]] = None, - num_images_per_prompt: Optional[int] = 1, - eta: Optional[float] = 0.0, - generator: Optional[np.random.RandomState] = None, - noise: Optional[np.ndarray] = None, - output_type: Optional[str] = "pil", - return_dict: bool = True, - callback: Optional[Callable[[int, int, np.ndarray], None]] = None, - callback_steps: Optional[int] = 1, - ): - r""" - Function invoked when calling the pipeline for generation. - - Args: - prompt (`str` or `List[str]`): - The prompt or prompts to guide the image generation. - image (`nd.ndarray` or `PIL.Image.Image`): - `Image`, or tensor representing an image batch, that will be used as the starting point for the - process. This is the image whose masked region will be inpainted. - mask_image (`nd.ndarray` or `PIL.Image.Image`): - `Image`, or tensor representing an image batch, to mask `image`. White pixels in the mask will be - replaced by noise and therefore repainted, while black pixels will be preserved. If `mask_image` is a - PIL image, it will be converted to a single channel (luminance) before use. If it's a tensor, it should - contain one color channel (L) instead of 3, so the expected shape would be `(B, H, W, 1)`.uu - strength (`float`, *optional*, defaults to 0.8): - Conceptually, indicates how much to transform the reference `image`. Must be between 0 and 1. - `image` will be used as a starting point, adding more noise to it the larger the `strength`. The - number of denoising steps depends on the amount of noise initially added. When `strength` is 1, added - noise will be maximum and the denoising process will run for the full number of iterations specified in - `num_inference_steps`. 
A value of 1, therefore, essentially ignores `image`. - num_inference_steps (`int`, *optional*, defaults to 50): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. This parameter will be modulated by `strength`. - guidance_scale (`float`, *optional*, defaults to 7.5): - Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598). - `guidance_scale` is defined as `w` of equation 2. of [Imagen - Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale > - 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, - usually at the expense of lower image quality. - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored - if `guidance_scale` is less than `1`). - num_images_per_prompt (`int`, *optional*, defaults to 1): - The number of images to generate per prompt. - eta (`float`, *optional*, defaults to 0.0): - Corresponds to parameter eta (?) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to - [`schedulers.DDIMScheduler`], will be ignored for others. - generator (`np.random.RandomState`, *optional*): - A np.random.RandomState to make generation deterministic. - noise (`np.ndarray`, *optional*): - Pre-generated noise tensor, sampled from a Gaussian distribution, to be used as inputs for image - generation. If not provided, a noise tensor will ge generated by sampling using the supplied random - `generator`. - output_type (`str`, *optional*, defaults to `"pil"`): - The output format of the generate image. Choose between - [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a - plain tuple. - callback (`Callable`, *optional*): - A function that will be called every `callback_steps` steps during inference. The function will be - called with the following arguments: `callback(step: int, timestep: int, latents: np.ndarray)`. - callback_steps (`int`, *optional*, defaults to 1): - The frequency at which the `callback` function will be called. If not specified, the callback will be - called at every step. - - Returns: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple. - When returning a tuple, the first element is a list with the generated images, and the second element is a - list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work" - (nsfw) content, according to the `safety_checker`. - """ - # 1. Check inputs - self.check_inputs(prompt, strength, callback_steps) - - # 2. Define call parameters - batch_size = 1 if isinstance(prompt, str) else len(prompt) - # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) - # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` - # corresponds to doing no classifier free guidance. - do_classifier_free_guidance = guidance_scale > 1.0 - - # 3. Encode input prompt - text_embeddings = self._encode_prompt( - prompt, num_images_per_prompt, do_classifier_free_guidance, negative_prompt - ) - - # 4. 
Preprocess image and mask - if isinstance(image, PIL.Image.Image): - image = preprocess_image(image) - - if isinstance(mask_image, PIL.Image.Image): - mask_image = preprocess_mask(mask_image) - - # 5. set timesteps - self.scheduler.set_timesteps(num_inference_steps) - timesteps, num_inference_steps = self.get_timesteps(num_inference_steps, strength) - latent_timestep = timesteps[:1].tile([batch_size * num_images_per_prompt]) - - # 6. Prepare latent variables - # encode the init image into latents and scale the latents - latents, init_latents_orig, noise = self.prepare_latents( - image, latent_timestep, batch_size, num_images_per_prompt, text_embeddings.dtype, generator, noise - ) - - # 7. Prepare mask latent - mask = paddle.to_tensor(mask_image, dtype=latents.dtype) - mask = paddle.concat([mask] * batch_size * num_images_per_prompt) - - # 8. Prepare extra step kwargs. - extra_step_kwargs = self.prepare_extra_step_kwargs(eta) - - # 9. Denoising loop - num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order - with self.progress_bar(total=num_inference_steps) as progress_bar: - text_embeddings = paddle.to_tensor(text_embeddings, dtype="float32") - for i, t in enumerate(timesteps): - # expand the latents if we are doing classifier free guidance - latent_model_input = paddle.concat([latents] * 2) if do_classifier_free_guidance else latents - latent_model_input = self.scheduler.scale_model_input(latent_model_input, t) - latent_model_input = latent_model_input - - # predict the noise residual - noise_pred = self.unet.zero_copy_infer( - sample=latent_model_input, timestep=t, encoder_hidden_states=text_embeddings - )[0] - # perform guidance - if do_classifier_free_guidance: - noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) - noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) - - # compute the previous noisy sample x_t -> x_t-1 - scheduler_output = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs) - latents = scheduler_output.prev_sample - init_latents_proper = self.scheduler.add_noise(init_latents_orig, noise, t) - - latents = (init_latents_proper * mask) + (latents * (1 - mask)) - - # call the callback, if provided - if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0): - progress_bar.update() - if callback is not None and i % callback_steps == 0: - callback(i, t, latents) - - # 10. Post-processing - image = self.decode_latents(latents.numpy()) - - # 11. Run safety checker - image, has_nsfw_concept = self.run_safety_checker(image, text_embeddings.dtype) - - # 12. 
Convert to PIL - if output_type == "pil": - image = self.numpy_to_pil(image) - - if not return_dict: - return (image, has_nsfw_concept) - - return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept) diff --git a/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/variational_autoencoder/modules.py b/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/variational_autoencoder/modules.py deleted file mode 100644 index 6b2c3dca2d168fb5fbaff5acc4b5a06280a496a7..0000000000000000000000000000000000000000 --- a/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/variational_autoencoder/modules.py +++ /dev/null @@ -1,1064 +0,0 @@ -# pytorch_diffusion + derived encoder decoder -import math -import torch -import torch.nn as nn -import numpy as np -from einops import rearrange - -from audioldm.utils import instantiate_from_config -from audioldm.latent_diffusion.attention import LinearAttention - -def get_timestep_embedding(timesteps, embedding_dim): - """ - This matches the implementation in Denoising Diffusion Probabilistic Models: - From Fairseq. - Build sinusoidal embeddings. - This matches the implementation in tensor2tensor, but differs slightly - from the description in Section 3.5 of "Attention Is All You Need". - """ - assert len(timesteps.shape) == 1 - - half_dim = embedding_dim // 2 - emb = math.log(10000) / (half_dim - 1) - emb = torch.exp(torch.arange(half_dim, dtype=torch.float32) * -emb) - emb = emb.to(device=timesteps.device) - emb = timesteps.float()[:, None] * emb[None, :] - emb = torch.cat([torch.sin(emb), torch.cos(emb)], dim=1) - if embedding_dim % 2 == 1: # zero pad - emb = torch.nn.functional.pad(emb, (0, 1, 0, 0)) - return emb - -def nonlinearity(x): - # swish - return x * torch.sigmoid(x) - - -def Normalize(in_channels, num_groups=32): - return torch.nn.GroupNorm( - num_groups=num_groups, num_channels=in_channels, eps=1e-6, affine=True - ) - - -class Upsample(nn.Module): - def __init__(self, in_channels, with_conv): - super().__init__() - self.with_conv = with_conv - if self.with_conv: - self.conv = torch.nn.Conv2d( - in_channels, in_channels, kernel_size=3, stride=1, padding=1 - ) - - def forward(self, x): - x = torch.nn.functional.interpolate(x, scale_factor=2.0, mode="nearest") - if self.with_conv: - x = self.conv(x) - return x - - -class UpsampleTimeStride4(nn.Module): - def __init__(self, in_channels, with_conv): - super().__init__() - self.with_conv = with_conv - if self.with_conv: - self.conv = torch.nn.Conv2d( - in_channels, in_channels, kernel_size=5, stride=1, padding=2 - ) - - def forward(self, x): - x = torch.nn.functional.interpolate(x, scale_factor=(4.0, 2.0), mode="nearest") - if self.with_conv: - x = self.conv(x) - return x - - -class Downsample(nn.Module): - def __init__(self, in_channels, with_conv): - super().__init__() - self.with_conv = with_conv - if self.with_conv: - # Do time downsampling here - # no asymmetric padding in torch conv, must do it ourselves - self.conv = torch.nn.Conv2d( - in_channels, in_channels, kernel_size=3, stride=2, padding=0 - ) - - def forward(self, x): - if self.with_conv: - pad = (0, 1, 0, 1) - x = torch.nn.functional.pad(x, pad, mode="constant", value=0) - x = self.conv(x) - else: - x = torch.nn.functional.avg_pool2d(x, kernel_size=2, stride=2) - return x - - -class DownsampleTimeStride4(nn.Module): - def __init__(self, in_channels, with_conv): - super().__init__() - self.with_conv = with_conv - if self.with_conv: - # Do time downsampling here - # no asymmetric padding in torch 
conv, must do it ourselves - self.conv = torch.nn.Conv2d( - in_channels, in_channels, kernel_size=5, stride=(4, 2), padding=1 - ) - - def forward(self, x): - if self.with_conv: - pad = (0, 1, 0, 1) - x = torch.nn.functional.pad(x, pad, mode="constant", value=0) - x = self.conv(x) - else: - x = torch.nn.functional.avg_pool2d(x, kernel_size=(4, 2), stride=(4, 2)) - return x - - -class ResnetBlock(nn.Module): - def __init__( - self, - *, - in_channels, - out_channels=None, - conv_shortcut=False, - dropout, - temb_channels=512, - ): - super().__init__() - self.in_channels = in_channels - out_channels = in_channels if out_channels is None else out_channels - self.out_channels = out_channels - self.use_conv_shortcut = conv_shortcut - - self.norm1 = Normalize(in_channels) - self.conv1 = torch.nn.Conv2d( - in_channels, out_channels, kernel_size=3, stride=1, padding=1 - ) - if temb_channels > 0: - self.temb_proj = torch.nn.Linear(temb_channels, out_channels) - self.norm2 = Normalize(out_channels) - self.dropout = torch.nn.Dropout(dropout) - self.conv2 = torch.nn.Conv2d( - out_channels, out_channels, kernel_size=3, stride=1, padding=1 - ) - if self.in_channels != self.out_channels: - if self.use_conv_shortcut: - self.conv_shortcut = torch.nn.Conv2d( - in_channels, out_channels, kernel_size=3, stride=1, padding=1 - ) - else: - self.nin_shortcut = torch.nn.Conv2d( - in_channels, out_channels, kernel_size=1, stride=1, padding=0 - ) - - def forward(self, x, temb): - h = x - h = self.norm1(h) - h = nonlinearity(h) - h = self.conv1(h) - - if temb is not None: - h = h + self.temb_proj(nonlinearity(temb))[:, :, None, None] - - h = self.norm2(h) - h = nonlinearity(h) - h = self.dropout(h) - h = self.conv2(h) - - if self.in_channels != self.out_channels: - if self.use_conv_shortcut: - x = self.conv_shortcut(x) - else: - x = self.nin_shortcut(x) - - return x + h - - -class LinAttnBlock(LinearAttention): - """to match AttnBlock usage""" - - def __init__(self, in_channels): - super().__init__(dim=in_channels, heads=1, dim_head=in_channels) - - -class AttnBlock(nn.Module): - def __init__(self, in_channels): - super().__init__() - self.in_channels = in_channels - - self.norm = Normalize(in_channels) - self.q = torch.nn.Conv2d( - in_channels, in_channels, kernel_size=1, stride=1, padding=0 - ) - self.k = torch.nn.Conv2d( - in_channels, in_channels, kernel_size=1, stride=1, padding=0 - ) - self.v = torch.nn.Conv2d( - in_channels, in_channels, kernel_size=1, stride=1, padding=0 - ) - self.proj_out = torch.nn.Conv2d( - in_channels, in_channels, kernel_size=1, stride=1, padding=0 - ) - - def forward(self, x): - h_ = x - h_ = self.norm(h_) - q = self.q(h_) - k = self.k(h_) - v = self.v(h_) - - # compute attention - b, c, h, w = q.shape - q = q.reshape(b, c, h * w).contiguous() - q = q.permute(0, 2, 1).contiguous() # b,hw,c - k = k.reshape(b, c, h * w).contiguous() # b,c,hw - w_ = torch.bmm(q, k).contiguous() # b,hw,hw w[b,i,j]=sum_c q[b,i,c]k[b,c,j] - w_ = w_ * (int(c) ** (-0.5)) - w_ = torch.nn.functional.softmax(w_, dim=2) - - # attend to values - v = v.reshape(b, c, h * w).contiguous() - w_ = w_.permute(0, 2, 1).contiguous() # b,hw,hw (first hw of k, second of q) - h_ = torch.bmm( - v, w_ - ).contiguous() # b, c,hw (hw of q) h_[b,c,j] = sum_i v[b,c,i] w_[b,i,j] - h_ = h_.reshape(b, c, h, w).contiguous() - - h_ = self.proj_out(h_) - - return x + h_ - - -def make_attn(in_channels, attn_type="vanilla"): - assert attn_type in ["vanilla", "linear", "none"], f"attn_type {attn_type} unknown" - # print(f"making attention of 
type '{attn_type}' with {in_channels} in_channels") - if attn_type == "vanilla": - return AttnBlock(in_channels) - elif attn_type == "none": - return nn.Identity(in_channels) - else: - return LinAttnBlock(in_channels) - - -class Model(nn.Module): - def __init__( - self, - *, - ch, - out_ch, - ch_mult=(1, 2, 4, 8), - num_res_blocks, - attn_resolutions, - dropout=0.0, - resamp_with_conv=True, - in_channels, - resolution, - use_timestep=True, - use_linear_attn=False, - attn_type="vanilla", - ): - super().__init__() - if use_linear_attn: - attn_type = "linear" - self.ch = ch - self.temb_ch = self.ch * 4 - self.num_resolutions = len(ch_mult) - self.num_res_blocks = num_res_blocks - self.resolution = resolution - self.in_channels = in_channels - - self.use_timestep = use_timestep - if self.use_timestep: - # timestep embedding - self.temb = nn.Module() - self.temb.dense = nn.ModuleList( - [ - torch.nn.Linear(self.ch, self.temb_ch), - torch.nn.Linear(self.temb_ch, self.temb_ch), - ] - ) - - # downsampling - self.conv_in = torch.nn.Conv2d( - in_channels, self.ch, kernel_size=3, stride=1, padding=1 - ) - - curr_res = resolution - in_ch_mult = (1,) + tuple(ch_mult) - self.down = nn.ModuleList() - for i_level in range(self.num_resolutions): - block = nn.ModuleList() - attn = nn.ModuleList() - block_in = ch * in_ch_mult[i_level] - block_out = ch * ch_mult[i_level] - for i_block in range(self.num_res_blocks): - block.append( - ResnetBlock( - in_channels=block_in, - out_channels=block_out, - temb_channels=self.temb_ch, - dropout=dropout, - ) - ) - block_in = block_out - if curr_res in attn_resolutions: - attn.append(make_attn(block_in, attn_type=attn_type)) - down = nn.Module() - down.block = block - down.attn = attn - if i_level != self.num_resolutions - 1: - down.downsample = Downsample(block_in, resamp_with_conv) - curr_res = curr_res // 2 - self.down.append(down) - - # middle - self.mid = nn.Module() - self.mid.block_1 = ResnetBlock( - in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout, - ) - self.mid.attn_1 = make_attn(block_in, attn_type=attn_type) - self.mid.block_2 = ResnetBlock( - in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout, - ) - - # upsampling - self.up = nn.ModuleList() - for i_level in reversed(range(self.num_resolutions)): - block = nn.ModuleList() - attn = nn.ModuleList() - block_out = ch * ch_mult[i_level] - skip_in = ch * ch_mult[i_level] - for i_block in range(self.num_res_blocks + 1): - if i_block == self.num_res_blocks: - skip_in = ch * in_ch_mult[i_level] - block.append( - ResnetBlock( - in_channels=block_in + skip_in, - out_channels=block_out, - temb_channels=self.temb_ch, - dropout=dropout, - ) - ) - block_in = block_out - if curr_res in attn_resolutions: - attn.append(make_attn(block_in, attn_type=attn_type)) - up = nn.Module() - up.block = block - up.attn = attn - if i_level != 0: - up.upsample = Upsample(block_in, resamp_with_conv) - curr_res = curr_res * 2 - self.up.insert(0, up) # prepend to get consistent order - - # end - self.norm_out = Normalize(block_in) - self.conv_out = torch.nn.Conv2d( - block_in, out_ch, kernel_size=3, stride=1, padding=1 - ) - - def forward(self, x, t=None, context=None): - # assert x.shape[2] == x.shape[3] == self.resolution - if context is not None: - # assume aligned context, cat along channel axis - x = torch.cat((x, context), dim=1) - if self.use_timestep: - # timestep embedding - assert t is not None - temb = get_timestep_embedding(t, self.ch) - temb 
= self.temb.dense[0](temb) - temb = nonlinearity(temb) - temb = self.temb.dense[1](temb) - else: - temb = None - - # downsampling - hs = [self.conv_in(x)] - for i_level in range(self.num_resolutions): - for i_block in range(self.num_res_blocks): - h = self.down[i_level].block[i_block](hs[-1], temb) - if len(self.down[i_level].attn) > 0: - h = self.down[i_level].attn[i_block](h) - hs.append(h) - if i_level != self.num_resolutions - 1: - hs.append(self.down[i_level].downsample(hs[-1])) - - # middle - h = hs[-1] - h = self.mid.block_1(h, temb) - h = self.mid.attn_1(h) - h = self.mid.block_2(h, temb) - - # upsampling - for i_level in reversed(range(self.num_resolutions)): - for i_block in range(self.num_res_blocks + 1): - h = self.up[i_level].block[i_block]( - torch.cat([h, hs.pop()], dim=1), temb - ) - if len(self.up[i_level].attn) > 0: - h = self.up[i_level].attn[i_block](h) - if i_level != 0: - h = self.up[i_level].upsample(h) - - # end - h = self.norm_out(h) - h = nonlinearity(h) - h = self.conv_out(h) - return h - - def get_last_layer(self): - return self.conv_out.weight - - -class Encoder(nn.Module): - def __init__( - self, - *, - ch, - out_ch, - ch_mult=(1, 2, 4, 8), - num_res_blocks, - attn_resolutions, - dropout=0.0, - resamp_with_conv=True, - in_channels, - resolution, - z_channels, - double_z=True, - use_linear_attn=False, - attn_type="vanilla", - downsample_time_stride4_levels=[], - **ignore_kwargs, - ): - super().__init__() - if use_linear_attn: - attn_type = "linear" - self.ch = ch - self.temb_ch = 0 - self.num_resolutions = len(ch_mult) - self.num_res_blocks = num_res_blocks - self.resolution = resolution - self.in_channels = in_channels - self.downsample_time_stride4_levels = downsample_time_stride4_levels - - if len(self.downsample_time_stride4_levels) > 0: - assert max(self.downsample_time_stride4_levels) < self.num_resolutions, ( - "The level to perform downsample 4 operation need to be smaller than the total resolution number %s" - % str(self.num_resolutions) - ) - - # downsampling - self.conv_in = torch.nn.Conv2d( - in_channels, self.ch, kernel_size=3, stride=1, padding=1 - ) - - curr_res = resolution - in_ch_mult = (1,) + tuple(ch_mult) - self.in_ch_mult = in_ch_mult - self.down = nn.ModuleList() - for i_level in range(self.num_resolutions): - block = nn.ModuleList() - attn = nn.ModuleList() - block_in = ch * in_ch_mult[i_level] - block_out = ch * ch_mult[i_level] - for i_block in range(self.num_res_blocks): - block.append( - ResnetBlock( - in_channels=block_in, - out_channels=block_out, - temb_channels=self.temb_ch, - dropout=dropout, - ) - ) - block_in = block_out - if curr_res in attn_resolutions: - attn.append(make_attn(block_in, attn_type=attn_type)) - down = nn.Module() - down.block = block - down.attn = attn - if i_level != self.num_resolutions - 1: - if i_level in self.downsample_time_stride4_levels: - down.downsample = DownsampleTimeStride4(block_in, resamp_with_conv) - else: - down.downsample = Downsample(block_in, resamp_with_conv) - curr_res = curr_res // 2 - self.down.append(down) - - # middle - self.mid = nn.Module() - self.mid.block_1 = ResnetBlock( - in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout, - ) - self.mid.attn_1 = make_attn(block_in, attn_type=attn_type) - self.mid.block_2 = ResnetBlock( - in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout, - ) - - # end - self.norm_out = Normalize(block_in) - self.conv_out = torch.nn.Conv2d( - block_in, - 2 * z_channels if 
double_z else z_channels, - kernel_size=3, - stride=1, - padding=1, - ) - - def forward(self, x): - # timestep embedding - temb = None - # downsampling - hs = [self.conv_in(x)] - for i_level in range(self.num_resolutions): - for i_block in range(self.num_res_blocks): - h = self.down[i_level].block[i_block](hs[-1], temb) - if len(self.down[i_level].attn) > 0: - h = self.down[i_level].attn[i_block](h) - hs.append(h) - if i_level != self.num_resolutions - 1: - hs.append(self.down[i_level].downsample(hs[-1])) - - # middle - h = hs[-1] - h = self.mid.block_1(h, temb) - h = self.mid.attn_1(h) - h = self.mid.block_2(h, temb) - - # end - h = self.norm_out(h) - h = nonlinearity(h) - h = self.conv_out(h) - return h - - -class Decoder(nn.Module): - def __init__( - self, - *, - ch, - out_ch, - ch_mult=(1, 2, 4, 8), - num_res_blocks, - attn_resolutions, - dropout=0.0, - resamp_with_conv=True, - in_channels, - resolution, - z_channels, - give_pre_end=False, - tanh_out=False, - use_linear_attn=False, - downsample_time_stride4_levels=[], - attn_type="vanilla", - **ignorekwargs, - ): - super().__init__() - if use_linear_attn: - attn_type = "linear" - self.ch = ch - self.temb_ch = 0 - self.num_resolutions = len(ch_mult) - self.num_res_blocks = num_res_blocks - self.resolution = resolution - self.in_channels = in_channels - self.give_pre_end = give_pre_end - self.tanh_out = tanh_out - self.downsample_time_stride4_levels = downsample_time_stride4_levels - - if len(self.downsample_time_stride4_levels) > 0: - assert max(self.downsample_time_stride4_levels) < self.num_resolutions, ( - "The level to perform downsample 4 operation need to be smaller than the total resolution number %s" - % str(self.num_resolutions) - ) - - # compute in_ch_mult, block_in and curr_res at lowest res - in_ch_mult = (1,) + tuple(ch_mult) - block_in = ch * ch_mult[self.num_resolutions - 1] - curr_res = resolution // 2 ** (self.num_resolutions - 1) - self.z_shape = (1, z_channels, curr_res, curr_res) - # print("Working with z of shape {} = {} dimensions.".format( - # self.z_shape, np.prod(self.z_shape))) - - # z to block_in - self.conv_in = torch.nn.Conv2d( - z_channels, block_in, kernel_size=3, stride=1, padding=1 - ) - - # middle - self.mid = nn.Module() - self.mid.block_1 = ResnetBlock( - in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout, - ) - self.mid.attn_1 = make_attn(block_in, attn_type=attn_type) - self.mid.block_2 = ResnetBlock( - in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout, - ) - - # upsampling - self.up = nn.ModuleList() - for i_level in reversed(range(self.num_resolutions)): - block = nn.ModuleList() - attn = nn.ModuleList() - block_out = ch * ch_mult[i_level] - for i_block in range(self.num_res_blocks + 1): - block.append( - ResnetBlock( - in_channels=block_in, - out_channels=block_out, - temb_channels=self.temb_ch, - dropout=dropout, - ) - ) - block_in = block_out - if curr_res in attn_resolutions: - attn.append(make_attn(block_in, attn_type=attn_type)) - up = nn.Module() - up.block = block - up.attn = attn - if i_level != 0: - if i_level - 1 in self.downsample_time_stride4_levels: - up.upsample = UpsampleTimeStride4(block_in, resamp_with_conv) - else: - up.upsample = Upsample(block_in, resamp_with_conv) - curr_res = curr_res * 2 - self.up.insert(0, up) # prepend to get consistent order - - # end - self.norm_out = Normalize(block_in) - self.conv_out = torch.nn.Conv2d( - block_in, out_ch, kernel_size=3, stride=1, padding=1 - ) - 
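As a quick orientation before the Decoder's forward pass below: the Encoder above emits `2 * z_channels` feature maps when `double_z=True` (mean and log-variance of a diagonal Gaussian), and the Decoder maps a sampled latent back to `out_ch` channels. A minimal round-trip sketch, assuming the module's helpers (`ResnetBlock`, `Downsample`, `Upsample`, `Normalize`, `make_attn`) are importable alongside these classes; the hyperparameters are illustrative, not any project's actual configuration:

```python
import torch

# Illustrative sizes only; the real values come from the surrounding project's config.
enc = Encoder(ch=64, out_ch=3, ch_mult=(1, 2, 4), num_res_blocks=2,
              attn_resolutions=[], in_channels=3, resolution=64,
              z_channels=4, double_z=True)
dec = Decoder(ch=64, out_ch=3, ch_mult=(1, 2, 4), num_res_blocks=2,
              attn_resolutions=[], in_channels=3, resolution=64,
              z_channels=4)

x = torch.randn(1, 3, 64, 64)                  # fake image / spectrogram batch
moments = enc(x)                               # (1, 2 * z_channels, 16, 16): mean and logvar
mean, logvar = torch.chunk(moments, 2, dim=1)
z = mean + torch.exp(0.5 * logvar) * torch.randn_like(mean)  # reparameterized sample
recon = dec(z)                                 # (1, 3, 64, 64)
```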
- def forward(self, z): - # assert z.shape[1:] == self.z_shape[1:] - self.last_z_shape = z.shape - - # timestep embedding - temb = None - - # z to block_in - h = self.conv_in(z) - - # middle - h = self.mid.block_1(h, temb) - h = self.mid.attn_1(h) - h = self.mid.block_2(h, temb) - - # upsampling - for i_level in reversed(range(self.num_resolutions)): - for i_block in range(self.num_res_blocks + 1): - h = self.up[i_level].block[i_block](h, temb) - if len(self.up[i_level].attn) > 0: - h = self.up[i_level].attn[i_block](h) - if i_level != 0: - h = self.up[i_level].upsample(h) - - # end - if self.give_pre_end: - return h - - h = self.norm_out(h) - h = nonlinearity(h) - h = self.conv_out(h) - if self.tanh_out: - h = torch.tanh(h) - return h - - -class SimpleDecoder(nn.Module): - def __init__(self, in_channels, out_channels, *args, **kwargs): - super().__init__() - self.model = nn.ModuleList( - [ - nn.Conv2d(in_channels, in_channels, 1), - ResnetBlock( - in_channels=in_channels, - out_channels=2 * in_channels, - temb_channels=0, - dropout=0.0, - ), - ResnetBlock( - in_channels=2 * in_channels, - out_channels=4 * in_channels, - temb_channels=0, - dropout=0.0, - ), - ResnetBlock( - in_channels=4 * in_channels, - out_channels=2 * in_channels, - temb_channels=0, - dropout=0.0, - ), - nn.Conv2d(2 * in_channels, in_channels, 1), - Upsample(in_channels, with_conv=True), - ] - ) - # end - self.norm_out = Normalize(in_channels) - self.conv_out = torch.nn.Conv2d( - in_channels, out_channels, kernel_size=3, stride=1, padding=1 - ) - - def forward(self, x): - for i, layer in enumerate(self.model): - if i in [1, 2, 3]: - x = layer(x, None) - else: - x = layer(x) - - h = self.norm_out(x) - h = nonlinearity(h) - x = self.conv_out(h) - return x - - -class UpsampleDecoder(nn.Module): - def __init__( - self, - in_channels, - out_channels, - ch, - num_res_blocks, - resolution, - ch_mult=(2, 2), - dropout=0.0, - ): - super().__init__() - # upsampling - self.temb_ch = 0 - self.num_resolutions = len(ch_mult) - self.num_res_blocks = num_res_blocks - block_in = in_channels - curr_res = resolution // 2 ** (self.num_resolutions - 1) - self.res_blocks = nn.ModuleList() - self.upsample_blocks = nn.ModuleList() - for i_level in range(self.num_resolutions): - res_block = [] - block_out = ch * ch_mult[i_level] - for i_block in range(self.num_res_blocks + 1): - res_block.append( - ResnetBlock( - in_channels=block_in, - out_channels=block_out, - temb_channels=self.temb_ch, - dropout=dropout, - ) - ) - block_in = block_out - self.res_blocks.append(nn.ModuleList(res_block)) - if i_level != self.num_resolutions - 1: - self.upsample_blocks.append(Upsample(block_in, True)) - curr_res = curr_res * 2 - - # end - self.norm_out = Normalize(block_in) - self.conv_out = torch.nn.Conv2d( - block_in, out_channels, kernel_size=3, stride=1, padding=1 - ) - - def forward(self, x): - # upsampling - h = x - for k, i_level in enumerate(range(self.num_resolutions)): - for i_block in range(self.num_res_blocks + 1): - h = self.res_blocks[i_level][i_block](h, None) - if i_level != self.num_resolutions - 1: - h = self.upsample_blocks[k](h) - h = self.norm_out(h) - h = nonlinearity(h) - h = self.conv_out(h) - return h - - -class LatentRescaler(nn.Module): - def __init__(self, factor, in_channels, mid_channels, out_channels, depth=2): - super().__init__() - # residual block, interpolate, residual block - self.factor = factor - self.conv_in = nn.Conv2d( - in_channels, mid_channels, kernel_size=3, stride=1, padding=1 - ) - self.res_block1 = nn.ModuleList( 
- [ - ResnetBlock( - in_channels=mid_channels, - out_channels=mid_channels, - temb_channels=0, - dropout=0.0, - ) - for _ in range(depth) - ] - ) - self.attn = AttnBlock(mid_channels) - self.res_block2 = nn.ModuleList( - [ - ResnetBlock( - in_channels=mid_channels, - out_channels=mid_channels, - temb_channels=0, - dropout=0.0, - ) - for _ in range(depth) - ] - ) - - self.conv_out = nn.Conv2d( - mid_channels, - out_channels, - kernel_size=1, - ) - - def forward(self, x): - x = self.conv_in(x) - for block in self.res_block1: - x = block(x, None) - x = torch.nn.functional.interpolate( - x, - size=( - int(round(x.shape[2] * self.factor)), - int(round(x.shape[3] * self.factor)), - ), - ) - x = self.attn(x).contiguous() - for block in self.res_block2: - x = block(x, None) - x = self.conv_out(x) - return x - - -class MergedRescaleEncoder(nn.Module): - def __init__( - self, - in_channels, - ch, - resolution, - out_ch, - num_res_blocks, - attn_resolutions, - dropout=0.0, - resamp_with_conv=True, - ch_mult=(1, 2, 4, 8), - rescale_factor=1.0, - rescale_module_depth=1, - ): - super().__init__() - intermediate_chn = ch * ch_mult[-1] - self.encoder = Encoder( - in_channels=in_channels, - num_res_blocks=num_res_blocks, - ch=ch, - ch_mult=ch_mult, - z_channels=intermediate_chn, - double_z=False, - resolution=resolution, - attn_resolutions=attn_resolutions, - dropout=dropout, - resamp_with_conv=resamp_with_conv, - out_ch=None, - ) - self.rescaler = LatentRescaler( - factor=rescale_factor, - in_channels=intermediate_chn, - mid_channels=intermediate_chn, - out_channels=out_ch, - depth=rescale_module_depth, - ) - - def forward(self, x): - x = self.encoder(x) - x = self.rescaler(x) - return x - - -class MergedRescaleDecoder(nn.Module): - def __init__( - self, - z_channels, - out_ch, - resolution, - num_res_blocks, - attn_resolutions, - ch, - ch_mult=(1, 2, 4, 8), - dropout=0.0, - resamp_with_conv=True, - rescale_factor=1.0, - rescale_module_depth=1, - ): - super().__init__() - tmp_chn = z_channels * ch_mult[-1] - self.decoder = Decoder( - out_ch=out_ch, - z_channels=tmp_chn, - attn_resolutions=attn_resolutions, - dropout=dropout, - resamp_with_conv=resamp_with_conv, - in_channels=None, - num_res_blocks=num_res_blocks, - ch_mult=ch_mult, - resolution=resolution, - ch=ch, - ) - self.rescaler = LatentRescaler( - factor=rescale_factor, - in_channels=z_channels, - mid_channels=tmp_chn, - out_channels=tmp_chn, - depth=rescale_module_depth, - ) - - def forward(self, x): - x = self.rescaler(x) - x = self.decoder(x) - return x - - -class Upsampler(nn.Module): - def __init__(self, in_size, out_size, in_channels, out_channels, ch_mult=2): - super().__init__() - assert out_size >= in_size - num_blocks = int(np.log2(out_size // in_size)) + 1 - factor_up = 1.0 + (out_size % in_size) - print( - f"Building {self.__class__.__name__} with in_size: {in_size} --> out_size {out_size} and factor {factor_up}" - ) - self.rescaler = LatentRescaler( - factor=factor_up, - in_channels=in_channels, - mid_channels=2 * in_channels, - out_channels=in_channels, - ) - self.decoder = Decoder( - out_ch=out_channels, - resolution=out_size, - z_channels=in_channels, - num_res_blocks=2, - attn_resolutions=[], - in_channels=None, - ch=in_channels, - ch_mult=[ch_mult for _ in range(num_blocks)], - ) - - def forward(self, x): - x = self.rescaler(x) - x = self.decoder(x) - return x - - -class Resize(nn.Module): - def __init__(self, in_channels=None, learned=False, mode="bilinear"): - super().__init__() - self.with_conv = learned - self.mode = mode - if 
self.with_conv: - print( - f"Note: {self.__class__.__name} uses learned downsampling and will ignore the fixed {mode} mode" - ) - raise NotImplementedError() - assert in_channels is not None - # no asymmetric padding in torch conv, must do it ourselves - self.conv = torch.nn.Conv2d( - in_channels, in_channels, kernel_size=4, stride=2, padding=1 - ) - - def forward(self, x, scale_factor=1.0): - if scale_factor == 1.0: - return x - else: - x = torch.nn.functional.interpolate( - x, mode=self.mode, align_corners=False, scale_factor=scale_factor - ) - return x - - -class FirstStagePostProcessor(nn.Module): - def __init__( - self, - ch_mult: list, - in_channels, - pretrained_model: nn.Module = None, - reshape=False, - n_channels=None, - dropout=0.0, - pretrained_config=None, - ): - super().__init__() - if pretrained_config is None: - assert ( - pretrained_model is not None - ), 'Either "pretrained_model" or "pretrained_config" must not be None' - self.pretrained_model = pretrained_model - else: - assert ( - pretrained_config is not None - ), 'Either "pretrained_model" or "pretrained_config" must not be None' - self.instantiate_pretrained(pretrained_config) - - self.do_reshape = reshape - - if n_channels is None: - n_channels = self.pretrained_model.encoder.ch - - self.proj_norm = Normalize(in_channels, num_groups=in_channels // 2) - self.proj = nn.Conv2d( - in_channels, n_channels, kernel_size=3, stride=1, padding=1 - ) - - blocks = [] - downs = [] - ch_in = n_channels - for m in ch_mult: - blocks.append( - ResnetBlock( - in_channels=ch_in, out_channels=m * n_channels, dropout=dropout - ) - ) - ch_in = m * n_channels - downs.append(Downsample(ch_in, with_conv=False)) - - self.model = nn.ModuleList(blocks) - self.downsampler = nn.ModuleList(downs) - - def instantiate_pretrained(self, config): - model = instantiate_from_config(config) - self.pretrained_model = model.eval() - # self.pretrained_model.train = False - for param in self.pretrained_model.parameters(): - param.requires_grad = False - - @torch.no_grad() - def encode_with_pretrained(self, x): - c = self.pretrained_model.encode(x) - if isinstance(c, DiagonalGaussianDistribution): - c = c.mode() - return c - - def forward(self, x): - z_fs = self.encode_with_pretrained(x) - z = self.proj_norm(z_fs) - z = self.proj(z) - z = nonlinearity(z) - - for submodel, downmodel in zip(self.model, self.downsampler): - z = submodel(z, temb=None) - z = downmodel(z) - - if self.do_reshape: - z = rearrange(z, "b c h w -> b (h w) c") - return z diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/losses_audio/contperceptual.py b/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/losses_audio/contperceptual.py deleted file mode 100644 index 3e3018da79c5c24d85af1687f6f0875530dcc7c6..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/losses_audio/contperceptual.py +++ /dev/null @@ -1,123 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -import sys - -sys.path.insert(0, '.') # nopep8 -from ldm.modules.losses_audio.vqperceptual import * - - -class LPAPSWithDiscriminator(nn.Module): - def __init__(self, disc_start, logvar_init=0.0, kl_weight=1.0, pixelloss_weight=1.0, - disc_num_layers=3, disc_in_channels=3, disc_factor=1.0, disc_weight=1.0, - perceptual_weight=1.0, use_actnorm=False, disc_conditional=False, - disc_loss="hinge"): - - super().__init__() - assert disc_loss in ["hinge", "vanilla"] - self.kl_weight = kl_weight - 
self.pixel_weight = pixelloss_weight - self.perceptual_loss = LPAPS().eval()# LPIPS用于日常图像,而LPAPS用于梅尔谱图 - self.perceptual_weight = perceptual_weight - # output log variance - self.logvar = nn.Parameter(torch.ones(size=()) * logvar_init) - - self.discriminator = NLayerDiscriminator(input_nc=disc_in_channels, - n_layers=disc_num_layers, - use_actnorm=use_actnorm, - ).apply(weights_init) - self.discriminator_iter_start = disc_start - if disc_loss == "hinge": - self.disc_loss = hinge_d_loss - elif disc_loss == "vanilla": - self.disc_loss = vanilla_d_loss - else: - raise ValueError(f"Unknown GAN loss '{disc_loss}'.") - print(f"LPAPSWithDiscriminator running with {disc_loss} loss.") - self.disc_factor = disc_factor - self.discriminator_weight = disc_weight - self.disc_conditional = disc_conditional - - - def calculate_adaptive_weight(self, nll_loss, g_loss, last_layer=None): - if last_layer is not None: - nll_grads = torch.autograd.grad(nll_loss, last_layer, retain_graph=True)[0] - g_grads = torch.autograd.grad(g_loss, last_layer, retain_graph=True)[0] - else: - nll_grads = torch.autograd.grad(nll_loss, self.last_layer[0], retain_graph=True)[0] - g_grads = torch.autograd.grad(g_loss, self.last_layer[0], retain_graph=True)[0] - - d_weight = torch.norm(nll_grads) / (torch.norm(g_grads) + 1e-4) - d_weight = torch.clamp(d_weight, 0.0, 1e4).detach() - d_weight = d_weight * self.discriminator_weight - return d_weight - - def forward(self, inputs, reconstructions, posteriors, optimizer_idx, - global_step, last_layer=None, cond=None, split="train", weights=None): - rec_loss = torch.abs(inputs.contiguous() - reconstructions.contiguous()) - if self.perceptual_weight > 0: - p_loss = self.perceptual_loss(inputs.contiguous(), reconstructions.contiguous()) - # print(f"p_loss {p_loss}") - rec_loss = rec_loss + self.perceptual_weight * p_loss - else: - p_loss = torch.tensor([0.0]) - - nll_loss = rec_loss / torch.exp(self.logvar) + self.logvar - weighted_nll_loss = nll_loss - if weights is not None: - weighted_nll_loss = weights*nll_loss - weighted_nll_loss = torch.sum(weighted_nll_loss) / weighted_nll_loss.shape[0] - nll_loss = torch.sum(nll_loss) / nll_loss.shape[0] - kl_loss = posteriors.kl() - kl_loss = torch.sum(kl_loss) / kl_loss.shape[0] - - # now the GAN part - if optimizer_idx == 0: - # generator update - if cond is None: - assert not self.disc_conditional - logits_fake = self.discriminator(reconstructions.contiguous()) - else: - assert self.disc_conditional - logits_fake = self.discriminator(torch.cat((reconstructions.contiguous(), cond), dim=1)) - g_loss = -torch.mean(logits_fake) - - try: - d_weight = self.calculate_adaptive_weight(nll_loss, g_loss, last_layer=last_layer) - except RuntimeError: - assert not self.training - d_weight = torch.tensor(0.0) - - disc_factor = adopt_weight(self.disc_factor, global_step, threshold=self.discriminator_iter_start) - loss = weighted_nll_loss + self.kl_weight * kl_loss + d_weight * disc_factor * g_loss - - log = {"{}/total_loss".format(split): loss.clone().detach().mean(), - "{}/logvar".format(split): self.logvar.detach(), - "{}/kl_loss".format(split): kl_loss.detach().mean(), - "{}/nll_loss".format(split): nll_loss.detach().mean(), - "{}/rec_loss".format(split): rec_loss.detach().mean(), - "{}/d_weight".format(split): d_weight.detach(), - "{}/disc_factor".format(split): torch.tensor(disc_factor), - "{}/g_loss".format(split): g_loss.detach().mean(), - } - return loss, log - - if optimizer_idx == 1: - # second pass for discriminator update - if cond is None: - 
logits_real = self.discriminator(inputs.contiguous().detach()) - logits_fake = self.discriminator(reconstructions.contiguous().detach()) - else: - logits_real = self.discriminator(torch.cat((inputs.contiguous().detach(), cond), dim=1)) - logits_fake = self.discriminator(torch.cat((reconstructions.contiguous().detach(), cond), dim=1)) - - disc_factor = adopt_weight(self.disc_factor, global_step, threshold=self.discriminator_iter_start) - d_loss = disc_factor * self.disc_loss(logits_real, logits_fake) - - log = {"{}/disc_loss".format(split): d_loss.clone().detach().mean(), - "{}/logits_real".format(split): logits_real.detach().mean(), - "{}/logits_fake".format(split): logits_fake.detach().mean() - } - return d_loss, log - - diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_speech/utils/text/text_encoder.py b/spaces/AIGC-Audio/AudioGPT/text_to_speech/utils/text/text_encoder.py deleted file mode 100644 index 5f3fbd457f75a96b542f53668a9f2d289a6e674e..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/text_to_speech/utils/text/text_encoder.py +++ /dev/null @@ -1,272 +0,0 @@ -import json -import re -import six -from six.moves import range # pylint: disable=redefined-builtin - -PAD = "" -EOS = "" -UNK = "" -SEG = "|" -PUNCS = '!,.?;:' -RESERVED_TOKENS = [PAD, EOS, UNK] -NUM_RESERVED_TOKENS = len(RESERVED_TOKENS) -PAD_ID = RESERVED_TOKENS.index(PAD) # Normally 0 -EOS_ID = RESERVED_TOKENS.index(EOS) # Normally 1 -UNK_ID = RESERVED_TOKENS.index(UNK) # Normally 2 - -if six.PY2: - RESERVED_TOKENS_BYTES = RESERVED_TOKENS -else: - RESERVED_TOKENS_BYTES = [bytes(PAD, "ascii"), bytes(EOS, "ascii")] - -# Regular expression for unescaping token strings. -# '\u' is converted to '_' -# '\\' is converted to '\' -# '\213;' is converted to unichr(213) -_UNESCAPE_REGEX = re.compile(r"\\u|\\\\|\\([0-9]+);") -_ESCAPE_CHARS = set(u"\\_u;0123456789") - - -def strip_ids(ids, ids_to_strip): - """Strip ids_to_strip from the end ids.""" - ids = list(ids) - while ids and ids[-1] in ids_to_strip: - ids.pop() - return ids - - -class TextEncoder(object): - """Base class for converting from ints to/from human readable strings.""" - - def __init__(self, num_reserved_ids=NUM_RESERVED_TOKENS): - self._num_reserved_ids = num_reserved_ids - - @property - def num_reserved_ids(self): - return self._num_reserved_ids - - def encode(self, s): - """Transform a human-readable string into a sequence of int ids. - - The ids should be in the range [num_reserved_ids, vocab_size). Ids [0, - num_reserved_ids) are reserved. - - EOS is not appended. - - Args: - s: human-readable string to be converted. - - Returns: - ids: list of integers - """ - return [int(w) + self._num_reserved_ids for w in s.split()] - - def decode(self, ids, strip_extraneous=False): - """Transform a sequence of int ids into a human-readable string. - - EOS is not expected in ids. - - Args: - ids: list of integers to be converted. - strip_extraneous: bool, whether to strip off extraneous tokens - (EOS and PAD). - - Returns: - s: human-readable string. - """ - if strip_extraneous: - ids = strip_ids(ids, list(range(self._num_reserved_ids or 0))) - return " ".join(self.decode_list(ids)) - - def decode_list(self, ids): - """Transform a sequence of int ids into a their string versions. - - This method supports transforming individual input/output ids to their - string versions so that sequence to/from text conversions can be visualized - in a human readable format. - - Args: - ids: list of integers to be converted. 
- - Returns: - strs: list of human-readable string. - """ - decoded_ids = [] - for id_ in ids: - if 0 <= id_ < self._num_reserved_ids: - decoded_ids.append(RESERVED_TOKENS[int(id_)]) - else: - decoded_ids.append(id_ - self._num_reserved_ids) - return [str(d) for d in decoded_ids] - - @property - def vocab_size(self): - raise NotImplementedError() - - -class TokenTextEncoder(TextEncoder): - """Encoder based on a user-supplied vocabulary (file or list).""" - - def __init__(self, - vocab_filename, - reverse=False, - vocab_list=None, - replace_oov=None, - num_reserved_ids=NUM_RESERVED_TOKENS): - """Initialize from a file or list, one token per line. - - Handling of reserved tokens works as follows: - - When initializing from a list, we add reserved tokens to the vocab. - - When initializing from a file, we do not add reserved tokens to the vocab. - - When saving vocab files, we save reserved tokens to the file. - - Args: - vocab_filename: If not None, the full filename to read vocab from. If this - is not None, then vocab_list should be None. - reverse: Boolean indicating if tokens should be reversed during encoding - and decoding. - vocab_list: If not None, a list of elements of the vocabulary. If this is - not None, then vocab_filename should be None. - replace_oov: If not None, every out-of-vocabulary token seen when - encoding will be replaced by this string (which must be in vocab). - num_reserved_ids: Number of IDs to save for reserved tokens like . - """ - super(TokenTextEncoder, self).__init__(num_reserved_ids=num_reserved_ids) - self._reverse = reverse - self._replace_oov = replace_oov - if vocab_filename: - self._init_vocab_from_file(vocab_filename) - else: - assert vocab_list is not None - self._init_vocab_from_list(vocab_list) - self.pad_index = self.token_to_id[PAD] - self.eos_index = self.token_to_id[EOS] - self.unk_index = self.token_to_id[UNK] - self.seg_index = self.token_to_id[SEG] if SEG in self.token_to_id else self.eos_index - - def encode(self, s): - """Converts a space-separated string of tokens to a list of ids.""" - sentence = s - tokens = sentence.strip().split() - if self._replace_oov is not None: - tokens = [t if t in self.token_to_id else self._replace_oov - for t in tokens] - ret = [self.token_to_id[tok] for tok in tokens] - return ret[::-1] if self._reverse else ret - - def decode(self, ids, strip_eos=False, strip_padding=False): - if strip_padding and self.pad() in list(ids): - pad_pos = list(ids).index(self.pad()) - ids = ids[:pad_pos] - if strip_eos and self.eos() in list(ids): - eos_pos = list(ids).index(self.eos()) - ids = ids[:eos_pos] - return " ".join(self.decode_list(ids)) - - def decode_list(self, ids): - seq = reversed(ids) if self._reverse else ids - return [self._safe_id_to_token(i) for i in seq] - - @property - def vocab_size(self): - return len(self.id_to_token) - - def __len__(self): - return self.vocab_size - - def _safe_id_to_token(self, idx): - return self.id_to_token.get(idx, "ID_%d" % idx) - - def _init_vocab_from_file(self, filename): - """Load vocab from a file. - - Args: - filename: The file to load vocabulary from. - """ - with open(filename) as f: - tokens = [token.strip() for token in f.readlines()] - - def token_gen(): - for token in tokens: - yield token - - self._init_vocab(token_gen(), add_reserved_tokens=False) - - def _init_vocab_from_list(self, vocab_list): - """Initialize tokens from a list of tokens. - - It is ok if reserved tokens appear in the vocab list. They will be - removed. 
The set of tokens in vocab_list should be unique. - - Args: - vocab_list: A list of tokens. - """ - - def token_gen(): - for token in vocab_list: - if token not in RESERVED_TOKENS: - yield token - - self._init_vocab(token_gen()) - - def _init_vocab(self, token_generator, add_reserved_tokens=True): - """Initialize vocabulary with tokens from token_generator.""" - - self.id_to_token = {} - non_reserved_start_index = 0 - - if add_reserved_tokens: - self.id_to_token.update(enumerate(RESERVED_TOKENS)) - non_reserved_start_index = len(RESERVED_TOKENS) - - self.id_to_token.update( - enumerate(token_generator, start=non_reserved_start_index)) - - # _token_to_id is the reverse of _id_to_token - self.token_to_id = dict((v, k) for k, v in six.iteritems(self.id_to_token)) - - def pad(self): - return self.pad_index - - def eos(self): - return self.eos_index - - def unk(self): - return self.unk_index - - def seg(self): - return self.seg_index - - def store_to_file(self, filename): - """Write vocab file to disk. - - Vocab files have one token per line. The file ends in a newline. Reserved - tokens are written to the vocab file as well. - - Args: - filename: Full path of the file to store the vocab to. - """ - with open(filename, "w") as f: - for i in range(len(self.id_to_token)): - f.write(self.id_to_token[i] + "\n") - - def sil_phonemes(self): - return [p for p in self.id_to_token.values() if is_sil_phoneme(p)] - - # add by zhenhiye - def add_new_token(self, new_token): - assert new_token not in list(self.id_to_token.values()) - num_existing_tokens = len(self.id_to_token) - self.id_to_token[num_existing_tokens] = new_token - self.token_to_id = dict((v, k) for k, v in six.iteritems(self.id_to_token)) - print(f"Added {new_token} into the token dict!") - -def build_token_encoder(token_list_file): - token_list = json.load(open(token_list_file)) - return TokenTextEncoder(None, vocab_list=token_list, replace_oov='') - - -def is_sil_phoneme(p): - return p == '' or not p[0].isalpha() - # for aishell_notone_sing - # return p == '' or not p[0].isalpha() or p == 'breathe' diff --git a/spaces/AIGText/GlyphControl/ldm/models/diffusion/dpm_solver/__init__.py b/spaces/AIGText/GlyphControl/ldm/models/diffusion/dpm_solver/__init__.py deleted file mode 100644 index 7427f38c07530afbab79154ea8aaf88c4bf70a08..0000000000000000000000000000000000000000 --- a/spaces/AIGText/GlyphControl/ldm/models/diffusion/dpm_solver/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .sampler import DPMSolverSampler \ No newline at end of file diff --git a/spaces/ALSv/FSW/roop/capturer.py b/spaces/ALSv/FSW/roop/capturer.py deleted file mode 100644 index 515fc8e54a9a3709ceee4c340f33e0b907416073..0000000000000000000000000000000000000000 --- a/spaces/ALSv/FSW/roop/capturer.py +++ /dev/null @@ -1,22 +0,0 @@ -from typing import Optional -import cv2 - -from roop.typing import Frame - - -def get_video_frame(video_path: str, frame_number: int = 0) -> Optional[Frame]: - capture = cv2.VideoCapture(video_path) - frame_total = capture.get(cv2.CAP_PROP_FRAME_COUNT) - capture.set(cv2.CAP_PROP_POS_FRAMES, min(frame_total, frame_number - 1)) - has_frame, frame = capture.read() - capture.release() - if has_frame: - return frame - return None - - -def get_video_frame_total(video_path: str) -> int: - capture = cv2.VideoCapture(video_path) - video_frame_total = int(capture.get(cv2.CAP_PROP_FRAME_COUNT)) - capture.release() - return video_frame_total diff --git a/spaces/AchyuthGamer/Free-Accounts-Generator/minecraft/css/style.css 
b/spaces/AchyuthGamer/Free-Accounts-Generator/minecraft/css/style.css deleted file mode 100644 index 6b1c841ced2c7527831adaf70ace34e6a5fd7536..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/Free-Accounts-Generator/minecraft/css/style.css +++ /dev/null @@ -1,80 +0,0 @@ -body { - font-family: Verdana, Geneva, sans-serif; - font-size: 1.2em; - margin: 2%; - max-width: 100%; - padding: 80px 30px; - line-height: 1.65em; - background-image: url('https://huggingface.co/spaces/AchyuthGamer/Free-Accounts-Generator/resolve/main/img/minecraft.jpg'); - color: #fff; - font-weight: 300; - -} - -h1 { - text-align: center; - margin: 19% 0 5% 0; - font-size: 60px; - text-shadow: 0 0 28px #FF0000, 0 0 28px #008000; -} - -h4 { - text-align: center; - margin: 50% 0 5% 0; -} - -#wordbox { - /*opacity: 0;*/ - margin: 30px auto 0; - display: block; - width: 80%; - height: 50px; - font-size: 25px; - text-align: center; - background: #fff; - border-radius: 6px; - color: #black; - transition: 1s linear; -} - -#button { - -webkit-box-sizing: border-box; - -moz-box-sizing: border-box; - box-sizing: border-box; - background: #0b7fba; - border: 0; - color: #fff; - font-size: 20px; - padding: 1em 2em; - cursor: pointer; - margin: 0 auto 80px; - display: block; - text-align: center; - border-radius: 6px; - font-weight: bold; - transition: all 0.3s ease; - background-image: linear-gradient(to right, #25aae1, #4481eb, #04befe, #3f86ed); - box-shadow: 0 4px 15px 0 rgba(65, 132, 234, 0.75); -} - -#button:hover { - background-position: 100% 0; - -moz-transition: all 0.4s ease-in-out; - -o-transition: all 0.4s ease-in-out; - -webkit-transition: all 0.4s ease-in-out; - transition: all 0.4s ease-in-out; - transform: scale(1.2); - cursor: pointer; } - -#button:focus { - outline: none; - } - - - -span { - position: bottom; - top: 0; - left: 0; - margin: 40px; - } \ No newline at end of file diff --git a/spaces/Adapting/TrendFlow/mypages/__init__.py b/spaces/Adapting/TrendFlow/mypages/__init__.py deleted file mode 100644 index 5aa3390049b46b6cf6b3d1f6281a8ccc48485903..0000000000000000000000000000000000000000 --- a/spaces/Adapting/TrendFlow/mypages/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -from .welcome import welcome -from .home import home \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/agentverse/memory_manipulator/reflection.py b/spaces/AgentVerse/agentVerse/agentverse/memory_manipulator/reflection.py deleted file mode 100644 index 6132ca24e1d6e8733d7a2839286da803f758f920..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/agentverse/memory_manipulator/reflection.py +++ /dev/null @@ -1,330 +0,0 @@ -from __future__ import annotations -from typing import List, Union, Optional, Any, TYPE_CHECKING -from collections import defaultdict - -from pydantic import Field -import numpy as np -from datetime import datetime as dt - -import re - -from agentverse.llms.openai import get_embedding -from sklearn.metrics.pairwise import cosine_similarity - -from agentverse.message import Message -from agentverse.memory import BaseMemory - -from agentverse.logging import logger - -from . 
import memory_manipulator_registry -from .base import BaseMemoryManipulator - -if TYPE_CHECKING: - from agentverse.memory import VectorStoreMemory - from agentverse.agents.base import BaseAgent - - -IMPORTANCE_PROMPT = """On the scale of 1 to 10, where 1 is purely mundane \ -(e.g., brushing teeth, making bed) and 10 is \ -extremely poignant (e.g., a break up, college \ -acceptance), rate the likely poignancy of the \ -following piece of memory. \ -If you think it's too hard to rate it, you can give an inaccurate assessment. \ -The content or people mentioned is not real. You can hypothesis any reasonable context. \ -Please strictly only output one number. \ -Memory: {} \ -Rating: """ -IMMEDIACY_PROMPT = """On the scale of 1 to 10, where 1 is requiring no short time attention\ -(e.g., a bed is in the room) and 10 is \ -needing quick attention or immediate response(e.g., being required a reply by others), rate the likely immediacy of the \ -following statement. \ -If you think it's too hard to rate it, you can give an inaccurate assessment. \ -The content or people mentioned is not real. You can hypothesis any reasonable context. \ -Please strictly only output one number. \ -Memory: {} \ -Rating: """ -QUESTION_PROMPT = """Given only the information above, what are 3 most salient \ -high-level questions we can answer about the subjects in the statements?""" - -INSIGHT_PROMPT = """What at most 5 high-level insights can you infer from \ -the above statements? Only output insights with high confidence. -example format: insight (because of 1, 5, 3)""" - - -@memory_manipulator_registry.register("reflection") -class Reflection(BaseMemoryManipulator): - memory: VectorStoreMemory = None - agent: BaseAgent = None - - reflection: str = "" - - importance_threshold: int = 10 - accumulated_importance: int = 0 - - memory2importance: dict = {} - memory2immediacy: dict = {} - memory2time: defaultdict = Field(default=defaultdict(dict)) - - # TODO newly added func from generative agents - - def manipulate_memory(self) -> None: - # reflect here - if self.should_reflect(): - logger.debug( - f"Agent {self.agent.name} is now doing reflection since accumulated_importance={self.accumulated_importance} < reflection_threshold={self.importance_threshold}" - ) - self.reflection = self.reflect() - return self.reflection - else: - logger.debug( - f"Agent {self.agent.name} doesn't reflect since accumulated_importance={self.accumulated_importance} < reflection_threshold={self.importance_threshold}" - ) - - return "" - - def get_accumulated_importance(self): - accumulated_importance = 0 - - for memory in self.memory.messages: - if ( - memory.content not in self.memory2importance - or memory.content not in self.memory2immediacy - ): - self.memory2importance[memory.content] = self.get_importance( - memory.content - ) - self.memory2immediacy[memory.content] = self.get_immediacy( - memory.content - ) - - for score in self.memory2importance.values(): - accumulated_importance += score - - self.accumulated_importance = accumulated_importance - - return accumulated_importance - - def should_reflect(self): - if self.get_accumulated_importance() >= self.importance_threshold: - # double the importance_threshold - self.importance_threshold *= 2 - return True - else: - return False - - def get_questions(self, texts): - prompt = "\n".join(texts) + "\n" + QUESTION_PROMPT - result = self.agent.llm.generate_response(prompt) - result = result.content - questions = [q for q in result.split("\n") if len(q.strip()) > 0] - questions = 
questions[:3] - return questions - - def get_insights(self, statements): - prompt = "" - for i, st in enumerate(statements): - prompt += str(i + 1) + ". " + st + "\n" - prompt += INSIGHT_PROMPT - result = self.agent.llm.generate_response(prompt) - result = result.content - insights = [isg for isg in result.split("\n") if len(isg.strip()) > 0][:5] - insights = [".".join(i.split(".")[1:]) for i in insights] - # remove insight pointers for now - insights = [i.split("(")[0].strip() for i in insights] - return insights - - def get_importance(self, content: str): - """ - Exploit GPT to evaluate the importance of this memory - """ - prompt = IMPORTANCE_PROMPT.format(content) - result = self.memory.llm.generate_response(prompt) - - try: - score = int(re.findall(r"\s*(\d+)\s*", result.content)[0]) - except Exception as e: - logger.warn( - f"Found error {e} Abnormal result of importance rating '{result}'. Setting default value" - ) - score = 0 - return score - - def get_immediacy(self, content: str): - """ - Exploit GPT to evaluate the immediacy of this memory - """ - prompt = IMMEDIACY_PROMPT.format(content) - result = self.memory.llm.generate_response(prompt) - try: - score = int(re.findall(r"\s*(\d+)\s*", result.content)[0]) - except Exception as e: - logger.warn( - f"Found error {e} Abnormal result of importance rating '{result}'. Setting default value" - ) - score = 0 - return score - - def query_similarity( - self, - text: Union[str, List[str]], - k: int, - memory_bank: List, - current_time=dt.now(), - nms_threshold=0.99, - ) -> List[str]: - """ - get top-k entry based on recency, relevance, importance, immediacy - The query result can be Short-term or Long-term queried result. - formula is - `score= sim(q,v) *max(LTM_score, STM_score)` - `STM_score=time_score(createTime)*immediacy` - `LTM_score=time_score(accessTime)*importance` - time score is exponential decay weight. stm decays faster. - - The query supports querying based on multiple texts and only gives non-overlapping results - If nms_threshold is not 1, nms mechanism if activated. By default, - use soft nms with modified iou base(score starts to decay iff cos sim is higher than this value, - and decay weight at this value if 0. rather than 1-threshold). 
- - Args: - text: str - k: int - memory_bank: List - current_time: dt.now - nms_threshold: float = 0.99 - - - Returns: List[str] - """ - assert len(text) > 0 - texts = [text] if isinstance(text, str) else text - maximum_score = None - for text in texts: - embedding = get_embedding(text) - score = [] - for memory in memory_bank: - if memory.content not in self.memory2time: - self.memory2time[memory.content]["last_access_time"] = dt.now() - self.memory2time[memory.content]["create_time"] = dt.now() - - last_access_time_diff = ( - current_time - self.memory2time[memory.content]["last_access_time"] - ).total_seconds() // 3600 - recency = np.power( - 0.99, last_access_time_diff - ) # TODO: review the metaparameter 0.99 - - create_time_diff = ( - current_time - self.memory2time[memory.content]["create_time"] - ).total_seconds() // 60 - instancy = np.power( - 0.90, create_time_diff - ) # TODO: review the metaparameter 0.90 - - relevance = cosine_similarity( - np.array(embedding).reshape(1, -1), - np.array(self.memory.memory2embedding[memory.content]).reshape( - 1, -1 - ), - )[0][0] - - if ( - memory.content not in self.memory2importance - or memory.content not in self.memory2immediacy - ): - self.memory2importance[memory.content] = self.get_importance( - memory.content - ) - self.memory2immediacy[memory.content] = self.get_immediacy( - memory.content - ) - - importance = self.memory2importance[memory.content] / 10 - immediacy = self.memory2immediacy[memory.content] / 10 - - ltm_w = recency * importance - stm_w = instancy * immediacy - - score.append(relevance * np.maximum(ltm_w, stm_w)) - - score = np.array(score) - - if maximum_score is not None: - maximum_score = np.maximum(score, maximum_score) - else: - maximum_score = score - - if nms_threshold == 1.0: - # no nms is triggered - top_k_indices = np.argsort(maximum_score)[-k:][::-1] - else: - # TODO: soft-nms - assert 0 <= nms_threshold < 1 - top_k_indices = [] - while len(top_k_indices) < min(k, len(memory_bank)): - top_index = np.argmax(maximum_score) - top_k_indices.append(top_index) - maximum_score[top_index] = -1 # anything to prevent being chosen again - top_embedding = self.memory.memory2embedding[ - memory_bank[top_index].content - ] - cos_sim = cosine_similarity( - np.array(top_embedding).reshape(1, -1), - np.array( - [ - self.memory.memory2embedding[memory.content] - for memory in memory_bank - ] - ), - )[0] - score_weight = np.ones_like(maximum_score) - score_weight[cos_sim >= nms_threshold] -= ( - cos_sim[cos_sim >= nms_threshold] - nms_threshold - ) / (1 - nms_threshold) - maximum_score = maximum_score * score_weight - - # access them and refresh the access time - for i in top_k_indices: - self.memory2time[memory_bank[i].content]["last_access_time"] = current_time - # sort them in time periods. if the data tag is 'observation', ad time info output. 
- top_k_indices = sorted( - top_k_indices, - key=lambda x: self.memory2time[memory_bank[x].content]["create_time"], - ) - query_results = [] - for i in top_k_indices: - query_result = memory_bank[i].content - query_results.append(query_result) - - return query_results - - def get_memories_of_interest_oneself(self): - memories_of_interest = [] - for memory in self.memory.messages[-100:]: - if memory.sender == self.agent.name: - memories_of_interest.append(memory) - return memories_of_interest - - def reflect(self): - """ - initiate a reflection that inserts high level knowledge to memory - """ - memories_of_interest = self.get_memories_of_interest_oneself() - questions = self.get_questions([m.content for m in memories_of_interest]) - statements = self.query_similarity( - questions, len(questions) * 10, memories_of_interest - ) - insights = self.get_insights(statements) - logger.info(self.agent.name + f" Insights: {insights}") - for insight in insights: - # convert insight to messages - # TODO currently only oneself can see its own reflection - insight_message = Message( - content=insight, sender=self.agent.name, receiver={self.agent.name} - ) - self.memory.add_message([insight_message]) - reflection = "\n".join(insights) - return reflection - - def reset(self) -> None: - self.reflection = "" diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/csvscenario-plugin.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/csvscenario-plugin.js deleted file mode 100644 index b3a2b87e2e7a8e894c2d8caba0f69d6c8ac8a246..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/csvscenario-plugin.js +++ /dev/null @@ -1,18 +0,0 @@ -import CSVScenario from './csvscenario.js'; - -class CSVScenarioPlugin extends Phaser.Plugins.BasePlugin { - constructor(pluginManager) { - super(pluginManager); - } - - start() { - var eventEmitter = this.game.events; - eventEmitter.on('destroy', this.destroy, this); - } - - add(scene, config) { - return new CSVScenario(scene, config); - } -} - -export default CSVScenarioPlugin; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/dialog-quest/DataMethods.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/dialog-quest/DataMethods.js deleted file mode 100644 index 48e81fbd79156a8b044683ad55a538fe3bf35b2a..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/dialog-quest/DataMethods.js +++ /dev/null @@ -1,25 +0,0 @@ -export default { - getData(key, defaultValue) { - return this.questionManager.getData(key, defaultValue); - }, - - setData(key, value) { - this.questionManager.setData(key, value); - return this; - }, - - incData(key, inc, defaultValue) { - this.questionManager.incData(key, inc, defaultValue); - return this; - }, - - mulData(key, mul, defaultValue) { - this.questionManager.mulData(key, mul, defaultValue); - return this; - }, - - clearData() { - this.questionManager.clearData(); - return this; - }, -}; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/checkbox/Factory.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/checkbox/Factory.js deleted file mode 100644 index f2e16442a0a20e5034c0daf713e70ee1fd43dcd3..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/checkbox/Factory.js +++ /dev/null @@ 
-1,13 +0,0 @@ -import Checkbox from './Checkbox.js'; -import ObjectFactory from '../ObjectFactory.js'; -import SetValue from '../../../plugins/utils/object/SetValue.js'; - -ObjectFactory.register('checkbox', function (x, y, width, height, color, config) { - var gameObject = new Checkbox(this.scene, x, y, width, height, color, config); - this.scene.add.existing(gameObject); - return gameObject; -}); - -SetValue(window, 'RexPlugins.UI.Checkbox', Checkbox); - -export default Checkbox; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/gridtable/input/OverCell.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/gridtable/input/OverCell.js deleted file mode 100644 index bc35edf6422c5bfb3a95316480eb14126a62ceb3..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/gridtable/input/OverCell.js +++ /dev/null @@ -1,30 +0,0 @@ -import EmitCellEvent from './EmitCellEvent.js'; - -var OverCell = function (table, tableConfig) { - table - .on('pointermove', OnMove, this) - .on('pointerover', OnMove, this) - .on('pointerout', OnOut, this) // pointer-up is included too -} - -var OnMove = function (pointer, localX, localY, event) { - var table = this.childrenMap.child; - var cellIndex = table.pointToCellIndex(pointer.worldX, pointer.worldY); - if (cellIndex === table.input.lastOverCellIndex) { - return; - } - - var preCellIndex = table.input.lastOverCellIndex; - table.input.lastOverCellIndex = cellIndex; - EmitCellEvent(this.eventEmitter, 'cell.out', table, preCellIndex, undefined, pointer, event); - EmitCellEvent(this.eventEmitter, 'cell.over', table, cellIndex, undefined, pointer, event); -} - -var OnOut = function (pointer, event) { - var table = this.childrenMap.child; - var cellIndex = table.input.lastOverCellIndex; - table.input.lastOverCellIndex = undefined; - EmitCellEvent(this.eventEmitter, 'cell.out', table, cellIndex, undefined, pointer, event); -} - -export default OverCell; \ No newline at end of file diff --git a/spaces/AlexZou/Deploy_Restoration/net/IntmdSequential.py b/spaces/AlexZou/Deploy_Restoration/net/IntmdSequential.py deleted file mode 100644 index b31e9e9524a974dcaa163bde24562c645fc874e7..0000000000000000000000000000000000000000 --- a/spaces/AlexZou/Deploy_Restoration/net/IntmdSequential.py +++ /dev/null @@ -1,19 +0,0 @@ -import torch.nn as nn - - -class IntermediateSequential(nn.Sequential): - def __init__(self, *args, return_intermediate=False): - super().__init__(*args) - self.return_intermediate = return_intermediate - - def forward(self, input): - if not self.return_intermediate: - return super().forward(input) - - intermediate_outputs = {} - output = input - for name, module in self.named_children(): - output = intermediate_outputs[name] = module(output) - - return output, intermediate_outputs - diff --git a/spaces/Alichuan/VITS-Umamusume-voice-synthesizer/modules.py b/spaces/Alichuan/VITS-Umamusume-voice-synthesizer/modules.py deleted file mode 100644 index f5af1fd9a20dc03707889f360a39bb4b784a6df3..0000000000000000000000000000000000000000 --- a/spaces/Alichuan/VITS-Umamusume-voice-synthesizer/modules.py +++ /dev/null @@ -1,387 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d -from torch.nn.utils import weight_norm, remove_weight_norm - -import commons -from commons import init_weights, get_padding -from transforms import piecewise_rational_quadratic_transform - 
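The `IntermediateSequential` wrapper defined above behaves exactly like `nn.Sequential` unless `return_intermediate=True`, in which case it also returns a dict of per-layer outputs keyed by child-module name. A small usage sketch, not part of any of the original repositories; the layer sizes are arbitrary:

```python
import torch
import torch.nn as nn

# Assumes IntermediateSequential from the module above is importable.
net = IntermediateSequential(
    nn.Linear(8, 16),
    nn.ReLU(),
    nn.Linear(16, 4),
    return_intermediate=True,
)

x = torch.randn(2, 8)
out, intermediates = net(x)
print(out.shape)              # torch.Size([2, 4])
print(sorted(intermediates))  # ['0', '1', '2'], the default nn.Sequential child names
```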
- -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." - - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential( - nn.ReLU(), - nn.Dropout(p_dropout)) - for _ in range(n_layers-1): - self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size ** i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size, - groups=channels, dilation=dilation, padding=padding - )) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0): - super(WN, self).__init__() - assert(kernel_size % 2 == 1) - self.hidden_channels =hidden_channels - self.kernel_size = kernel_size, - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight') - - for i in range(n_layers): - 
dilation = dilation_rate ** i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size, - dilation=dilation, padding=padding) - in_layer = torch.nn.utils.weight_norm(in_layer, name='weight') - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight') - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply( - x_in, - g_l, - n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:,:self.hidden_channels,:] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:,self.hidden_channels:,:] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 
1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels,1)) - self.logs = nn.Parameter(torch.zeros(channels,1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1,2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels]*2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1,2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - -class ConvFlow(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.) 
- self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] - - unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_derivatives = h[..., 2 * self.num_bins:] - - x1, logabsdet = piecewise_rational_quadratic_transform(x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails='linear', - tail_bound=self.tail_bound - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1,2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/Alpaca233/SadTalker/src/audio2exp_models/audio2exp.py b/spaces/Alpaca233/SadTalker/src/audio2exp_models/audio2exp.py deleted file mode 100644 index 9e79a929560592687a505e13188796e2b0ca8772..0000000000000000000000000000000000000000 --- a/spaces/Alpaca233/SadTalker/src/audio2exp_models/audio2exp.py +++ /dev/null @@ -1,41 +0,0 @@ -from tqdm import tqdm -import torch -from torch import nn - - -class Audio2Exp(nn.Module): - def __init__(self, netG, cfg, device, prepare_training_loss=False): - super(Audio2Exp, self).__init__() - self.cfg = cfg - self.device = device - self.netG = netG.to(device) - - def test(self, batch): - - mel_input = batch['indiv_mels'] # bs T 1 80 16 - bs = mel_input.shape[0] - T = mel_input.shape[1] - - exp_coeff_pred = [] - - for i in tqdm(range(0, T, 10),'audio2exp:'): # every 10 frames - - current_mel_input = mel_input[:,i:i+10] - - #ref = batch['ref'][:, :, :64].repeat((1,current_mel_input.shape[1],1)) #bs T 64 - ref = batch['ref'][:, :, :64][:, i:i+10] - ratio = batch['ratio_gt'][:, i:i+10] #bs T - - audiox = current_mel_input.view(-1, 1, 80, 16) # bs*T 1 80 16 - - curr_exp_coeff_pred = self.netG(audiox, ref, ratio) # bs T 64 - - exp_coeff_pred += [curr_exp_coeff_pred] - - # BS x T x 64 - results_dict = { - 'exp_coeff_pred': torch.cat(exp_coeff_pred, axis=1) - } - return results_dict - - diff --git a/spaces/Alpaca233/SadTalker/src/face3d/models/arcface_torch/docs/eval.md b/spaces/Alpaca233/SadTalker/src/face3d/models/arcface_torch/docs/eval.md deleted file mode 100644 index dd1d9e257367b6422680966198646c45e5a2671d..0000000000000000000000000000000000000000 --- a/spaces/Alpaca233/SadTalker/src/face3d/models/arcface_torch/docs/eval.md +++ /dev/null @@ -1,31 +0,0 @@ -## Eval on ICCV2021-MFR - -coming soon. - - -## Eval IJBC -You can eval ijbc with pytorch or onnx. - - -1. Eval IJBC With Onnx -```shell -CUDA_VISIBLE_DEVICES=0 python onnx_ijbc.py --model-root ms1mv3_arcface_r50 --image-path IJB_release/IJBC --result-dir ms1mv3_arcface_r50 -``` - -2. 
Eval IJBC With Pytorch -```shell -CUDA_VISIBLE_DEVICES=0,1 python eval_ijbc.py \ ---model-prefix ms1mv3_arcface_r50/backbone.pth \ ---image-path IJB_release/IJBC \ ---result-dir ms1mv3_arcface_r50 \ ---batch-size 128 \ ---job ms1mv3_arcface_r50 \ ---target IJBC \ ---network iresnet50 -``` - -## Inference - -```shell -python inference.py --weight ms1mv3_arcface_r50/backbone.pth --network r50 -``` diff --git a/spaces/Amjadd/BookGPT/README.md b/spaces/Amjadd/BookGPT/README.md deleted file mode 100644 index 1560a63d889fdce9ab68e7d198a416dc7f3540b3..0000000000000000000000000000000000000000 --- a/spaces/Amjadd/BookGPT/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: BookGPT -emoji: 😻 -colorFrom: indigo -colorTo: purple -sdk: gradio -sdk_version: 3.16.2 -app_file: app.py -pinned: false -duplicated_from: pritish/BookGPT ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/models/unet3d-cond.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/models/unet3d-cond.md deleted file mode 100644 index 83dbb514c8dd2b92035d9a57b925b3bad9a08fec..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/models/unet3d-cond.md +++ /dev/null @@ -1,13 +0,0 @@ -# UNet3DConditionModel - -The [UNet](https://huggingface.co/papers/1505.04597) model was originally introduced by Ronneberger et al for biomedical image segmentation, but it is also commonly used in 🤗 Diffusers because it outputs images that are the same size as the input. It is one of the most important components of a diffusion system because it facilitates the actual diffusion process. There are several variants of the UNet model in 🤗 Diffusers, depending on it's number of dimensions and whether it is a conditional model or not. This is a 3D UNet conditional model. - -The abstract from the paper is: - -*There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. 
The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net.* - -## UNet3DConditionModel -[[autodoc]] UNet3DConditionModel - -## UNet3DConditionOutput -[[autodoc]] models.unet_3d_condition.UNet3DConditionOutput \ No newline at end of file diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/community/one_step_unet.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/community/one_step_unet.py deleted file mode 100644 index 7d34bfd83191d63483bc562cb54cc887660cdffa..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/community/one_step_unet.py +++ /dev/null @@ -1,24 +0,0 @@ -#!/usr/bin/env python3 -import torch - -from diffusers import DiffusionPipeline - - -class UnetSchedulerOneForwardPipeline(DiffusionPipeline): - def __init__(self, unet, scheduler): - super().__init__() - - self.register_modules(unet=unet, scheduler=scheduler) - - def __call__(self): - image = torch.randn( - (1, self.unet.config.in_channels, self.unet.config.sample_size, self.unet.config.sample_size), - ) - timestep = 1 - - model_output = self.unet(image, timestep).sample - scheduler_output = self.scheduler.step(model_output, timestep, image).prev_sample - - result = scheduler_output - scheduler_output + torch.ones_like(scheduler_output) - - return result diff --git a/spaces/Andy1621/uniformer_image_detection/configs/carafe/faster_rcnn_r50_fpn_carafe_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/carafe/faster_rcnn_r50_fpn_carafe_1x_coco.py deleted file mode 100644 index dedac3f46b4710d16a8bc66f00663e379b2ebdc7..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/carafe/faster_rcnn_r50_fpn_carafe_1x_coco.py +++ /dev/null @@ -1,50 +0,0 @@ -_base_ = '../faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py' -model = dict( - neck=dict( - type='FPN_CARAFE', - in_channels=[256, 512, 1024, 2048], - out_channels=256, - num_outs=5, - start_level=0, - end_level=-1, - norm_cfg=None, - act_cfg=None, - order=('conv', 'norm', 'act'), - upsample_cfg=dict( - type='carafe', - up_kernel=5, - up_group=1, - encoder_kernel=3, - encoder_dilation=1, - compressed_channels=64))) -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True), - dict(type='Resize', img_scale=(1333, 800), keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=64), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1333, 800), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=64), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - train=dict(pipeline=train_pipeline), - val=dict(pipeline=test_pipeline), - test=dict(pipeline=test_pipeline)) diff --git a/spaces/Andy1621/uniformer_image_detection/configs/pisa/pisa_ssd300_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/pisa/pisa_ssd300_coco.py deleted file mode 100644 index 
b5cc006477eacaa9ab40d463312dc2156a59d634..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/pisa/pisa_ssd300_coco.py +++ /dev/null @@ -1,8 +0,0 @@ -_base_ = '../ssd/ssd300_coco.py' - -model = dict( - bbox_head=dict(type='PISASSDHead'), - train_cfg=dict(isr=dict(k=2., bias=0.), carl=dict(k=1., bias=0.2))) - -optimizer_config = dict( - _delete_=True, grad_clip=dict(max_norm=35, norm_type=2)) diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/models/detectors/reppoints_detector.py b/spaces/Andy1621/uniformer_image_detection/mmdet/models/detectors/reppoints_detector.py deleted file mode 100644 index a5f6be31e14488e4b8a006b7142a82c872388d82..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/mmdet/models/detectors/reppoints_detector.py +++ /dev/null @@ -1,22 +0,0 @@ -from ..builder import DETECTORS -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class RepPointsDetector(SingleStageDetector): - """RepPoints: Point Set Representation for Object Detection. - - This detector is the implementation of: - - RepPoints detector (https://arxiv.org/pdf/1904.11490) - """ - - def __init__(self, - backbone, - neck, - bbox_head, - train_cfg=None, - test_cfg=None, - pretrained=None): - super(RepPointsDetector, - self).__init__(backbone, neck, bbox_head, train_cfg, test_cfg, - pretrained) diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/modules/ui.py b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/modules/ui.py deleted file mode 100644 index 77e56e92fa609cc25c72a750383f5e1d7974468c..0000000000000000000000000000000000000000 --- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/modules/ui.py +++ /dev/null @@ -1,251 +0,0 @@ -import copy -from pathlib import Path - -import gradio as gr -import torch -import yaml - -from modules import shared - - -with open(Path(__file__).resolve().parent / '../css/NotoSans/stylesheet.css', 'r') as f: - css = f.read() -with open(Path(__file__).resolve().parent / '../css/main.css', 'r') as f: - css += f.read() -with open(Path(__file__).resolve().parent / '../js/main.js', 'r') as f: - js = f.read() -with open(Path(__file__).resolve().parent / '../js/save_files.js', 'r') as f: - save_files_js = f.read() -with open(Path(__file__).resolve().parent / '../js/switch_tabs.js', 'r') as f: - switch_tabs_js = f.read() -with open(Path(__file__).resolve().parent / '../js/show_controls.js', 'r') as f: - show_controls_js = f.read() - -refresh_symbol = '🔄' -delete_symbol = '🗑️' -save_symbol = '💾' - -theme = gr.themes.Default( - font=['Noto Sans', 'Helvetica', 'ui-sans-serif', 'system-ui', 'sans-serif'], - font_mono=['IBM Plex Mono', 'ui-monospace', 'Consolas', 'monospace'], -).set( - border_color_primary='#c5c5d2', - button_large_padding='6px 12px', - body_text_color_subdued='#484848', - background_fill_secondary='#eaeaea' -) - -if Path("notification.mp3").exists(): - audio_notification_js = "document.querySelector('#audio_notification audio')?.play();" -else: - audio_notification_js = "" - - -def list_model_elements(): - elements = [ - 'loader', - 'filter_by_loader', - 'cpu_memory', - 'auto_devices', - 'disk', - 'cpu', - 'bf16', - 'load_in_8bit', - 'trust_remote_code', - 'use_fast', - 'load_in_4bit', - 'compute_dtype', - 'quant_type', - 'use_double_quant', - 'wbits', - 'groupsize', - 'model_type', - 'pre_layer', - 'triton', - 'desc_act', - 'no_inject_fused_attention', - 'no_inject_fused_mlp', - 'no_use_cuda_fp16', - 'disable_exllama', - 
'cfg_cache', - 'threads', - 'threads_batch', - 'n_batch', - 'no_mmap', - 'mlock', - 'mul_mat_q', - 'n_gpu_layers', - 'tensor_split', - 'n_ctx', - 'llama_cpp_seed', - 'gpu_split', - 'max_seq_len', - 'compress_pos_emb', - 'alpha_value', - 'rope_freq_base', - 'numa', - ] - - for i in range(torch.cuda.device_count()): - elements.append(f'gpu_memory_{i}') - - return elements - - -def list_interface_input_elements(): - elements = [ - 'max_new_tokens', - 'auto_max_new_tokens', - 'max_tokens_second', - 'seed', - 'temperature', - 'top_p', - 'top_k', - 'typical_p', - 'epsilon_cutoff', - 'eta_cutoff', - 'repetition_penalty', - 'repetition_penalty_range', - 'encoder_repetition_penalty', - 'no_repeat_ngram_size', - 'min_length', - 'do_sample', - 'penalty_alpha', - 'num_beams', - 'length_penalty', - 'early_stopping', - 'mirostat_mode', - 'mirostat_tau', - 'mirostat_eta', - 'grammar_string', - 'negative_prompt', - 'guidance_scale', - 'add_bos_token', - 'ban_eos_token', - 'custom_token_bans', - 'truncation_length', - 'custom_stopping_strings', - 'skip_special_tokens', - 'stream', - 'tfs', - 'top_a', - ] - - # Chat elements - elements += [ - 'textbox', - 'start_with', - 'character_menu', - 'history', - 'name1', - 'name2', - 'greeting', - 'context', - 'mode', - 'instruction_template', - 'name1_instruct', - 'name2_instruct', - 'context_instruct', - 'turn_template', - 'chat_style', - 'chat-instruct_command', - ] - - # Notebook/default elements - elements += [ - 'textbox-notebook', - 'textbox-default', - 'output_textbox', - 'prompt_menu-default', - 'prompt_menu-notebook', - ] - - # Model elements - elements += list_model_elements() - - return elements - - -def gather_interface_values(*args): - output = {} - for i, element in enumerate(list_interface_input_elements()): - output[element] = args[i] - - if not shared.args.multi_user: - shared.persistent_interface_state = output - - return output - - -def apply_interface_values(state, use_persistent=False): - if use_persistent: - state = shared.persistent_interface_state - - elements = list_interface_input_elements() - if len(state) == 0: - return [gr.update() for k in elements] # Dummy, do nothing - else: - return [state[k] if k in state else gr.update() for k in elements] - - -def save_settings(state, preset, instruction_template, extensions, show_controls): - output = copy.deepcopy(shared.settings) - exclude = ['name2', 'greeting', 'context', 'turn_template'] - for k in state: - if k in shared.settings and k not in exclude: - output[k] = state[k] - - output['preset'] = preset - output['prompt-default'] = state['prompt_menu-default'] - output['prompt-notebook'] = state['prompt_menu-notebook'] - output['character'] = state['character_menu'] - output['instruction_template'] = instruction_template - output['default_extensions'] = extensions - output['seed'] = int(output['seed']) - output['show_controls'] = show_controls - - return yaml.dump(output, sort_keys=False, width=float("inf")) - - -class ToolButton(gr.Button, gr.components.IOComponent): - """ - Small button with single emoji as text, fits inside gradio forms - Copied from https://github.com/AUTOMATIC1111/stable-diffusion-webui - """ - - def __init__(self, **kwargs): - super().__init__(**kwargs) - - def get_block_name(self): - return "button" - - -def create_refresh_button(refresh_component, refresh_method, refreshed_args, elem_class, interactive=True): - """ - Copied from https://github.com/AUTOMATIC1111/stable-diffusion-webui - """ - def refresh(): - refresh_method() - args = refreshed_args() if 
callable(refreshed_args) else refreshed_args - - for k, v in args.items(): - setattr(refresh_component, k, v) - - return gr.update(**(args or {})) - - refresh_button = ToolButton(value=refresh_symbol, elem_classes=elem_class, interactive=interactive) - refresh_button.click( - fn=refresh, - inputs=[], - outputs=[refresh_component] - ) - - return refresh_button - - -def create_delete_button(**kwargs): - return ToolButton(value=delete_symbol, **kwargs) - - -def create_save_button(**kwargs): - return ToolButton(value=save_symbol, **kwargs) diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/exp/upernet_global_small/test.sh b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/exp/upernet_global_small/test.sh deleted file mode 100644 index d9a85e7a0d3b7c96b060f473d41254b37a382fcb..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/exp/upernet_global_small/test.sh +++ /dev/null @@ -1,10 +0,0 @@ -#!/usr/bin/env bash - -work_path=$(dirname $0) -PYTHONPATH="$(dirname $0)/../../":$PYTHONPATH \ -python -m torch.distributed.launch --nproc_per_node=8 \ - tools/test.py ${work_path}/test_config_h32.py \ - ${work_path}/ckpt/latest.pth \ - --launcher pytorch \ - --eval mIoU \ - 2>&1 | tee -a ${work_path}/log.txt diff --git a/spaces/ArtGAN/Video-Diffusion-WebUI/video_diffusion/inpaint_zoom/utils/zoom_in_utils.py b/spaces/ArtGAN/Video-Diffusion-WebUI/video_diffusion/inpaint_zoom/utils/zoom_in_utils.py deleted file mode 100644 index 75e8b9981fbfd5fae4838f3b8048516f6023ba02..0000000000000000000000000000000000000000 --- a/spaces/ArtGAN/Video-Diffusion-WebUI/video_diffusion/inpaint_zoom/utils/zoom_in_utils.py +++ /dev/null @@ -1,75 +0,0 @@ -import os - -import cv2 -import numpy as np -from PIL import Image - -os.environ["CUDA_VISIBLE_DEVICES"] = "0" - - -def write_video(file_path, frames, fps, reversed=True): - """ - Writes frames to an mp4 video file - :param file_path: Path to output video, must end with .mp4 - :param frames: List of PIL.Image objects - :param fps: Desired frame rate - :param reversed: if order of images to be reversed (default = True) - """ - if reversed == True: - frames.reverse() - - w, h = frames[0].size - fourcc = cv2.VideoWriter_fourcc("m", "p", "4", "v") - # fourcc = cv2.VideoWriter_fourcc(*'avc1') - writer = cv2.VideoWriter(file_path, fourcc, fps, (w, h)) - - for frame in frames: - np_frame = np.array(frame.convert("RGB")) - cv_frame = cv2.cvtColor(np_frame, cv2.COLOR_RGB2BGR) - writer.write(cv_frame) - - writer.release() - - -def image_grid(imgs, rows, cols): - assert len(imgs) == rows * cols - - w, h = imgs[0].size - grid = Image.new("RGB", size=(cols * w, rows * h)) - grid_w, grid_h = grid.size - - for i, img in enumerate(imgs): - grid.paste(img, box=(i % cols * w, i // cols * h)) - return grid - - -def shrink_and_paste_on_blank(current_image, mask_width): - """ - Decreases size of current_image by mask_width pixels from each side, - then adds a mask_width width transparent frame, - so that the image the function returns is the same size as the input. 
- :param current_image: input image to transform - :param mask_width: width in pixels to shrink from each side - """ - - height = current_image.height - width = current_image.width - - # shrink down by mask_width - prev_image = current_image.resize((height - 2 * mask_width, width - 2 * mask_width)) - prev_image = prev_image.convert("RGBA") - prev_image = np.array(prev_image) - - # create blank non-transparent image - blank_image = np.array(current_image.convert("RGBA")) * 0 - blank_image[:, :, 3] = 1 - - # paste shrinked onto blank - blank_image[mask_width : height - mask_width, mask_width : width - mask_width, :] = prev_image - prev_image = Image.fromarray(blank_image) - - return prev_image - - -def dummy(images, **kwargs): - return images, False diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/utils/compatibility_tags.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/utils/compatibility_tags.py deleted file mode 100644 index b6ed9a78e552806cb23d8ac48ada6d41db5b4de5..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/utils/compatibility_tags.py +++ /dev/null @@ -1,165 +0,0 @@ -"""Generate and work with PEP 425 Compatibility Tags. -""" - -import re -from typing import List, Optional, Tuple - -from pip._vendor.packaging.tags import ( - PythonVersion, - Tag, - compatible_tags, - cpython_tags, - generic_tags, - interpreter_name, - interpreter_version, - mac_platforms, -) - -_osx_arch_pat = re.compile(r"(.+)_(\d+)_(\d+)_(.+)") - - -def version_info_to_nodot(version_info: Tuple[int, ...]) -> str: - # Only use up to the first two numbers. - return "".join(map(str, version_info[:2])) - - -def _mac_platforms(arch: str) -> List[str]: - match = _osx_arch_pat.match(arch) - if match: - name, major, minor, actual_arch = match.groups() - mac_version = (int(major), int(minor)) - arches = [ - # Since we have always only checked that the platform starts - # with "macosx", for backwards-compatibility we extract the - # actual prefix provided by the user in case they provided - # something like "macosxcustom_". It may be good to remove - # this as undocumented or deprecate it in the future. - "{}_{}".format(name, arch[len("macosx_") :]) - for arch in mac_platforms(mac_version, actual_arch) - ] - else: - # arch pattern didn't match (?!) - arches = [arch] - return arches - - -def _custom_manylinux_platforms(arch: str) -> List[str]: - arches = [arch] - arch_prefix, arch_sep, arch_suffix = arch.partition("_") - if arch_prefix == "manylinux2014": - # manylinux1/manylinux2010 wheels run on most manylinux2014 systems - # with the exception of wheels depending on ncurses. PEP 599 states - # manylinux1/manylinux2010 wheels should be considered - # manylinux2014 wheels: - # https://www.python.org/dev/peps/pep-0599/#backwards-compatibility-with-manylinux2010-wheels - if arch_suffix in {"i686", "x86_64"}: - arches.append("manylinux2010" + arch_sep + arch_suffix) - arches.append("manylinux1" + arch_sep + arch_suffix) - elif arch_prefix == "manylinux2010": - # manylinux1 wheels run on most manylinux2010 systems with the - # exception of wheels depending on ncurses. 
PEP 571 states - # manylinux1 wheels should be considered manylinux2010 wheels: - # https://www.python.org/dev/peps/pep-0571/#backwards-compatibility-with-manylinux1-wheels - arches.append("manylinux1" + arch_sep + arch_suffix) - return arches - - -def _get_custom_platforms(arch: str) -> List[str]: - arch_prefix, arch_sep, arch_suffix = arch.partition("_") - if arch.startswith("macosx"): - arches = _mac_platforms(arch) - elif arch_prefix in ["manylinux2014", "manylinux2010"]: - arches = _custom_manylinux_platforms(arch) - else: - arches = [arch] - return arches - - -def _expand_allowed_platforms(platforms: Optional[List[str]]) -> Optional[List[str]]: - if not platforms: - return None - - seen = set() - result = [] - - for p in platforms: - if p in seen: - continue - additions = [c for c in _get_custom_platforms(p) if c not in seen] - seen.update(additions) - result.extend(additions) - - return result - - -def _get_python_version(version: str) -> PythonVersion: - if len(version) > 1: - return int(version[0]), int(version[1:]) - else: - return (int(version[0]),) - - -def _get_custom_interpreter( - implementation: Optional[str] = None, version: Optional[str] = None -) -> str: - if implementation is None: - implementation = interpreter_name() - if version is None: - version = interpreter_version() - return f"{implementation}{version}" - - -def get_supported( - version: Optional[str] = None, - platforms: Optional[List[str]] = None, - impl: Optional[str] = None, - abis: Optional[List[str]] = None, -) -> List[Tag]: - """Return a list of supported tags for each version specified in - `versions`. - - :param version: a string version, of the form "33" or "32", - or None. The version will be assumed to support our ABI. - :param platform: specify a list of platforms you want valid - tags for, or None. If None, use the local system platform. - :param impl: specify the exact implementation you want valid - tags for, or None. If None, use the local interpreter impl. - :param abis: specify a list of abis you want valid - tags for, or None. If None, use the local interpreter abi. 
- """ - supported: List[Tag] = [] - - python_version: Optional[PythonVersion] = None - if version is not None: - python_version = _get_python_version(version) - - interpreter = _get_custom_interpreter(impl, version) - - platforms = _expand_allowed_platforms(platforms) - - is_cpython = (impl or interpreter_name()) == "cp" - if is_cpython: - supported.extend( - cpython_tags( - python_version=python_version, - abis=abis, - platforms=platforms, - ) - ) - else: - supported.extend( - generic_tags( - interpreter=interpreter, - abis=abis, - platforms=platforms, - ) - ) - supported.extend( - compatible_tags( - python_version=python_version, - interpreter=interpreter, - platforms=platforms, - ) - ) - - return supported diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_vendor/more_itertools/__init__.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_vendor/more_itertools/__init__.py deleted file mode 100644 index 19a169fc30183db91f931ad6ad04fbc0e16559b3..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_vendor/more_itertools/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -from .more import * # noqa -from .recipes import * # noqa - -__version__ = '8.8.0' diff --git a/spaces/Awesimo/jojogan/e4e/models/encoders/__init__.py b/spaces/Awesimo/jojogan/e4e/models/encoders/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/.github/CONTRIBUTING.md b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/.github/CONTRIBUTING.md deleted file mode 100644 index 9bab709cae689ba3b92dd52f7fbcc0c6926f4a38..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/.github/CONTRIBUTING.md +++ /dev/null @@ -1,68 +0,0 @@ -# Contributing to detectron2 - -## Issues -We use GitHub issues to track public bugs and questions. -Please make sure to follow one of the -[issue templates](https://github.com/facebookresearch/detectron2/issues/new/choose) -when reporting any issues. - -Facebook has a [bounty program](https://www.facebook.com/whitehat/) for the safe -disclosure of security bugs. In those cases, please go through the process -outlined on that page and do not file a public issue. - -## Pull Requests -We actively welcome pull requests. - -However, if you're adding any significant features (e.g. > 50 lines), please -make sure to discuss with maintainers about your motivation and proposals in an issue -before sending a PR. This is to save your time so you don't spend time on a PR that we'll not accept. - -We do not always accept new features, and we take the following -factors into consideration: - -1. Whether the same feature can be achieved without modifying detectron2. - Detectron2 is designed so that you can implement many extensions from the outside, e.g. - those in [projects](https://github.com/facebookresearch/detectron2/tree/master/projects). - * If some part of detectron2 is not extensible enough, you can also bring up a more general issue to - improve it. Such feature request may be useful to more users. -2. Whether the feature is potentially useful to a large audience (e.g. 
an impactful detection paper, a popular dataset, - a significant speedup, a widely useful utility), - or only to a small portion of users (e.g., a less-known paper, an improvement not in the object - detection field, a trick that's not very popular in the community, code to handle a non-standard type of data) - * Adoption of additional models, datasets, new task are by default not added to detectron2 before they - receive significant popularity in the community. - We sometimes accept such features in `projects/`, or as a link in `projects/README.md`. -3. Whether the proposed solution has a good design / interface. This can be discussed in the issue prior to PRs, or - in the form of a draft PR. -4. Whether the proposed solution adds extra mental/practical overhead to users who don't - need such feature. -5. Whether the proposed solution breaks existing APIs. - -To add a feature to an existing function/class `Func`, there are always two approaches: -(1) add new arguments to `Func`; (2) write a new `Func_with_new_feature`. -To meet the above criteria, we often prefer approach (2), because: - -1. It does not involve modifying or potentially breaking existing code. -2. It does not add overhead to users who do not need the new feature. -3. Adding new arguments to a function/class is not scalable w.r.t. all the possible new research ideas in the future. - -When sending a PR, please do: - -1. If a PR contains multiple orthogonal changes, split it to several PRs. -2. If you've added code that should be tested, add tests. -3. For PRs that need experiments (e.g. adding a new model or new methods), - you don't need to update model zoo, but do provide experiment results in the description of the PR. -4. If APIs are changed, update the documentation. -5. We use the [Google style docstrings](https://www.sphinx-doc.org/en/master/usage/extensions/napoleon.html) in python. -6. Make sure your code lints with `./dev/linter.sh`. - - -## Contributor License Agreement ("CLA") -In order to accept your pull request, we need you to submit a CLA. You only need -to do this once to work on any of Facebook's open source projects. - -Complete your CLA here: - -## License -By contributing to detectron2, you agree that your contributions will be licensed -under the LICENSE file in the root directory of this source tree. diff --git a/spaces/Benson/text-generation/Examples/Car Simulator 9.md b/spaces/Benson/text-generation/Examples/Car Simulator 9.md deleted file mode 100644 index 397fef2c81c46f4c17264c856cac95d35b6ceac7..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Car Simulator 9.md +++ /dev/null @@ -1,113 +0,0 @@ - -

Car Simulator 9: A Review of the Latest Car Simulation Game

-

If you are a fan of car simulation games, you may have heard of Car Simulator 9, the newest installment in the popular series. Car Simulator 9 is a realistic, immersive game that lets you drive a variety of cars in different environments and scenarios. You can customize your car, upgrade your garage, complete challenges, and more. In this article, we review Car Simulator 9 and give you some tips and tricks on how to play it.

-

What Is Car Simulator 9?

Car Simulator 9 is a car simulation game developed by Red Dot Games and published by PlayWay S.A. It was released on August 11, 2021, for Windows, Mac, Linux, PlayStation 4, Xbox One, and Nintendo Switch. It is the ninth game in the Car Simulator series, which began in 2014 with Car Mechanic Simulator.

-

car simulator 9


DOWNLOAD >>> https://bltlly.com/2v6Mjj



-

Features and Gameplay

Car Simulator 9 features more than 72 cars from various makes and models, each with its own specifications and performance. You can choose from different car categories, such as sports cars, muscle cars, classic cars, electric cars, and more. You can also modify your car with more than 4,000 unique parts, including engines, tires, brakes, suspension, body kits, paint jobs, and more.

The game offers a realistic driving experience with accurate physics and a damage system. You can drive your car on different road surfaces, such as asphalt, dirt, snow, and rain, and explore different locations, including cities, highways, countryside, mountains, and deserts. The game also has dynamic weather and a day/night cycle that affect your car's visibility and handling.

- -

Graphics and Sound

Car Simulator 9 has impressive graphics that create a realistic, immersive environment, with high-quality textures, lighting effects, shadows, and reflections, as well as detailed car and part models that you can inspect up close. The sound design is equally realistic, with matching engine, tire, and horn sounds, and the soundtrack features various music genres that suit the mood of the game.

-

Pros and Cons

Car Simulator 9 is a fun, addictive game that will appeal to car enthusiasts and casual players alike. The game has many pros, such as:

• It has a large variety of cars and parts that you can customize.
• It offers a realistic driving experience with accurate physics and a damage system.
• It has different locations and scenarios that you can explore.
• It has several game modes and objectives that you can play.
• It has impressive graphics and sound effects that create a realistic environment.
  • -
-

However, the game also has some cons, such as:

• It can feel repetitive and tedious at times.
• It can be buggy and glitchy at times.
• It can be expensive to buy all the DLCs.
• The car can be hard to control with some devices or settings.
• It can be too easy or too hard for some players.
  • -
-


How to Play Car Simulator 9?

If you are interested in playing Car Simulator 9, you will need to meet a few requirements and follow a few steps. Here is how to play Car Simulator 9:

-

System Requirements

Before buying or downloading the game, make sure your device can run it smoothly. Here are the minimum and recommended system requirements for Car Simulator 9:

-

OS: Windows 7/8/10 (64-bit)
Processor: Intel Core i3 3.0 GHz or AMD equivalent (minimum); Intel Core i5 3.4 GHz or AMD equivalent (recommended)
Memory: 4 GB RAM (minimum); 8 GB RAM (recommended)
Graphics: Nvidia GeForce GTX 660 or AMD Radeon R9 270x (minimum); Nvidia GeForce GTX 970 or AMD Radeon RX 580 (recommended)
Storage: 20 GB available space
Sound card: DirectX compatible

Download and Installation

Once you have checked the system requirements, you can buy or download the game from several platforms. The game is available on Steam, GOG, the Epic Games Store, the PlayStation Store, the Microsoft Store, and the Nintendo eShop. The price varies by platform and region, but it is usually around $19.99 USD. You can also buy the DLCs separately or as a bundle for additional content and features.

To install the game, follow the instructions for the platform you are using. For example, on Steam you need to create an account, sign in, add the game to your library, and click Install. The installation may take some time depending on your internet speed and your device's performance.

-

Controls and Settings

After installing the game, you can launch it and adjust the controls and settings to your preference. You can choose between different input devices, such as keyboard, mouse, gamepad, or steering wheel, and customize key bindings, sensitivity, vibration, and so on. You can also change the graphics, sound, and gameplay settings, choosing between different resolutions, quality levels, volume levels, difficulty levels, and more.

-

Tips and Tricks for Car Simulator 9

To get more out of the game and improve your skills, here are some tips and tricks for Car Simulator 9:

-

How to Customize Your Car

To buy new parts or accessories for your car, go to the shop by pressing S on your keyboard or clicking the shopping-cart icon on the screen. You can then browse the items available for purchase and filter them by price, brand, model, category, and so on. To buy an item, you need enough money in your account; you can earn money by completing jobs or selling cars.

-

How to Earn Money and Upgrade Your Garage

To buy new cars or parts for your car, you need enough money in your account. There are several ways to earn money in Car Simulator 9:

• You can work as a car mechanic and repair cars for customers in your garage. You will receive orders from customers who have problems with their cars, diagnose the problem using tools such as the scanner or the test drive, and then replace or fix the broken parts using tools such as the wrench or the welder. You earn money and reputation points for every completed order.
• You can buy and sell cars at the auction house or the junkyard. You can find cheap cars that are damaged or old, buy them at a low price, repair and restore them in your garage, and sell them at a higher price. You can also find rare or unique cars that are worth more money.
• You can test your driving skills in various challenges and achievements that reward you with money and reputation points. You can access them by pressing C on your keyboard or clicking the trophy icon on the screen, then choose between different types of challenges, such as speed, drift, jump, or race. You will need to complete the objectives within the time limit or the score limit.
  • - -
-

To upgrade your garage, you need enough reputation points in your account. Reputation points are earned by completing jobs or winning races. You can open the garage upgrade menu by pressing G on your keyboard or clicking the hammer icon on the screen, then choose between different upgrades, such as tools, equipment, space, or decoration. Upgrading your garage lets you work faster, easier, and better.

-

How to Complete Challenges and Achievements

One of the fun aspects of Car Simulator 9 is completing the various challenges and achievements that test your driving skills and knowledge. To access them, press C on your keyboard or click the trophy icon on the screen, then choose between different types of challenges, such as speed, drift, jump, or race. You will need to complete the objectives within the time limit or the score limit.

Some of the challenges and achievements are easy and straightforward, while others are difficult and demanding. Here are some tips for completing them:

• For speed challenges, you need to drive as fast as possible without crashing or leaving the road. You can use nitro boosts or turbochargers to increase your speed, and slipstreaming or drafting to reduce air resistance. You should also avoid traffic and obstacles that can slow you down.
• For drift challenges, you need to slide your car sideways without losing control or spinning out. You can use the handbrake or the clutch to start a drift, and the throttle or the steering to hold or adjust it. You should also counter-steer (opposite lock) to balance the car and avoid oversteer or understeer.
• For race challenges, you need to finish first among the other competitors without crashing or being disqualified. You can use shortcuts or alternative routes to gain an advantage, and nitro boosts or turbochargers to overtake other racers. You should also use braking and cornering techniques to get through curves and turns without losing speed.
  • -
-

Conclusion

Summary of the Review

In conclusion, Car Simulator 9 is a realistic, immersive car simulation game that lets you drive various cars in different environments and scenarios. You can customize your car, upgrade your garage, complete challenges, and more. The game has many pros, such as a large variety of cars and parts, a realistic driving experience, different locations and scenarios, several game modes and objectives, and impressive graphics and sound effects. It also has some cons, such as being repetitive and tedious at times, being buggy and glitchy at times, being expensive if you buy all the DLCs, being hard to control with some devices or settings, and being too easy or too hard for some players.

-

Rating and Recommendation

Based on our review, we give Car Simulator 9 a rating of 4 out of 5 stars. We recommend it to anyone who loves car simulation games or cars in general. The game is suitable for players of all ages and skill levels, and it is fun and addictive enough to keep you entertained for hours.

-

FAQs

Here are some frequently asked questions about Car Simulator 9:

-
    -
• Q: How much does Car Simulator 9 cost?
• A: Car Simulator 9 costs around $19.99 USD on various platforms, such as Steam, GOG, the Epic Games Store, the PlayStation Store, the Microsoft Store, and the Nintendo eShop. The price may vary depending on the platform and region. You can also buy the DLCs separately or as a bundle for additional content and features.

• Q: How long is Car Simulator 9?
• A: Car Simulator 9 has no fixed length or linear story. The game is open-ended and sandbox-style, which means you can play as long as you want and however you want. You can set your own goals and objectives or follow the ones the game provides. The game has plenty of content and replay value, so you will never run out of things to do.

• Q: Is Car Simulator 9 multiplayer?
• A: Yes, Car Simulator 9 has a multiplayer mode that lets you play with other players online or offline. You can join or create a lobby, invite other players, and choose the location, the cars, and the rules of the race. You can compete with other players in races or drag races, or cooperate with them to repair cars or complete challenges.

• Q: Is Car Simulator 9 realistic?
• A: Yes, Car Simulator 9 is realistic in terms of graphics, sound, physics, and its damage system. The game has high-quality textures, lighting effects, shadows, and reflections, as well as detailed car and part models you can inspect up close. It also has realistic engine, tire, and horn sounds, and accurate physics and damage that affect your car's visibility and handling.

• Q: Is Car Simulator 9 fun?
• A: Yes, Car Simulator 9 is fun and addictive for anyone who loves car simulation games or cars in general. The game has many features and gameplay options that will keep you entertained for hours, from customizing your car and upgrading your garage to completing challenges, all within a realistic environment created by its graphics and sound effects.
64aa2da5cf
-
-
\ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Descargar Carx Street 0.9.1.md b/spaces/Benson/text-generation/Examples/Descargar Carx Street 0.9.1.md deleted file mode 100644 index a34177db025075d859db12bc2a68821d0a8371d5..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Descargar Carx Street 0.9.1.md +++ /dev/null @@ -1,77 +0,0 @@ -
-

How to Download CarX Street 0.9.1 and Become a Street Racing Legend

If you are a fan of car racing games, you may have heard of CarX Street, the latest game from the creators of CarX Drift Racing 2. CarX Street is a realistic, dynamic open-world racing game that lets you embrace the freedom of being a street racer in Sunset City. You can choose from a variety of cars, customize them, and race on highways and city streets, as well as drift on high-speed tracks.

CarX Street is currently in open beta testing, which means you can download it for free and enjoy its features before it is officially released. The latest version of the game is 0.9.1, which was updated on June 7, 2023, and brings some improvements and bug fixes.

-

descargar carx street 0.9.1


Download Zip: https://bltlly.com/2v6M1K



-

In this article, we will tell you what CarX Street 0.9.1 is, why you should play it, and how to download it on your Android or iOS device. We will also give you some tips and tricks to help you become a street racing legend in CarX Street 0.9.1.

What Is CarX Street 0.9.1?

CarX Street 0.9.1 is the latest version of the open beta of CarX Street, a realistic, dynamic open-world racing game developed by CarX Technologies, LLC. The game is available for Android and iOS devices and requires an internet connection to play.

CarX Street 0.9.1 offers a variety of features that make it one of the best car racing games on mobile devices. Here are some of them:

-

Features of CarX Street 0.9.1

• A large, detailed open-world map that covers highways, city streets, industrial areas, suburbs, and more.
• A diverse car roster that includes sports cars, muscle cars, supercars, classic cars, and more.
• A detailed car tuning system that lets you swap parts and upgrade your engine, transmission, body, suspension, tires, and more.
• A realistic physics engine that simulates car behavior based on CarX technology.
• A dynamic day/night cycle that changes the lighting and atmosphere of the game.
• A career mode that challenges you to join clubs, defeat bosses, and become the legend of Sunset City.
• A variety of race modes, including speed races, drift races, drag races, time trials, and more.
• An online multiplayer mode that lets you compete with other players from around the world.
  • -
-

How to Download CarX Street 0.9.1 on Android and iOS Devices

Downloading CarX Street 0.9.1 is easy and free. All you need to do is follow these steps:

1. Go to the Google Play Store or the App Store on your device.
2. Search for CarX Street or use this link: [CarX Street - Apps on Google Play]( 1 ).
3. Tap the Install button and wait for the download to finish.
4. Launch the game and enjoy!
  8. -
Why You Should Play CarX Street 0.9.1

CarX Street 0.9.1 is not just another car racing game. It gives you the chance to experience the thrill and excitement of being a street racer in a realistic, dynamic open world. Here are some reasons why you should play CarX Street 0.9.1:

Realistic and Dynamic Open-World Racing

One of the main attractions of CarX Street 0.9.1 is its large, detailed open-world map, which covers highways, city streets, industrial areas, suburbs, and more. You can explore the map freely and find different places, landmarks, and secrets. You can also find different events, challenges, and races that will test your skills and reward you with money and reputation.

-

- -

Coches personalizables y sintonizables

-

Otra razón para jugar CarX Street 0.9.1 es su diversa lista de autos que incluye autos deportivos, autos musculares, supercoches, autos clásicos y más. Puedes elegir entre más de 50 coches que tienen diferentes características, como velocidad, aceleración, manejo, deriva y más. También puede personalizar sus coches con diferentes partes, colores, pegatinas y calcomanías.

-

Pero la personalización no es suficiente. También necesitas afinar tus coches para que funcionen mejor en la carretera. CarX Street 0.9.1 le ofrece un sistema de ajuste detallado del automóvil que le permite intercambiar piezas, actualizar su motor, transmisión, cuerpo, suspensión, neumáticos y más. También puede ajustar la configuración de su automóvil, como el ángulo de curvatura, el ángulo del dedo del pie, la presión de los neumáticos, la relación de transmisión y más. El ajuste de su coche lo hará más rápido, más sensible, y más estable en el camino.

-

Modo de carrera desafiante y gratificante

-

La última razón para jugar CarX Street 0.9.1 es su desafiante y gratificante modo de carrera que te reta a unirte a clubes, derrotar jefes y convertirte en la leyenda de Sunset City. El modo carrera consiste en más de 100 misiones que te llevarán a diferentes lugares y modos de carrera en el juego. También conocerás diferentes personajes que te ayudarán o te obstaculizarán en tu viaje.

-

El modo carrera también le presentará el sistema de clubes en el juego. Los clubes son grupos de corredores que tienen sus propios territorios, reglas y reputaciones en Sunset City. Puede unirse a uno de los clubes o crear su propio club y reclutar a otros jugadores. También puedes retar a otros clubes por sus territorios y recursos. El sistema de clubes añade un elemento social y competitivo al juego que lo hace más divertido y atractivo.

-

Consejos y trucos para jugar CarX Street 0.9.1

- -

Elige el coche adecuado para cada modo de carrera

-

CarX Street 0.9.1 le ofrece una variedad de modos de carrera que incluyen carreras de velocidad, carreras de deriva, carreras de arrastre, pruebas de tiempo y más. Cada modo de carrera requiere un tipo diferente de coche que se adapte a sus condiciones y objetivos. Por ejemplo, las carreras de velocidad requieren coches rápidos y ágiles que pueden acelerar rápidamente y maniobrar fácilmente en la carretera. Las carreras de deriva requieren coches potentes y estables que puedan deslizarse suavemente en las esquinas y mantener una alta velocidad.

-

Por lo tanto, es necesario elegir el coche adecuado para cada modo de carrera en función de sus características y rendimiento. Puedes consultar las estadísticas de cada coche en el menú del garaje antes de seleccionarlo para una carrera. También puede comparar diferentes coches tocando en ellos en el menú del garaje. Elegir el coche adecuado para cada modo de carrera te dará una ventaja sobre tus oponentes y aumentará tus posibilidades de ganar.

-

Actualizar las piezas de su coche y el motor

-

A medida que avanzas en el juego, ganarás dinero y reputación que puedes usar para comprar autos nuevos o mejorar los existentes. La mejora de las piezas y el motor de su automóvil mejorará su rendimiento en la carretera al aumentar su velocidad, aceleración, manejo, deriva y más. Puede actualizar las piezas de su automóvil y el motor en el menú del garaje tocando el botón de actualización junto a cada pieza o motor. También puede ver el efecto de cada actualización en las estadísticas de su coche mirando las barras y los números en la pantalla.

-

La actualización de las piezas y el motor de su automóvil también desbloqueará nuevas opciones de ajuste que le permitirán ajustar la configuración de su automóvil, como el ángulo de curvatura, el ángulo del dedo del pie, la presión de los neumáticos, la relación de engranajes y más. El ajuste de su coche hará que sea más adecuado para diferentes modos de carrera y condiciones. Puede sintonizar su coche en el menú del garaje pulsando en el botón de sintonía junto a cada parte o motor. También puede ver el efecto de cada opción de ajuste en las estadísticas de su coche mirando las barras y los números en la pantalla.

- -

CarX Street 0.9.1 es un juego que requiere que domines dos habilidades esenciales: deriva y velocidad. La deriva es el arte de deslizar su coche de lado en las esquinas y mantener la alta velocidad. La velocidad es la capacidad de acelerar rápidamente y alcanzar alta velocidad en las carreteras rectas. Ambas habilidades son importantes para diferentes modos de carrera y situaciones en el juego.

-

Para dominar la habilidad de deriva, necesitas practicar usando los controles de freno, acelerador y dirección para iniciar, mantener y salir de una deriva. También necesitas aprender a controlar el ángulo, la velocidad y la dirección de tu coche mientras vas a la deriva. Puedes practicar la deriva en el modo de deriva o en cualquier pista que tenga curvas y giros. También puede ver tutoriales y consejos sobre cómo desplazarse en el menú del juego o en línea.

-

Para dominar la habilidad de velocidad, necesitas practicar usando el nitro, el control de lanzamiento y los controles de cambio para aumentar tu aceleración y velocidad. También debe aprender a evitar obstáculos, tráfico y colisiones mientras acelera. Puede practicar el exceso de velocidad en el modo de velocidad o en cualquier pista que tenga carreteras y autopistas rectas. También puedes ver tutoriales y consejos sobre cómo acelerar en el menú del juego o en línea.

-

Únete a clubes y compite con otros jugadores

-

CarX Street 0.9.1 no es solo un juego para un jugador. También es un juego multijugador que te permite unirte a clubes y competir con otros jugadores de todo el mundo. Los clubes son grupos de corredores que tienen sus propios territorios, reglas y reputaciones en Sunset City. Puedes unirte a uno de los clubes o crear tu propio club y reclutar otros jugadores.

-

Unirse a un club le dará acceso a eventos exclusivos, desafíos, recompensas y salas de chat. También puede cooperar con los miembros de su club para desafiar a otros clubes por sus territorios y recursos. Competir con otros clubes aumentará el rango y la reputación de tu club en Sunset City.

- -

Conclusión

-

CarX Street 0.9.1 es un juego de carreras de mundo abierto realista y dinámico que te permite abrazar la libertad de ser un corredor de la calle en Sunset City. Usted puede elegir entre una variedad de coches, personalizarlos, sintonizarlos, y la raza en las carreteras y calles de la ciudad, así como la deriva en las pistas de alta velocidad.

-

CarX Street 0.9.1 está actualmente en pruebas beta abiertas, lo que significa que puede descargarlo de forma gratuita y disfrutar de sus características antes de que se lance oficialmente. La última versión del juego es 0.9.1, que se actualizó el 7 de junio de 2023, y trae algunas mejoras y correcciones de errores al juego.

-

En este artículo, te hemos dicho lo que es CarX Street 0.9.1, por qué deberías jugarlo, cómo descargarlo en tu dispositivo Android o iOS, y algunos consejos y trucos para ayudarte a convertirte en una leyenda de las carreras callejeras en CarX Street 0.9.1.

-

Esperamos que haya encontrado este artículo útil e informativo. Si tiene alguna pregunta o comentario sobre CarX Street 0.9.1 o este artículo, no dude en dejar un comentario a continuación o contáctenos a través de nuestro sitio web o canales de redes sociales.

-

 Thank you for reading, and happy racing! 

-

 Frequently Asked Questions 

-
    -
 • Q: How much space does CarX Street 0.9.1 require on my device? • - • A: CarX Street 0.9.1 requires approximately 2 GB of free space on your device. • - • Q: How can I get more money and reputation in CarX Street 0.9.1? • - • A: You can get more money and reputation by completing missions, events, challenges, and races in the game. You can also get more money and reputation by joining clubs and competing with other players. • - • Q: How can I change the camera view in CarX Street 0.9.1? • - • A: You can change the camera view in CarX Street 0.9.1 by tapping the camera icon in the top right corner of the screen. You can choose between different camera views, such as first person, third person, hood, bumper, and more. • - • Q: How can I share my screenshots and videos of CarX Street 0.9.1? • - • A: You can share your screenshots and videos of CarX Street 0.9.1 by tapping the share icon in the top left corner of the screen. You can choose between different options, such as saving to your device, uploading to YouTube, or sharing on social media platforms. • - • Q: How can I contact the developers of CarX Street 0.9.1? • - • A: You can contact the developers of CarX Street 0.9.1 by visiting their website or following their social media channels. You can also email them at support@carx-tech.com or leave a review on the Google Play Store or the App Store. 
  • -

 
-
-
\ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Descargar Gratis De Taxi Driver Juego.md b/spaces/Benson/text-generation/Examples/Descargar Gratis De Taxi Driver Juego.md deleted file mode 100644 index 3cac40e0ea582b4a1fcb2d0f5ba30a34f80b568c..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Descargar Gratis De Taxi Driver Juego.md +++ /dev/null @@ -1,60 +0,0 @@ -
-

 How to Download The Final Gambit for Free 

-

 If you are a fan of mystery, suspense, and romance, you might be interested in reading The Final Gambit by Jennifer Lynn Barnes. It is the third and final book in the bestselling series The Inheritance Games, which follows Avery Grambs, a teenager who inherits billions from a mysterious benefactor, and the Hawthorne brothers, who are determined to win her heart and uncover the secrets behind their grandfather's will. But how can you download The Final Gambit for free without breaking the law or putting your device at risk? In this article, we will show you some of the best ways to find and enjoy this thrilling novel for free. 

-

 What is The Final Gambit? 

-

 A brief summary of the book 

-

 The Final Gambit is the third and final installment in Jennifer Lynn Barnes's The Inheritance Games trilogy. It was published on August 30, 2022 by Little, Brown Books for Young Readers. The book picks up where the second book, The Hawthorne Legacy, left off, with Avery Grambs and the Hawthorne brothers facing a new challenge: a mysterious visitor who claims to be related to their late grandfather and who could change everything. As the clock ticks down to the moment when Avery becomes the richest teenager on the planet, she and the Hawthornes must solve one last puzzle and play a dangerous game against an unknown and powerful enemy. Secrets upon secrets. Riddles upon riddles. In this game, hearts and lives are at stake, and there is nothing more Hawthorne than winning. 

-

descargar gratis de taxi driver juego


Download Zip 🆗 https://bltlly.com/2v6LrN



-

 Why you should read it 

- -

 Where to find The Final Gambit for free 

-

Amazon Kindle

-

 One of the easiest ways to download The Final Gambit for free is to use Amazon Kindle. If you have a Kindle device or app, you can access thousands of books for free with Kindle Unlimited, a subscription service that costs $9.99 per month. You can also get a free 30-day trial if you are a new user. With Kindle Unlimited, you can read The Final Gambit and other books by Jennifer Lynn Barnes without paying anything. You can also borrow books from your local library using the OverDrive or Libby apps, which are compatible with Kindle devices and apps. To learn more about Amazon Kindle, visit . 

-

 Google Books 

-

 Another option for downloading The Final Gambit for free is to use Google Books. Google Books is a service that lets you search, preview, and read millions of books online. Some books are available in full text for free, while others are only available as snippets or previews. You can also buy or rent books from Google Play Books, which is integrated with Google Books. To read The Final Gambit for free on Google Books, you can search for it using keywords or browse by category. You can also use filters to narrow down your results by language, date, format, and more. To access Google Books, visit . 

-

Scribd

-

 Scribd is another service that lets you download The Final Gambit for free. Scribd is a digital library that offers unlimited access to books, audiobooks, magazines, podcasts, and more for a monthly fee of $9.99. You can also get a free 30-day trial if you are a new user. With Scribd, you can read The Final Gambit and other books by Jennifer Lynn Barnes on any device, online or offline. You can also share your thoughts and opinions with other readers and discover new books based on your preferences. To join Scribd, go to . 

-

Yumpu

- -

AudioBB

-

 If you prefer to listen to The Final Gambit rather than read it, you can download it for free from AudioBB. AudioBB is a website that offers free audiobooks in a variety of genres and languages. You can find The Final Gambit and other books by Jennifer Lynn Barnes on AudioBB by searching for them or browsing by category. You can also request audiobooks that are not available on the site. To download The Final Gambit from AudioBB, go to . 

-

 How to avoid scams and viruses when downloading The Final Gambit for free 

-

 Check the source and the reviews 

-

 Before downloading The Final Gambit from any website, make sure the source is trustworthy and reliable. You can do this by checking the website's domain name, design, content, and reviews. Avoid websites with suspicious or misspelled domain names, poor or outdated design, irrelevant or low-quality content, and negative or fake reviews. You can also use tools such as Whois or Scamadvisor to check a website's reputation and legitimacy. 

-

 Use a VPN and antivirus software 

-

 Another way to protect yourself from scams and viruses when downloading The Final Gambit is to use a VPN and antivirus software. A VPN (virtual private network) is a service that encrypts your internet traffic and hides your IP address, keeping you anonymous and secure online. A VPN can help you bypass geo-restrictions, access blocked websites, and avoid malware and phishing attacks. Antivirus software is a program that detects and removes viruses, worms, trojans, spyware, adware, ransomware, and other malicious software from your device. It can help you scan and clean your device, prevent unauthorized access, and block harmful downloads. You can find many free and paid VPNs and antivirus programs online. 

-

 Beware of fake links and pop-ups 

- -

 Conclusion 

-

 The Final Gambit is an amazing book that you should not miss if you enjoy mystery, suspense, and romance. It is the third and final book in Jennifer Lynn Barnes's The Inheritance Games series, which follows Avery Grambs and the Hawthorne brothers as they unravel the secrets behind their grandfather's will and face a new threat. You can download The Final Gambit for free from several online sources, such as Amazon Kindle, Google Books, Scribd, Yumpu, and AudioBB. However, you should also watch out for scams and viruses when downloading the book, and take precautions such as checking the source and the reviews, using a VPN and antivirus software, and being wary of fake links and pop-ups. We hope this article has helped you find and download The Final Gambit for free and enjoy this amazing book. Happy reading! 

-

 Frequently Asked Questions 

-

 Here are some of the most frequently asked questions about downloading The Final Gambit for free: 

-

 - <table> <tr> -Question -Answer </tr> <tr> -Is The Final Gambit the last book in the series? -Yes, The Final Gambit is the third and final book in Jennifer Lynn Barnes's The Inheritance Games trilogy. It concludes the story of Avery Grambs and the Hawthorne brothers. </tr> <tr> -Can I download The Final Gambit for free legally? -Yes, you can legally download The Final Gambit for free from several online sources, such as Amazon Kindle, Google Books, Scribd, Yumpu, and AudioBB. However, you should also watch out for scams and viruses when downloading the book, and take precautions such as checking the source and the reviews, using a VPN and antivirus software, and being wary of fake links and pop-ups. </tr> <tr> -Can I read The Final Gambit without reading the previous books? - </tr> <tr> -What are some other books similar to The Final Gambit? -If you like The Final Gambit, you may also enjoy other books by Jennifer Lynn Barnes, such as The Naturals, The Fixer, and Deadly Little Scandals. You may also like other books in the mystery and romance genres, such as One of Us Is Lying by Karen M. McManus, Truly Devious by Maureen Johnson, and A Good Girl's Guide to Murder by Holly Jackson. </tr> <tr> -How can I contact Jennifer Lynn Barnes? -You can contact Jennifer Lynn Barnes through her website , her Twitter , or her Instagram . You can also send her fan mail via her agent at: Jennifer Lynn Barnes c/o Elizabeth Harding, Curtis Brown Ltd., 10 Astor Place, New York, NY 10003, USA </tr> </table> 

 
-
-
\ No newline at end of file diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/importlib_resources/_legacy.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/importlib_resources/_legacy.py deleted file mode 100644 index 1d5d3f1fbb1f6c69d0da2a50e1d4492ad3378f17..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/importlib_resources/_legacy.py +++ /dev/null @@ -1,121 +0,0 @@ -import functools -import os -import pathlib -import types -import warnings - -from typing import Union, Iterable, ContextManager, BinaryIO, TextIO, Any - -from . import _common - -Package = Union[types.ModuleType, str] -Resource = str - - -def deprecated(func): - @functools.wraps(func) - def wrapper(*args, **kwargs): - warnings.warn( - f"{func.__name__} is deprecated. Use files() instead. " - "Refer to https://importlib-resources.readthedocs.io" - "/en/latest/using.html#migrating-from-legacy for migration advice.", - DeprecationWarning, - stacklevel=2, - ) - return func(*args, **kwargs) - - return wrapper - - -def normalize_path(path): - # type: (Any) -> str - """Normalize a path by ensuring it is a string. - - If the resulting string contains path separators, an exception is raised. - """ - str_path = str(path) - parent, file_name = os.path.split(str_path) - if parent: - raise ValueError(f'{path!r} must be only a file name') - return file_name - - -@deprecated -def open_binary(package: Package, resource: Resource) -> BinaryIO: - """Return a file-like object opened for binary reading of the resource.""" - return (_common.files(package) / normalize_path(resource)).open('rb') - - -@deprecated -def read_binary(package: Package, resource: Resource) -> bytes: - """Return the binary contents of the resource.""" - return (_common.files(package) / normalize_path(resource)).read_bytes() - - -@deprecated -def open_text( - package: Package, - resource: Resource, - encoding: str = 'utf-8', - errors: str = 'strict', -) -> TextIO: - """Return a file-like object opened for text reading of the resource.""" - return (_common.files(package) / normalize_path(resource)).open( - 'r', encoding=encoding, errors=errors - ) - - -@deprecated -def read_text( - package: Package, - resource: Resource, - encoding: str = 'utf-8', - errors: str = 'strict', -) -> str: - """Return the decoded string of the resource. - - The decoding-related arguments have the same semantics as those of - bytes.decode(). - """ - with open_text(package, resource, encoding, errors) as fp: - return fp.read() - - -@deprecated -def contents(package: Package) -> Iterable[str]: - """Return an iterable of entries in `package`. - - Note that not all entries are resources. Specifically, directories are - not considered resources. Use `is_resource()` on each entry returned here - to check if it is a resource or not. - """ - return [path.name for path in _common.files(package).iterdir()] - - -@deprecated -def is_resource(package: Package, name: str) -> bool: - """True if `name` is a resource inside `package`. - - Directories are *not* resources. - """ - resource = normalize_path(name) - return any( - traversable.name == resource and traversable.is_file() - for traversable in _common.files(package).iterdir() - ) - - -@deprecated -def path( - package: Package, - resource: Resource, -) -> ContextManager[pathlib.Path]: - """A context manager providing a file path object to the resource. 
- - If the resource does not already exist on its own on the file system, - a temporary file will be created. If the file was created, the file - will be deleted upon exiting the context manager (no exception is - raised if the file was deleted prior to the context manager - exiting). - """ - return _common.as_file(_common.files(package) / normalize_path(resource)) diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/packaging/version.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/packaging/version.py deleted file mode 100644 index de9a09a4ed3b078b37e7490a6686f660ae935aca..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/packaging/version.py +++ /dev/null @@ -1,504 +0,0 @@ -# This file is dual licensed under the terms of the Apache License, Version -# 2.0, and the BSD License. See the LICENSE file in the root of this repository -# for complete details. - -import collections -import itertools -import re -import warnings -from typing import Callable, Iterator, List, Optional, SupportsInt, Tuple, Union - -from ._structures import Infinity, InfinityType, NegativeInfinity, NegativeInfinityType - -__all__ = ["parse", "Version", "LegacyVersion", "InvalidVersion", "VERSION_PATTERN"] - -InfiniteTypes = Union[InfinityType, NegativeInfinityType] -PrePostDevType = Union[InfiniteTypes, Tuple[str, int]] -SubLocalType = Union[InfiniteTypes, int, str] -LocalType = Union[ - NegativeInfinityType, - Tuple[ - Union[ - SubLocalType, - Tuple[SubLocalType, str], - Tuple[NegativeInfinityType, SubLocalType], - ], - ..., - ], -] -CmpKey = Tuple[ - int, Tuple[int, ...], PrePostDevType, PrePostDevType, PrePostDevType, LocalType -] -LegacyCmpKey = Tuple[int, Tuple[str, ...]] -VersionComparisonMethod = Callable[ - [Union[CmpKey, LegacyCmpKey], Union[CmpKey, LegacyCmpKey]], bool -] - -_Version = collections.namedtuple( - "_Version", ["epoch", "release", "dev", "pre", "post", "local"] -) - - -def parse(version: str) -> Union["LegacyVersion", "Version"]: - """ - Parse the given version string and return either a :class:`Version` object - or a :class:`LegacyVersion` object depending on if the given version is - a valid PEP 440 version or a legacy version. - """ - try: - return Version(version) - except InvalidVersion: - return LegacyVersion(version) - - -class InvalidVersion(ValueError): - """ - An invalid version was found, users should refer to PEP 440. - """ - - -class _BaseVersion: - _key: Union[CmpKey, LegacyCmpKey] - - def __hash__(self) -> int: - return hash(self._key) - - # Please keep the duplicated `isinstance` check - # in the six comparisons hereunder - # unless you find a way to avoid adding overhead function calls. 
- def __lt__(self, other: "_BaseVersion") -> bool: - if not isinstance(other, _BaseVersion): - return NotImplemented - - return self._key < other._key - - def __le__(self, other: "_BaseVersion") -> bool: - if not isinstance(other, _BaseVersion): - return NotImplemented - - return self._key <= other._key - - def __eq__(self, other: object) -> bool: - if not isinstance(other, _BaseVersion): - return NotImplemented - - return self._key == other._key - - def __ge__(self, other: "_BaseVersion") -> bool: - if not isinstance(other, _BaseVersion): - return NotImplemented - - return self._key >= other._key - - def __gt__(self, other: "_BaseVersion") -> bool: - if not isinstance(other, _BaseVersion): - return NotImplemented - - return self._key > other._key - - def __ne__(self, other: object) -> bool: - if not isinstance(other, _BaseVersion): - return NotImplemented - - return self._key != other._key - - -class LegacyVersion(_BaseVersion): - def __init__(self, version: str) -> None: - self._version = str(version) - self._key = _legacy_cmpkey(self._version) - - warnings.warn( - "Creating a LegacyVersion has been deprecated and will be " - "removed in the next major release", - DeprecationWarning, - ) - - def __str__(self) -> str: - return self._version - - def __repr__(self) -> str: - return f"" - - @property - def public(self) -> str: - return self._version - - @property - def base_version(self) -> str: - return self._version - - @property - def epoch(self) -> int: - return -1 - - @property - def release(self) -> None: - return None - - @property - def pre(self) -> None: - return None - - @property - def post(self) -> None: - return None - - @property - def dev(self) -> None: - return None - - @property - def local(self) -> None: - return None - - @property - def is_prerelease(self) -> bool: - return False - - @property - def is_postrelease(self) -> bool: - return False - - @property - def is_devrelease(self) -> bool: - return False - - -_legacy_version_component_re = re.compile(r"(\d+ | [a-z]+ | \.| -)", re.VERBOSE) - -_legacy_version_replacement_map = { - "pre": "c", - "preview": "c", - "-": "final-", - "rc": "c", - "dev": "@", -} - - -def _parse_version_parts(s: str) -> Iterator[str]: - for part in _legacy_version_component_re.split(s): - part = _legacy_version_replacement_map.get(part, part) - - if not part or part == ".": - continue - - if part[:1] in "0123456789": - # pad for numeric comparison - yield part.zfill(8) - else: - yield "*" + part - - # ensure that alpha/beta/candidate are before final - yield "*final" - - -def _legacy_cmpkey(version: str) -> LegacyCmpKey: - - # We hardcode an epoch of -1 here. A PEP 440 version can only have a epoch - # greater than or equal to 0. This will effectively put the LegacyVersion, - # which uses the defacto standard originally implemented by setuptools, - # as before all PEP 440 versions. - epoch = -1 - - # This scheme is taken from pkg_resources.parse_version setuptools prior to - # it's adoption of the packaging library. - parts: List[str] = [] - for part in _parse_version_parts(version.lower()): - if part.startswith("*"): - # remove "-" before a prerelease tag - if part < "*final": - while parts and parts[-1] == "*final-": - parts.pop() - - # remove trailing zeros from each series of numeric parts - while parts and parts[-1] == "00000000": - parts.pop() - - parts.append(part) - - return epoch, tuple(parts) - - -# Deliberately not anchored to the start and end of the string, to make it -# easier for 3rd party code to reuse -VERSION_PATTERN = r""" - v? 
- (?: - (?:(?P[0-9]+)!)? # epoch - (?P[0-9]+(?:\.[0-9]+)*) # release segment - (?P
                                          # pre-release
-            [-_\.]?
-            (?P(a|b|c|rc|alpha|beta|pre|preview))
-            [-_\.]?
-            (?P[0-9]+)?
-        )?
-        (?P                                         # post release
-            (?:-(?P[0-9]+))
-            |
-            (?:
-                [-_\.]?
-                (?Ppost|rev|r)
-                [-_\.]?
-                (?P[0-9]+)?
-            )
-        )?
-        (?P                                          # dev release
-            [-_\.]?
-            (?Pdev)
-            [-_\.]?
-            (?P[0-9]+)?
-        )?
-    )
-    (?:\+(?P[a-z0-9]+(?:[-_\.][a-z0-9]+)*))?       # local version
-"""
-
-
-class Version(_BaseVersion):
-
-    _regex = re.compile(r"^\s*" + VERSION_PATTERN + r"\s*$", re.VERBOSE | re.IGNORECASE)
-
-    def __init__(self, version: str) -> None:
-
-        # Validate the version and parse it into pieces
-        match = self._regex.search(version)
-        if not match:
-            raise InvalidVersion(f"Invalid version: '{version}'")
-
-        # Store the parsed out pieces of the version
-        self._version = _Version(
-            epoch=int(match.group("epoch")) if match.group("epoch") else 0,
-            release=tuple(int(i) for i in match.group("release").split(".")),
-            pre=_parse_letter_version(match.group("pre_l"), match.group("pre_n")),
-            post=_parse_letter_version(
-                match.group("post_l"), match.group("post_n1") or match.group("post_n2")
-            ),
-            dev=_parse_letter_version(match.group("dev_l"), match.group("dev_n")),
-            local=_parse_local_version(match.group("local")),
-        )
-
-        # Generate a key which will be used for sorting
-        self._key = _cmpkey(
-            self._version.epoch,
-            self._version.release,
-            self._version.pre,
-            self._version.post,
-            self._version.dev,
-            self._version.local,
-        )
-
-    def __repr__(self) -> str:
-        return f""
-
-    def __str__(self) -> str:
-        parts = []
-
-        # Epoch
-        if self.epoch != 0:
-            parts.append(f"{self.epoch}!")
-
-        # Release segment
-        parts.append(".".join(str(x) for x in self.release))
-
-        # Pre-release
-        if self.pre is not None:
-            parts.append("".join(str(x) for x in self.pre))
-
-        # Post-release
-        if self.post is not None:
-            parts.append(f".post{self.post}")
-
-        # Development release
-        if self.dev is not None:
-            parts.append(f".dev{self.dev}")
-
-        # Local version segment
-        if self.local is not None:
-            parts.append(f"+{self.local}")
-
-        return "".join(parts)
-
-    @property
-    def epoch(self) -> int:
-        _epoch: int = self._version.epoch
-        return _epoch
-
-    @property
-    def release(self) -> Tuple[int, ...]:
-        _release: Tuple[int, ...] = self._version.release
-        return _release
-
-    @property
-    def pre(self) -> Optional[Tuple[str, int]]:
-        _pre: Optional[Tuple[str, int]] = self._version.pre
-        return _pre
-
-    @property
-    def post(self) -> Optional[int]:
-        return self._version.post[1] if self._version.post else None
-
-    @property
-    def dev(self) -> Optional[int]:
-        return self._version.dev[1] if self._version.dev else None
-
-    @property
-    def local(self) -> Optional[str]:
-        if self._version.local:
-            return ".".join(str(x) for x in self._version.local)
-        else:
-            return None
-
-    @property
-    def public(self) -> str:
-        return str(self).split("+", 1)[0]
-
-    @property
-    def base_version(self) -> str:
-        parts = []
-
-        # Epoch
-        if self.epoch != 0:
-            parts.append(f"{self.epoch}!")
-
-        # Release segment
-        parts.append(".".join(str(x) for x in self.release))
-
-        return "".join(parts)
-
-    @property
-    def is_prerelease(self) -> bool:
-        return self.dev is not None or self.pre is not None
-
-    @property
-    def is_postrelease(self) -> bool:
-        return self.post is not None
-
-    @property
-    def is_devrelease(self) -> bool:
-        return self.dev is not None
-
-    @property
-    def major(self) -> int:
-        return self.release[0] if len(self.release) >= 1 else 0
-
-    @property
-    def minor(self) -> int:
-        return self.release[1] if len(self.release) >= 2 else 0
-
-    @property
-    def micro(self) -> int:
-        return self.release[2] if len(self.release) >= 3 else 0
-
-
-def _parse_letter_version(
-    letter: str, number: Union[str, bytes, SupportsInt]
-) -> Optional[Tuple[str, int]]:
-
-    if letter:
-        # We consider there to be an implicit 0 in a pre-release if there is
-        # not a numeral associated with it.
-        if number is None:
-            number = 0
-
-        # We normalize any letters to their lower case form
-        letter = letter.lower()
-
-        # We consider some words to be alternate spellings of other words and
-        # in those cases we want to normalize the spellings to our preferred
-        # spelling.
-        if letter == "alpha":
-            letter = "a"
-        elif letter == "beta":
-            letter = "b"
-        elif letter in ["c", "pre", "preview"]:
-            letter = "rc"
-        elif letter in ["rev", "r"]:
-            letter = "post"
-
-        return letter, int(number)
-    if not letter and number:
-        # We assume if we are given a number, but we are not given a letter
-        # then this is using the implicit post release syntax (e.g. 1.0-1)
-        letter = "post"
-
-        return letter, int(number)
-
-    return None
-
-
-_local_version_separators = re.compile(r"[\._-]")
-
-
-def _parse_local_version(local: str) -> Optional[LocalType]:
-    """
-    Takes a string like abc.1.twelve and turns it into ("abc", 1, "twelve").
-    """
-    if local is not None:
-        return tuple(
-            part.lower() if not part.isdigit() else int(part)
-            for part in _local_version_separators.split(local)
-        )
-    return None
-
-
-def _cmpkey(
-    epoch: int,
-    release: Tuple[int, ...],
-    pre: Optional[Tuple[str, int]],
-    post: Optional[Tuple[str, int]],
-    dev: Optional[Tuple[str, int]],
-    local: Optional[Tuple[SubLocalType]],
-) -> CmpKey:
-
-    # When we compare a release version, we want to compare it with all of the
-    # trailing zeros removed. So we'll use a reverse the list, drop all the now
-    # leading zeros until we come to something non zero, then take the rest
-    # re-reverse it back into the correct order and make it a tuple and use
-    # that for our sorting key.
-    _release = tuple(
-        reversed(list(itertools.dropwhile(lambda x: x == 0, reversed(release))))
-    )
-
-    # We need to "trick" the sorting algorithm to put 1.0.dev0 before 1.0a0.
-    # We'll do this by abusing the pre segment, but we _only_ want to do this
-    # if there is not a pre or a post segment. If we have one of those then
-    # the normal sorting rules will handle this case correctly.
-    if pre is None and post is None and dev is not None:
-        _pre: PrePostDevType = NegativeInfinity
-    # Versions without a pre-release (except as noted above) should sort after
-    # those with one.
-    elif pre is None:
-        _pre = Infinity
-    else:
-        _pre = pre
-
-    # Versions without a post segment should sort before those with one.
-    if post is None:
-        _post: PrePostDevType = NegativeInfinity
-
-    else:
-        _post = post
-
-    # Versions without a development segment should sort after those with one.
-    if dev is None:
-        _dev: PrePostDevType = Infinity
-
-    else:
-        _dev = dev
-
-    if local is None:
-        # Versions without a local segment should sort before those with one.
-        _local: LocalType = NegativeInfinity
-    else:
-        # Versions with a local segment need that segment parsed to implement
-        # the sorting rules in PEP440.
-        # - Alpha numeric segments sort before numeric segments
-        # - Alpha numeric segments sort lexicographically
-        # - Numeric segments sort numerically
-        # - Shorter versions sort before longer versions when the prefixes
-        #   match exactly
-        _local = tuple(
-            (i, "") if isinstance(i, int) else (NegativeInfinity, i) for i in local
-        )
-
-    return epoch, _release, _pre, _post, _dev, _local
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/archive_util.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/archive_util.py
deleted file mode 100644
index d8e10c13e154802f4a742ed4904f0071369aa2ad..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/archive_util.py
+++ /dev/null
@@ -1,213 +0,0 @@
-"""Utilities for extracting common archive formats"""
-
-import zipfile
-import tarfile
-import os
-import shutil
-import posixpath
-import contextlib
-from distutils.errors import DistutilsError
-
-from ._path import ensure_directory
-
-__all__ = [
-    "unpack_archive", "unpack_zipfile", "unpack_tarfile", "default_filter",
-    "UnrecognizedFormat", "extraction_drivers", "unpack_directory",
-]
-
-
-class UnrecognizedFormat(DistutilsError):
-    """Couldn't recognize the archive type"""
-
-
-def default_filter(src, dst):
-    """The default progress/filter callback; returns True for all files"""
-    return dst
-
-
-def unpack_archive(
-        filename, extract_dir, progress_filter=default_filter,
-        drivers=None):
-    """Unpack `filename` to `extract_dir`, or raise ``UnrecognizedFormat``
-
-    `progress_filter` is a function taking two arguments: a source path
-    internal to the archive ('/'-separated), and a filesystem path where it
-    will be extracted.  The callback must return the desired extract path
-    (which may be the same as the one passed in), or else ``None`` to skip
-    that file or directory.  The callback can thus be used to report on the
-    progress of the extraction, as well as to filter the items extracted or
-    alter their extraction paths.
-
-    `drivers`, if supplied, must be a non-empty sequence of functions with the
-    same signature as this function (minus the `drivers` argument), that raise
-    ``UnrecognizedFormat`` if they do not support extracting the designated
-    archive type.  The `drivers` are tried in sequence until one is found that
-    does not raise an error, or until all are exhausted (in which case
-    ``UnrecognizedFormat`` is raised).  If you do not supply a sequence of
-    drivers, the module's ``extraction_drivers`` constant will be used, which
-    means that ``unpack_zipfile`` and ``unpack_tarfile`` will be tried, in that
-    order.
-    """
-    for driver in drivers or extraction_drivers:
-        try:
-            driver(filename, extract_dir, progress_filter)
-        except UnrecognizedFormat:
-            continue
-        else:
-            return
-    else:
-        raise UnrecognizedFormat(
-            "Not a recognized archive type: %s" % filename
-        )
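 - -# Example usage (hypothetical paths), relying on the default drivers; the -# progress_filter below skips compiled files and extracts everything else: -# -# unpack_archive( -# "dist/example-1.0.tar.gz", "build/unpacked", -# progress_filter=lambda src, dst: None if src.endswith(".pyc") else dst, -# ) 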
-
-
-def unpack_directory(filename, extract_dir, progress_filter=default_filter):
-    """"Unpack" a directory, using the same interface as for archives
-
-    Raises ``UnrecognizedFormat`` if `filename` is not a directory
-    """
-    if not os.path.isdir(filename):
-        raise UnrecognizedFormat("%s is not a directory" % filename)
-
-    paths = {
-        filename: ('', extract_dir),
-    }
-    for base, dirs, files in os.walk(filename):
-        src, dst = paths[base]
-        for d in dirs:
-            paths[os.path.join(base, d)] = src + d + '/', os.path.join(dst, d)
-        for f in files:
-            target = os.path.join(dst, f)
-            target = progress_filter(src + f, target)
-            if not target:
-                # skip non-files
-                continue
-            ensure_directory(target)
-            f = os.path.join(base, f)
-            shutil.copyfile(f, target)
-            shutil.copystat(f, target)
-
-
-def unpack_zipfile(filename, extract_dir, progress_filter=default_filter):
-    """Unpack zip `filename` to `extract_dir`
-
-    Raises ``UnrecognizedFormat`` if `filename` is not a zipfile (as determined
-    by ``zipfile.is_zipfile()``).  See ``unpack_archive()`` for an explanation
-    of the `progress_filter` argument.
-    """
-
-    if not zipfile.is_zipfile(filename):
-        raise UnrecognizedFormat("%s is not a zip file" % (filename,))
-
-    with zipfile.ZipFile(filename) as z:
-        _unpack_zipfile_obj(z, extract_dir, progress_filter)
-
-
-def _unpack_zipfile_obj(zipfile_obj, extract_dir, progress_filter=default_filter):
-    """Internal/private API used by other parts of setuptools.
-    Similar to ``unpack_zipfile``, but receives an already opened :obj:`zipfile.ZipFile`
-    object instead of a filename.
-    """
-    for info in zipfile_obj.infolist():
-        name = info.filename
-
-        # don't extract absolute paths or ones with .. in them
-        if name.startswith('/') or '..' in name.split('/'):
-            continue
-
-        target = os.path.join(extract_dir, *name.split('/'))
-        target = progress_filter(name, target)
-        if not target:
-            continue
-        if name.endswith('/'):
-            # directory
-            ensure_directory(target)
-        else:
-            # file
-            ensure_directory(target)
-            data = zipfile_obj.read(info.filename)
-            with open(target, 'wb') as f:
-                f.write(data)
-        unix_attributes = info.external_attr >> 16
-        if unix_attributes:
-            os.chmod(target, unix_attributes)
-
-
-def _resolve_tar_file_or_dir(tar_obj, tar_member_obj):
-    """Resolve any links and extract link targets as normal files."""
-    while tar_member_obj is not None and (
-            tar_member_obj.islnk() or tar_member_obj.issym()):
-        linkpath = tar_member_obj.linkname
-        if tar_member_obj.issym():
-            base = posixpath.dirname(tar_member_obj.name)
-            linkpath = posixpath.join(base, linkpath)
-            linkpath = posixpath.normpath(linkpath)
-        tar_member_obj = tar_obj._getmember(linkpath)
-
-    is_file_or_dir = (
-        tar_member_obj is not None and
-        (tar_member_obj.isfile() or tar_member_obj.isdir())
-    )
-    if is_file_or_dir:
-        return tar_member_obj
-
-    raise LookupError('Got unknown file type')
-
-
-def _iter_open_tar(tar_obj, extract_dir, progress_filter):
-    """Emit member-destination pairs from a tar archive."""
-    # don't do any chowning!
-    tar_obj.chown = lambda *args: None
-
-    with contextlib.closing(tar_obj):
-        for member in tar_obj:
-            name = member.name
-            # don't extract absolute paths or ones with .. in them
-            if name.startswith('/') or '..' in name.split('/'):
-                continue
-
-            prelim_dst = os.path.join(extract_dir, *name.split('/'))
-
-            try:
-                member = _resolve_tar_file_or_dir(tar_obj, member)
-            except LookupError:
-                continue
-
-            final_dst = progress_filter(name, prelim_dst)
-            if not final_dst:
-                continue
-
-            if final_dst.endswith(os.sep):
-                final_dst = final_dst[:-1]
-
-            yield member, final_dst
-
-
-def unpack_tarfile(filename, extract_dir, progress_filter=default_filter):
-    """Unpack tar/tar.gz/tar.bz2 `filename` to `extract_dir`
-
-    Raises ``UnrecognizedFormat`` if `filename` is not a tarfile (as determined
-    by ``tarfile.open()``).  See ``unpack_archive()`` for an explanation
-    of the `progress_filter` argument.
-    """
-    try:
-        tarobj = tarfile.open(filename)
-    except tarfile.TarError as e:
-        raise UnrecognizedFormat(
-            "%s is not a compressed or uncompressed tar file" % (filename,)
-        ) from e
-
-    for member, final_dst in _iter_open_tar(
-            tarobj, extract_dir, progress_filter,
-    ):
-        try:
-            # XXX Ugh
-            tarobj._extract_member(member, final_dst)
-        except tarfile.ExtractError:
-            # chown/chmod/mkfifo/mknode/makedev failed
-            pass
-
-    return True
-
-
-extraction_drivers = unpack_directory, unpack_zipfile, unpack_tarfile
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/docs/tutorials/data_loading.md b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/docs/tutorials/data_loading.md
deleted file mode 100644
index 3a0e189ab9601f09f389d2a2096e30ef3fd48813..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/docs/tutorials/data_loading.md
+++ /dev/null
@@ -1,77 +0,0 @@
-
-# Use Custom Dataloaders
-
-## How the Existing Dataloader Works
-
-Detectron2 contains a builtin data loading pipeline.
-It's good to understand how it works, in case you need to write a custom one.
-
-Detectron2 provides two functions
-[build_detection_{train,test}_loader](../modules/data.html#detectron2.data.build_detection_train_loader)
-that create a default data loader from a given config.
-Here is how `build_detection_{train,test}_loader` work:
-
-1. It takes the name of a registered dataset (e.g., "coco_2017_train") and loads a `list[dict]` representing the dataset items
-   in a lightweight, canonical format. These dataset items are not yet ready to be used by the model (e.g., images are
-   not loaded into memory, random augmentations have not been applied, etc.).
-   Details about the dataset format and dataset registration can be found in
-   [datasets](datasets.html).
-2. Each dict in this list is mapped by a function ("mapper"):
-   * Users can customize this mapping function by specifying the "mapper" argument in
-        `build_detection_{train,test}_loader`. The default mapper is [DatasetMapper]( ../modules/data.html#detectron2.data.DatasetMapper).
-   * The output format of such function can be arbitrary, as long as it is accepted by the consumer of this data loader (usually the model).
-     The outputs of the default mapper, after batching, follow the default model input format documented in
-     [Use Models](https://detectron2.readthedocs.io/tutorials/models.html#model-input-format).
-   * The role of the mapper is to transform the lightweight, canonical representation of a dataset item into a format
-     that is ready for the model to consume (including, e.g., read images, perform random data augmentation and convert to torch Tensors).
-     If you would like to perform custom transformations to data, you often want a custom mapper.
-3. The outputs of the mapper are batched (simply into a list).
-4. This batched data is the output of the data loader. Typically, it's also the input of
-   `model.forward()`.
-
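 -Putting the four steps above together, here is a minimal sketch of driving the default loader end to end. The dataset name and batch size are illustrative, and the snippet assumes a `model` built via `build_model(cfg)`; treat it as a sketch rather than a complete training script: - -```python -from detectron2.config import get_cfg -from detectron2.data import build_detection_train_loader -from detectron2.modeling import build_model - -cfg = get_cfg() -cfg.DATASETS.TRAIN = ("coco_2017_train",) # step 1: a registered dataset name -cfg.SOLVER.IMS_PER_BATCH = 2 # how many dicts end up in each batch - -model = build_model(cfg) # the consumer of the loader's output - -data_loader = build_detection_train_loader(cfg) # default mapper (step 2) + batching (step 3) - -for batched_inputs in data_loader: # batched_inputs is a list[dict] - loss_dict = model(batched_inputs) # step 4: the batch feeds model.forward() - break # one iteration is enough to illustrate the flow -``` 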
-
-## Write a Custom Dataloader
-
-Using a different "mapper" with `build_detection_{train,test}_loader(mapper=)` works for most use cases
-of custom data loading.
-For example, if you want to resize all images to a fixed size for Mask R-CNN training, write this:
-
-```python
 -import copy - -import torch -from detectron2.data import build_detection_train_loader -from detectron2.data import transforms as T -from detectron2.data import detection_utils as utils 
-
-def mapper(dataset_dict):
-	# Implement a mapper, similar to the default DatasetMapper, but with your own customizations
-	dataset_dict = copy.deepcopy(dataset_dict)  # it will be modified by code below
-	image = utils.read_image(dataset_dict["file_name"], format="BGR")
-	image, transforms = T.apply_transform_gens([T.Resize((800, 800))], image)
-	dataset_dict["image"] = torch.as_tensor(image.transpose(2, 0, 1).astype("float32"))
-
-	annos = [
-		utils.transform_instance_annotations(obj, transforms, image.shape[:2])
-		for obj in dataset_dict.pop("annotations")
-		if obj.get("iscrowd", 0) == 0
-	]
-	instances = utils.annotations_to_instances(annos, image.shape[:2])
-	dataset_dict["instances"] = utils.filter_empty_instances(instances)
-	return dataset_dict
-
-data_loader = build_detection_train_loader(cfg, mapper=mapper)
-# use this dataloader instead of the default
-```
-Refer to [API documentation of detectron2.data](../modules/data.html) for details.
-
-If you want to change not only the mapper (e.g., to write different sampling or batching logic),
-you can write your own data loader. The data loader is simply a
-python iterator that produces [the format](models.html) your model accepts.
-You can implement it using any tools you like.
-
-## Use a Custom Dataloader
-
-If you use [DefaultTrainer](../modules/engine.html#detectron2.engine.defaults.DefaultTrainer),
-you can overwrite its `build_{train,test}_loader` method to use your own dataloader.
-See the [densepose dataloader](../../projects/DensePose/train_net.py)
-for an example.
-
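 -For the `DefaultTrainer` route, a minimal sketch could look like the following (it reuses the custom `mapper` defined above; the class name `MyTrainer` is only illustrative): - -```python -from detectron2.engine import DefaultTrainer -from detectron2.data import build_detection_train_loader - -class MyTrainer(DefaultTrainer): - @classmethod - def build_train_loader(cls, cfg): - # DefaultTrainer calls this hook to create its training dataloader, - # so returning a loader built with our mapper plugs it into training. - return build_detection_train_loader(cfg, mapper=mapper) -``` - 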
-If you write your own training loop, you can plug in your data loader easily.
diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/malloc_and_free.h b/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/malloc_and_free.h
deleted file mode 100644
index 3d72381b5b5e3be37526000b9e2e637f0817f368..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/malloc_and_free.h
+++ /dev/null
@@ -1,104 +0,0 @@
-/*
- *  Copyright 2008-2013 NVIDIA Corporation
- *
- *  Licensed under the Apache License, Version 2.0 (the "License");
- *  you may not use this file except in compliance with the License.
- *  You may obtain a copy of the License at
- *
- *      http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
- */
-
-#pragma once
-
-#include 
-
-#include 
-#include 
-#include 
-#include 
-#include 
-#ifdef THRUST_CACHING_DEVICE_MALLOC
-#include 
-#endif
-#include 
-#include 
-#include 
-
-namespace thrust
-{
-namespace cuda_cub {
-
-#ifdef THRUST_CACHING_DEVICE_MALLOC
-#define __CUB_CACHING_MALLOC
-#ifndef __CUDA_ARCH__
-inline cub::CachingDeviceAllocator &get_allocator()
-{
-  static cub::CachingDeviceAllocator g_allocator(true);
-  return g_allocator;
-}
-#endif
-#endif
-
-
-// note that malloc returns a raw pointer to avoid
-// depending on the heavyweight thrust/system/cuda/memory.h header
-template
-__host__ __device__
-void *malloc(execution_policy &, std::size_t n)
-{
-  void *result = 0;
-
-  if (THRUST_IS_HOST_CODE) {
-    #if THRUST_INCLUDE_HOST_CODE
-      #ifdef __CUB_CACHING_MALLOC
-        cub::CachingDeviceAllocator &alloc = get_allocator();
-        cudaError_t status = alloc.DeviceAllocate(&result, n);
-      #else
-        cudaError_t status = cudaMalloc(&result, n);
-      #endif
-
-      if(status != cudaSuccess)
-      {
-        cudaGetLastError(); // Clear global CUDA error state.
-        throw thrust::system::detail::bad_alloc(thrust::cuda_category().message(status).c_str());
-      }
-    #endif
-  } else {
-    #if THRUST_INCLUDE_DEVICE_CODE
-      result = thrust::raw_pointer_cast(thrust::malloc(thrust::seq, n));
-    #endif
-  }
-
-  return result;
-} // end malloc()
-
-
-template
-__host__ __device__
-void free(execution_policy &, Pointer ptr)
-{
-  if (THRUST_IS_HOST_CODE) {
-    #if THRUST_INCLUDE_HOST_CODE
-      #ifdef __CUB_CACHING_MALLOC
-        cub::CachingDeviceAllocator &alloc = get_allocator();
-        cudaError_t status = alloc.DeviceFree(thrust::raw_pointer_cast(ptr));
-      #else
-        cudaError_t status = cudaFree(thrust::raw_pointer_cast(ptr));
-      #endif
-      cuda_cub::throw_on_error(status, "device free failed");
-    #endif
-  } else {
-    #if THRUST_INCLUDE_DEVICE_CODE
-      thrust::free(thrust::seq, ptr);
-    #endif
-  }
-} // end free()
-
-}    // namespace cuda_cub
-} // end namespace thrust
diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/detail/adl/logical.h b/spaces/CVPR/LIVE/thrust/thrust/system/detail/adl/logical.h
deleted file mode 100644
index bdaad4d293a2695f0a1218e0cf828eb12406ad16..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/system/detail/adl/logical.h
+++ /dev/null
@@ -1,44 +0,0 @@
-/*
- *  Copyright 2008-2013 NVIDIA Corporation
- *
- *  Licensed under the Apache License, Version 2.0 (the "License");
- *  you may not use this file except in compliance with the License.
 - * You may obtain a copy of the License at 
- *
- *      http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
- */
-
-#pragma once
-
-#include 
-
-// the purpose of this header is to #include the logical.h header
-// of the sequential, host, and device systems. It should be #included in any
-// code which uses adl to dispatch logical
-
-#include 
-
-// SCons can't see through the #defines below to figure out what this header
-// includes, so we fake it out by specifying all possible files we might end up
-// including inside an #if 0.
-#if 0
-#include 
-#include 
-#include 
-#include 
-#endif
-
-#define __THRUST_HOST_SYSTEM_LOGICAL_HEADER <__THRUST_HOST_SYSTEM_ROOT/detail/logical.h>
-#include __THRUST_HOST_SYSTEM_LOGICAL_HEADER
-#undef __THRUST_HOST_SYSTEM_LOGICAL_HEADER
-
-#define __THRUST_DEVICE_SYSTEM_LOGICAL_HEADER <__THRUST_DEVICE_SYSTEM_ROOT/detail/logical.h>
-#include __THRUST_DEVICE_SYSTEM_LOGICAL_HEADER
-#undef __THRUST_DEVICE_SYSTEM_LOGICAL_HEADER
-
diff --git a/spaces/CVPR/regionclip-demo/detectron2/layers/csrc/deformable/deform_conv.h b/spaces/CVPR/regionclip-demo/detectron2/layers/csrc/deformable/deform_conv.h
deleted file mode 100644
index ec8c6c2fdb0274aefb86523894174f9ca58bbb43..0000000000000000000000000000000000000000
--- a/spaces/CVPR/regionclip-demo/detectron2/layers/csrc/deformable/deform_conv.h
+++ /dev/null
@@ -1,377 +0,0 @@
-// Copyright (c) Facebook, Inc. and its affiliates.
-#pragma once
-#include 
-
-namespace detectron2 {
-
-#if defined(WITH_CUDA) || defined(WITH_HIP)
-int deform_conv_forward_cuda(
-    at::Tensor input,
-    at::Tensor weight,
-    at::Tensor offset,
-    at::Tensor output,
-    at::Tensor columns,
-    at::Tensor ones,
-    int kW,
-    int kH,
-    int dW,
-    int dH,
-    int padW,
-    int padH,
-    int dilationW,
-    int dilationH,
-    int group,
-    int deformable_group,
-    int im2col_step);
-
-int deform_conv_backward_input_cuda(
-    at::Tensor input,
-    at::Tensor offset,
-    at::Tensor gradOutput,
-    at::Tensor gradInput,
-    at::Tensor gradOffset,
-    at::Tensor weight,
-    at::Tensor columns,
-    int kW,
-    int kH,
-    int dW,
-    int dH,
-    int padW,
-    int padH,
-    int dilationW,
-    int dilationH,
-    int group,
-    int deformable_group,
-    int im2col_step);
-
-int deform_conv_backward_parameters_cuda(
-    at::Tensor input,
-    at::Tensor offset,
-    at::Tensor gradOutput,
-    at::Tensor gradWeight, // at::Tensor gradBias,
-    at::Tensor columns,
-    at::Tensor ones,
-    int kW,
-    int kH,
-    int dW,
-    int dH,
-    int padW,
-    int padH,
-    int dilationW,
-    int dilationH,
-    int group,
-    int deformable_group,
-    float scale,
-    int im2col_step);
-
-void modulated_deform_conv_cuda_forward(
-    at::Tensor input,
-    at::Tensor weight,
-    at::Tensor bias,
-    at::Tensor ones,
-    at::Tensor offset,
-    at::Tensor mask,
-    at::Tensor output,
-    at::Tensor columns,
-    int kernel_h,
-    int kernel_w,
-    const int stride_h,
-    const int stride_w,
-    const int pad_h,
-    const int pad_w,
-    const int dilation_h,
-    const int dilation_w,
-    const int group,
-    const int deformable_group,
-    const bool with_bias);
-
-void modulated_deform_conv_cuda_backward(
-    at::Tensor input,
-    at::Tensor weight,
-    at::Tensor bias,
-    at::Tensor ones,
-    at::Tensor offset,
-    at::Tensor mask,
-    at::Tensor columns,
-    at::Tensor grad_input,
-    at::Tensor grad_weight,
-    at::Tensor grad_bias,
-    at::Tensor grad_offset,
-    at::Tensor grad_mask,
-    at::Tensor grad_output,
-    int kernel_h,
-    int kernel_w,
-    int stride_h,
-    int stride_w,
-    int pad_h,
-    int pad_w,
-    int dilation_h,
-    int dilation_w,
-    int group,
-    int deformable_group,
-    const bool with_bias);
-
-#endif
-
-inline int deform_conv_forward(
-    at::Tensor input,
-    at::Tensor weight,
-    at::Tensor offset,
-    at::Tensor output,
-    at::Tensor columns,
-    at::Tensor ones,
-    int kW,
-    int kH,
-    int dW,
-    int dH,
-    int padW,
-    int padH,
-    int dilationW,
-    int dilationH,
-    int group,
-    int deformable_group,
-    int im2col_step) {
-  if (input.is_cuda()) {
-#if defined(WITH_CUDA) || defined(WITH_HIP)
-    TORCH_CHECK(weight.is_cuda(), "weight tensor is not on GPU!");
-    TORCH_CHECK(offset.is_cuda(), "offset tensor is not on GPU!");
-    return deform_conv_forward_cuda(
-        input,
-        weight,
-        offset,
-        output,
-        columns,
-        ones,
-        kW,
-        kH,
-        dW,
-        dH,
-        padW,
-        padH,
-        dilationW,
-        dilationH,
-        group,
-        deformable_group,
-        im2col_step);
-#else
-    AT_ERROR("Not compiled with GPU support");
-#endif
-  }
-  AT_ERROR("Not implemented on the CPU");
-}
-
-inline int deform_conv_backward_input(
-    at::Tensor input,
-    at::Tensor offset,
-    at::Tensor gradOutput,
-    at::Tensor gradInput,
-    at::Tensor gradOffset,
-    at::Tensor weight,
-    at::Tensor columns,
-    int kW,
-    int kH,
-    int dW,
-    int dH,
-    int padW,
-    int padH,
-    int dilationW,
-    int dilationH,
-    int group,
-    int deformable_group,
-    int im2col_step) {
-  if (gradOutput.is_cuda()) {
-#if defined(WITH_CUDA) || defined(WITH_HIP)
-    TORCH_CHECK(input.is_cuda(), "input tensor is not on GPU!");
-    TORCH_CHECK(weight.is_cuda(), "weight tensor is not on GPU!");
-    TORCH_CHECK(offset.is_cuda(), "offset tensor is not on GPU!");
-    return deform_conv_backward_input_cuda(
-        input,
-        offset,
-        gradOutput,
-        gradInput,
-        gradOffset,
-        weight,
-        columns,
-        kW,
-        kH,
-        dW,
-        dH,
-        padW,
-        padH,
-        dilationW,
-        dilationH,
-        group,
-        deformable_group,
-        im2col_step);
-#else
-    AT_ERROR("Not compiled with GPU support");
-#endif
-  }
-  AT_ERROR("Not implemented on the CPU");
-}
-
-inline int deform_conv_backward_filter(
-    at::Tensor input,
-    at::Tensor offset,
-    at::Tensor gradOutput,
-    at::Tensor gradWeight, // at::Tensor gradBias,
-    at::Tensor columns,
-    at::Tensor ones,
-    int kW,
-    int kH,
-    int dW,
-    int dH,
-    int padW,
-    int padH,
-    int dilationW,
-    int dilationH,
-    int group,
-    int deformable_group,
-    float scale,
-    int im2col_step) {
-  if (gradOutput.is_cuda()) {
-#if defined(WITH_CUDA) || defined(WITH_HIP)
-    TORCH_CHECK(input.is_cuda(), "input tensor is not on GPU!");
-    TORCH_CHECK(offset.is_cuda(), "offset tensor is not on GPU!");
-    return deform_conv_backward_parameters_cuda(
-        input,
-        offset,
-        gradOutput,
-        gradWeight,
-        columns,
-        ones,
-        kW,
-        kH,
-        dW,
-        dH,
-        padW,
-        padH,
-        dilationW,
-        dilationH,
-        group,
-        deformable_group,
-        scale,
-        im2col_step);
-#else
-    AT_ERROR("Not compiled with GPU support");
-#endif
-  }
-  AT_ERROR("Not implemented on the CPU");
-}
-
-inline void modulated_deform_conv_forward(
-    at::Tensor input,
-    at::Tensor weight,
-    at::Tensor bias,
-    at::Tensor ones,
-    at::Tensor offset,
-    at::Tensor mask,
-    at::Tensor output,
-    at::Tensor columns,
-    int kernel_h,
-    int kernel_w,
-    const int stride_h,
-    const int stride_w,
-    const int pad_h,
-    const int pad_w,
-    const int dilation_h,
-    const int dilation_w,
-    const int group,
-    const int deformable_group,
-    const bool with_bias) {
-  if (input.is_cuda()) {
-#if defined(WITH_CUDA) || defined(WITH_HIP)
-    TORCH_CHECK(weight.is_cuda(), "weight tensor is not on GPU!");
-    TORCH_CHECK(bias.is_cuda(), "bias tensor is not on GPU!");
-    TORCH_CHECK(offset.is_cuda(), "offset tensor is not on GPU!");
-    return modulated_deform_conv_cuda_forward(
-        input,
-        weight,
-        bias,
-        ones,
-        offset,
-        mask,
-        output,
-        columns,
-        kernel_h,
-        kernel_w,
-        stride_h,
-        stride_w,
-        pad_h,
-        pad_w,
-        dilation_h,
-        dilation_w,
-        group,
-        deformable_group,
-        with_bias);
-#else
-    AT_ERROR("Not compiled with GPU support");
-#endif
-  }
-  AT_ERROR("Not implemented on the CPU");
-}
-
-inline void modulated_deform_conv_backward(
-    at::Tensor input,
-    at::Tensor weight,
-    at::Tensor bias,
-    at::Tensor ones,
-    at::Tensor offset,
-    at::Tensor mask,
-    at::Tensor columns,
-    at::Tensor grad_input,
-    at::Tensor grad_weight,
-    at::Tensor grad_bias,
-    at::Tensor grad_offset,
-    at::Tensor grad_mask,
-    at::Tensor grad_output,
-    int kernel_h,
-    int kernel_w,
-    int stride_h,
-    int stride_w,
-    int pad_h,
-    int pad_w,
-    int dilation_h,
-    int dilation_w,
-    int group,
-    int deformable_group,
-    const bool with_bias) {
-  if (grad_output.is_cuda()) {
-#if defined(WITH_CUDA) || defined(WITH_HIP)
-    TORCH_CHECK(input.is_cuda(), "input tensor is not on GPU!");
-    TORCH_CHECK(weight.is_cuda(), "weight tensor is not on GPU!");
-    TORCH_CHECK(bias.is_cuda(), "bias tensor is not on GPU!");
-    TORCH_CHECK(offset.is_cuda(), "offset tensor is not on GPU!");
-    return modulated_deform_conv_cuda_backward(
-        input,
-        weight,
-        bias,
-        ones,
-        offset,
-        mask,
-        columns,
-        grad_input,
-        grad_weight,
-        grad_bias,
-        grad_offset,
-        grad_mask,
-        grad_output,
-        kernel_h,
-        kernel_w,
-        stride_h,
-        stride_w,
-        pad_h,
-        pad_w,
-        dilation_h,
-        dilation_w,
-        group,
-        deformable_group,
-        with_bias);
-#else
-    AT_ERROR("Not compiled with GPU support");
-#endif
-  }
-  AT_ERROR("Not implemented on the CPU");
-}
-
-} // namespace detectron2
diff --git a/spaces/CVPR/regionclip-demo/detectron2/structures/tsv_file.py b/spaces/CVPR/regionclip-demo/detectron2/structures/tsv_file.py
deleted file mode 100644
index 207affe2c42020954925db07a33f8fe6688ed498..0000000000000000000000000000000000000000
--- a/spaces/CVPR/regionclip-demo/detectron2/structures/tsv_file.py
+++ /dev/null
@@ -1,352 +0,0 @@
-import logging
-import os
-import json
-import os.path as op
-import numpy as np
-from typing import List, Union
-from collections import OrderedDict
-
-def generate_lineidx(filein, idxout):
-    idxout_tmp = idxout + '.tmp'
-    with open(filein, 'r') as tsvin, open(idxout_tmp,'w') as tsvout:
-        fsize = os.fstat(tsvin.fileno()).st_size
-        fpos = 0
-        while fpos!=fsize:
-            tsvout.write(str(fpos)+"\n")
-            tsvin.readline()
-            fpos = tsvin.tell()
-    os.rename(idxout_tmp, idxout)
-
-
-def read_to_character(fp, c):
-    result = []
-    while True:
-        s = fp.read(32)
-        assert s != ''
-        if c in s:
-            result.append(s[: s.index(c)])
-            break
-        else:
-            result.append(s)
-    return ''.join(result)
-
-
-class TSVFile(object):
 - def __init__(self, tsv_file, if_generate_lineidx=False): 
-        self.tsv_file = tsv_file
-        self.lineidx = op.splitext(tsv_file)[0] + '.lineidx'
-        self._fp = None
-        self._lineidx = None
 - # the object remembers the process (pid) that opened the file. - # If the pid is not equal to the current pid, we will re-open the file. 
-        self.pid = None
-        # generate lineidx if not exist
 - if not op.isfile(self.lineidx) and if_generate_lineidx: 
-            generate_lineidx(self.tsv_file, self.lineidx)
-
-    def __del__(self):
-        if self._fp:
-            self._fp.close()
-
-    def __str__(self):
-        return "TSVFile(tsv_file='{}')".format(self.tsv_file)
-
-    def __repr__(self):
-        return str(self)
-
-    def num_rows(self):
-        self._ensure_lineidx_loaded()
-        return len(self._lineidx)
-
-    def seek(self, idx):
-        self._ensure_tsv_opened()
-        self._ensure_lineidx_loaded()
-        try:
-            pos = self._lineidx[idx]
-        except Exception:
-            logging.info('{}-{}'.format(self.tsv_file, idx))
-            raise
-        self._fp.seek(pos)
-        return [s.strip() for s in self._fp.readline().split('\t')]
-
-    def seek_first_column(self, idx):
-        self._ensure_tsv_opened()
-        self._ensure_lineidx_loaded()
-        pos = self._lineidx[idx]
-        self._fp.seek(pos)
-        return read_to_character(self._fp, '\t')
-
-    def get_key(self, idx):
-        return self.seek_first_column(idx)
-
-    def __getitem__(self, index):
-        return self.seek(index)
-
-    def __len__(self):
-        return self.num_rows()
-
-    def _ensure_lineidx_loaded(self):
-        if self._lineidx is None:
-            # print('loading lineidx: {}'.format(self.lineidx))
-
-            with open(self.lineidx, 'r') as fp:
-                self._lineidx = [int(i.strip()) for i in fp.readlines()]
-
-    def _ensure_tsv_opened(self):
-        if self._fp is None:
-            self._fp = open(self.tsv_file, 'r')
-            self.pid = os.getpid()
-
-        if self.pid != os.getpid():
-            # print('re-open {} because the process id changed'.format(self.tsv_file))
-            self._fp = open(self.tsv_file, 'r')
-            self.pid = os.getpid()
-
-class TSVFileNew(object):
-    def __init__(self,
-                 tsv_file: str,
-                 if_generate_lineidx: bool = False,
-                 lineidx: str = None,
-                 class_selector: List[str] = None):
-        self.tsv_file = tsv_file
-        self.lineidx = op.splitext(tsv_file)[0] + '.lineidx' \
-            if not lineidx else lineidx
-        self.linelist = op.splitext(tsv_file)[0] + '.linelist'
-        self.chunks = op.splitext(tsv_file)[0] + '.chunks'
-        self._fp = None
-        self._lineidx = None
-        self._sample_indices = None
-        self._class_boundaries = None
-        self._class_selector = class_selector
-        # remember the pid of the process that opened the file.
-        # If the pid differs from the current pid, we re-open the file.
-        self.pid = None
-        # generate the lineidx file if it does not exist
-        if not op.isfile(self.lineidx) and if_generate_lineidx:
-            generate_lineidx(self.tsv_file, self.lineidx)
-
-    def __del__(self):
-        if self._fp:
-            self._fp.close()
-
-    def __str__(self):
-        return "TSVFile(tsv_file='{}')".format(self.tsv_file)
-
-    def __repr__(self):
-        return str(self)
-
-    def get_class_boundaries(self):
-        return self._class_boundaries
-
-    def num_rows(self):
-        self._ensure_lineidx_loaded()
-        return len(self._sample_indices)
-
-    def seek(self, idx: int):
-        self._ensure_tsv_opened()
-        self._ensure_lineidx_loaded()
-        try:
-            pos = self._lineidx[self._sample_indices[idx]]
-        except Exception:
-            logging.info('=> {}-{}'.format(self.tsv_file, idx))
-            raise
-        self._fp.seek(pos)
-        return [s.strip() for s in self._fp.readline().split('\t')]
-
-    def seek_first_column(self, idx: int):
-        self._ensure_tsv_opened()
-        self._ensure_lineidx_loaded()
-        pos = self._lineidx[idx]
-        self._fp.seek(pos)
-        return read_to_character(self._fp, '\t')
-
-    def get_key(self, idx: int):
-        return self.seek_first_column(idx)
-
-    def __getitem__(self, index: int):
-        return self.seek(index)
-
-    def __len__(self):
-        return self.num_rows()
-
-    def _ensure_lineidx_loaded(self):
-        if self._lineidx is None:
-            # print('=> loading lineidx: {}'.format(self.lineidx))
-            with open(self.lineidx, 'r') as fp:
-                lines = fp.readlines()
-                lines = [line.strip() for line in lines]
-                self._lineidx = [int(line) for line in lines]
-            # except:
-            #     print("error in loading lineidx file {}, regenerate it".format(self.lineidx))
-            #     generate_lineidx(self.tsv_file, self.lineidx)
-            #     with open(self.lineidx, 'r') as fp:
-            #         lines = fp.readlines()
-            #         lines = [line.strip() for line in lines]
-            #         self._lineidx = [int(line) for line in lines]                
-            # read the line list if it exists
-            linelist = None
-            if op.isfile(self.linelist):
-                with open(self.linelist, 'r') as fp:
-                    linelist = sorted(
-                        [
-                            int(line.strip())
-                            for line in fp.readlines()
-                        ]
-                    )
-            if op.isfile(self.chunks) and self._class_selector:
-                self._sample_indices = []
-                self._class_boundaries = []
-                class_boundaries = json.load(open(self.chunks, 'r'))
-                for class_name, boundary in class_boundaries.items():
-                    start = len(self._sample_indices)
-                    if class_name in self._class_selector:
-                        for idx in range(boundary[0], boundary[1] + 1):
-                            # NOTE: potentially slow when linelist is long, try to speed it up
-                            if linelist and idx not in linelist:
-                                continue
-                            self._sample_indices.append(idx)
-                    end = len(self._sample_indices)
-                    self._class_boundaries.append((start, end))
-            else:
-                if linelist:
-                    self._sample_indices = linelist
-                else:
-                    self._sample_indices = list(range(len(self._lineidx)))
-
-    def _ensure_tsv_opened(self):
-        if self._fp is None:
-            self._fp = open(self.tsv_file, 'r')
-            self.pid = os.getpid()
-
-        if self.pid != os.getpid():
-            logging.debug('=> re-open {} because the process id changed'.format(self.tsv_file))
-            self._fp = open(self.tsv_file, 'r')
-            self.pid = os.getpid()
-
-class LRU(OrderedDict):
-    """Limit size, evicting the least recently looked-up key when full.
-    https://docs.python.org/3/library/collections.html#collections.OrderedDict
-    """
-
-    def __init__(self, maxsize=4, *args, **kwds):
-        self.maxsize = maxsize
-        super().__init__(*args, **kwds)
-
-    def __getitem__(self, key):
-        value = super().__getitem__(key)
-        self.move_to_end(key)
-        return value
-
-    def __setitem__(self, key, value):
-        if key in self:
-            self.move_to_end(key)
-        super().__setitem__(key, value)
-        if len(self) > self.maxsize:
-            oldest = next(iter(self))
-            del self[oldest]
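
The LRU class above is a small OrderedDict-based cache that CompositeTSVFile uses to cap the number of open per-chunk TSV handles, evicting the least recently used entry once maxsize is exceeded. A minimal sketch of that eviction behavior; the keys below are made up for illustration and assume the LRU class defined above:

```python
cache = LRU(maxsize=2)
cache["a.tsv"] = "handle-a"
cache["b.tsv"] = "handle-b"
_ = cache["a.tsv"]           # a lookup moves "a.tsv" to the most-recently-used end
cache["c.tsv"] = "handle-c"  # exceeds maxsize, so the least recently used key ("b.tsv") is evicted
print(list(cache))           # ['a.tsv', 'c.tsv']
```
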
-
-
-class CompositeTSVFile:
-    def __init__(self,
-                 file_list: Union[str, list],
-                 root: str = '.',
-                 class_selector: List[str] = None):
-        if isinstance(file_list, str):
-            self.file_list = load_list_file(file_list)
-        else:
-            assert isinstance(file_list, list)
-            self.file_list = file_list
-
-        self.root = root
-        self.cache = LRU()
-        self.tsvs = None
-        self.chunk_sizes = None
-        self.accum_chunk_sizes = None
-        self._class_selector = class_selector
-        self._class_boundaries = None
-        self.initialized = False
-        self.initialize()
-
-    def get_key(self, index: int):
-        idx_source, idx_row = self._calc_chunk_idx_row(index)
-        k = self.tsvs[idx_source].get_key(idx_row)
-        return '_'.join([self.file_list[idx_source], k])
-
-    def get_class_boundaries(self):
-        return self._class_boundaries
-
-    def get_chunk_size(self):
-        return self.chunk_sizes
-
-    def num_rows(self):
-        return sum(self.chunk_sizes)
-
-    def _calc_chunk_idx_row(self, index: int):
-        idx_chunk = 0
-        idx_row = index
-        while index >= self.accum_chunk_sizes[idx_chunk]:
-            idx_chunk += 1
-            idx_row = index - self.accum_chunk_sizes[idx_chunk-1]
-        return idx_chunk, idx_row
-
-
-    def __getitem__(self, index: int):
-        idx_source, idx_row = self._calc_chunk_idx_row(index)
-        if idx_source not in self.cache:
-            self.cache[idx_source] = TSVFileNew(
-                op.join(self.root, self.file_list[idx_source]),
-                class_selector=self._class_selector
-            )
-        return self.cache[idx_source].seek(idx_row)
-
-    def __len__(self):
-        return sum(self.chunk_sizes)
-
-    def initialize(self):
-        """
-        This function has to be called in __init__ if cache_policy is
-        enabled; thus, we always call it in __init__ to keep things simple.
-        """
-        if self.initialized:
-            return
-        tsvs = [
-            TSVFileNew(
-                op.join(self.root, f),
-                class_selector=self._class_selector
-            ) for f in self.file_list
-        ]
-        logging.info("Calculating chunk sizes ...")
-        self.chunk_sizes = [len(tsv) for tsv in tsvs]
-
-        self.accum_chunk_sizes = [0]
-        for size in self.chunk_sizes:
-            self.accum_chunk_sizes += [self.accum_chunk_sizes[-1] + size]
-        self.accum_chunk_sizes = self.accum_chunk_sizes[1:]
-
-        if (
-            self._class_selector
-            and all([tsv.get_class_boundaries() for tsv in tsvs])
-        ):
-            """
-            Note: When using CompositeTSVFile, make sure that the classes contained in each
-            tsv file do not overlap. Otherwise, the class boundaries won't be correct.
-            """
-            self._class_boundaries = []
-            offset = 0
-            for tsv in tsvs:
-                boundaries = tsv.get_class_boundaries()
-                for bound in boundaries:
-                    self._class_boundaries.append((bound[0] + offset, bound[1] + offset))
-                offset += len(tsv)
-        # NOTE: in the current setting, get_key is not used during training, so delete tsvs to save memory
-        del tsvs
-        self.initialized = True
-
-
-def load_list_file(fname: str) -> List[str]:
-    with open(fname, 'r') as fp:
-        lines = fp.readlines()
-    result = [line.strip() for line in lines]
-    if len(result) > 0 and result[-1] == '':
-        result = result[:-1]
-    return result
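
CompositeTSVFile flattens several per-chunk TSV files into a single index space: initialize() records the chunk lengths, accum_chunk_sizes keeps their running totals, and _calc_chunk_idx_row walks those totals to turn a global row index into a (chunk, row) pair. A standalone sketch of that mapping with made-up chunk sizes (bisect is used here only for brevity; the class does the same thing with a linear scan):

```python
from bisect import bisect_right
from itertools import accumulate

chunk_sizes = [3, 5, 2]                # hypothetical lengths of three TSV chunks
accum = list(accumulate(chunk_sizes))  # [3, 8, 10], as built in initialize()

def calc_chunk_idx_row(index: int):
    """Map a global row index to (chunk index, row index within that chunk)."""
    idx_chunk = bisect_right(accum, index)
    idx_row = index - (accum[idx_chunk - 1] if idx_chunk > 0 else 0)
    return idx_chunk, idx_row

assert calc_chunk_idx_row(0) == (0, 0)   # first row of the first chunk
assert calc_chunk_idx_row(3) == (1, 0)   # first row of the second chunk
assert calc_chunk_idx_row(9) == (2, 1)   # last row overall
```
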
diff --git a/spaces/CVPR/transfiner/configs/common/coco_schedule.py b/spaces/CVPR/transfiner/configs/common/coco_schedule.py
deleted file mode 100644
index 355e66a1d213cb599a7ffe55089d854089c8ead2..0000000000000000000000000000000000000000
--- a/spaces/CVPR/transfiner/configs/common/coco_schedule.py
+++ /dev/null
@@ -1,47 +0,0 @@
-from fvcore.common.param_scheduler import MultiStepParamScheduler
-
-from detectron2.config import LazyCall as L
-from detectron2.solver import WarmupParamScheduler
-
-
-def default_X_scheduler(num_X):
-    """
-    Returns the config for a default multi-step LR scheduler such as "1x", "3x",
-    commonly referred to in papers, where every 1x has the total length of 1440k
-    training images (~12 COCO epochs). LR is decayed twice at the end of training
-    following the strategy defined in "Rethinking ImageNet Pretraining", Sec 4.
-
-    Args:
-        num_X: a positive real number
-
-    Returns:
-        DictConfig: configs that define the multiplier for LR during training
-    """
-    # total number of iterations assuming 16 batch size, using 1440000/16=90000
-    total_steps_16bs = num_X * 90000
-
-    if num_X <= 2:
-        scheduler = L(MultiStepParamScheduler)(
-            values=[1.0, 0.1, 0.01],
-            # note that scheduler is scale-invariant. This is equivalent to
-            # milestones=[6, 8, 9]
-            milestones=[60000, 80000, 90000],
-        )
-    else:
-        scheduler = L(MultiStepParamScheduler)(
-            values=[1.0, 0.1, 0.01],
-            milestones=[total_steps_16bs - 60000, total_steps_16bs - 20000, total_steps_16bs],
-        )
-    return L(WarmupParamScheduler)(
-        scheduler=scheduler,
-        warmup_length=1000 / total_steps_16bs,
-        warmup_method="linear",
-        warmup_factor=0.001,
-    )
-
-
-lr_multiplier_1x = default_X_scheduler(1)
-lr_multiplier_2x = default_X_scheduler(2)
-lr_multiplier_3x = default_X_scheduler(3)
-lr_multiplier_6x = default_X_scheduler(6)
-lr_multiplier_9x = default_X_scheduler(9)
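
As a worked example of the docstring above: with the assumed batch size of 16, one "x" is 1440000 / 16 = 90000 iterations, so default_X_scheduler(3) decays the LR at iterations 210000 and 250000, ends at 270000, and uses a linear warmup of 1000 iterations (warmup_length = 1000 / 270000). A small plain-Python check of that arithmetic, independent of detectron2:

```python
def schedule_for(num_X: float):
    """Reproduce the milestone arithmetic of default_X_scheduler (batch size 16 assumed)."""
    total = int(num_X * 90000)
    if num_X <= 2:
        milestones = [60000, 80000, 90000]
    else:
        milestones = [total - 60000, total - 20000, total]
    return milestones, 1000 / total

print(schedule_for(1))  # ([60000, 80000, 90000], ~0.0111)
print(schedule_for(3))  # ([210000, 250000, 270000], ~0.0037)
```
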
diff --git a/spaces/Cahlil/Speech-Recognition-with-Speaker-Segmentation/README.md b/spaces/Cahlil/Speech-Recognition-with-Speaker-Segmentation/README.md
deleted file mode 100644
index 27033aa0cf87aa17bf6e8746ed30984e349aee41..0000000000000000000000000000000000000000
--- a/spaces/Cahlil/Speech-Recognition-with-Speaker-Segmentation/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Speech Recognition with Speaker Segmentation
-emoji: 🚀
-colorFrom: purple
-colorTo: green
-sdk: gradio
-sdk_version: 2.8.14
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/Caoyunkang/Segment-Any-Anomaly/GroundingDINO/groundingdino/util/inference.py b/spaces/Caoyunkang/Segment-Any-Anomaly/GroundingDINO/groundingdino/util/inference.py
deleted file mode 100644
index 8168b96ca51e6e494c7c675c2f4a610e21b095d6..0000000000000000000000000000000000000000
--- a/spaces/Caoyunkang/Segment-Any-Anomaly/GroundingDINO/groundingdino/util/inference.py
+++ /dev/null
@@ -1,98 +0,0 @@
-from typing import Tuple, List
-
-import cv2
-import numpy as np
-import supervision as sv
-import torch
-from PIL import Image
-from torchvision.ops import box_convert
-
-import groundingdino.datasets.transforms as T
-from groundingdino.models import build_model
-from groundingdino.util.misc import clean_state_dict
-from groundingdino.util.slconfig import SLConfig
-from groundingdino.util.utils import get_phrases_from_posmap
-
-
-def preprocess_caption(caption: str) -> str:
-    result = caption.lower().strip()
-    if result.endswith("."):
-        return result
-    return result + "."
-
-
-def load_model(model_config_path: str, model_checkpoint_path: str, device: str = "cuda"):
-    args = SLConfig.fromfile(model_config_path)
-    args.device = device
-    model = build_model(args)
-    checkpoint = torch.load(model_checkpoint_path, map_location="cpu")
-    model.load_state_dict(clean_state_dict(checkpoint["model"]), strict=False)
-    model.eval()
-    return model
-
-
-def load_image(image_path: str) -> Tuple[np.ndarray, torch.Tensor]:
-    transform = T.Compose(
-        [
-            T.RandomResize([800], max_size=1333),
-            T.ToTensor(),
-            T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
-        ]
-    )
-    image_source = Image.open(image_path).convert("RGB")
-    image = np.asarray(image_source)
-    image_transformed, _ = transform(image_source, None)
-    return image, image_transformed
-
-
-def predict(
-        model,
-        image: torch.Tensor,
-        caption: str,
-        box_threshold: float,
-        text_threshold: float,
-        device: str = "cuda"
-) -> Tuple[torch.Tensor, torch.Tensor, List[str]]:
-    caption = preprocess_caption(caption=caption)
-
-    model = model.to(device)
-    image = image.to(device)
-
-    with torch.no_grad():
-        outputs = model(image[None], captions=[caption])
-
-    prediction_logits = outputs["pred_logits"].cpu().sigmoid()[0]  # prediction_logits.shape = (nq, 256)
-    prediction_boxes = outputs["pred_boxes"].cpu()[0]  # prediction_boxes.shape = (nq, 4)
-
-    mask = prediction_logits.max(dim=1)[0] > box_threshold
-    logits = prediction_logits[mask]  # logits.shape = (n, 256)
-    boxes = prediction_boxes[mask]  # boxes.shape = (n, 4)
-
-    tokenizer = model.tokenizer
-    tokenized = tokenizer(caption)
-
-    phrases = [
-        get_phrases_from_posmap(logit > text_threshold, tokenized, tokenizer).replace('.', '')
-        for logit
-        in logits
-    ]
-
-    return boxes, logits.max(dim=1)[0], phrases
-
-
-def annotate(image_source: np.ndarray, boxes: torch.Tensor, logits: torch.Tensor, phrases: List[str]) -> np.ndarray:
-    h, w, _ = image_source.shape
-    boxes = boxes * torch.Tensor([w, h, w, h])
-    xyxy = box_convert(boxes=boxes, in_fmt="cxcywh", out_fmt="xyxy").numpy()
-    detections = sv.Detections(xyxy=xyxy)
-
-    labels = [
-        f"{phrase} {logit:.2f}"
-        for phrase, logit
-        in zip(phrases, logits)
-    ]
-
-    box_annotator = sv.BoxAnnotator()
-    annotated_frame = cv2.cvtColor(image_source, cv2.COLOR_RGB2BGR)
-    annotated_frame = box_annotator.annotate(scene=annotated_frame, detections=detections, labels=labels)
-    return annotated_frame
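
For context, the helpers in this file are typically chained as load_model → load_image → predict → annotate. A hypothetical end-to-end sketch; the config, checkpoint, and image paths as well as the thresholds below are placeholders, not values taken from this repository:

```python
import cv2

# Placeholder paths and thresholds for illustration only.
model = load_model("groundingdino_config.py", "groundingdino_checkpoint.pth", device="cuda")
image_source, image_transformed = load_image("example.jpg")

boxes, logits, phrases = predict(
    model=model,
    image=image_transformed,
    caption="anomalous region",
    box_threshold=0.35,
    text_threshold=0.25,
)

annotated = annotate(image_source=image_source, boxes=boxes, logits=logits, phrases=phrases)
cv2.imwrite("annotated.jpg", annotated)  # annotate() already returns a BGR frame
```
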
diff --git a/spaces/ChandraMohanNayal/AutoGPT/ui/utils.py b/spaces/ChandraMohanNayal/AutoGPT/ui/utils.py
deleted file mode 100644
index 71703e2009afac0582300f5d99a91ddec4119e04..0000000000000000000000000000000000000000
--- a/spaces/ChandraMohanNayal/AutoGPT/ui/utils.py
+++ /dev/null
@@ -1,31 +0,0 @@
-import os
-import re
-
-def format_directory(directory):
-    output = []
-    def helper(directory, level, output):
-        files = os.listdir(directory)
-        for i, item in enumerate(files):
-            is_folder = os.path.isdir(os.path.join(directory, item))
-            joiner = "├── " if i < len(files) - 1 else "└── "
-            item_html = item + "/" if is_folder else f"{item}"
-            output.append("│   " * level + joiner + item_html)
-            if is_folder:
-                helper(os.path.join(directory, item), level + 1, output)
-    output.append(os.path.basename(directory) + "/")
-    helper(directory, 1, output)
-    return "\n".join(output)
-
-DOWNLOAD_OUTPUTS_JS = """
-() => {
-  const a = document.createElement('a');
-  a.href = 'file=outputs.zip';
-  a.download = 'outputs.zip';
-  document.body.appendChild(a);
-  a.click();
-  document.body.removeChild(a);
-}"""
-
-def remove_color(text):
-    ansi_escape = re.compile(r'\x1B(?:[@-Z\\-_]|\[[0-?]*[ -/]*[@-~])')
-    return ansi_escape.sub('', text)
\ No newline at end of file
diff --git a/spaces/CognitiveLabs/GPT-auto-webscraping/chains/output_format/templates.py b/spaces/CognitiveLabs/GPT-auto-webscraping/chains/output_format/templates.py
deleted file mode 100644
index 2d2ac355cce9be1709715c1a714f3e336ac9c0d2..0000000000000000000000000000000000000000
--- a/spaces/CognitiveLabs/GPT-auto-webscraping/chains/output_format/templates.py
+++ /dev/null
@@ -1,30 +0,0 @@
-from langchain.prompts import SystemMessagePromptTemplate, HumanMessagePromptTemplate, ChatPromptTemplate, PromptTemplate
-
-# prompt templates
-system_template_output_format = PromptTemplate(
-    input_variables = ['html_content'],
-    template='''You are a helpful assistant that helps people extract JSON information from HTML content.
-
-    The input is HTML content.
-
-    The expected output is a JSON document containing the relevant information from the following HTML: {html_content}
-
-    Try to extract as much information as possible, including images, links, etc.
-
-    The assistant's answer should ONLY contain the JSON information, without any additional word or character.
-
-    The JSON output must be at most 1 level deep.
-
-    The expected output format is an array of objects.
-    ''')
-
-human_template_output_format = PromptTemplate(
-    input_variables = ['html_content'], 
-    template='this is the html content: {html_content}'
-)
-
-# chat prompt objects
-system_message_prompt = SystemMessagePromptTemplate.from_template(system_template_output_format.template)
-human_message_prompt = HumanMessagePromptTemplate.from_template(human_template_output_format.template)
-output_format_chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])
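
A minimal usage sketch of the prompt objects above; the HTML snippet is made up, and the rendered messages would normally be passed on to a chat model:

```python
html_snippet = '<ul><li><a href="https://example.com">Example</a></li></ul>'

# Fill the {html_content} placeholder in both the system and human templates.
messages = output_format_chat_prompt.format_messages(html_content=html_snippet)
for message in messages:
    print(type(message).__name__, "->", message.content[:80])
```
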
diff --git a/spaces/Cong723/gpt-academic-public/docs/waifu_plugin/live2d.js b/spaces/Cong723/gpt-academic-public/docs/waifu_plugin/live2d.js
deleted file mode 100644
index 2cf559be672c438dfbd35db61eea12465ed0dffb..0000000000000000000000000000000000000000
--- a/spaces/Cong723/gpt-academic-public/docs/waifu_plugin/live2d.js
+++ /dev/null
@@ -1,4238 +0,0 @@
-!
-function(t) {
-	function i(r) {
-		if (e[r]) return e[r].exports;
-		var o = e[r] = {
-			i: r,
-			l: !1,
-			exports: {}
-		};
-		return t[r].call(o.exports, o, o.exports, i), o.l = !0, o.exports
-	}
-	var e = {};
-	i.m = t, i.c = e, i.d = function(t, e, r) {
-		i.o(t, e) || Object.defineProperty(t, e, {
-			configurable: !1,
-			enumerable: !0,
-			get: r
-		})
-	}, i.n = function(t) {
-		var e = t && t.__esModule ?
-		function() {
-			return t.
-		default
-		} : function() {
-			return t
-		};
-		return i.d(e, "a", e), e
-	}, i.o = function(t, i) {
-		return Object.prototype.hasOwnProperty.call(t, i)
-	}, i.p = "", i(i.s = 4)
-}([function(t, i, e) {
-	"use strict";
-
-	function r() {
-		this.live2DModel = null, this.modelMatrix = null, this.eyeBlink = null, this.physics = null, this.pose = null, this.debugMode = !1, this.initialized = !1, this.updating = !1, this.alpha = 1, this.accAlpha = 0, this.lipSync = !1, this.lipSyncValue = 0, this.accelX = 0, this.accelY = 0, this.accelZ = 0, this.dragX = 0, this.dragY = 0, this.startTimeMSec = null, this.mainMotionManager = new h, this.expressionManager = new h, this.motions = {}, this.expressions = {}, this.isTexLoaded = !1
-	}
-	function o() {
-		AMotion.prototype.constructor.call(this), this.paramList = new Array
-	}
-	function n() {
-		this.id = "", this.type = -1, this.value = null
-	}
-	function s() {
-		this.nextBlinkTime = null, this.stateStartTime = null, this.blinkIntervalMsec = null, this.eyeState = g.STATE_FIRST, this.blinkIntervalMsec = 4e3, this.closingMotionMsec = 100, this.closedMotionMsec = 50, this.openingMotionMsec = 150, this.closeIfZero = !0, this.eyeID_L = "PARAM_EYE_L_OPEN", this.eyeID_R = "PARAM_EYE_R_OPEN"
-	}
-	function _() {
-		this.tr = new Float32Array(16), this.identity()
-	}
-	function a(t, i) {
-		_.prototype.constructor.call(this), this.width = t, this.height = i
-	}
-	function h() {
-		MotionQueueManager.prototype.constructor.call(this), this.currentPriority = null, this.reservePriority = null, this.super = MotionQueueManager.prototype
-	}
-	function l() {
-		this.physicsList = new Array, this.startTimeMSec = UtSystem.getUserTimeMSec()
-	}
-	function $() {
-		this.lastTime = 0, this.lastModel = null, this.partsGroups = new Array
-	}
-	function u(t) {
-		this.paramIndex = -1, this.partsIndex = -1, this.link = null, this.id = t
-	}
-	function p() {
-		this.EPSILON = .01, this.faceTargetX = 0, this.faceTargetY = 0, this.faceX = 0, this.faceY = 0, this.faceVX = 0, this.faceVY = 0, this.lastTimeSec = 0
-	}
-	function f() {
-		_.prototype.constructor.call(this), this.screenLeft = null, this.screenRight = null, this.screenTop = null, this.screenBottom = null, this.maxLeft = null, this.maxRight = null, this.maxTop = null, this.maxBottom = null, this.max = Number.MAX_VALUE, this.min = 0
-	}
-	function c() {}
-	var d = 0;
-	r.prototype.getModelMatrix = function() {
-		return this.modelMatrix
-	}, r.prototype.setAlpha = function(t) {
-		t > .999 && (t = 1), t < .001 && (t = 0), this.alpha = t
-	}, r.prototype.getAlpha = function() {
-		return this.alpha
-	}, r.prototype.isInitialized = function() {
-		return this.initialized
-	}, r.prototype.setInitialized = function(t) {
-		this.initialized = t
-	}, r.prototype.isUpdating = function() {
-		return this.updating
-	}, r.prototype.setUpdating = function(t) {
-		this.updating = t
-	}, r.prototype.getLive2DModel = function() {
-		return this.live2DModel
-	}, r.prototype.setLipSync = function(t) {
-		this.lipSync = t
-	}, r.prototype.setLipSyncValue = function(t) {
-		this.lipSyncValue = t
-	}, r.prototype.setAccel = function(t, i, e) {
-		this.accelX = t, this.accelY = i, this.accelZ = e
-	}, r.prototype.setDrag = function(t, i) {
-		this.dragX = t, this.dragY = i
-	}, r.prototype.getMainMotionManager = function() {
-		return this.mainMotionManager
-	}, r.prototype.getExpressionManager = function() {
-		return this.expressionManager
-	}, r.prototype.loadModelData = function(t, i) {
-		var e = c.getPlatformManager();
-		this.debugMode && e.log("Load model : " + t);
-		var r = this;
-		e.loadLive2DModel(t, function(t) {
-			if (r.live2DModel = t, r.live2DModel.saveParam(), 0 != Live2D.getError()) return void console.error("Error : Failed to loadModelData().");
-			r.modelMatrix = new a(r.live2DModel.getCanvasWidth(), r.live2DModel.getCanvasHeight()), r.modelMatrix.setWidth(2), r.modelMatrix.setCenterPosition(0, 0), i(r.live2DModel)
-		})
-	}, r.prototype.loadTexture = function(t, i, e) {
-		d++;
-		var r = c.getPlatformManager();
-		this.debugMode && r.log("Load Texture : " + i);
-		var o = this;
-		r.loadTexture(this.live2DModel, t, i, function() {
-			d--, 0 == d && (o.isTexLoaded = !0), "function" == typeof e && e()
-		})
-	}, r.prototype.loadMotion = function(t, i, e) {
-		var r = c.getPlatformManager();
-		this.debugMode && r.log("Load Motion : " + i);
-		var o = null,
-			n = this;
-		r.loadBytes(i, function(i) {
-			o = Live2DMotion.loadMotion(i), null != t && (n.motions[t] = o), e(o)
-		})
-	}, r.prototype.loadExpression = function(t, i, e) {
-		var r = c.getPlatformManager();
-		this.debugMode && r.log("Load Expression : " + i);
-		var n = this;
-		r.loadBytes(i, function(i) {
-			null != t && (n.expressions[t] = o.loadJson(i)), "function" == typeof e && e()
-		})
-	}, r.prototype.loadPose = function(t, i) {
-		var e = c.getPlatformManager();
-		this.debugMode && e.log("Load Pose : " + t);
-		var r = this;
-		try {
-			e.loadBytes(t, function(t) {
-				r.pose = $.load(t), "function" == typeof i && i()
-			})
-		} catch (t) {
-			console.warn(t)
-		}
-	}, r.prototype.loadPhysics = function(t) {
-		var i = c.getPlatformManager();
-		this.debugMode && i.log("Load Physics : " + t);
-		var e = this;
-		try {
-			i.loadBytes(t, function(t) {
-				e.physics = l.load(t)
-			})
-		} catch (t) {
-			console.warn(t)
-		}
-	}, r.prototype.hitTestSimple = function(t, i, e) {
-		if (null === this.live2DModel) return !1;
-		var r = this.live2DModel.getDrawDataIndex(t);
-		if (r < 0) return !1;
-		for (var o = this.live2DModel.getTransformedPoints(r), n = this.live2DModel.getCanvasWidth(), s = 0, _ = this.live2DModel.getCanvasHeight(), a = 0, h = 0; h < o.length; h += 2) {
-			var l = o[h],
-				$ = o[h + 1];
-			l < n && (n = l), l > s && (s = l), $ < _ && (_ = $), $ > a && (a = $)
-		}
-		var u = this.modelMatrix.invertTransformX(i),
-			p = this.modelMatrix.invertTransformY(e);
-		return n <= u && u <= s && _ <= p && p <= a
-	}, r.prototype.hitTestSimpleCustom = function(t, i, e, r) {
-		return null !== this.live2DModel && (e >= t[0] && e <= i[0] && r <= t[1] && r >= i[1])
-	}, o.prototype = new AMotion, o.EXPRESSION_DEFAULT = "DEFAULT", o.TYPE_SET = 0, o.TYPE_ADD = 1, o.TYPE_MULT = 2, o.loadJson = function(t) {
-		var i = new o,
-			e = c.getPlatformManager(),
-			r = e.jsonParseFromBytes(t);
-		if (i.setFadeIn(parseInt(r.fade_in) > 0 ? parseInt(r.fade_in) : 1e3), i.setFadeOut(parseInt(r.fade_out) > 0 ? parseInt(r.fade_out) : 1e3), null == r.params) return i;
-		var s = r.params,
-			_ = s.length;
-		i.paramList = [];
-		for (var a = 0; a < _; a++) {
-			var h = s[a],
-				l = h.id.toString(),
-				$ = parseFloat(h.val),
-				u = o.TYPE_ADD,
-				p = null != h.calc ? h.calc.toString() : "add";
-			if ((u = "add" === p ? o.TYPE_ADD : "mult" === p ? o.TYPE_MULT : "set" === p ? o.TYPE_SET : o.TYPE_ADD) == o.TYPE_ADD) {
-				var f = null == h.def ? 0 : parseFloat(h.def);
-				$ -= f
-			} else if (u == o.TYPE_MULT) {
-				var f = null == h.def ? 1 : parseFloat(h.def);
-				0 == f && (f = 1), $ /= f
-			}
-			var d = new n;
-			d.id = l, d.type = u, d.value = $, i.paramList.push(d)
-		}
-		return i
-	}, o.prototype.updateParamExe = function(t, i, e, r) {
-		for (var n = this.paramList.length - 1; n >= 0; --n) {
-			var s = this.paramList[n];
-			s.type == o.TYPE_ADD ? t.addToParamFloat(s.id, s.value, e) : s.type == o.TYPE_MULT ? t.multParamFloat(s.id, s.value, e) : s.type == o.TYPE_SET && t.setParamFloat(s.id, s.value, e)
-		}
-	}, s.prototype.calcNextBlink = function() {
-		return UtSystem.getUserTimeMSec() + Math.random() * (2 * this.blinkIntervalMsec - 1)
-	}, s.prototype.setInterval = function(t) {
-		this.blinkIntervalMsec = t
-	}, s.prototype.setEyeMotion = function(t, i, e) {
-		this.closingMotionMsec = t, this.closedMotionMsec = i, this.openingMotionMsec = e
-	}, s.prototype.updateParam = function(t) {
-		var i, e = UtSystem.getUserTimeMSec(),
-			r = 0;
-		switch (this.eyeState) {
-		case g.STATE_CLOSING:
-			r = (e - this.stateStartTime) / this.closingMotionMsec, r >= 1 && (r = 1, this.eyeState = g.STATE_CLOSED, this.stateStartTime = e), i = 1 - r;
-			break;
-		case g.STATE_CLOSED:
-			r = (e - this.stateStartTime) / this.closedMotionMsec, r >= 1 && (this.eyeState = g.STATE_OPENING, this.stateStartTime = e), i = 0;
-			break;
-		case g.STATE_OPENING:
-			r = (e - this.stateStartTime) / this.openingMotionMsec, r >= 1 && (r = 1, this.eyeState = g.STATE_INTERVAL, this.nextBlinkTime = this.calcNextBlink()), i = r;
-			break;
-		case g.STATE_INTERVAL:
-			this.nextBlinkTime < e && (this.eyeState = g.STATE_CLOSING, this.stateStartTime = e), i = 1;
-			break;
-		case g.STATE_FIRST:
-		default:
-			this.eyeState = g.STATE_INTERVAL, this.nextBlinkTime = this.calcNextBlink(), i = 1
-		}
-		this.closeIfZero || (i = -i), t.setParamFloat(this.eyeID_L, i), t.setParamFloat(this.eyeID_R, i)
-	};
-	var g = function() {};
-	g.STATE_FIRST = "STATE_FIRST", g.STATE_INTERVAL = "STATE_INTERVAL", g.STATE_CLOSING = "STATE_CLOSING", g.STATE_CLOSED = "STATE_CLOSED", g.STATE_OPENING = "STATE_OPENING", _.mul = function(t, i, e) {
-		var r, o, n, s = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0];
-		for (r = 0; r < 4; r++) for (o = 0; o < 4; o++) for (n = 0; n < 4; n++) s[r + 4 * o] += t[r + 4 * n] * i[n + 4 * o];
-		for (r = 0; r < 16; r++) e[r] = s[r]
-	}, _.prototype.identity = function() {
-		for (var t = 0; t < 16; t++) this.tr[t] = t % 5 == 0 ? 1 : 0
-	}, _.prototype.getArray = function() {
-		return this.tr
-	}, _.prototype.getCopyMatrix = function() {
-		return new Float32Array(this.tr)
-	}, _.prototype.setMatrix = function(t) {
-		if (null != this.tr && this.tr.length == this.tr.length) for (var i = 0; i < 16; i++) this.tr[i] = t[i]
-	}, _.prototype.getScaleX = function() {
-		return this.tr[0]
-	}, _.prototype.getScaleY = function() {
-		return this.tr[5]
-	}, _.prototype.transformX = function(t) {
-		return this.tr[0] * t + this.tr[12]
-	}, _.prototype.transformY = function(t) {
-		return this.tr[5] * t + this.tr[13]
-	}, _.prototype.invertTransformX = function(t) {
-		return (t - this.tr[12]) / this.tr[0]
-	}, _.prototype.invertTransformY = function(t) {
-		return (t - this.tr[13]) / this.tr[5]
-	}, _.prototype.multTranslate = function(t, i) {
-		var e = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, t, i, 0, 1];
-		_.mul(e, this.tr, this.tr)
-	}, _.prototype.translate = function(t, i) {
-		this.tr[12] = t, this.tr[13] = i
-	}, _.prototype.translateX = function(t) {
-		this.tr[12] = t
-	}, _.prototype.translateY = function(t) {
-		this.tr[13] = t
-	}, _.prototype.multScale = function(t, i) {
-		var e = [t, 0, 0, 0, 0, i, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1];
-		_.mul(e, this.tr, this.tr)
-	}, _.prototype.scale = function(t, i) {
-		this.tr[0] = t, this.tr[5] = i
-	}, a.prototype = new _, a.prototype.setPosition = function(t, i) {
-		this.translate(t, i)
-	}, a.prototype.setCenterPosition = function(t, i) {
-		var e = this.width * this.getScaleX(),
-			r = this.height * this.getScaleY();
-		this.translate(t - e / 2, i - r / 2)
-	}, a.prototype.top = function(t) {
-		this.setY(t)
-	}, a.prototype.bottom = function(t) {
-		var i = this.height * this.getScaleY();
-		this.translateY(t - i)
-	}, a.prototype.left = function(t) {
-		this.setX(t)
-	}, a.prototype.right = function(t) {
-		var i = this.width * this.getScaleX();
-		this.translateX(t - i)
-	}, a.prototype.centerX = function(t) {
-		var i = this.width * this.getScaleX();
-		this.translateX(t - i / 2)
-	}, a.prototype.centerY = function(t) {
-		var i = this.height * this.getScaleY();
-		this.translateY(t - i / 2)
-	}, a.prototype.setX = function(t) {
-		this.translateX(t)
-	}, a.prototype.setY = function(t) {
-		this.translateY(t)
-	}, a.prototype.setHeight = function(t) {
-		var i = t / this.height,
-			e = -i;
-		this.scale(i, e)
-	}, a.prototype.setWidth = function(t) {
-		var i = t / this.width,
-			e = -i;
-		this.scale(i, e)
-	}, h.prototype = new MotionQueueManager, h.prototype.getCurrentPriority = function() {
-		return this.currentPriority
-	}, h.prototype.getReservePriority = function() {
-		return this.reservePriority
-	}, h.prototype.reserveMotion = function(t) {
-		return !(this.reservePriority >= t) && (!(this.currentPriority >= t) && (this.reservePriority = t, !0))
-	}, h.prototype.setReservePriority = function(t) {
-		this.reservePriority = t
-	}, h.prototype.updateParam = function(t) {
-		var i = MotionQueueManager.prototype.updateParam.call(this, t);
-		return this.isFinished() && (this.currentPriority = 0), i
-	}, h.prototype.startMotionPrio = function(t, i) {
-		return i == this.reservePriority && (this.reservePriority = 0), this.currentPriority = i, this.startMotion(t, !1)
-	}, l.load = function(t) {
-		for (var i = new l, e = c.getPlatformManager(), r = e.jsonParseFromBytes(t), o = r.physics_hair, n = o.length, s = 0; s < n; s++) {
-			var _ = o[s],
-				a = new PhysicsHair,
-				h = _.setup,
-				$ = parseFloat(h.length),
-				u = parseFloat(h.regist),
-				p = parseFloat(h.mass);
-			a.setup($, u, p);
-			for (var f = _.src, d = f.length, g = 0; g < d; g++) {
-				var y = f[g],
-					m = y.id,
-					T = PhysicsHair.Src.SRC_TO_X,
-					P = y.ptype;
-				"x" === P ? T = PhysicsHair.Src.SRC_TO_X : "y" === P ? T = PhysicsHair.Src.SRC_TO_Y : "angle" === P ? T = PhysicsHair.Src.SRC_TO_G_ANGLE : UtDebug.error("live2d", "Invalid parameter:PhysicsHair.Src");
-				var S = parseFloat(y.scale),
-					v = parseFloat(y.weight);
-				a.addSrcParam(T, m, S, v)
-			}
-			for (var L = _.targets, M = L.length, g = 0; g < M; g++) {
-				var E = L[g],
-					m = E.id,
-					T = PhysicsHair.Target.TARGET_FROM_ANGLE,
-					P = E.ptype;
-				"angle" === P ? T = PhysicsHair.Target.TARGET_FROM_ANGLE : "angle_v" === P ? T = PhysicsHair.Target.TARGET_FROM_ANGLE_V : UtDebug.error("live2d", "Invalid parameter:PhysicsHair.Target");
-				var S = parseFloat(E.scale),
-					v = parseFloat(E.weight);
-				a.addTargetParam(T, m, S, v)
-			}
-			i.physicsList.push(a)
-		}
-		return i
-	}, l.prototype.updateParam = function(t) {
-		for (var i = UtSystem.getUserTimeMSec() - this.startTimeMSec, e = 0; e < this.physicsList.length; e++) this.physicsList[e].update(t, i)
-	}, $.load = function(t) {
-		for (var i = new $, e = c.getPlatformManager(), r = e.jsonParseFromBytes(t), o = r.parts_visible, n = o.length, s = 0; s < n; s++) {
-			for (var _ = o[s], a = _.group, h = a.length, l = new Array, p = 0; p < h; p++) {
-				var f = a[p],
-					d = new u(f.id);
-				if (l[p] = d, null != f.link) {
-					var g = f.link,
-						y = g.length;
-					d.link = new Array;
-					for (var m = 0; m < y; m++) {
-						var T = new u(g[m]);
-						d.link.push(T)
-					}
-				}
-			}
-			i.partsGroups.push(l)
-		}
-		return i
-	}, $.prototype.updateParam = function(t) {
-		if (null != t) {
-			t != this.lastModel && this.initParam(t), this.lastModel = t;
-			var i = UtSystem.getUserTimeMSec(),
-				e = 0 == this.lastTime ? 0 : (i - this.lastTime) / 1e3;
-			this.lastTime = i, e < 0 && (e = 0);
-			for (var r = 0; r < this.partsGroups.length; r++) this.normalizePartsOpacityGroup(t, this.partsGroups[r], e), this.copyOpacityOtherParts(t, this.partsGroups[r])
-		}
-	}, $.prototype.initParam = function(t) {
-		if (null != t) for (var i = 0; i < this.partsGroups.length; i++) for (var e = this.partsGroups[i], r = 0; r < e.length; r++) {
-			e[r].initIndex(t);
-			var o = e[r].partsIndex,
-				n = e[r].paramIndex;
-			if (!(o < 0)) {
-				var s = 0 != t.getParamFloat(n);
-				if (t.setPartsOpacity(o, s ? 1 : 0), t.setParamFloat(n, s ? 1 : 0), null != e[r].link) for (var _ = 0; _ < e[r].link.length; _++) e[r].link[_].initIndex(t)
-			}
-		}
-	}, $.prototype.normalizePartsOpacityGroup = function(t, i, e) {
-		for (var r = -1, o = 1, n = 0; n < i.length; n++) {
-			var s = i[n].partsIndex,
-				_ = i[n].paramIndex;
-			if (!(s < 0) && 0 != t.getParamFloat(_)) {
-				if (r >= 0) break;
-				r = n, o = t.getPartsOpacity(s), o += e / .5, o > 1 && (o = 1)
-			}
-		}
-		r < 0 && (r = 0, o = 1);
-		for (var n = 0; n < i.length; n++) {
-			var s = i[n].partsIndex;
-			if (!(s < 0)) if (r == n) t.setPartsOpacity(s, o);
-			else {
-				var a, h = t.getPartsOpacity(s);
-				a = o < .5 ? -.5 * o / .5 + 1 : .5 * (1 - o) / .5;
-				var l = (1 - a) * (1 - o);
-				l > .15 && (a = 1 - .15 / (1 - o)), h > a && (h = a), t.setPartsOpacity(s, h)
-			}
-		}
-	}, $.prototype.copyOpacityOtherParts = function(t, i) {
-		for (var e = 0; e < i.length; e++) {
-			var r = i[e];
-			if (null != r.link && !(r.partsIndex < 0)) for (var o = t.getPartsOpacity(r.partsIndex), n = 0; n < r.link.length; n++) {
-				var s = r.link[n];
-				s.partsIndex < 0 || t.setPartsOpacity(s.partsIndex, o)
-			}
-		}
-	}, u.prototype.initIndex = function(t) {
-		this.paramIndex = t.getParamIndex("VISIBLE:" + this.id), this.partsIndex = t.getPartsDataIndex(PartsDataID.getID(this.id)), t.setParamFloat(this.paramIndex, 1)
-	}, p.FRAME_RATE = 30, p.prototype.setPoint = function(t, i) {
-		this.faceTargetX = t, this.faceTargetY = i
-	}, p.prototype.getX = function() {
-		return this.faceX
-	}, p.prototype.getY = function() {
-		return this.faceY
-	}, p.prototype.update = function() {
-		var t = 40 / 7.5 / p.FRAME_RATE;
-		if (0 == this.lastTimeSec) return void(this.lastTimeSec = UtSystem.getUserTimeMSec());
-		var i = UtSystem.getUserTimeMSec(),
-			e = (i - this.lastTimeSec) * p.FRAME_RATE / 1e3;
-		this.lastTimeSec = i;
-		var r = .15 * p.FRAME_RATE,
-			o = e * t / r,
-			n = this.faceTargetX - this.faceX,
-			s = this.faceTargetY - this.faceY;
-		if (!(Math.abs(n) <= this.EPSILON && Math.abs(s) <= this.EPSILON)) {
-			var _ = Math.sqrt(n * n + s * s),
-				a = t * n / _,
-				h = t * s / _,
-				l = a - this.faceVX,
-				$ = h - this.faceVY,
-				u = Math.sqrt(l * l + $ * $);
-			(u < -o || u > o) && (l *= o / u, $ *= o / u, u = o), this.faceVX += l, this.faceVY += $;
-			var f = .5 * (Math.sqrt(o * o + 16 * o * _ - 8 * o * _) - o),
-				c = Math.sqrt(this.faceVX * this.faceVX + this.faceVY * this.faceVY);
-			c > f && (this.faceVX *= f / c, this.faceVY *= f / c), this.faceX += this.faceVX, this.faceY += this.faceVY
-		}
-	}, f.prototype = new _, f.prototype.getMaxScale = function() {
-		return this.max
-	}, f.prototype.getMinScale = function() {
-		return this.min
-	}, f.prototype.setMaxScale = function(t) {
-		this.max = t
-	}, f.prototype.setMinScale = function(t) {
-		this.min = t
-	}, f.prototype.isMaxScale = function() {
-		return this.getScaleX() == this.max
-	}, f.prototype.isMinScale = function() {
-		return this.getScaleX() == this.min
-	}, f.prototype.adjustTranslate = function(t, i) {
-		this.tr[0] * this.maxLeft + (this.tr[12] + t) > this.screenLeft && (t = this.screenLeft - this.tr[0] * this.maxLeft - this.tr[12]), this.tr[0] * this.maxRight + (this.tr[12] + t) < this.screenRight && (t = this.screenRight - this.tr[0] * this.maxRight - this.tr[12]), this.tr[5] * this.maxTop + (this.tr[13] + i) < this.screenTop && (i = this.screenTop - this.tr[5] * this.maxTop - this.tr[13]), this.tr[5] * this.maxBottom + (this.tr[13] + i) > this.screenBottom && (i = this.screenBottom - this.tr[5] * this.maxBottom - this.tr[13]);
-		var e = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, t, i, 0, 1];
-		_.mul(e, this.tr, this.tr)
-	}, f.prototype.adjustScale = function(t, i, e) {
-		var r = e * this.tr[0];
-		r < this.min ? this.tr[0] > 0 && (e = this.min / this.tr[0]) : r > this.max && this.tr[0] > 0 && (e = this.max / this.tr[0]);
-		var o = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, t, i, 0, 1],
-			n = [e, 0, 0, 0, 0, e, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1],
-			s = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, -t, -i, 0, 1];
-		_.mul(s, this.tr, this.tr), _.mul(n, this.tr, this.tr), _.mul(o, this.tr, this.tr)
-	}, f.prototype.setScreenRect = function(t, i, e, r) {
-		this.screenLeft = t, this.screenRight = i, this.screenTop = r, this.screenBottom = e
-	}, f.prototype.setMaxScreenRect = function(t, i, e, r) {
-		this.maxLeft = t, this.maxRight = i, this.maxTop = r, this.maxBottom = e
-	}, f.prototype.getScreenLeft = function() {
-		return this.screenLeft
-	}, f.prototype.getScreenRight = function() {
-		return this.screenRight
-	}, f.prototype.getScreenBottom = function() {
-		return this.screenBottom
-	}, f.prototype.getScreenTop = function() {
-		return this.screenTop
-	}, f.prototype.getMaxLeft = function() {
-		return this.maxLeft
-	}, f.prototype.getMaxRight = function() {
-		return this.maxRight
-	}, f.prototype.getMaxBottom = function() {
-		return this.maxBottom
-	}, f.prototype.getMaxTop = function() {
-		return this.maxTop
-	}, c.platformManager = null, c.getPlatformManager = function() {
-		return c.platformManager
-	}, c.setPlatformManager = function(t) {
-		c.platformManager = t
-	}, t.exports = {
-		L2DTargetPoint: p,
-		Live2DFramework: c,
-		L2DViewMatrix: f,
-		L2DPose: $,
-		L2DPartsParam: u,
-		L2DPhysics: l,
-		L2DMotionManager: h,
-		L2DModelMatrix: a,
-		L2DMatrix44: _,
-		EYE_STATE: g,
-		L2DEyeBlink: s,
-		L2DExpressionParam: n,
-		L2DExpressionMotion: o,
-		L2DBaseModel: r
-	}
-}, function(t, i, e) {
-	"use strict";
-	var r = {
-		DEBUG_LOG: !1,
-		DEBUG_MOUSE_LOG: !1,
-		DEBUG_DRAW_HIT_AREA: !1,
-		DEBUG_DRAW_ALPHA_MODEL: !1,
-		VIEW_MAX_SCALE: 2,
-		VIEW_MIN_SCALE: .8,
-		VIEW_LOGICAL_LEFT: -1,
-		VIEW_LOGICAL_RIGHT: 1,
-		VIEW_LOGICAL_MAX_LEFT: -2,
-		VIEW_LOGICAL_MAX_RIGHT: 2,
-		VIEW_LOGICAL_MAX_BOTTOM: -2,
-		VIEW_LOGICAL_MAX_TOP: 2,
-		PRIORITY_NONE: 0,
-		PRIORITY_IDLE: 1,
-		PRIORITY_SLEEPY: 2,
-		PRIORITY_NORMAL: 3,
-		PRIORITY_FORCE: 4,
-		MOTION_GROUP_IDLE: "idle",
-		MOTION_GROUP_SLEEPY: "sleepy",
-		MOTION_GROUP_TAP_BODY: "tap_body",
-		MOTION_GROUP_FLICK_HEAD: "flick_head",
-		MOTION_GROUP_PINCH_IN: "pinch_in",
-		MOTION_GROUP_PINCH_OUT: "pinch_out",
-		MOTION_GROUP_SHAKE: "shake",
-		HIT_AREA_HEAD: "head",
-		HIT_AREA_BODY: "body"
-	};
-	t.exports = r
-}, function(t, i, e) {
-	"use strict";
-
-	function r(t) {
-		n = t
-	}
-	function o() {
-		return n
-	}
-	Object.defineProperty(i, "__esModule", {
-		value: !0
-	}), i.setContext = r, i.getContext = o;
-	var n = void 0
-}, function(t, i, e) {
-	"use strict";
-
-	function r() {}
-	r.matrixStack = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1], r.depth = 0, r.currentMatrix = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1], r.tmp = new Array(16), r.reset = function() {
-		this.depth = 0
-	}, r.loadIdentity = function() {
-		for (var t = 0; t < 16; t++) this.currentMatrix[t] = t % 5 == 0 ? 1 : 0
-	}, r.push = function() {
-		var t = (this.depth, 16 * (this.depth + 1));
-		this.matrixStack.length < t + 16 && (this.matrixStack.length = t + 16);
-		for (var i = 0; i < 16; i++) this.matrixStack[t + i] = this.currentMatrix[i];
-		this.depth++
-	}, r.pop = function() {
-		--this.depth < 0 && (myError("Invalid matrix stack."), this.depth = 0);
-		for (var t = 16 * this.depth, i = 0; i < 16; i++) this.currentMatrix[i] = this.matrixStack[t + i]
-	}, r.getMatrix = function() {
-		return this.currentMatrix
-	}, r.multMatrix = function(t) {
-		var i, e, r;
-		for (i = 0; i < 16; i++) this.tmp[i] = 0;
-		for (i = 0; i < 4; i++) for (e = 0; e < 4; e++) for (r = 0; r < 4; r++) this.tmp[i + 4 * e] += this.currentMatrix[i + 4 * r] * t[r + 4 * e];
-		for (i = 0; i < 16; i++) this.currentMatrix[i] = this.tmp[i]
-	}, t.exports = r
-}, function(t, i, e) {
-	t.exports = e(5)
-}, function(t, i, e) {
-	"use strict";
-
-	function r(t) {
-		return t && t.__esModule ? t : {
-		default:
-			t
-		}
-	}
-	function o(t) {
-		C = document.getElementById(t), C.addEventListener && (window.addEventListener("click", g), window.addEventListener("mousedown", g), window.addEventListener("mousemove", g), window.addEventListener("mouseup", g), document.addEventListener("mouseout", g), window.addEventListener("touchstart", y), window.addEventListener("touchend", y), window.addEventListener("touchmove", y))
-	}
-	function n(t) {
-		var i = C.width,
-			e = C.height;
-		N = new M.L2DTargetPoint;
-		var r = e / i,
-			o = w.
-		default.VIEW_LOGICAL_LEFT,
-			n = w.
-		default.VIEW_LOGICAL_RIGHT,
-			_ = -r,
-			h = r;
-		if (window.Live2D.captureFrame = !1, B = new M.L2DViewMatrix, B.setScreenRect(o, n, _, h), B.setMaxScreenRect(w.
-	default.VIEW_LOGICAL_MAX_LEFT, w.
-	default.VIEW_LOGICAL_MAX_RIGHT, w.
-	default.VIEW_LOGICAL_MAX_BOTTOM, w.
-	default.VIEW_LOGICAL_MAX_TOP), B.setMaxScale(w.
-	default.VIEW_MAX_SCALE), B.setMinScale(w.
-	default.VIEW_MIN_SCALE), U = new M.L2DMatrix44, U.multScale(1, i / e), G = new M.L2DMatrix44, G.multTranslate(-i / 2, -e / 2), G.multScale(2 / i, -2 / i), F = v(), (0, D.setContext)(F), !F) return console.error("Failed to create WebGL context."), void(window.WebGLRenderingContext && console.error("Your browser don't support WebGL, check https://get.webgl.org/ for futher information."));
-		window.Live2D.setGL(F), F.clearColor(0, 0, 0, 0), a(t), s()
-	}
-	function s() {
-		b || (b = !0, function t() {
-			_();
-			var i = window.requestAnimationFrame || window.mozRequestAnimationFrame || window.webkitRequestAnimationFrame || window.msRequestAnimationFrame;
-			if (window.Live2D.captureFrame) {
-				window.Live2D.captureFrame = !1;
-				var e = document.createElement("a");
-				document.body.appendChild(e), e.setAttribute("type", "hidden"), e.href = C.toDataURL(), e.download = window.Live2D.captureName || "live2d.png", e.click()
-			}
-			i(t, C)
-		}())
-	}
-	function _() {
-		O.
-	default.reset(), O.
-	default.loadIdentity(), N.update(), R.setDrag(N.getX(), N.getY()), F.clear(F.COLOR_BUFFER_BIT), O.
-	default.multMatrix(U.getArray()), O.
-	default.multMatrix(B.getArray()), O.
-	default.push();
-		for (var t = 0; t < R.numModels(); t++) {
-			var i = R.getModel(t);
-			if (null == i) return;
-			i.initialized && !i.updating && (i.update(), i.draw(F))
-		}
-		O.
-	default.pop()
-	}
-	function a(t) {
-		R.reloadFlg = !0, R.count++, R.changeModel(F, t)
-	}
-	function h(t, i) {
-		return t.x * i.x + t.y * i.y
-	}
-	function l(t, i) {
-		var e = Math.sqrt(t * t + i * i);
-		return {
-			x: t / e,
-			y: i / e
-		}
-	}
-	function $(t, i, e) {
-		function r(t, i) {
-			return 180 * Math.acos(h({
-				x: 0,
-				y: 1
-			}, l(t, i))) / Math.PI
-		}
-		if (i.x < e.left + e.width && i.y < e.top + e.height && i.x > e.left && i.y > e.top) return i;
-		var o = t.x - i.x,
-			n = t.y - i.y,
-			s = r(o, n);
-		i.x < t.x && (s = 360 - s);
-		var _ = 360 - r(e.left - t.x, -1 * (e.top - t.y)),
-			a = 360 - r(e.left - t.x, -1 * (e.top + e.height - t.y)),
-			$ = r(e.left + e.width - t.x, -1 * (e.top - t.y)),
-			u = r(e.left + e.width - t.x, -1 * (e.top + e.height - t.y)),
-			p = n / o,
-			f = {};
-		if (s < $) {
-			var c = e.top - t.y,
-				d = c / p;
-			f = {
-				y: t.y + c,
-				x: t.x + d
-			}
-		} else if (s < u) {
-			var g = e.left + e.width - t.x,
-				y = g * p;
-			f = {
-				y: t.y + y,
-				x: t.x + g
-			}
-		} else if (s < a) {
-			var m = e.top + e.height - t.y,
-				T = m / p;
-			f = {
-				y: t.y + m,
-				x: t.x + T
-			}
-		} else if (s < _) {
-			var P = t.x - e.left,
-				S = P * p;
-			f = {
-				y: t.y - S,
-				x: t.x - P
-			}
-		} else {
-			var v = e.top - t.y,
-				L = v / p;
-			f = {
-				y: t.y + v,
-				x: t.x + L
-			}
-		}
-		return f
-	}
-	function u(t) {
-		Y = !0;
-		var i = C.getBoundingClientRect(),
-			e = P(t.clientX - i.left),
-			r = S(t.clientY - i.top),
-			o = $({
-				x: i.left + i.width / 2,
-				y: i.top + i.height * X
-			}, {
-				x: t.clientX,
-				y: t.clientY
-			}, i),
-			n = m(o.x - i.left),
-			s = T(o.y - i.top);
-		w.
-	default.DEBUG_MOUSE_LOG && console.log("onMouseMove device( x:" + t.clientX + " y:" + t.clientY + " ) view( x:" + n + " y:" + s + ")"), k = e, V = r, N.setPoint(n, s)
-	}
-	function p(t) {
-		Y = !0;
-		var i = C.getBoundingClientRect(),
-			e = P(t.clientX - i.left),
-			r = S(t.clientY - i.top),
-			o = $({
-				x: i.left + i.width / 2,
-				y: i.top + i.height * X
-			}, {
-				x: t.clientX,
-				y: t.clientY
-			}, i),
-			n = m(o.x - i.left),
-			s = T(o.y - i.top);
-		w.
-	default.DEBUG_MOUSE_LOG && console.log("onMouseDown device( x:" + t.clientX + " y:" + t.clientY + " ) view( x:" + n + " y:" + s + ")"), k = e, V = r, R.tapEvent(n, s)
-	}
-	function f(t) {
-		var i = C.getBoundingClientRect(),
-			e = P(t.clientX - i.left),
-			r = S(t.clientY - i.top),
-			o = $({
-				x: i.left + i.width / 2,
-				y: i.top + i.height * X
-			}, {
-				x: t.clientX,
-				y: t.clientY
-			}, i),
-			n = m(o.x - i.left),
-			s = T(o.y - i.top);
-		w.
-	default.DEBUG_MOUSE_LOG && console.log("onMouseMove device( x:" + t.clientX + " y:" + t.clientY + " ) view( x:" + n + " y:" + s + ")"), Y && (k = e, V = r, N.setPoint(n, s))
-	}
-	function c() {
-		Y && (Y = !1), N.setPoint(0, 0)
-	}
-	function d() {
-		w.
-	default.DEBUG_LOG && console.log("Set Session Storage."), sessionStorage.setItem("Sleepy", "1")
-	}
-	function g(t) {
-		if ("mousewheel" == t.type);
-		else if ("mousedown" == t.type) p(t);
-		else if ("mousemove" == t.type) {
-			var i = sessionStorage.getItem("Sleepy");
-			"1" === i && sessionStorage.setItem("Sleepy", "0"), u(t)
-		} else if ("mouseup" == t.type) {
-			if ("button" in t && 0 != t.button) return
-		} else if ("mouseout" == t.type) {
-			w.
-		default.DEBUG_LOG && console.log("Mouse out Window."), c();
-			var e = sessionStorage.getItem("SleepyTimer");
-			window.clearTimeout(e), e = window.setTimeout(d, 5e4), sessionStorage.setItem("SleepyTimer", e)
-		}
-	}
-	function y(t) {
-		var i = t.touches[0];
-		"touchstart" == t.type ? 1 == t.touches.length && u(i) : "touchmove" == t.type ? f(i) : "touchend" == t.type && c()
-	}
-	function m(t) {
-		var i = G.transformX(t);
-		return B.invertTransformX(i)
-	}
-	function T(t) {
-		var i = G.transformY(t);
-		return B.invertTransformY(i)
-	}
-	function P(t) {
-		return G.transformX(t)
-	}
-	function S(t) {
-		return G.transformY(t)
-	}
-	function v() {
-		for (var t = ["webgl", "experimental-webgl", "webkit-3d", "moz-webgl"], i = 0; i < t.length; i++) try {
-			var e = C.getContext(t[i], {
-				premultipliedAlpha: !0
-			});
-			if (e) return e
-		} catch (t) {}
-		return null
-	}
-	function L(t, i, e) {
-		X = void 0 === e ? .5 : e, o(t), n(i)
-	}
-	e(6);
-	var M = e(0),
-		E = e(8),
-		A = r(E),
-		I = e(1),
-		w = r(I),
-		x = e(3),
-		O = r(x),
-		D = e(2),
-		R = (window.navigator.platform.toLowerCase(), new A.
-	default),
-		b = !1,
-		F = null,
-		C = null,
-		N = null,
-		B = null,
-		U = null,
-		G = null,
-		Y = !1,
-		k = 0,
-		V = 0,
-		X = .5;
-	window.loadlive2d = L
-}, function(t, i, e) {
-	"use strict";
-	(function(t) {
-		!
-		function() {
-			function i() {
-				At || (this._$MT = null, this._$5S = null, this._$NP = 0, i._$42++, this._$5S = new Y(this))
-			}
-			function e(t) {
-				if (!At) {
-					this.clipContextList = new Array, this.glcontext = t.gl, this.dp_webgl = t, this.curFrameNo = 0, this.firstError_clipInNotUpdate = !0, this.colorBuffer = 0, this.isInitGLFBFunc = !1, this.tmpBoundsOnModel = new S, at.glContext.length > at.frameBuffers.length && (this.curFrameNo = this.getMaskRenderTexture()), this.tmpModelToViewMatrix = new R, this.tmpMatrix2 = new R, this.tmpMatrixForMask = new R, this.tmpMatrixForDraw = new R, this.CHANNEL_COLORS = new Array;
-					var i = new A;
-					i = new A, i.r = 0, i.g = 0, i.b = 0, i.a = 1, this.CHANNEL_COLORS.push(i), i = new A, i.r = 1, i.g = 0, i.b = 0, i.a = 0, this.CHANNEL_COLORS.push(i), i = new A, i.r = 0, i.g = 1, i.b = 0, i.a = 0, this.CHANNEL_COLORS.push(i), i = new A, i.r = 0, i.g = 0, i.b = 1, i.a = 0, this.CHANNEL_COLORS.push(i);
-					for (var e = 0; e < this.CHANNEL_COLORS.length; e++) this.dp_webgl.setChannelFlagAsColor(e, this.CHANNEL_COLORS[e])
-				}
-			}
-			function r(t, i, e) {
-				this.clipIDList = new Array, this.clipIDList = e, this.clippingMaskDrawIndexList = new Array;
-				for (var r = 0; r < e.length; r++) this.clippingMaskDrawIndexList.push(i.getDrawDataIndex(e[r]));
-				this.clippedDrawContextList = new Array, this.isUsing = !0, this.layoutChannelNo = 0, this.layoutBounds = new S, this.allClippedDrawRect = new S, this.matrixForMask = new Float32Array(16), this.matrixForDraw = new Float32Array(16), this.owner = t
-			}
-			function o(t, i) {
-				this._$gP = t, this.drawDataIndex = i
-			}
-			function n() {
-				At || (this.color = null)
-			}
-			function s() {
-				At || (this._$dP = null, this._$eo = null, this._$V0 = null, this._$dP = 1e3, this._$eo = 1e3, this._$V0 = 1, this._$a0())
-			}
-			function _() {}
-			function a() {
-				this._$r = null, this._$0S = null
-			}
-			function h() {
-				At || (this.x = null, this.y = null, this.width = null, this.height = null)
-			}
-			function l(t) {
-				At || et.prototype.constructor.call(this, t)
-			}
-			function $() {}
-			function u(t) {
-				At || et.prototype.constructor.call(this, t)
-			}
-			function p() {
-				At || (this._$vo = null, this._$F2 = null, this._$ao = 400, this._$1S = 400, p._$42++)
-			}
-			function f() {
-				At || (this.p1 = new c, this.p2 = new c, this._$Fo = 0, this._$Db = 0, this._$L2 = 0, this._$M2 = 0, this._$ks = 0, this._$9b = 0, this._$iP = 0, this._$iT = 0, this._$lL = new Array, this._$qP = new Array, this.setup(.3, .5, .1))
-			}
-			function c() {
-				this._$p = 1, this.x = 0, this.y = 0, this.vx = 0, this.vy = 0, this.ax = 0, this.ay = 0, this.fx = 0, this.fy = 0, this._$s0 = 0, this._$70 = 0, this._$7L = 0, this._$HL = 0
-			}
-			function d(t, i, e) {
-				this._$wL = null, this.scale = null, this._$V0 = null, this._$wL = t, this.scale = i, this._$V0 = e
-			}
-			function g(t, i, e, r) {
-				d.prototype.constructor.call(this, i, e, r), this._$tL = null, this._$tL = t
-			}
-			function y(t, i, e) {
-				this._$wL = null, this.scale = null, this._$V0 = null, this._$wL = t, this.scale = i, this._$V0 = e
-			}
-			function T(t, i, e, r) {
-				y.prototype.constructor.call(this, i, e, r), this._$YP = null, this._$YP = t
-			}
-			function P() {
-				At || (this._$fL = 0, this._$gL = 0, this._$B0 = 1, this._$z0 = 1, this._$qT = 0, this.reflectX = !1, this.reflectY = !1)
-			}
-			function S() {
-				At || (this.x = null, this.y = null, this.width = null, this.height = null)
-			}
-			function v() {}
-			function L() {
-				At || (this.x = null, this.y = null)
-			}
-			function M() {
-				At || (this._$gP = null, this._$dr = null, this._$GS = null, this._$qb = null, this._$Lb = null, this._$mS = null, this.clipID = null, this.clipIDList = new Array)
-			}
-			function E() {
-				At || (this._$Eb = E._$ps, this._$lT = 1, this._$C0 = 1, this._$tT = 1, this._$WL = 1, this.culling = !1, this.matrix4x4 = new Float32Array(16), this.premultipliedAlpha = !1, this.anisotropy = 0, this.clippingProcess = E.CLIPPING_PROCESS_NONE, this.clipBufPre_clipContextMask = null, this.clipBufPre_clipContextDraw = null, this.CHANNEL_COLORS = new Array)
-			}
-			function A() {
-				At || (this.a = 1, this.r = 1, this.g = 1, this.b = 1, this.scale = 1, this._$ho = 1, this.blendMode = at.L2D_COLOR_BLEND_MODE_MULT)
-			}
-			function I() {
-				At || (this._$kP = null, this._$dr = null, this._$Ai = !0, this._$mS = null)
-			}
-			function w() {}
-			function x() {
-				At || (this._$VP = 0, this._$wL = null, this._$GP = null, this._$8o = x._$ds, this._$2r = -1, this._$O2 = 0, this._$ri = 0)
-			}
-			function O() {}
-			function D() {
-				At || (this._$Ob = null)
-			}
-			function R() {
-				this.m = new Float32Array(16), this.identity()
-			}
-			function b(t) {
-				At || et.prototype.constructor.call(this, t)
-			}
-			function F() {
-				At || (this._$7 = 1, this._$f = 0, this._$H = 0, this._$g = 1, this._$k = 0, this._$w = 0, this._$hi = STATE_IDENTITY, this._$Z = _$pS)
-			}
-			function C() {
-				At || (s.prototype.constructor.call(this), this.motions = new Array, this._$7r = null, this._$7r = C._$Co++, this._$D0 = 30, this._$yT = 0, this._$E = !0, this.loopFadeIn = !0, this._$AS = -1, _$a0())
-			}
-			function N() {
-				this._$P = new Float32Array(100), this.size = 0
-			}
-			function B() {
-				this._$4P = null, this._$I0 = null, this._$RP = null
-			}
-			function U() {}
-			function G() {}
-			function Y(t) {
-				At || (this._$QT = !0, this._$co = -1, this._$qo = 0, this._$pb = new Array(Y._$is), this._$_2 = new Float32Array(Y._$is), this._$vr = new Float32Array(Y._$is), this._$Rr = new Float32Array(Y._$is), this._$Or = new Float32Array(Y._$is), this._$fs = new Float32Array(Y._$is), this._$Js = new Array(Y._$is), this._$3S = new Array, this._$aS = new Array, this._$Bo = null, this._$F2 = new Array, this._$db = new Array, this._$8b = new Array, this._$Hr = new Array, this._$Ws = null, this._$Vs = null, this._$Er = null, this._$Es = new Int16Array(U._$Qb), this._$ZP = new Float32Array(2 * U._$1r), this._$Ri = t, this._$b0 = Y._$HP++, this.clipManager = null, this.dp_webgl = null)
-			}
-			function k() {}
-			function V() {
-				At || (this._$12 = null, this._$bb = null, this._$_L = null, this._$jo = null, this._$iL = null, this._$0L = null, this._$Br = null, this._$Dr = null, this._$Cb = null, this._$mr = null, this._$_L = wt.STATE_FIRST, this._$Br = 4e3, this._$Dr = 100, this._$Cb = 50, this._$mr = 150, this._$jo = !0, this._$iL = "PARAM_EYE_L_OPEN", this._$0L = "PARAM_EYE_R_OPEN")
-			}
-			function X() {
-				At || (E.prototype.constructor.call(this), this._$sb = new Int32Array(X._$As), this._$U2 = new Array, this.transform = null, this.gl = null, null == X._$NT && (X._$NT = X._$9r(256), X._$vS = X._$9r(256), X._$no = X._$vb(256)))
-			}
-			function z() {
-				At || (I.prototype.constructor.call(this), this._$GS = null, this._$Y0 = null)
-			}
-			function H(t) {
-				_t.prototype.constructor.call(this, t), this._$8r = I._$ur, this._$Yr = null, this._$Wr = null
-			}
-			function W() {
-				At || (M.prototype.constructor.call(this), this._$gP = null, this._$dr = null, this._$GS = null, this._$qb = null, this._$Lb = null, this._$mS = null)
-			}
-			function j() {
-				At || (this._$NL = null, this._$3S = null, this._$aS = null, j._$42++)
-			}
-			function q() {
-				At || (i.prototype.constructor.call(this), this._$zo = new X)
-			}
-			function J() {
-				At || (s.prototype.constructor.call(this), this.motions = new Array, this._$o2 = null, this._$7r = J._$Co++, this._$D0 = 30, this._$yT = 0, this._$E = !1, this.loopFadeIn = !0, this._$rr = -1, this._$eP = 0)
-			}
-			function Q(t, i) {
-				return String.fromCharCode(t.getUint8(i))
-			}
-			function Z() {
-				At || (I.prototype.constructor.call(this), this._$o = 0, this._$A = 0, this._$GS = null, this._$Eo = null)
-			}
-			function K(t) {
-				_t.prototype.constructor.call(this, t), this._$8r = I._$ur, this._$Cr = null, this._$hr = null
-			}
-			function tt() {
-				At || (this.visible = !0, this._$g0 = !1, this._$NL = null, this._$3S = null, this._$aS = null, tt._$42++)
-			}
-			function it(t) {
-				this._$VS = null, this._$e0 = null, this._$e0 = t
-			}
-			function et(t) {
-				At || (this.id = t)
-			}
-			function rt() {}
-			function ot() {
-				At || (this._$4S = null)
-			}
-			function nt(t, i) {
-				this.canvas = t, this.context = i, this.viewport = new Array(0, 0, t.width, t.height), this._$6r = 1, this._$xP = 0, this._$3r = 1, this._$uP = 0, this._$Qo = -1, this.cacheImages = {}
-			}
-			function st() {
-				At || (this._$TT = null, this._$LT = null, this._$FS = null, this._$wL = null)
-			}
-			function _t(t) {
-				At || (this._$e0 = null, this._$IP = null, this._$JS = !1, this._$AT = !0, this._$e0 = t, this.totalScale = 1, this._$7s = 1, this.totalOpacity = 1)
-			}
-			function at() {}
-			function ht() {}
-			function lt(t) {
-				At || (this._$ib = t)
-			}
-			function $t() {
-				At || (W.prototype.constructor.call(this), this._$LP = -1, this._$d0 = 0, this._$Yo = 0, this._$JP = null, this._$5P = null, this._$BP = null, this._$Eo = null, this._$Qi = null, this._$6s = $t._$ms, this.culling = !0, this.gl_cacheImage = null, this.instanceNo = $t._$42++)
-			}
-			function ut(t) {
-				Mt.prototype.constructor.call(this, t), this._$8r = W._$ur, this._$Cr = null, this._$hr = null
-			}
-			function pt() {
-				At || (this.x = null, this.y = null)
-			}
-			function ft(t) {
-				At || (i.prototype.constructor.call(this), this.drawParamWebGL = new mt(t), this.drawParamWebGL.setGL(at.getGL(t)))
-			}
-			function ct() {
-				At || (this.motions = null, this._$eb = !1, this.motions = new Array)
-			}
-			function dt() {
-				this._$w0 = null, this._$AT = !0, this._$9L = !1, this._$z2 = -1, this._$bs = -1, this._$Do = -1, this._$sr = null, this._$sr = dt._$Gs++
-			}
-			function gt() {
-				this.m = new Array(1, 0, 0, 0, 1, 0, 0, 0, 1)
-			}
-			function yt(t) {
-				At || et.prototype.constructor.call(this, t)
-			}
-			function mt(t) {
-				At || (E.prototype.constructor.call(this), this.textures = new Array, this.transform = null, this.gl = null, this.glno = t, this.firstDraw = !0, this.anisotropyExt = null, this.maxAnisotropy = 0, this._$As = 32, this._$Gr = !1, this._$NT = null, this._$vS = null, this._$no = null, this.vertShader = null, this.fragShader = null, this.vertShaderOff = null, this.fragShaderOff = null)
-			}
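-			// Tt/Pt below appear to be small WebGL helpers: they lazily create a buffer, bind it as
-			// ARRAY_BUFFER / ELEMENT_ARRAY_BUFFER and upload the given data with DYNAMIC_DRAW usage.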
-			function Tt(t, i, e) {
-				return null == i && (i = t.createBuffer()), t.bindBuffer(t.ARRAY_BUFFER, i), t.bufferData(t.ARRAY_BUFFER, e, t.DYNAMIC_DRAW), i
-			}
-			function Pt(t, i, e) {
-				return null == i && (i = t.createBuffer()), t.bindBuffer(t.ELEMENT_ARRAY_BUFFER, i), t.bufferData(t.ELEMENT_ARRAY_BUFFER, e, t.DYNAMIC_DRAW), i
-			}
-			function St(t) {
-				At || (this._$P = new Int8Array(8), this._$R0 = new DataView(this._$P.buffer), this._$3i = new Int8Array(1e3), this._$hL = 0, this._$v0 = 0, this._$S2 = 0, this._$Ko = new Array, this._$T = t, this._$F = 0)
-			}
-			function vt() {}
-			function Lt() {}
-			function Mt(t) {
-				At || (this._$e0 = null, this._$IP = null, this._$Us = null, this._$7s = null, this._$IS = [!1], this._$VS = null, this._$AT = !0, this.baseOpacity = 1, this.clipBufPre_clipContext = null, this._$e0 = t)
-			}
-			function Et() {}
-			var At = !0;
-			i._$0s = 1, i._$4s = 2, i._$42 = 0, i._$62 = function(t, e) {
-				try {
-					if (e instanceof ArrayBuffer && (e = new DataView(e)), !(e instanceof DataView)) throw new lt("_$SS#loadModel(b) / b _$x be DataView or ArrayBuffer");
-					var r, o = new St(e),
-						n = o._$ST(),
-						s = o._$ST(),
-						a = o._$ST();
-					if (109 != n || 111 != s || 99 != a) throw new lt("_$gi _$C _$li , _$Q0 _$P0.");
-					if (r = o._$ST(), o._$gr(r), r > G._$T7) {
-						t._$NP |= i._$4s;
-						throw new lt("_$gi _$C _$li , _$n0 _$_ version _$li ( SDK : " + G._$T7 + " < _$f0 : " + r + " )@_$SS#loadModel()\n")
-					}
-					var h = o._$nP();
-					if (r >= G._$s7) {
-						var l = o._$9T(),
-							$ = o._$9T();
-						if (-30584 != l || -30584 != $) throw t._$NP |= i._$0s, new lt("_$gi _$C _$li , _$0 _$6 _$Ui.")
-					}
-					t._$KS(h);
-					var u = t.getModelContext();
-					u.setDrawParam(t.getDrawParam()), u.init()
-				} catch (t) {
-					_._$Rb(t)
-				}
-			}, i.prototype._$KS = function(t) {
-				this._$MT = t
-			}, i.prototype.getModelImpl = function() {
-				return null == this._$MT && (this._$MT = new p, this._$MT._$zP()), this._$MT
-			}, i.prototype.getCanvasWidth = function() {
-				return null == this._$MT ? 0 : this._$MT.getCanvasWidth()
-			}, i.prototype.getCanvasHeight = function() {
-				return null == this._$MT ? 0 : this._$MT.getCanvasHeight()
-			}, i.prototype.getParamFloat = function(t) {
-				return "number" != typeof t && (t = this._$5S.getParamIndex(u.getID(t))), this._$5S.getParamFloat(t)
-			}, i.prototype.setParamFloat = function(t, i, e) {
-				"number" != typeof t && (t = this._$5S.getParamIndex(u.getID(t))), arguments.length < 3 && (e = 1), this._$5S.setParamFloat(t, this._$5S.getParamFloat(t) * (1 - e) + i * e)
-			}, i.prototype.addToParamFloat = function(t, i, e) {
-				"number" != typeof t && (t = this._$5S.getParamIndex(u.getID(t))), arguments.length < 3 && (e = 1), this._$5S.setParamFloat(t, this._$5S.getParamFloat(t) + i * e)
-			}, i.prototype.multParamFloat = function(t, i, e) {
-				"number" != typeof t && (t = this._$5S.getParamIndex(u.getID(t))), arguments.length < 3 && (e = 1), this._$5S.setParamFloat(t, this._$5S.getParamFloat(t) * (1 + (i - 1) * e))
-			}, i.prototype.getParamIndex = function(t) {
-				return this._$5S.getParamIndex(u.getID(t))
-			}, i.prototype.loadParam = function() {
-				this._$5S.loadParam()
-			}, i.prototype.saveParam = function() {
-				this._$5S.saveParam()
-			}, i.prototype.init = function() {
-				this._$5S.init()
-			}, i.prototype.update = function() {
-				this._$5S.update()
-			}, i.prototype._$Rs = function() {
-				return _._$li("_$60 _$PT _$Rs()"), -1
-			}, i.prototype._$Ds = function(t) {
-				_._$li("_$60 _$PT _$SS#_$Ds() \n")
-			}, i.prototype._$K2 = function() {}, i.prototype.draw = function() {}, i.prototype.getModelContext = function() {
-				return this._$5S
-			}, i.prototype._$s2 = function() {
-				return this._$NP
-			}, i.prototype._$P7 = function(t, i, e, r) {
-				var o = -1,
-					n = 0,
-					s = this;
-				if (0 != e) if (1 == t.length) {
-					var _ = t[0],
-						a = 0 != s.getParamFloat(_),
-						h = i[0],
-						l = s.getPartsOpacity(h),
-						$ = e / r;
-					a ? (l += $) > 1 && (l = 1) : (l -= $) < 0 && (l = 0), s.setPartsOpacity(h, l)
-				} else {
-					for (var u = 0; u < t.length; u++) {
-						var _ = t[u],
-							p = 0 != s.getParamFloat(_);
-						if (p) {
-							if (o >= 0) break;
-							o = u;
-							var h = i[u];
-							n = s.getPartsOpacity(h), n += e / r, n > 1 && (n = 1)
-						}
-					}
-					o < 0 && (console.log("No _$wi _$q0/ _$U default[%s]", t[0]), o = 0, n = 1, s.loadParam(), s.setParamFloat(t[o], n), s.saveParam());
-					for (var u = 0; u < t.length; u++) {
-						var h = i[u];
-						if (o == u) s.setPartsOpacity(h, n);
-						else {
-							var f, c = s.getPartsOpacity(h);
-							f = n < .5 ? -.5 * n / .5 + 1 : .5 * (1 - n) / .5;
-							var d = (1 - f) * (1 - n);
-							d > .15 && (f = 1 - .15 / (1 - n)), c > f && (c = f), s.setPartsOpacity(h, c)
-						}
-					}
-				} else for (var u = 0; u < t.length; u++) {
-					var _ = t[u],
-						h = i[u],
-						p = 0 != s.getParamFloat(_);
-					s.setPartsOpacity(h, p ? 1 : 0)
-				}
-			}, i.prototype.setPartsOpacity = function(t, i) {
-				"number" != typeof t && (t = this._$5S.getPartsDataIndex(l.getID(t))), this._$5S.setPartsOpacity(t, i)
-			}, i.prototype.getPartsDataIndex = function(t) {
-				return t instanceof l || (t = l.getID(t)), this._$5S.getPartsDataIndex(t)
-			}, i.prototype.getPartsOpacity = function(t) {
-				return "number" != typeof t && (t = this._$5S.getPartsDataIndex(l.getID(t))), t < 0 ? 0 : this._$5S.getPartsOpacity(t)
-			}, i.prototype.getDrawParam = function() {}, i.prototype.getDrawDataIndex = function(t) {
-				return this._$5S.getDrawDataIndex(b.getID(t))
-			}, i.prototype.getDrawData = function(t) {
-				return this._$5S.getDrawData(t)
-			}, i.prototype.getTransformedPoints = function(t) {
-				var i = this._$5S._$C2(t);
-				return i instanceof ut ? i.getTransformedPoints() : null
-			}, i.prototype.getIndexArray = function(t) {
-				if (t < 0 || t >= this._$5S._$aS.length) return null;
-				var i = this._$5S._$aS[t];
-				return null != i && i.getType() == W._$wb && i instanceof $t ? i.getIndexArray() : null
-			}, e.CHANNEL_COUNT = 4, e.RENDER_TEXTURE_USE_MIPMAP = !1, e.NOT_USED_FRAME = -100, e.prototype._$L7 = function() {
-				if (this.tmpModelToViewMatrix && (this.tmpModelToViewMatrix = null), this.tmpMatrix2 && (this.tmpMatrix2 = null), this.tmpMatrixForMask && (this.tmpMatrixForMask = null), this.tmpMatrixForDraw && (this.tmpMatrixForDraw = null), this.tmpBoundsOnModel && (this.tmpBoundsOnModel = null), this.CHANNEL_COLORS) {
-					for (var t = this.CHANNEL_COLORS.length - 1; t >= 0; --t) this.CHANNEL_COLORS.splice(t, 1);
-					this.CHANNEL_COLORS = []
-				}
-				this.releaseShader()
-			}, e.prototype.releaseShader = function() {
-				for (var t = at.frameBuffers.length, i = 0; i < t; i++) this.gl.deleteFramebuffer(at.frameBuffers[i].framebuffer);
-				at.frameBuffers = [], at.glContext = []
-			}, e.prototype.init = function(t, i, e) {
-				for (var o = 0; o < i.length; o++) {
-					var n = i[o].getClipIDList();
-					if (null != n) {
-						var s = this.findSameClip(n);
-						null == s && (s = new r(this, t, n), this.clipContextList.push(s));
-						var _ = i[o].getDrawDataID(),
-							a = t.getDrawDataIndex(_);
-						s.addClippedDrawData(_, a);
-						e[o].clipBufPre_clipContext = s
-					}
-				}
-			}, e.prototype.getMaskRenderTexture = function() {
-				var t = null;
-				return t = this.dp_webgl.createFramebuffer(), at.frameBuffers[this.dp_webgl.glno] = t, this.dp_webgl.glno
-			}, e.prototype.setupClip = function(t, i) {
-				for (var e = 0, r = 0; r < this.clipContextList.length; r++) {
-					var o = this.clipContextList[r];
-					this.calcClippedDrawTotalBounds(t, o), o.isUsing && e++
-				}
-				if (e > 0) {
-					var n = i.gl.getParameter(i.gl.FRAMEBUFFER_BINDING),
-						s = new Array(4);
-					s[0] = 0, s[1] = 0, s[2] = i.gl.canvas.width, s[3] = i.gl.canvas.height, i.gl.viewport(0, 0, at.clippingMaskBufferSize, at.clippingMaskBufferSize), this.setupLayoutBounds(e), i.gl.bindFramebuffer(i.gl.FRAMEBUFFER, at.frameBuffers[this.curFrameNo].framebuffer), i.gl.clearColor(0, 0, 0, 0), i.gl.clear(i.gl.COLOR_BUFFER_BIT);
-					for (var r = 0; r < this.clipContextList.length; r++) {
-						var o = this.clipContextList[r],
-							_ = o.allClippedDrawRect,
-							a = (o.layoutChannelNo, o.layoutBounds);
-						this.tmpBoundsOnModel._$jL(_), this.tmpBoundsOnModel.expand(.05 * _.width, .05 * _.height);
-						var h = a.width / this.tmpBoundsOnModel.width,
-							l = a.height / this.tmpBoundsOnModel.height;
-						this.tmpMatrix2.identity(), this.tmpMatrix2.translate(-1, -1, 0), this.tmpMatrix2.scale(2, 2, 1), this.tmpMatrix2.translate(a.x, a.y, 0), this.tmpMatrix2.scale(h, l, 1), this.tmpMatrix2.translate(-this.tmpBoundsOnModel.x, -this.tmpBoundsOnModel.y, 0), this.tmpMatrixForMask.setMatrix(this.tmpMatrix2.m), this.tmpMatrix2.identity(), this.tmpMatrix2.translate(a.x, a.y, 0), this.tmpMatrix2.scale(h, l, 1), this.tmpMatrix2.translate(-this.tmpBoundsOnModel.x, -this.tmpBoundsOnModel.y, 0), this.tmpMatrixForDraw.setMatrix(this.tmpMatrix2.m);
-						for (var $ = this.tmpMatrixForMask.getArray(), u = 0; u < 16; u++) o.matrixForMask[u] = $[u];
-						for (var p = this.tmpMatrixForDraw.getArray(), u = 0; u < 16; u++) o.matrixForDraw[u] = p[u];
-						for (var f = o.clippingMaskDrawIndexList.length, c = 0; c < f; c++) {
-							var d = o.clippingMaskDrawIndexList[c],
-								g = t.getDrawData(d),
-								y = t._$C2(d);
-							i.setClipBufPre_clipContextForMask(o), g.draw(i, t, y)
-						}
-					}
-					i.gl.bindFramebuffer(i.gl.FRAMEBUFFER, n), i.setClipBufPre_clipContextForMask(null), i.gl.viewport(s[0], s[1], s[2], s[3])
-				}
-			}, e.prototype.getColorBuffer = function() {
-				return this.colorBuffer
-			}, e.prototype.findSameClip = function(t) {
-				for (var i = 0; i < this.clipContextList.length; i++) {
-					var e = this.clipContextList[i],
-						r = e.clipIDList.length;
-					if (r == t.length) {
-						for (var o = 0, n = 0; n < r; n++) for (var s = e.clipIDList[n], _ = 0; _ < r; _++) if (t[_] == s) {
-							o++;
-							break
-						}
-						if (o == r) return e
-					}
-				}
-				return null
-			}, e.prototype.calcClippedDrawTotalBounds = function(t, i) {
-				for (var e = t._$Ri.getModelImpl().getCanvasWidth(), r = t._$Ri.getModelImpl().getCanvasHeight(), o = e > r ? e : r, n = o, s = o, _ = 0, a = 0, h = i.clippedDrawContextList.length, l = 0; l < h; l++) {
-					var $ = i.clippedDrawContextList[l],
-						u = $.drawDataIndex,
-						p = t._$C2(u);
-					if (p._$yo()) {
-						for (var f = p.getTransformedPoints(), c = f.length, d = [], g = [], y = 0, m = U._$i2; m < c; m += U._$No) d[y] = f[m], g[y] = f[m + 1], y++;
-						var T = Math.min.apply(null, d),
-							P = Math.min.apply(null, g),
-							S = Math.max.apply(null, d),
-							v = Math.max.apply(null, g);
-						T < n && (n = T), P < s && (s = P), S > _ && (_ = S), v > a && (a = v)
-					}
-				}
-				if (n == o) i.allClippedDrawRect.x = 0, i.allClippedDrawRect.y = 0, i.allClippedDrawRect.width = 0, i.allClippedDrawRect.height = 0, i.isUsing = !1;
-				else {
-					var L = _ - n,
-						M = a - s;
-					i.allClippedDrawRect.x = n, i.allClippedDrawRect.y = s, i.allClippedDrawRect.width = L, i.allClippedDrawRect.height = M, i.isUsing = !0
-				}
-			}, e.prototype.setupLayoutBounds = function(t) {
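-				// Appears to distribute the t clipping masks over the 4 RGBA channels; each channel is then
-				// subdivided into 1, 2, 2x2 or 3x3 layout cells depending on how many masks it has to hold.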
-				var i = t / e.CHANNEL_COUNT,
-					r = t % e.CHANNEL_COUNT;
-				i = ~~i, r = ~~r;
-				for (var o = 0, n = 0; n < e.CHANNEL_COUNT; n++) {
-					var s = i + (n < r ? 1 : 0);
-					if (0 == s);
-					else if (1 == s) {
-						var a = this.clipContextList[o++];
-						a.layoutChannelNo = n, a.layoutBounds.x = 0, a.layoutBounds.y = 0, a.layoutBounds.width = 1, a.layoutBounds.height = 1
-					} else if (2 == s) for (var h = 0; h < s; h++) {
-						var l = h % 2,
-							$ = 0;
-						l = ~~l;
-						var a = this.clipContextList[o++];
-						a.layoutChannelNo = n, a.layoutBounds.x = .5 * l, a.layoutBounds.y = 0, a.layoutBounds.width = .5, a.layoutBounds.height = 1
-					} else if (s <= 4) for (var h = 0; h < s; h++) {
-						var l = h % 2,
-							$ = h / 2;
-						l = ~~l, $ = ~~$;
-						var a = this.clipContextList[o++];
-						a.layoutChannelNo = n, a.layoutBounds.x = .5 * l, a.layoutBounds.y = .5 * $, a.layoutBounds.width = .5, a.layoutBounds.height = .5
-					} else if (s <= 9) for (var h = 0; h < s; h++) {
-						var l = h % 3,
-							$ = h / 3;
-						l = ~~l, $ = ~~$;
-						var a = this.clipContextList[o++];
-						a.layoutChannelNo = n, a.layoutBounds.x = l / 3, a.layoutBounds.y = $ / 3, a.layoutBounds.width = 1 / 3, a.layoutBounds.height = 1 / 3
-					} else _._$li("_$6 _$0P mask count : %d", s)
-				}
-			}, r.prototype.addClippedDrawData = function(t, i) {
-				var e = new o(t, i);
-				this.clippedDrawContextList.push(e)
-			}, s._$JT = function(t, i, e) {
-				var r = t / i,
-					o = e / i,
-					n = o,
-					s = 1 - (1 - o) * (1 - o),
-					_ = 1 - (1 - n) * (1 - n),
-					a = 1 / 3 * (1 - o) * s + (n * (2 / 3) + 1 / 3 * (1 - n)) * (1 - s),
-					h = (n + 2 / 3 * (1 - n)) * _ + (o * (1 / 3) + 2 / 3 * (1 - o)) * (1 - _),
-					l = 1 - 3 * h + 3 * a - 0,
-					$ = 3 * h - 6 * a + 0,
-					u = 3 * a - 0;
-				if (r <= 0) return 0;
-				if (r >= 1) return 1;
-				var p = r,
-					f = p * p;
-				return l * (p * f) + $ * f + u * p + 0
-			}, s.prototype._$a0 = function() {}, s.prototype.setFadeIn = function(t) {
-				this._$dP = t
-			}, s.prototype.setFadeOut = function(t) {
-				this._$eo = t
-			}, s.prototype._$pT = function(t) {
-				this._$V0 = t
-			}, s.prototype.getFadeOut = function() {
-				return this._$eo
-			}, s.prototype._$4T = function() {
-				return this._$eo
-			}, s.prototype._$mT = function() {
-				return this._$V0
-			}, s.prototype.getDurationMSec = function() {
-				return -1
-			}, s.prototype.getLoopDurationMSec = function() {
-				return -1
-			}, s.prototype.updateParam = function(t, i) {
-				if (i._$AT && !i._$9L) {
-					var e = w.getUserTimeMSec();
-					if (i._$z2 < 0) {
-						i._$z2 = e, i._$bs = e;
-						var r = this.getDurationMSec();
-						i._$Do < 0 && (i._$Do = r <= 0 ? -1 : i._$z2 + r)
-					}
-					var o = this._$V0;
-					o = o * (0 == this._$dP ? 1 : ht._$r2((e - i._$bs) / this._$dP)) * (0 == this._$eo || i._$Do < 0 ? 1 : ht._$r2((i._$Do - e) / this._$eo)), 0 <= o && o <= 1 || console.log("### assert!! ### "), this.updateParamExe(t, e, o, i), i._$Do > 0 && i._$Do < e && (i._$9L = !0)
-				}
-			}, s.prototype.updateParamExe = function(t, i, e, r) {}, _._$8s = 0, _._$fT = new Object, _.start = function(t) {
-				var i = _._$fT[t];
-				null == i && (i = new a, i._$r = t, _._$fT[t] = i), i._$0S = w.getSystemTimeMSec()
-			}, _.dump = function(t) {
-				var i = _._$fT[t];
-				if (null != i) {
-					var e = w.getSystemTimeMSec(),
-						r = e - i._$0S;
-					return console.log(t + " : " + r + "ms"), r
-				}
-				return -1
-			}, _.end = function(t) {
-				var i = _._$fT[t];
-				if (null != i) {
-					return w.getSystemTimeMSec() - i._$0S
-				}
-				return -1
-			}, _._$li = function(t, i) {
-				console.log("_$li : " + t + "\n", i)
-			}, _._$Ji = function(t, i) {
-				console.log(t, i)
-			}, _._$dL = function(t, i) {
-				console.log(t, i), console.log("\n")
-			}, _._$KL = function(t, i) {
-				for (var e = 0; e < i; e++) e % 16 == 0 && e > 0 ? console.log("\n") : e % 8 == 0 && e > 0 && console.log("  "), console.log("%02X ", 255 & t[e]);
-				console.log("\n")
-			}, _._$nr = function(t, i, e) {
-				console.log("%s\n", t);
-				for (var r = i.length, o = 0; o < r; ++o) console.log("%5d", i[o]), console.log("%s\n", e), console.log(",");
-				console.log("\n")
-			}, _._$Rb = function(t) {
-				console.log("dump exception : " + t), console.log("stack :: " + t.stack)
-			}, h.prototype._$8P = function() {
-				return .5 * (this.x + this.x + this.width)
-			}, h.prototype._$6P = function() {
-				return .5 * (this.y + this.y + this.height)
-			}, h.prototype._$EL = function() {
-				return this.x + this.width
-			}, h.prototype._$5T = function() {
-				return this.y + this.height
-			}, h.prototype._$jL = function(t, i, e, r) {
-				this.x = t, this.y = i, this.width = e, this.height = r
-			}, h.prototype._$jL = function(t) {
-				this.x = t.x, this.y = t.y, this.width = t.width, this.height = t.height
-			}, l.prototype = new et, l._$tP = new Object, l._$27 = function() {
-				l._$tP.clear()
-			}, l.getID = function(t) {
-				var i = l._$tP[t];
-				return null == i && (i = new l(t), l._$tP[t] = i), i
-			}, l.prototype._$3s = function() {
-				return new l
-			}, u.prototype = new et, u._$tP = new Object, u._$27 = function() {
-				u._$tP.clear()
-			}, u.getID = function(t) {
-				var i = u._$tP[t];
-				return null == i && (i = new u(t), u._$tP[t] = i), i
-			}, u.prototype._$3s = function() {
-				return new u
-			}, p._$42 = 0, p.prototype._$zP = function() {
-				null == this._$vo && (this._$vo = new ot), null == this._$F2 && (this._$F2 = new Array)
-			}, p.prototype.getCanvasWidth = function() {
-				return this._$ao
-			}, p.prototype.getCanvasHeight = function() {
-				return this._$1S
-			}, p.prototype._$F0 = function(t) {
-				this._$vo = t._$nP(), this._$F2 = t._$nP(), this._$ao = t._$6L(), this._$1S = t._$6L()
-			}, p.prototype._$6S = function(t) {
-				this._$F2.push(t)
-			}, p.prototype._$Xr = function() {
-				return this._$F2
-			}, p.prototype._$E2 = function() {
-				return this._$vo
-			}, f.prototype.setup = function(t, i, e) {
-				this._$ks = this._$Yb(), this.p2._$xT(), 3 == arguments.length && (this._$Fo = t, this._$L2 = i, this.p1._$p = e, this.p2._$p = e, this.p2.y = t, this.setup())
-			}, f.prototype.getPhysicsPoint1 = function() {
-				return this.p1
-			}, f.prototype.getPhysicsPoint2 = function() {
-				return this.p2
-			}, f.prototype._$qr = function() {
-				return this._$Db
-			}, f.prototype._$pr = function(t) {
-				this._$Db = t
-			}, f.prototype._$5r = function() {
-				return this._$M2
-			}, f.prototype._$Cs = function() {
-				return this._$9b
-			}, f.prototype._$Yb = function() {
-				return -180 * Math.atan2(this.p1.x - this.p2.x, -(this.p1.y - this.p2.y)) / Math.PI
-			}, f.prototype.addSrcParam = function(t, i, e, r) {
-				var o = new g(t, i, e, r);
-				this._$lL.push(o)
-			}, f.prototype.addTargetParam = function(t, i, e, r) {
-				var o = new T(t, i, e, r);
-				this._$qP.push(o)
-			}, f.prototype.update = function(t, i) {
-				if (0 == this._$iP) return this._$iP = this._$iT = i, void(this._$Fo = Math.sqrt((this.p1.x - this.p2.x) * (this.p1.x - this.p2.x) + (this.p1.y - this.p2.y) * (this.p1.y - this.p2.y)));
-				var e = (i - this._$iT) / 1e3;
-				if (0 != e) {
-					for (var r = this._$lL.length - 1; r >= 0; --r) {
-						this._$lL[r]._$oP(t, this)
-					}
-					this._$oo(t, e), this._$M2 = this._$Yb(), this._$9b = (this._$M2 - this._$ks) / e, this._$ks = this._$M2
-				}
-				for (var r = this._$qP.length - 1; r >= 0; --r) {
-					this._$qP[r]._$YS(t, this)
-				}
-				this._$iT = i
-			}, f.prototype._$oo = function(t, i) {
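-				// Physics step: p2 appears to swing around p1 like a damped pendulum - gravity (9.8 * mass)
-				// projected along the swing direction, velocity damping via _$L2, and the p1-p2 distance
-				// re-normalized to the rest length _$Fo at the end of the step.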
-				i < .033 && (i = .033);
-				var e = 1 / i;
-				this.p1.vx = (this.p1.x - this.p1._$s0) * e, this.p1.vy = (this.p1.y - this.p1._$70) * e, this.p1.ax = (this.p1.vx - this.p1._$7L) * e, this.p1.ay = (this.p1.vy - this.p1._$HL) * e, this.p1.fx = this.p1.ax * this.p1._$p, this.p1.fy = this.p1.ay * this.p1._$p, this.p1._$xT();
-				var r, o, n = -Math.atan2(this.p1.y - this.p2.y, this.p1.x - this.p2.x),
-					s = Math.cos(n),
-					_ = Math.sin(n),
-					a = 9.8 * this.p2._$p,
-					h = this._$Db * Lt._$bS,
-					l = a * Math.cos(n - h);
-				r = l * _, o = l * s;
-				var $ = -this.p1.fx * _ * _,
-					u = -this.p1.fy * _ * s,
-					p = -this.p2.vx * this._$L2,
-					f = -this.p2.vy * this._$L2;
-				this.p2.fx = r + $ + p, this.p2.fy = o + u + f, this.p2.ax = this.p2.fx / this.p2._$p, this.p2.ay = this.p2.fy / this.p2._$p, this.p2.vx += this.p2.ax * i, this.p2.vy += this.p2.ay * i, this.p2.x += this.p2.vx * i, this.p2.y += this.p2.vy * i;
-				var c = Math.sqrt((this.p1.x - this.p2.x) * (this.p1.x - this.p2.x) + (this.p1.y - this.p2.y) * (this.p1.y - this.p2.y));
-				this.p2.x = this.p1.x + this._$Fo * (this.p2.x - this.p1.x) / c, this.p2.y = this.p1.y + this._$Fo * (this.p2.y - this.p1.y) / c, this.p2.vx = (this.p2.x - this.p2._$s0) * e, this.p2.vy = (this.p2.y - this.p2._$70) * e, this.p2._$xT()
-			}, c.prototype._$xT = function() {
-				this._$s0 = this.x, this._$70 = this.y, this._$7L = this.vx, this._$HL = this.vy
-			}, d.prototype._$oP = function(t, i) {}, g.prototype = new d, g.prototype._$oP = function(t, i) {
-				var e = this.scale * t.getParamFloat(this._$wL),
-					r = i.getPhysicsPoint1();
-				switch (this._$tL) {
-				default:
-				case f.Src.SRC_TO_X:
-					r.x = r.x + (e - r.x) * this._$V0;
-					break;
-				case f.Src.SRC_TO_Y:
-					r.y = r.y + (e - r.y) * this._$V0;
-					break;
-				case f.Src.SRC_TO_G_ANGLE:
-					var o = i._$qr();
-					o += (e - o) * this._$V0, i._$pr(o)
-				}
-			}, y.prototype._$YS = function(t, i) {}, T.prototype = new y, T.prototype._$YS = function(t, i) {
-				switch (this._$YP) {
-				default:
-				case f.Target.TARGET_FROM_ANGLE:
-					t.setParamFloat(this._$wL, this.scale * i._$5r(), this._$V0);
-					break;
-				case f.Target.TARGET_FROM_ANGLE_V:
-					t.setParamFloat(this._$wL, this.scale * i._$Cs(), this._$V0)
-				}
-			}, f.Src = function() {}, f.Src.SRC_TO_X = "SRC_TO_X", f.Src.SRC_TO_Y = "SRC_TO_Y", f.Src.SRC_TO_G_ANGLE = "SRC_TO_G_ANGLE", f.Target = function() {}, f.Target.TARGET_FROM_ANGLE = "TARGET_FROM_ANGLE", f.Target.TARGET_FROM_ANGLE_V = "TARGET_FROM_ANGLE_V", P.prototype.init = function(t) {
-				this._$fL = t._$fL, this._$gL = t._$gL, this._$B0 = t._$B0, this._$z0 = t._$z0, this._$qT = t._$qT, this.reflectX = t.reflectX, this.reflectY = t.reflectY
-			}, P.prototype._$F0 = function(t) {
-				this._$fL = t._$_T(), this._$gL = t._$_T(), this._$B0 = t._$_T(), this._$z0 = t._$_T(), this._$qT = t._$_T(), t.getFormatVersion() >= G.LIVE2D_FORMAT_VERSION_V2_10_SDK2 && (this.reflectX = t._$po(), this.reflectY = t._$po())
-			}, P.prototype._$e = function() {};
-			var It = function() {};
-			It._$ni = function(t, i, e, r, o, n, s, _, a) {
-				var h = s * n - _ * o;
-				if (0 == h) return null;
-				var l, $ = ((t - e) * n - (i - r) * o) / h;
-				return l = 0 != o ? (t - e - $ * s) / o : (i - r - $ * _) / n, isNaN(l) && (l = (t - e - $ * s) / o, isNaN(l) && (l = (i - r - $ * _) / n), isNaN(l) && (console.log("a is NaN @UtVector#_$ni() "), console.log("v1x : " + o), console.log("v1x != 0 ? " + (0 != o)))), null == a ? new Array(l, $) : (a[0] = l, a[1] = $, a)
-			}, S.prototype._$8P = function() {
-				return this.x + .5 * this.width
-			}, S.prototype._$6P = function() {
-				return this.y + .5 * this.height
-			}, S.prototype._$EL = function() {
-				return this.x + this.width
-			}, S.prototype._$5T = function() {
-				return this.y + this.height
-			}, S.prototype._$jL = function(t, i, e, r) {
-				this.x = t, this.y = i, this.width = e, this.height = r
-			}, S.prototype._$jL = function(t) {
-				this.x = t.x, this.y = t.y, this.width = t.width, this.height = t.height
-			}, S.prototype.contains = function(t, i) {
-				return this.x <= t && this.y <= i && t <= this.x + this.width && i <= this.y + this.height
-			}, S.prototype.expand = function(t, i) {
-				this.x -= t, this.y -= i, this.width += 2 * t, this.height += 2 * i
-			}, v._$Z2 = function(t, i, e, r) {
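-				// _$Z2 appears to interpolate stored values over up to four parameter axes (linear, bilinear,
-				// trilinear, quadrilinear, plus a generic 2^n-corner fallback), rounding to integers;
-				// _$br below is the same interpolation returning floats.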
-				var o = i._$Q2(t, e),
-					n = t._$vs(),
-					s = t._$Tr();
-				if (i._$zr(n, s, o), o <= 0) return r[n[0]];
-				if (1 == o) {
-					var _ = r[n[0]],
-						a = r[n[1]],
-						h = s[0];
-					return _ + (a - _) * h | 0
-				}
-				if (2 == o) {
-					var _ = r[n[0]],
-						a = r[n[1]],
-						l = r[n[2]],
-						$ = r[n[3]],
-						h = s[0],
-						u = s[1],
-						p = _ + (a - _) * h | 0,
-						f = l + ($ - l) * h | 0;
-					return p + (f - p) * u | 0
-				}
-				if (3 == o) {
-					var c = r[n[0]],
-						d = r[n[1]],
-						g = r[n[2]],
-						y = r[n[3]],
-						m = r[n[4]],
-						T = r[n[5]],
-						P = r[n[6]],
-						S = r[n[7]],
-						h = s[0],
-						u = s[1],
-						v = s[2],
-						_ = c + (d - c) * h | 0,
-						a = g + (y - g) * h | 0,
-						l = m + (T - m) * h | 0,
-						$ = P + (S - P) * h | 0,
-						p = _ + (a - _) * u | 0,
-						f = l + ($ - l) * u | 0;
-					return p + (f - p) * v | 0
-				}
-				if (4 == o) {
-					var L = r[n[0]],
-						M = r[n[1]],
-						E = r[n[2]],
-						A = r[n[3]],
-						I = r[n[4]],
-						w = r[n[5]],
-						x = r[n[6]],
-						O = r[n[7]],
-						D = r[n[8]],
-						R = r[n[9]],
-						b = r[n[10]],
-						F = r[n[11]],
-						C = r[n[12]],
-						N = r[n[13]],
-						B = r[n[14]],
-						U = r[n[15]],
-						h = s[0],
-						u = s[1],
-						v = s[2],
-						G = s[3],
-						c = L + (M - L) * h | 0,
-						d = E + (A - E) * h | 0,
-						g = I + (w - I) * h | 0,
-						y = x + (O - x) * h | 0,
-						m = D + (R - D) * h | 0,
-						T = b + (F - b) * h | 0,
-						P = C + (N - C) * h | 0,
-						S = B + (U - B) * h | 0,
-						_ = c + (d - c) * u | 0,
-						a = g + (y - g) * u | 0,
-						l = m + (T - m) * u | 0,
-						$ = P + (S - P) * u | 0,
-						p = _ + (a - _) * v | 0,
-						f = l + ($ - l) * v | 0;
-					return p + (f - p) * G | 0
-				}
-				for (var Y = 1 << o, k = new Float32Array(Y), V = 0; V < Y; V++) {
-					for (var X = V, z = 1, H = 0; H < o; H++) z *= X % 2 == 0 ? 1 - s[H] : s[H], X /= 2;
-					k[V] = z
-				}
-				for (var W = new Float32Array(Y), j = 0; j < Y; j++) W[j] = r[n[j]];
-				for (var q = 0, j = 0; j < Y; j++) q += k[j] * W[j];
-				return q + .5 | 0
-			}, v._$br = function(t, i, e, r) {
-				var o = i._$Q2(t, e),
-					n = t._$vs(),
-					s = t._$Tr();
-				if (i._$zr(n, s, o), o <= 0) return r[n[0]];
-				if (1 == o) {
-					var _ = r[n[0]],
-						a = r[n[1]],
-						h = s[0];
-					return _ + (a - _) * h
-				}
-				if (2 == o) {
-					var _ = r[n[0]],
-						a = r[n[1]],
-						l = r[n[2]],
-						$ = r[n[3]],
-						h = s[0],
-						u = s[1];
-					return (1 - u) * (_ + (a - _) * h) + u * (l + ($ - l) * h)
-				}
-				if (3 == o) {
-					var p = r[n[0]],
-						f = r[n[1]],
-						c = r[n[2]],
-						d = r[n[3]],
-						g = r[n[4]],
-						y = r[n[5]],
-						m = r[n[6]],
-						T = r[n[7]],
-						h = s[0],
-						u = s[1],
-						P = s[2];
-					return (1 - P) * ((1 - u) * (p + (f - p) * h) + u * (c + (d - c) * h)) + P * ((1 - u) * (g + (y - g) * h) + u * (m + (T - m) * h))
-				}
-				if (4 == o) {
-					var S = r[n[0]],
-						v = r[n[1]],
-						L = r[n[2]],
-						M = r[n[3]],
-						E = r[n[4]],
-						A = r[n[5]],
-						I = r[n[6]],
-						w = r[n[7]],
-						x = r[n[8]],
-						O = r[n[9]],
-						D = r[n[10]],
-						R = r[n[11]],
-						b = r[n[12]],
-						F = r[n[13]],
-						C = r[n[14]],
-						N = r[n[15]],
-						h = s[0],
-						u = s[1],
-						P = s[2],
-						B = s[3];
-					return (1 - B) * ((1 - P) * ((1 - u) * (S + (v - S) * h) + u * (L + (M - L) * h)) + P * ((1 - u) * (E + (A - E) * h) + u * (I + (w - I) * h))) + B * ((1 - P) * ((1 - u) * (x + (O - x) * h) + u * (D + (R - D) * h)) + P * ((1 - u) * (b + (F - b) * h) + u * (C + (N - C) * h)))
-				}
-				for (var U = 1 << o, G = new Float32Array(U), Y = 0; Y < U; Y++) {
-					for (var k = Y, V = 1, X = 0; X < o; X++) V *= k % 2 == 0 ? 1 - s[X] : s[X], k /= 2;
-					G[Y] = V
-				}
-				for (var z = new Float32Array(U), H = 0; H < U; H++) z[H] = r[n[H]];
-				for (var W = 0, H = 0; H < U; H++) W += G[H] * z[H];
-				return W
-			}, v._$Vr = function(t, i, e, r, o, n, s, _) {
-				var a = i._$Q2(t, e),
-					h = t._$vs(),
-					l = t._$Tr();
-				i._$zr(h, l, a);
-				var $ = 2 * r,
-					u = s;
-				if (a <= 0) {
-					var p = h[0],
-						f = o[p];
-					if (2 == _ && 0 == s) w._$jT(f, 0, n, 0, $);
-					else for (var c = 0; c < $;) n[u] = f[c++], n[u + 1] = f[c++], u += _
-				} else if (1 == a) for (var f = o[h[0]], d = o[h[1]], g = l[0], y = 1 - g, c = 0; c < $;) n[u] = f[c] * y + d[c] * g, ++c, n[u + 1] = f[c] * y + d[c] * g, ++c, u += _;
-				else if (2 == a) for (var f = o[h[0]], d = o[h[1]], m = o[h[2]], T = o[h[3]], g = l[0], P = l[1], y = 1 - g, S = 1 - P, v = S * y, L = S * g, M = P * y, E = P * g, c = 0; c < $;) n[u] = v * f[c] + L * d[c] + M * m[c] + E * T[c], ++c, n[u + 1] = v * f[c] + L * d[c] + M * m[c] + E * T[c], ++c, u += _;
-				else if (3 == a) for (var A = o[h[0]], I = o[h[1]], x = o[h[2]], O = o[h[3]], D = o[h[4]], R = o[h[5]], b = o[h[6]], F = o[h[7]], g = l[0], P = l[1], C = l[2], y = 1 - g, S = 1 - P, N = 1 - C, B = N * S * y, U = N * S * g, G = N * P * y, Y = N * P * g, k = C * S * y, V = C * S * g, X = C * P * y, z = C * P * g, c = 0; c < $;) n[u] = B * A[c] + U * I[c] + G * x[c] + Y * O[c] + k * D[c] + V * R[c] + X * b[c] + z * F[c], ++c, n[u + 1] = B * A[c] + U * I[c] + G * x[c] + Y * O[c] + k * D[c] + V * R[c] + X * b[c] + z * F[c], ++c, u += _;
-				else if (4 == a) for (var H = o[h[0]], W = o[h[1]], j = o[h[2]], q = o[h[3]], J = o[h[4]], Q = o[h[5]], Z = o[h[6]], K = o[h[7]], tt = o[h[8]], it = o[h[9]], et = o[h[10]], rt = o[h[11]], ot = o[h[12]], nt = o[h[13]], st = o[h[14]], _t = o[h[15]], g = l[0], P = l[1], C = l[2], at = l[3], y = 1 - g, S = 1 - P, N = 1 - C, ht = 1 - at, lt = ht * N * S * y, $t = ht * N * S * g, ut = ht * N * P * y, pt = ht * N * P * g, ft = ht * C * S * y, ct = ht * C * S * g, dt = ht * C * P * y, gt = ht * C * P * g, yt = at * N * S * y, mt = at * N * S * g, Tt = at * N * P * y, Pt = at * N * P * g, St = at * C * S * y, vt = at * C * S * g, Lt = at * C * P * y, Mt = at * C * P * g, c = 0; c < $;) n[u] = lt * H[c] + $t * W[c] + ut * j[c] + pt * q[c] + ft * J[c] + ct * Q[c] + dt * Z[c] + gt * K[c] + yt * tt[c] + mt * it[c] + Tt * et[c] + Pt * rt[c] + St * ot[c] + vt * nt[c] + Lt * st[c] + Mt * _t[c], ++c, n[u + 1] = lt * H[c] + $t * W[c] + ut * j[c] + pt * q[c] + ft * J[c] + ct * Q[c] + dt * Z[c] + gt * K[c] + yt * tt[c] + mt * it[c] + Tt * et[c] + Pt * rt[c] + St * ot[c] + vt * nt[c] + Lt * st[c] + Mt * _t[c], ++c, u += _;
-				else {
-					for (var Et = 1 << a, At = new Float32Array(Et), It = 0; It < Et; It++) {
-						for (var wt = It, xt = 1, Ot = 0; Ot < a; Ot++) xt *= wt % 2 == 0 ? 1 - l[Ot] : l[Ot], wt /= 2;
-						At[It] = xt
-					}
-					for (var Dt = new Float32Array(Et), Rt = 0; Rt < Et; Rt++) Dt[Rt] = o[h[Rt]];
-					for (var c = 0; c < $;) {
-						for (var bt = 0, Ft = 0, Ct = c + 1, Rt = 0; Rt < Et; Rt++) bt += At[Rt] * Dt[Rt][c], Ft += At[Rt] * Dt[Rt][Ct];
-						c += 2, n[u] = bt, n[u + 1] = Ft, u += _
-					}
-				}
-			}, L.prototype._$HT = function(t, i) {
-				this.x = t, this.y = i
-			}, L.prototype._$HT = function(t) {
-				this.x = t.x, this.y = t.y
-			}, M._$ur = -2, M._$ES = 500, M._$wb = 2, M._$8S = 3, M._$52 = M._$ES, M._$R2 = M._$ES, M._$or = function() {
-				return M._$52
-			}, M._$Pr = function() {
-				return M._$R2
-			}, M.prototype.convertClipIDForV2_11 = function(t) {
-				var i = [];
-				return null == t ? null : 0 == t.length ? null : /,/.test(t) ? i = t.id.split(",") : (i.push(t.id), i)
-			}, M.prototype._$F0 = function(t) {
-				this._$gP = t._$nP(), this._$dr = t._$nP(), this._$GS = t._$nP(), this._$qb = t._$6L(), this._$Lb = t._$cS(), this._$mS = t._$Tb(), t.getFormatVersion() >= G._$T7 ? (this.clipID = t._$nP(), this.clipIDList = this.convertClipIDForV2_11(this.clipID)) : this.clipIDList = [], this._$MS(this._$Lb)
-			}, M.prototype.getClipIDList = function() {
-				return this.clipIDList
-			}, M.prototype.init = function(t) {}, M.prototype._$Nr = function(t, i) {
-				if (i._$IS[0] = !1, i._$Us = v._$Z2(t, this._$GS, i._$IS, this._$Lb), at._$Zs);
-				else if (i._$IS[0]) return;
-				i._$7s = v._$br(t, this._$GS, i._$IS, this._$mS)
-			}, M.prototype._$2b = function(t, i) {}, M.prototype.getDrawDataID = function() {
-				return this._$gP
-			}, M.prototype._$j2 = function(t) {
-				this._$gP = t
-			}, M.prototype.getOpacity = function(t, i) {
-				return i._$7s
-			}, M.prototype._$zS = function(t, i) {
-				return i._$Us
-			}, M.prototype._$MS = function(t) {
-				for (var i = t.length - 1; i >= 0; --i) {
-					var e = t[i];
-					e < M._$52 ? M._$52 = e : e > M._$R2 && (M._$R2 = e)
-				}
-			}, M.prototype.getTargetBaseDataID = function() {
-				return this._$dr
-			}, M.prototype._$gs = function(t) {
-				this._$dr = t
-			}, M.prototype._$32 = function() {
-				return null != this._$dr && this._$dr != yt._$2o()
-			}, M.prototype.preDraw = function(t, i, e) {}, M.prototype.draw = function(t, i, e) {}, M.prototype.getType = function() {}, M.prototype._$B2 = function(t, i, e) {}, E._$ps = 32, E.CLIPPING_PROCESS_NONE = 0, E.CLIPPING_PROCESS_OVERWRITE_ALPHA = 1, E.CLIPPING_PROCESS_MULTIPLY_ALPHA = 2, E.CLIPPING_PROCESS_DRAW = 3, E.CLIPPING_PROCESS_CLEAR_ALPHA = 4, E.prototype.setChannelFlagAsColor = function(t, i) {
-				this.CHANNEL_COLORS[t] = i
-			}, E.prototype.getChannelFlagAsColor = function(t) {
-				return this.CHANNEL_COLORS[t]
-			}, E.prototype._$ZT = function() {}, E.prototype._$Uo = function(t, i, e, r, o, n, s) {}, E.prototype._$Rs = function() {
-				return -1
-			}, E.prototype._$Ds = function(t) {}, E.prototype.setBaseColor = function(t, i, e, r) {
-				t < 0 ? t = 0 : t > 1 && (t = 1), i < 0 ? i = 0 : i > 1 && (i = 1), e < 0 ? e = 0 : e > 1 && (e = 1), r < 0 ? r = 0 : r > 1 && (r = 1), this._$lT = t, this._$C0 = i, this._$tT = e, this._$WL = r
-			}, E.prototype._$WP = function(t) {
-				this.culling = t
-			}, E.prototype.setMatrix = function(t) {
-				for (var i = 0; i < 16; i++) this.matrix4x4[i] = t[i]
-			}, E.prototype._$IT = function() {
-				return this.matrix4x4
-			}, E.prototype.setPremultipliedAlpha = function(t) {
-				this.premultipliedAlpha = t
-			}, E.prototype.isPremultipliedAlpha = function() {
-				return this.premultipliedAlpha
-			}, E.prototype.setAnisotropy = function(t) {
-				this.anisotropy = t
-			}, E.prototype.getAnisotropy = function() {
-				return this.anisotropy
-			}, E.prototype.getClippingProcess = function() {
-				return this.clippingProcess
-			}, E.prototype.setClippingProcess = function(t) {
-				this.clippingProcess = t
-			}, E.prototype.setClipBufPre_clipContextForMask = function(t) {
-				this.clipBufPre_clipContextMask = t
-			}, E.prototype.getClipBufPre_clipContextMask = function() {
-				return this.clipBufPre_clipContextMask
-			}, E.prototype.setClipBufPre_clipContextForDraw = function(t) {
-				this.clipBufPre_clipContextDraw = t
-			}, E.prototype.getClipBufPre_clipContextDraw = function() {
-				return this.clipBufPre_clipContextDraw
-			}, I._$ur = -2, I._$c2 = 1, I._$_b = 2, I.prototype._$F0 = function(t) {
-				this._$kP = t._$nP(), this._$dr = t._$nP()
-			}, I.prototype.readV2_opacity = function(t) {
-				t.getFormatVersion() >= G.LIVE2D_FORMAT_VERSION_V2_10_SDK2 && (this._$mS = t._$Tb())
-			}, I.prototype.init = function(t) {}, I.prototype._$Nr = function(t, i) {}, I.prototype.interpolateOpacity = function(t, i, e, r) {
-				null == this._$mS ? e.setInterpolatedOpacity(1) : e.setInterpolatedOpacity(v._$br(t, i, r, this._$mS))
-			}, I.prototype._$2b = function(t, i) {}, I.prototype._$nb = function(t, i, e, r, o, n, s) {}, I.prototype.getType = function() {}, I.prototype._$gs = function(t) {
-				this._$dr = t
-			}, I.prototype._$a2 = function(t) {
-				this._$kP = t
-			}, I.prototype.getTargetBaseDataID = function() {
-				return this._$dr
-			}, I.prototype.getBaseDataID = function() {
-				return this._$kP
-			}, I.prototype._$32 = function() {
-				return null != this._$dr && this._$dr != yt._$2o()
-			}, w._$W2 = 0, w._$CS = w._$W2, w._$Mo = function() {
-				return !0
-			}, w._$XP = function(t) {
-				try {
-					for (var i = w.getTimeMSec(); w.getTimeMSec() - i < t;);
-				} catch (t) {
-					t._$Rb()
-				}
-			}, w.getUserTimeMSec = function() {
-				return w._$CS == w._$W2 ? w.getSystemTimeMSec() : w._$CS
-			}, w.setUserTimeMSec = function(t) {
-				w._$CS = t
-			}, w.updateUserTimeMSec = function() {
-				return w._$CS = w.getSystemTimeMSec()
-			}, w.getTimeMSec = function() {
-				return (new Date).getTime()
-			}, w.getSystemTimeMSec = function() {
-				return (new Date).getTime()
-			}, w._$Q = function(t) {}, w._$jT = function(t, i, e, r, o) {
-				for (var n = 0; n < o; n++) e[r + n] = t[i + n]
-			}, x._$ds = -2, x.prototype._$F0 = function(t) {
-				this._$wL = t._$nP(), this._$VP = t._$6L(), this._$GP = t._$nP()
-			}, x.prototype.getParamIndex = function(t) {
-				return this._$2r != t && (this._$8o = x._$ds), this._$8o
-			}, x.prototype._$Pb = function(t, i) {
-				this._$8o = t, this._$2r = i
-			}, x.prototype.getParamID = function() {
-				return this._$wL
-			}, x.prototype._$yP = function(t) {
-				this._$wL = t
-			}, x.prototype._$N2 = function() {
-				return this._$VP
-			}, x.prototype._$d2 = function() {
-				return this._$GP
-			}, x.prototype._$t2 = function(t, i) {
-				this._$VP = t, this._$GP = i
-			}, x.prototype._$Lr = function() {
-				return this._$O2
-			}, x.prototype._$wr = function(t) {
-				this._$O2 = t
-			}, x.prototype._$SL = function() {
-				return this._$ri
-			}, x.prototype._$AL = function(t) {
-				this._$ri = t
-			}, O.startsWith = function(t, i, e) {
-				var r = i + e.length;
-				if (r >= t.length) return !1;
-				for (var o = i; o < r; o++) if (O.getChar(t, o) != e.charAt(o - i)) return !1;
-				return !0
-			}, O.getChar = function(t, i) {
-				return String.fromCharCode(t.getUint8(i))
-			}, O.createString = function(t, i, e) {
-				for (var r = new ArrayBuffer(2 * e), o = new Uint16Array(r), n = 0; n < e; n++) o[n] = t.getUint8(i + n);
-				return String.fromCharCode.apply(null, o)
-			}, O._$LS = function(t, i, e, r) {
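-				// _$LS appears to parse a signed decimal number character-by-character from the DataView,
-				// starting at offset e; the offset just past the number is written back into r[0].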
-				t instanceof ArrayBuffer && (t = new DataView(t));
-				var o = e,
-					n = !1,
-					s = !1,
-					_ = 0,
-					a = O.getChar(t, o);
-				"-" == a && (n = !0, o++);
-				for (var h = !1; o < i; o++) {
-					switch (a = O.getChar(t, o)) {
-					case "0":
-						_ *= 10;
-						break;
-					case "1":
-						_ = 10 * _ + 1;
-						break;
-					case "2":
-						_ = 10 * _ + 2;
-						break;
-					case "3":
-						_ = 10 * _ + 3;
-						break;
-					case "4":
-						_ = 10 * _ + 4;
-						break;
-					case "5":
-						_ = 10 * _ + 5;
-						break;
-					case "6":
-						_ = 10 * _ + 6;
-						break;
-					case "7":
-						_ = 10 * _ + 7;
-						break;
-					case "8":
-						_ = 10 * _ + 8;
-						break;
-					case "9":
-						_ = 10 * _ + 9;
-						break;
-					case ".":
-						s = !0, o++, h = !0;
-						break;
-					default:
-						h = !0
-					}
-					if (h) break
-				}
-				if (s) for (var l = .1, $ = !1; o < i; o++) {
-					switch (a = O.getChar(t, o)) {
-					case "0":
-						break;
-					case "1":
-						_ += 1 * l;
-						break;
-					case "2":
-						_ += 2 * l;
-						break;
-					case "3":
-						_ += 3 * l;
-						break;
-					case "4":
-						_ += 4 * l;
-						break;
-					case "5":
-						_ += 5 * l;
-						break;
-					case "6":
-						_ += 6 * l;
-						break;
-					case "7":
-						_ += 7 * l;
-						break;
-					case "8":
-						_ += 8 * l;
-						break;
-					case "9":
-						_ += 9 * l;
-						break;
-					default:
-						$ = !0
-					}
-					if (l *= .1, $) break
-				}
-				return n && (_ = -_), r[0] = o, _
-			}, D.prototype._$zP = function() {
-				this._$Ob = new Array
-			}, D.prototype._$F0 = function(t) {
-				this._$Ob = t._$nP()
-			}, D.prototype._$Ur = function(t) {
-				if (t._$WS()) return !0;
-				for (var i = t._$v2(), e = this._$Ob.length - 1; e >= 0; --e) {
-					var r = this._$Ob[e].getParamIndex(i);
-					if (r == x._$ds && (r = t.getParamIndex(this._$Ob[e].getParamID())), t._$Xb(r)) return !0
-				}
-				return !1
-			}, D.prototype._$Q2 = function(t, i) {
-				for (var e, r, o = this._$Ob.length, n = t._$v2(), s = 0, _ = 0; _ < o; _++) {
-					var a = this._$Ob[_];
-					if (e = a.getParamIndex(n), e == x._$ds && (e = t.getParamIndex(a.getParamID()), a._$Pb(e, n)), e < 0) throw new Error("err 23242 : " + a.getParamID());
-					var h = e < 0 ? 0 : t.getParamFloat(e);
-					r = a._$N2();
-					var l, $, u = a._$d2(),
-						p = -1,
-						f = 0;
-					if (r < 1);
-					else if (1 == r) l = u[0], l - U._$J < h && h < l + U._$J ? (p = 0, f = 0) : (p = 0, i[0] = !0);
-					else if (l = u[0], h < l - U._$J) p = 0, i[0] = !0;
-					else if (h < l + U._$J) p = 0;
-					else {
-						for (var c = !1, d = 1; d < r; ++d) {
-							if ($ = u[d], h < $ + U._$J) {
-								$ - U._$J < h ? p = d : (p = d - 1, f = (h - l) / ($ - l), s++), c = !0;
-								break
-							}
-							l = $
-						}
-						c || (p = r - 1, f = 0, i[0] = !0)
-					}
-					a._$wr(p), a._$AL(f)
-				}
-				return s
-			}, D.prototype._$zr = function(t, i, e) {
-				var r = 1 << e;
-				r + 1 > U._$Qb && console.log("err 23245\n");
-				for (var o = this._$Ob.length, n = 1, s = 1, _ = 0, a = 0; a < r; ++a) t[a] = 0;
-				for (var h = 0; h < o; ++h) {
-					var l = this._$Ob[h];
-					if (0 == l._$SL()) {
-						var $ = l._$Lr() * n;
-						if ($ < 0 && at._$3T) throw new Error("err 23246");
-						for (var a = 0; a < r; ++a) t[a] += $
-					} else {
-						for (var $ = n * l._$Lr(), u = n * (l._$Lr() + 1), a = 0; a < r; ++a) t[a] += (a / s | 0) % 2 == 0 ? $ : u;
-						i[_++] = l._$SL(), s *= 2
-					}
-					n *= l._$N2()
-				}
-				t[r] = 65535, i[_] = -1
-			}, D.prototype._$h2 = function(t, i, e) {
-				for (var r = new Float32Array(i), o = 0; o < i; ++o) r[o] = e[o];
-				var n = new x;
-				n._$yP(t), n._$t2(i, r), this._$Ob.push(n)
-			}, D.prototype._$J2 = function(t) {
-				for (var i = t, e = this._$Ob.length, r = 0; r < e; ++r) {
-					var o = this._$Ob[r],
-						n = o._$N2(),
-						s = i % o._$N2(),
-						_ = o._$d2()[s];
-					console.log("%s[%d]=%7.2f / ", o.getParamID(), s, _), i /= n
-				}
-				console.log("\n")
-			}, D.prototype.getParamCount = function() {
-				return this._$Ob.length
-			}, D.prototype._$zs = function() {
-				return this._$Ob
-			}, R.prototype.identity = function() {
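-				// R wraps a 4x4 matrix in a Float32Array(16), apparently in the usual WebGL column-vector
-				// layout (translation in m[12..14]); indices 0, 5, 10 and 15 form the diagonal.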
-				for (var t = 0; t < 16; t++) this.m[t] = t % 5 == 0 ? 1 : 0
-			}, R.prototype.getArray = function() {
-				return this.m
-			}, R.prototype.getCopyMatrix = function() {
-				return new Float32Array(this.m)
-			}, R.prototype.setMatrix = function(t) {
-				if (null != t && 16 == t.length) for (var i = 0; i < 16; i++) this.m[i] = t[i]
-			}, R.prototype.mult = function(t, i, e) {
-				return null == i ? null : (this == i ? this.mult_safe(this.m, t.m, i.m, e) : this.mult_fast(this.m, t.m, i.m, e), i)
-			}, R.prototype.mult_safe = function(t, i, e, r) {
-				if (t == e) {
-					var o = new Array(16);
-					this.mult_fast(t, i, o, r);
-					for (var n = 15; n >= 0; --n) e[n] = o[n]
-				} else this.mult_fast(t, i, e, r)
-			}, R.prototype.mult_fast = function(t, i, e, r) {
-				r ? (e[0] = t[0] * i[0] + t[4] * i[1] + t[8] * i[2], e[4] = t[0] * i[4] + t[4] * i[5] + t[8] * i[6], e[8] = t[0] * i[8] + t[4] * i[9] + t[8] * i[10], e[12] = t[0] * i[12] + t[4] * i[13] + t[8] * i[14] + t[12], e[1] = t[1] * i[0] + t[5] * i[1] + t[9] * i[2], e[5] = t[1] * i[4] + t[5] * i[5] + t[9] * i[6], e[9] = t[1] * i[8] + t[5] * i[9] + t[9] * i[10], e[13] = t[1] * i[12] + t[5] * i[13] + t[9] * i[14] + t[13], e[2] = t[2] * i[0] + t[6] * i[1] + t[10] * i[2], e[6] = t[2] * i[4] + t[6] * i[5] + t[10] * i[6], e[10] = t[2] * i[8] + t[6] * i[9] + t[10] * i[10], e[14] = t[2] * i[12] + t[6] * i[13] + t[10] * i[14] + t[14], e[3] = e[7] = e[11] = 0, e[15] = 1) : (e[0] = t[0] * i[0] + t[4] * i[1] + t[8] * i[2] + t[12] * i[3], e[4] = t[0] * i[4] + t[4] * i[5] + t[8] * i[6] + t[12] * i[7], e[8] = t[0] * i[8] + t[4] * i[9] + t[8] * i[10] + t[12] * i[11], e[12] = t[0] * i[12] + t[4] * i[13] + t[8] * i[14] + t[12] * i[15], e[1] = t[1] * i[0] + t[5] * i[1] + t[9] * i[2] + t[13] * i[3], e[5] = t[1] * i[4] + t[5] * i[5] + t[9] * i[6] + t[13] * i[7], e[9] = t[1] * i[8] + t[5] * i[9] + t[9] * i[10] + t[13] * i[11], e[13] = t[1] * i[12] + t[5] * i[13] + t[9] * i[14] + t[13] * i[15], e[2] = t[2] * i[0] + t[6] * i[1] + t[10] * i[2] + t[14] * i[3], e[6] = t[2] * i[4] + t[6] * i[5] + t[10] * i[6] + t[14] * i[7], e[10] = t[2] * i[8] + t[6] * i[9] + t[10] * i[10] + t[14] * i[11], e[14] = t[2] * i[12] + t[6] * i[13] + t[10] * i[14] + t[14] * i[15], e[3] = t[3] * i[0] + t[7] * i[1] + t[11] * i[2] + t[15] * i[3], e[7] = t[3] * i[4] + t[7] * i[5] + t[11] * i[6] + t[15] * i[7], e[11] = t[3] * i[8] + t[7] * i[9] + t[11] * i[10] + t[15] * i[11], e[15] = t[3] * i[12] + t[7] * i[13] + t[11] * i[14] + t[15] * i[15])
-			}, R.prototype.translate = function(t, i, e) {
-				this.m[12] = this.m[0] * t + this.m[4] * i + this.m[8] * e + this.m[12], this.m[13] = this.m[1] * t + this.m[5] * i + this.m[9] * e + this.m[13], this.m[14] = this.m[2] * t + this.m[6] * i + this.m[10] * e + this.m[14], this.m[15] = this.m[3] * t + this.m[7] * i + this.m[11] * e + this.m[15]
-			}, R.prototype.scale = function(t, i, e) {
-				this.m[0] *= t, this.m[4] *= i, this.m[8] *= e, this.m[1] *= t, this.m[5] *= i, this.m[9] *= e, this.m[2] *= t, this.m[6] *= i, this.m[10] *= e, this.m[3] *= t, this.m[7] *= i, this.m[11] *= e
-			}, R.prototype.rotateX = function(t) {
-				var i = Lt.fcos(t),
-					e = Lt._$9(t),
-					r = this.m[4];
-				this.m[4] = r * i + this.m[8] * e, this.m[8] = r * -e + this.m[8] * i, r = this.m[5], this.m[5] = r * i + this.m[9] * e, this.m[9] = r * -e + this.m[9] * i, r = this.m[6], this.m[6] = r * i + this.m[10] * e, this.m[10] = r * -e + this.m[10] * i, r = this.m[7], this.m[7] = r * i + this.m[11] * e, this.m[11] = r * -e + this.m[11] * i
-			}, R.prototype.rotateY = function(t) {
-				var i = Lt.fcos(t),
-					e = Lt._$9(t),
-					r = this.m[0];
-				this.m[0] = r * i + this.m[8] * -e, this.m[8] = r * e + this.m[8] * i, r = this.m[1], this.m[1] = r * i + this.m[9] * -e, this.m[9] = r * e + this.m[9] * i, r = this.m[2], this.m[2] = r * i + this.m[10] * -e, this.m[10] = r * e + this.m[10] * i, r = this.m[3], this.m[3] = r * i + this.m[11] * -e, this.m[11] = r * e + this.m[11] * i
-			}, R.prototype.rotateZ = function(t) {
-				var i = Lt.fcos(t),
-					e = Lt._$9(t),
-					r = this.m[0];
-				this.m[0] = r * i + this.m[4] * e, this.m[4] = r * -e + this.m[4] * i, r = this.m[1], this.m[1] = r * i + this.m[5] * e, this.m[5] = r * -e + this.m[5] * i, r = this.m[2], this.m[2] = r * i + this.m[6] * e, this.m[6] = r * -e + this.m[6] * i, r = this.m[3], this.m[3] = r * i + this.m[7] * e, this.m[7] = r * -e + this.m[7] * i
-			}, b.prototype = new et, b._$tP = new Object, b._$27 = function() {
-				b._$tP.clear()
-			}, b.getID = function(t) {
-				var i = b._$tP[t];
-				return null == i && (i = new b(t), b._$tP[t] = i), i
-			}, b.prototype._$3s = function() {
-				return new b
-			}, F._$kS = -1, F._$pS = 0, F._$hb = 1, F.STATE_IDENTITY = 0, F._$gb = 1, F._$fo = 2, F._$go = 4, F.prototype.transform = function(t, i, e) {
-				var r, o, n, s, _, a, h = 0,
-					l = 0;
-				switch (this._$hi) {
-				default:
-					return;
-				case F._$go | F._$fo | F._$gb:
-					for (r = this._$7, o = this._$H, n = this._$k, s = this._$f, _ = this._$g, a = this._$w; --e >= 0;) {
-						var $ = t[h++],
-							u = t[h++];
-						i[l++] = r * $ + o * u + n, i[l++] = s * $ + _ * u + a
-					}
-					return;
-				case F._$go | F._$fo:
-					for (r = this._$7, o = this._$H, s = this._$f, _ = this._$g; --e >= 0;) {
-						var $ = t[h++],
-							u = t[h++];
-						i[l++] = r * $ + o * u, i[l++] = s * $ + _ * u
-					}
-					return;
-				case F._$go | F._$gb:
-					for (o = this._$H, n = this._$k, s = this._$f, a = this._$w; --e >= 0;) {
-						var $ = t[h++];
-						i[l++] = o * t[h++] + n, i[l++] = s * $ + a
-					}
-					return;
-				case F._$go:
-					for (o = this._$H, s = this._$f; --e >= 0;) {
-						var $ = t[h++];
-						i[l++] = o * t[h++], i[l++] = s * $
-					}
-					return;
-				case F._$fo | F._$gb:
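-					// 109/111/99 are the ASCII codes for "m", "o", "c" - the magic bytes of a .moc model file.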
-					for (r = this._$7, n = this._$k, _ = this._$g, a = this._$w; --e >= 0;) i[l++] = r * t[h++] + n, i[l++] = _ * t[h++] + a;
-					return;
-				case F._$fo:
-					for (r = this._$7, _ = this._$g; --e >= 0;) i[l++] = r * t[h++], i[l++] = _ * t[h++];
-					return;
-				case F._$gb:
-					for (n = this._$k, a = this._$w; --e >= 0;) i[l++] = t[h++] + n, i[l++] = t[h++] + a;
-					return;
-				case F.STATE_IDENTITY:
-					return void(t == i && h == l || w._$jT(t, h, i, l, 2 * e))
-				}
-			}, F.prototype.update = function() {
-				0 == this._$H && 0 == this._$f ? 1 == this._$7 && 1 == this._$g ? 0 == this._$k && 0 == this._$w ? (this._$hi = F.STATE_IDENTITY, this._$Z = F._$pS) : (this._$hi = F._$gb, this._$Z = F._$hb) : 0 == this._$k && 0 == this._$w ? (this._$hi = F._$fo, this._$Z = F._$kS) : (this._$hi = F._$fo | F._$gb, this._$Z = F._$kS) : 0 == this._$7 && 0 == this._$g ? 0 == this._$k && 0 == this._$w ? (this._$hi = F._$go, this._$Z = F._$kS) : (this._$hi = F._$go | F._$gb, this._$Z = F._$kS) : 0 == this._$k && 0 == this._$w ? (this._$hi = F._$go | F._$fo, this._$Z = F._$kS) : (this._$hi = F._$go | F._$fo | F._$gb, this._$Z = F._$kS)
-			}, F.prototype._$RT = function(t) {
-				this._$IT(t);
-				var i = t[0],
-					e = t[2],
-					r = t[1],
-					o = t[3],
-					n = Math.sqrt(i * i + r * r),
-					s = i * o - e * r;
-				0 == n ? at._$so && console.log("affine._$RT() / rt==0") : (t[0] = n, t[1] = s / n, t[2] = (r * o + i * e) / s, t[3] = Math.atan2(r, i))
-			}, F.prototype._$ho = function(t, i, e, r) {
-				var o = new Float32Array(6),
-					n = new Float32Array(6);
-				t._$RT(o), i._$RT(n);
-				var s = new Float32Array(6);
-				s[0] = o[0] + (n[0] - o[0]) * e, s[1] = o[1] + (n[1] - o[1]) * e, s[2] = o[2] + (n[2] - o[2]) * e, s[3] = o[3] + (n[3] - o[3]) * e, s[4] = o[4] + (n[4] - o[4]) * e, s[5] = o[5] + (n[5] - o[5]) * e, r._$CT(s)
-			}, F.prototype._$CT = function(t) {
-				var i = Math.cos(t[3]),
-					e = Math.sin(t[3]);
-				this._$7 = t[0] * i, this._$f = t[0] * e, this._$H = t[1] * (t[2] * i - e), this._$g = t[1] * (t[2] * e + i), this._$k = t[4], this._$w = t[5], this.update()
-			}, F.prototype._$IT = function(t) {
-				t[0] = this._$7, t[1] = this._$f, t[2] = this._$H, t[3] = this._$g, t[4] = this._$k, t[5] = this._$w
-			}, C.prototype = new s, C._$cs = "VISIBLE:", C._$ar = "LAYOUT:", C._$Co = 0, C._$D2 = [], C._$1T = 1, C.loadMotion = function(t) {
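-				// loadMotion appears to parse a plain-text .mtn motion file: "#" starts a comment line,
-				// "$fps=..." sets the frame rate, and "name=v1,v2,..." lines define per-parameter curves,
-				// with "VISIBLE:" / "LAYOUT:" prefixes marking special tracks.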
-				var i = new C,
-					e = [0],
-					r = t.length;
-				i._$yT = 0;
-				for (var o = 0; o < r; ++o) {
-					var n = 255 & t[o];
-					if ("\n" != n && "\r" != n) if ("#" != n) if ("$" != n) {
-						if ("a" <= n && n <= "z" || "A" <= n && n <= "Z" || "_" == n) {
-							for (var s = o, _ = -1; o < r && ("\r" != (n = 255 & t[o]) && "\n" != n); ++o) if ("=" == n) {
-								_ = o;
-								break
-							}
-							if (_ >= 0) {
-								var a = new B;
-								O.startsWith(t, s, C._$cs) ? (a._$RP = B._$hs, a._$4P = new String(t, s, _ - s)) : O.startsWith(t, s, C._$ar) ? (a._$4P = new String(t, s + 7, _ - s - 7), O.startsWith(t, s + 7, "ANCHOR_X") ? a._$RP = B._$xs : O.startsWith(t, s + 7, "ANCHOR_Y") ? a._$RP = B._$us : O.startsWith(t, s + 7, "SCALE_X") ? a._$RP = B._$qs : O.startsWith(t, s + 7, "SCALE_Y") ? a._$RP = B._$Ys : O.startsWith(t, s + 7, "X") ? a._$RP = B._$ws : O.startsWith(t, s + 7, "Y") && (a._$RP = B._$Ns)) : (a._$RP = B._$Fr, a._$4P = new String(t, s, _ - s)), i.motions.push(a);
-								var h = 0;
-								for (C._$D2.clear(), o = _ + 1; o < r && ("\r" != (n = 255 & t[o]) && "\n" != n); ++o) if ("," != n && " " != n && "\t" != n) {
-									var l = O._$LS(t, r, o, e);
-									if (e[0] > 0) {
-										C._$D2.push(l), h++;
-										var $ = e[0];
-										if ($ < o) {
-											console.log("_$n0 _$hi . @Live2DMotion loadMotion()\n");
-											break
-										}
-										o = $
-									}
-								}
-								a._$I0 = C._$D2._$BL(), h > i._$yT && (i._$yT = h)
-							}
-						}
-					} else {
-						for (var s = o, _ = -1; o < r && ("\r" != (n = 255 & t[o]) && "\n" != n); ++o) if ("=" == n) {
-							_ = o;
-							break
-						}
-						var u = !1;
-						if (_ >= 0) for (_ == s + 4 && "f" == t[s + 1] && "p" == t[s + 2] && "s" == t[s + 3] && (u = !0), o = _ + 1; o < r && ("\r" != (n = 255 & t[o]) && "\n" != n); ++o) if ("," != n && " " != n && "\t" != n) {
-							var l = O._$LS(t, r, o, e);
-							e[0] > 0 && u && 5 < l && l < 121 && (i._$D0 = l), o = e[0]
-						}
-						for (; o < r && ("\n" != t[o] && "\r" != t[o]); ++o);
-					} else for (; o < r && ("\n" != t[o] && "\r" != t[o]); ++o);
-				}
-				return i._$AS = 1e3 * i._$yT / i._$D0 | 0, i
-			}, C.prototype.getDurationMSec = function() {
-				return this._$AS
-			}, C.prototype.dump = function() {
-				for (var t = 0; t < this.motions.length; t++) {
-					var i = this.motions[t];
-					console.log("_$wL[%s] [%d]. ", i._$4P, i._$I0.length);
-					for (var e = 0; e < i._$I0.length && e < 10; e++) console.log("%5.2f ,", i._$I0[e]);
-					console.log("\n")
-				}
-			}, C.prototype.updateParamExe = function(t, i, e, r) {
-				for (var o = i - r._$z2, n = o * this._$D0 / 1e3, s = 0 | n, _ = n - s, a = 0; a < this.motions.length; a++) {
-					var h = this.motions[a],
-						l = h._$I0.length,
-						$ = h._$4P;
-					if (h._$RP == B._$hs) {
-						var u = h._$I0[s >= l ? l - 1 : s];
-						t.setParamFloat($, u)
-					} else if (B._$ws <= h._$RP && h._$RP <= B._$Ys);
-					else {
-						var p = t.getParamFloat($),
-							f = h._$I0[s >= l ? l - 1 : s],
-							c = h._$I0[s + 1 >= l ? l - 1 : s + 1],
-							d = f + (c - f) * _,
-							g = p + (d - p) * e;
-						t.setParamFloat($, g)
-					}
-				}
-				s >= this._$yT && (this._$E ? (r._$z2 = i, this.loopFadeIn && (r._$bs = i)) : r._$9L = !0)
-			}, C.prototype._$r0 = function() {
-				return this._$E
-			}, C.prototype._$aL = function(t) {
-				this._$E = t
-			}, C.prototype.isLoopFadeIn = function() {
-				return this.loopFadeIn
-			}, C.prototype.setLoopFadeIn = function(t) {
-				this.loopFadeIn = t
-			}, N.prototype.clear = function() {
-				this.size = 0
-			}, N.prototype.add = function(t) {
-				if (this._$P.length <= this.size) {
-					var i = new Float32Array(2 * this.size);
-					w._$jT(this._$P, 0, i, 0, this.size), this._$P = i
-				}
-				this._$P[this.size++] = t
-			}, N.prototype._$BL = function() {
-				var t = new Float32Array(this.size);
-				return w._$jT(this._$P, 0, t, 0, this.size), t
-			}, B._$Fr = 0, B._$hs = 1, B._$ws = 100, B._$Ns = 101, B._$xs = 102, B._$us = 103, B._$qs = 104, B._$Ys = 105, U._$Ms = 1, U._$Qs = 2, U._$i2 = 0, U._$No = 2, U._$do = U._$Ms, U._$Ls = !0, U._$1r = 5, U._$Qb = 65, U._$J = 1e-4, U._$FT = .001, U._$Ss = 3, G._$o7 = 6, G._$S7 = 7, G._$s7 = 8, G._$77 = 9, G.LIVE2D_FORMAT_VERSION_V2_10_SDK2 = 10, G.LIVE2D_FORMAT_VERSION_V2_11_SDK2_1 = 11, G._$T7 = G.LIVE2D_FORMAT_VERSION_V2_11_SDK2_1, G._$Is = -2004318072, G._$h0 = 0, G._$4L = 23, G._$7P = 33, G._$uT = function(t) {
-				console.log("_$bo :: _$6 _$mo _$E0 : %d\n", t)
-			}, G._$9o = function(t) {
-				if (t < 40) return G._$uT(t), null;
-				if (t < 50) return G._$uT(t), null;
-				if (t < 60) return G._$uT(t), null;
-				if (t < 100) switch (t) {
-				case 65:
-					return new Z;
-				case 66:
-					return new D;
-				case 67:
-					return new x;
-				case 68:
-					return new z;
-				case 69:
-					return new P;
-				case 70:
-					return new $t;
-				default:
-					return G._$uT(t), null
-				} else if (t < 150) switch (t) {
-				case 131:
-					return new st;
-				case 133:
-					return new tt;
-				case 136:
-					return new p;
-				case 137:
-					return new ot;
-				case 142:
-					return new j
-				}
-				return G._$uT(t), null
-			}, Y._$HP = 0, Y._$_0 = !0;
-			Y._$V2 = -1, Y._$W0 = -1, Y._$jr = !1, Y._$ZS = !0, Y._$tr = -1e6, Y._$lr = 1e6, Y._$is = 32, Y._$e = !1, Y.prototype.getDrawDataIndex = function(t) {
-				for (var i = this._$aS.length - 1; i >= 0; --i) if (null != this._$aS[i] && this._$aS[i].getDrawDataID() == t) return i;
-				return -1
-			}, Y.prototype.getDrawData = function(t) {
-				if (t instanceof b) {
-					if (null == this._$Bo) {
-						this._$Bo = new Object;
-						for (var i = this._$aS.length, e = 0; e < i; e++) {
-							var r = this._$aS[e],
-								o = r.getDrawDataID();
-							null != o && (this._$Bo[o] = r)
-						}
-					}
-					return this._$Bo[t]
-				}
-				return t < this._$aS.length ? this._$aS[t] : null
-			}, Y.prototype.release = function() {
-				this._$3S.clear(), this._$aS.clear(), this._$F2.clear(), null != this._$Bo && this._$Bo.clear(), this._$db.clear(), this._$8b.clear(), this._$Hr.clear()
-			}, Y.prototype.init = function() {
-				this._$co++, this._$F2.length > 0 && this.release();
-				for (var t = this._$Ri.getModelImpl(), i = t._$Xr(), r = i.length, o = new Array, n = new Array, s = 0; s < r; ++s) {
-					var _ = i[s];
-					this._$F2.push(_), this._$Hr.push(_.init(this));
-					for (var a = _.getBaseData(), h = a.length, l = 0; l < h; ++l) o.push(a[l]);
-					for (var l = 0; l < h; ++l) {
-						var $ = a[l].init(this);
-						$._$l2(s), n.push($)
-					}
-					for (var u = _.getDrawData(), p = u.length, l = 0; l < p; ++l) {
-						var f = u[l],
-							c = f.init(this);
-						c._$IP = s, this._$aS.push(f), this._$8b.push(c)
-					}
-				}
-				for (var d = o.length, g = yt._$2o();;) {
-					for (var y = !1, s = 0; s < d; ++s) {
-						var m = o[s];
-						if (null != m) {
-							var T = m.getTargetBaseDataID();
-							(null == T || T == g || this.getBaseDataIndex(T) >= 0) && (this._$3S.push(m), this._$db.push(n[s]), o[s] = null, y = !0)
-						}
-					}
-					if (!y) break
-				}
-				var P = t._$E2();
-				if (null != P) {
-					var S = P._$1s();
-					if (null != S) for (var v = S.length, s = 0; s < v; ++s) {
-						var L = S[s];
-						null != L && this._$02(L.getParamID(), L.getDefaultValue(), L.getMinValue(), L.getMaxValue())
-					}
-				}
-				this.clipManager = new e(this.dp_webgl), this.clipManager.init(this, this._$aS, this._$8b), this._$QT = !0
-			}, Y.prototype.update = function() {
-				Y._$e && _.start("_$zL");
-				for (var t = this._$_2.length, i = 0; i < t; i++) this._$_2[i] != this._$vr[i] && (this._$Js[i] = Y._$ZS, this._$vr[i] = this._$_2[i]);
-				var e = this._$3S.length,
-					r = this._$aS.length,
-					o = W._$or(),
-					n = W._$Pr(),
-					s = n - o + 1;
-				(null == this._$Ws || this._$Ws.length < s) && (this._$Ws = new Int16Array(s), this._$Vs = new Int16Array(s));
-				for (var i = 0; i < s; i++) this._$Ws[i] = Y._$V2, this._$Vs[i] = Y._$V2;
-				(null == this._$Er || this._$Er.length < r) && (this._$Er = new Int16Array(r));
-				for (var i = 0; i < r; i++) this._$Er[i] = Y._$W0;
-				Y._$e && _.dump("_$zL"), Y._$e && _.start("_$UL");
-				for (var a = null, h = 0; h < e; ++h) {
-					var l = this._$3S[h],
-						$ = this._$db[h];
-					try {
-						l._$Nr(this, $), l._$2b(this, $)
-					} catch (t) {
-						null == a && (a = t)
-					}
-				}
-				null != a && Y._$_0 && _._$Rb(a), Y._$e && _.dump("_$UL"), Y._$e && _.start("_$DL");
-				for (var u = null, p = 0; p < r; ++p) {
-					var f = this._$aS[p],
-						c = this._$8b[p];
-					try {
-						if (f._$Nr(this, c), c._$u2()) continue;
-						f._$2b(this, c);
-						var d, g = Math.floor(f._$zS(this, c) - o);
-						try {
-							d = this._$Vs[g]
-						} catch (t) {
-							console.log("_$li :: %s / %s \t\t\t\t@@_$fS\n", t.toString(), f.getDrawDataID().toString()), g = Math.floor(f._$zS(this, c) - o);
-							continue
-						}
-						d == Y._$V2 ? this._$Ws[g] = p : this._$Er[d] = p, this._$Vs[g] = p
-					} catch (t) {
-						null == u && (u = t, at._$sT(at._$H7))
-					}
-				}
-				null != u && Y._$_0 && _._$Rb(u), Y._$e && _.dump("_$DL"), Y._$e && _.start("_$eL");
-				for (var i = this._$Js.length - 1; i >= 0; i--) this._$Js[i] = Y._$jr;
-				return this._$QT = !1, Y._$e && _.dump("_$eL"), !1
-			}, Y.prototype.preDraw = function(t) {
-				null != this.clipManager && (t._$ZT(), this.clipManager.setupClip(this, t))
-			}, Y.prototype.draw = function(t) {
-				if (null == this._$Ws) return void _._$li("call _$Ri.update() before _$Ri.draw() ");
-				var i = this._$Ws.length;
-				t._$ZT();
-				for (var e = 0; e < i; ++e) {
-					var r = this._$Ws[e];
-					if (r != Y._$V2) for (;;) {
-						var o = this._$aS[r],
-							n = this._$8b[r];
-						if (n._$yo()) {
-							var s = n._$IP,
-								a = this._$Hr[s];
-							n._$VS = a.getPartsOpacity(), o.draw(t, this, n)
-						}
-						var h = this._$Er[r];
-						if (h <= r || h == Y._$W0) break;
-						r = h
-					}
-				}
-			}, Y.prototype.getParamIndex = function(t) {
-				for (var i = this._$pb.length - 1; i >= 0; --i) if (this._$pb[i] == t) return i;
-				return this._$02(t, 0, Y._$tr, Y._$lr)
-			}, Y.prototype._$BS = function(t) {
-				return this.getBaseDataIndex(t)
-			}, Y.prototype.getBaseDataIndex = function(t) {
-				for (var i = this._$3S.length - 1; i >= 0; --i) if (null != this._$3S[i] && this._$3S[i].getBaseDataID() == t) return i;
-				return -1
-			}, Y.prototype._$UT = function(t, i) {
-				var e = new Float32Array(i);
-				return w._$jT(t, 0, e, 0, t.length), e
-			}, Y.prototype._$02 = function(t, i, e, r) {
-				if (this._$qo >= this._$pb.length) {
-					var o = this._$pb.length,
-						n = new Array(2 * o);
-					w._$jT(this._$pb, 0, n, 0, o), this._$pb = n, this._$_2 = this._$UT(this._$_2, 2 * o), this._$vr = this._$UT(this._$vr, 2 * o), this._$Rr = this._$UT(this._$Rr, 2 * o), this._$Or = this._$UT(this._$Or, 2 * o);
-					var s = new Array;
-					w._$jT(this._$Js, 0, s, 0, o), this._$Js = s
-				}
-				return this._$pb[this._$qo] = t, this._$_2[this._$qo] = i, this._$vr[this._$qo] = i, this._$Rr[this._$qo] = e, this._$Or[this._$qo] = r, this._$Js[this._$qo] = Y._$ZS, this._$qo++
-			}, Y.prototype._$Zo = function(t, i) {
-				this._$3S[t] = i
-			}, Y.prototype.setParamFloat = function(t, i) {
-				i < this._$Rr[t] && (i = this._$Rr[t]), i > this._$Or[t] && (i = this._$Or[t]), this._$_2[t] = i
-			}, Y.prototype.loadParam = function() {
-				var t = this._$_2.length;
-				t > this._$fs.length && (t = this._$fs.length), w._$jT(this._$fs, 0, this._$_2, 0, t)
-			}, Y.prototype.saveParam = function() {
-				var t = this._$_2.length;
-				t > this._$fs.length && (this._$fs = new Float32Array(t)), w._$jT(this._$_2, 0, this._$fs, 0, t)
-			}, Y.prototype._$v2 = function() {
-				return this._$co
-			}, Y.prototype._$WS = function() {
-				return this._$QT
-			}, Y.prototype._$Xb = function(t) {
-				return this._$Js[t] == Y._$ZS
-			}, Y.prototype._$vs = function() {
-				return this._$Es
-			}, Y.prototype._$Tr = function() {
-				return this._$ZP
-			}, Y.prototype.getBaseData = function(t) {
-				return this._$3S[t]
-			}, Y.prototype.getParamFloat = function(t) {
-				return this._$_2[t]
-			}, Y.prototype.getParamMax = function(t) {
-				return this._$Or[t]
-			}, Y.prototype.getParamMin = function(t) {
-				return this._$Rr[t]
-			}, Y.prototype.setPartsOpacity = function(t, i) {
-				this._$Hr[t].setPartsOpacity(i)
-			}, Y.prototype.getPartsOpacity = function(t) {
-				return this._$Hr[t].getPartsOpacity()
-			}, Y.prototype.getPartsDataIndex = function(t) {
-				for (var i = this._$F2.length - 1; i >= 0; --i) if (null != this._$F2[i] && this._$F2[i]._$p2() == t) return i;
-				return -1
-			}, Y.prototype._$q2 = function(t) {
-				return this._$db[t]
-			}, Y.prototype._$C2 = function(t) {
-				return this._$8b[t]
-			}, Y.prototype._$Bb = function(t) {
-				return this._$Hr[t]
-			}, Y.prototype._$5s = function(t, i) {
-				for (var e = this._$Ws.length, r = t, o = 0; o < e; ++o) {
-					var n = this._$Ws[o];
-					if (n != Y._$V2) for (;;) {
-						var s = this._$8b[n];
-						s._$yo() && (s._$GT()._$B2(this, s, r), r += i);
-						var _ = this._$Er[n];
-						if (_ <= n || _ == Y._$W0) break;
-						n = _
-					}
-				}
-			}, Y.prototype.setDrawParam = function(t) {
-				this.dp_webgl = t
-			}, Y.prototype.getDrawParam = function() {
-				return this.dp_webgl
-			}, k._$0T = function(t) {
-				return k._$0T(new _$5(t))
-			}, k._$0T = function(t) {
-				if (!t.exists()) throw new _$ls(t._$3b());
-				for (var i, e = t.length(), r = new Int8Array(e), o = new _$Xs(new _$kb(t), 8192), n = 0;
-				(i = o.read(r, n, e - n)) > 0;) n += i;
-				return r
-			}, k._$C = function(t) {
-				var i = null,
-					e = null;
-				try {
-					i = t instanceof Array ? t : new _$Xs(t, 8192), e = new _$js;
-					for (var r, o = new Int8Array(1e3);
-					(r = i.read(o)) > 0;) e.write(o, 0, r);
-					return e._$TS()
-				} finally {
-					null != t && t.close(), null != e && (e.flush(), e.close())
-				}
-			}, V.prototype._$T2 = function() {
-				return w.getUserTimeMSec() + Math._$10() * (2 * this._$Br - 1)
-			}, V.prototype._$uo = function(t) {
-				this._$Br = t
-			}, V.prototype._$QS = function(t, i, e) {
-				this._$Dr = t, this._$Cb = i, this._$mr = e
-			}, V.prototype._$7T = function(t) {
-				var i, e = w.getUserTimeMSec(),
-					r = 0;
-				switch (this._$_L) {
-				case wt.STATE_CLOSING:
-					r = (e - this._$bb) / this._$Dr, r >= 1 && (r = 1, this._$_L = wt.STATE_CLOSED, this._$bb = e), i = 1 - r;
-					break;
-				case wt.STATE_CLOSED:
-					r = (e - this._$bb) / this._$Cb, r >= 1 && (this._$_L = wt.STATE_OPENING, this._$bb = e), i = 0;
-					break;
-				case wt.STATE_OPENING:
-					r = (e - this._$bb) / this._$mr, r >= 1 && (r = 1, this._$_L = wt.STATE_INTERVAL, this._$12 = this._$T2()), i = r;
-					break;
-				case wt.STATE_INTERVAL:
-					this._$12 < e && (this._$_L = wt.STATE_CLOSING, this._$bb = e), i = 1;
-					break;
-				case wt.STATE_FIRST:
-				default:
-					this._$_L = wt.STATE_INTERVAL, this._$12 = this._$T2(), i = 1
-				}
-				this._$jo || (i = -i), t.setParamFloat(this._$iL, i), t.setParamFloat(this._$0L, i)
-			};
-			var wt = function() {};
-			wt.STATE_FIRST = "STATE_FIRST", wt.STATE_INTERVAL = "STATE_INTERVAL", wt.STATE_CLOSING = "STATE_CLOSING", wt.STATE_CLOSED = "STATE_CLOSED", wt.STATE_OPENING = "STATE_OPENING", X.prototype = new E, X._$As = 32, X._$Gr = !1, X._$NT = null, X._$vS = null, X._$no = null, X._$9r = function(t) {
-				return new Float32Array(t)
-			}, X._$vb = function(t) {
-				return new Int16Array(t)
-			}, X._$cr = function(t, i) {
-				return null == t || t._$yL() < i.length ? (t = X._$9r(2 * i.length), t.put(i), t._$oT(0)) : (t.clear(), t.put(i), t._$oT(0)), t
-			}, X._$mb = function(t, i) {
-				return null == t || t._$yL() < i.length ? (t = X._$vb(2 * i.length), t.put(i), t._$oT(0)) : (t.clear(), t.put(i), t._$oT(0)), t
-			}, X._$Hs = function() {
-				return X._$Gr
-			}, X._$as = function(t) {
-				X._$Gr = t
-			}, X.prototype.setGL = function(t) {
-				this.gl = t
-			}, X.prototype.setTransform = function(t) {
-				this.transform = t
-			}, X.prototype._$ZT = function() {}, X.prototype._$Uo = function(t, i, e, r, o, n, s, _) {
-				if (!(n < .01)) {
-					var a = this._$U2[t],
-						h = n > .9 ? at.EXPAND_W : 0;
-					this.gl.drawElements(a, e, r, o, n, h, this.transform, _)
-				}
-			}, X.prototype._$Rs = function() {
-				throw new Error("_$Rs")
-			}, X.prototype._$Ds = function(t) {
-				throw new Error("_$Ds")
-			}, X.prototype._$K2 = function() {
-				for (var t = 0; t < this._$sb.length; t++) {
-					0 != this._$sb[t] && (this.gl._$Sr(1, this._$sb, t), this._$sb[t] = 0)
-				}
-			}, X.prototype.setTexture = function(t, i) {
-				this._$sb.length < t + 1 && this._$nS(t), this._$sb[t] = i
-			}, X.prototype.setTexture = function(t, i) {
-				this._$sb.length < t + 1 && this._$nS(t), this._$U2[t] = i
-			}, X.prototype._$nS = function(t) {
-				var i = Math.max(2 * this._$sb.length, t + 1 + 10),
-					e = new Int32Array(i);
-				w._$jT(this._$sb, 0, e, 0, this._$sb.length), this._$sb = e;
-				var r = new Array;
-				w._$jT(this._$U2, 0, r, 0, this._$U2.length), this._$U2 = r
-			}, z.prototype = new I, z._$Xo = new Float32Array(2), z._$io = new Float32Array(2), z._$0o = new Float32Array(2), z._$Lo = new Float32Array(2), z._$To = new Float32Array(2), z._$Po = new Float32Array(2), z._$gT = new Array, z.prototype._$zP = function() {
-				this._$GS = new D, this._$GS._$zP(), this._$Y0 = new Array
-			}, z.prototype.getType = function() {
-				return I._$c2
-			}, z.prototype._$F0 = function(t) {
-				I.prototype._$F0.call(this, t), this._$GS = t._$nP(), this._$Y0 = t._$nP(), I.prototype.readV2_opacity.call(this, t)
-			}, z.prototype.init = function(t) {
-				var i = new H(this);
-				return i._$Yr = new P, this._$32() && (i._$Wr = new P), i
-			}, z.prototype._$Nr = function(t, i) {
-				this != i._$GT() && console.log("### assert!! ### ");
-				var e = i;
-				if (this._$GS._$Ur(t)) {
-					var r = z._$gT;
-					r[0] = !1;
-					var o = this._$GS._$Q2(t, r);
-					i._$Ib(r[0]), this.interpolateOpacity(t, this._$GS, i, r);
-					var n = t._$vs(),
-						s = t._$Tr();
-					if (this._$GS._$zr(n, s, o), o <= 0) {
-						var _ = this._$Y0[n[0]];
-						e._$Yr.init(_)
-					} else if (1 == o) {
-						var _ = this._$Y0[n[0]],
-							a = this._$Y0[n[1]],
-							h = s[0];
-						e._$Yr._$fL = _._$fL + (a._$fL - _._$fL) * h, e._$Yr._$gL = _._$gL + (a._$gL - _._$gL) * h, e._$Yr._$B0 = _._$B0 + (a._$B0 - _._$B0) * h, e._$Yr._$z0 = _._$z0 + (a._$z0 - _._$z0) * h, e._$Yr._$qT = _._$qT + (a._$qT - _._$qT) * h
-					} else if (2 == o) {
-						var _ = this._$Y0[n[0]],
-							a = this._$Y0[n[1]],
-							l = this._$Y0[n[2]],
-							$ = this._$Y0[n[3]],
-							h = s[0],
-							u = s[1],
-							p = _._$fL + (a._$fL - _._$fL) * h,
-							f = l._$fL + ($._$fL - l._$fL) * h;
-						e._$Yr._$fL = p + (f - p) * u, p = _._$gL + (a._$gL - _._$gL) * h, f = l._$gL + ($._$gL - l._$gL) * h, e._$Yr._$gL = p + (f - p) * u, p = _._$B0 + (a._$B0 - _._$B0) * h, f = l._$B0 + ($._$B0 - l._$B0) * h, e._$Yr._$B0 = p + (f - p) * u, p = _._$z0 + (a._$z0 - _._$z0) * h, f = l._$z0 + ($._$z0 - l._$z0) * h, e._$Yr._$z0 = p + (f - p) * u, p = _._$qT + (a._$qT - _._$qT) * h, f = l._$qT + ($._$qT - l._$qT) * h, e._$Yr._$qT = p + (f - p) * u
-					} else if (3 == o) {
-						var c = this._$Y0[n[0]],
-							d = this._$Y0[n[1]],
-							g = this._$Y0[n[2]],
-							y = this._$Y0[n[3]],
-							m = this._$Y0[n[4]],
-							T = this._$Y0[n[5]],
-							P = this._$Y0[n[6]],
-							S = this._$Y0[n[7]],
-							h = s[0],
-							u = s[1],
-							v = s[2],
-							p = c._$fL + (d._$fL - c._$fL) * h,
-							f = g._$fL + (y._$fL - g._$fL) * h,
-							L = m._$fL + (T._$fL - m._$fL) * h,
-							M = P._$fL + (S._$fL - P._$fL) * h;
-						e._$Yr._$fL = (1 - v) * (p + (f - p) * u) + v * (L + (M - L) * u), p = c._$gL + (d._$gL - c._$gL) * h, f = g._$gL + (y._$gL - g._$gL) * h, L = m._$gL + (T._$gL - m._$gL) * h, M = P._$gL + (S._$gL - P._$gL) * h, e._$Yr._$gL = (1 - v) * (p + (f - p) * u) + v * (L + (M - L) * u), p = c._$B0 + (d._$B0 - c._$B0) * h, f = g._$B0 + (y._$B0 - g._$B0) * h, L = m._$B0 + (T._$B0 - m._$B0) * h, M = P._$B0 + (S._$B0 - P._$B0) * h, e._$Yr._$B0 = (1 - v) * (p + (f - p) * u) + v * (L + (M - L) * u), p = c._$z0 + (d._$z0 - c._$z0) * h, f = g._$z0 + (y._$z0 - g._$z0) * h, L = m._$z0 + (T._$z0 - m._$z0) * h, M = P._$z0 + (S._$z0 - P._$z0) * h, e._$Yr._$z0 = (1 - v) * (p + (f - p) * u) + v * (L + (M - L) * u), p = c._$qT + (d._$qT - c._$qT) * h, f = g._$qT + (y._$qT - g._$qT) * h, L = m._$qT + (T._$qT - m._$qT) * h, M = P._$qT + (S._$qT - P._$qT) * h, e._$Yr._$qT = (1 - v) * (p + (f - p) * u) + v * (L + (M - L) * u)
-					} else if (4 == o) {
-						var E = this._$Y0[n[0]],
-							A = this._$Y0[n[1]],
-							I = this._$Y0[n[2]],
-							w = this._$Y0[n[3]],
-							x = this._$Y0[n[4]],
-							O = this._$Y0[n[5]],
-							D = this._$Y0[n[6]],
-							R = this._$Y0[n[7]],
-							b = this._$Y0[n[8]],
-							F = this._$Y0[n[9]],
-							C = this._$Y0[n[10]],
-							N = this._$Y0[n[11]],
-							B = this._$Y0[n[12]],
-							U = this._$Y0[n[13]],
-							G = this._$Y0[n[14]],
-							Y = this._$Y0[n[15]],
-							h = s[0],
-							u = s[1],
-							v = s[2],
-							k = s[3],
-							p = E._$fL + (A._$fL - E._$fL) * h,
-							f = I._$fL + (w._$fL - I._$fL) * h,
-							L = x._$fL + (O._$fL - x._$fL) * h,
-							M = D._$fL + (R._$fL - D._$fL) * h,
-							V = b._$fL + (F._$fL - b._$fL) * h,
-							X = C._$fL + (N._$fL - C._$fL) * h,
-							H = B._$fL + (U._$fL - B._$fL) * h,
-							W = G._$fL + (Y._$fL - G._$fL) * h;
-						e._$Yr._$fL = (1 - k) * ((1 - v) * (p + (f - p) * u) + v * (L + (M - L) * u)) + k * ((1 - v) * (V + (X - V) * u) + v * (H + (W - H) * u)), p = E._$gL + (A._$gL - E._$gL) * h, f = I._$gL + (w._$gL - I._$gL) * h, L = x._$gL + (O._$gL - x._$gL) * h, M = D._$gL + (R._$gL - D._$gL) * h, V = b._$gL + (F._$gL - b._$gL) * h, X = C._$gL + (N._$gL - C._$gL) * h, H = B._$gL + (U._$gL - B._$gL) * h, W = G._$gL + (Y._$gL - G._$gL) * h, e._$Yr._$gL = (1 - k) * ((1 - v) * (p + (f - p) * u) + v * (L + (M - L) * u)) + k * ((1 - v) * (V + (X - V) * u) + v * (H + (W - H) * u)), p = E._$B0 + (A._$B0 - E._$B0) * h, f = I._$B0 + (w._$B0 - I._$B0) * h, L = x._$B0 + (O._$B0 - x._$B0) * h, M = D._$B0 + (R._$B0 - D._$B0) * h, V = b._$B0 + (F._$B0 - b._$B0) * h, X = C._$B0 + (N._$B0 - C._$B0) * h, H = B._$B0 + (U._$B0 - B._$B0) * h, W = G._$B0 + (Y._$B0 - G._$B0) * h, e._$Yr._$B0 = (1 - k) * ((1 - v) * (p + (f - p) * u) + v * (L + (M - L) * u)) + k * ((1 - v) * (V + (X - V) * u) + v * (H + (W - H) * u)), p = E._$z0 + (A._$z0 - E._$z0) * h, f = I._$z0 + (w._$z0 - I._$z0) * h, L = x._$z0 + (O._$z0 - x._$z0) * h, M = D._$z0 + (R._$z0 - D._$z0) * h, V = b._$z0 + (F._$z0 - b._$z0) * h, X = C._$z0 + (N._$z0 - C._$z0) * h, H = B._$z0 + (U._$z0 - B._$z0) * h, W = G._$z0 + (Y._$z0 - G._$z0) * h, e._$Yr._$z0 = (1 - k) * ((1 - v) * (p + (f - p) * u) + v * (L + (M - L) * u)) + k * ((1 - v) * (V + (X - V) * u) + v * (H + (W - H) * u)), p = E._$qT + (A._$qT - E._$qT) * h, f = I._$qT + (w._$qT - I._$qT) * h, L = x._$qT + (O._$qT - x._$qT) * h, M = D._$qT + (R._$qT - D._$qT) * h, V = b._$qT + (F._$qT - b._$qT) * h, X = C._$qT + (N._$qT - C._$qT) * h, H = B._$qT + (U._$qT - B._$qT) * h, W = G._$qT + (Y._$qT - G._$qT) * h, e._$Yr._$qT = (1 - k) * ((1 - v) * (p + (f - p) * u) + v * (L + (M - L) * u)) + k * ((1 - v) * (V + (X - V) * u) + v * (H + (W - H) * u))
-					} else {
-						for (var j = 0 | Math.pow(2, o), q = new Float32Array(j), J = 0; J < j; J++) {
-							for (var Q = J, Z = 1, K = 0; K < o; K++) Z *= Q % 2 == 0 ? 1 - s[K] : s[K], Q >>= 1;
-							q[J] = Z
-						}
-						for (var tt = new Array, it = 0; it < j; it++) tt[it] = this._$Y0[n[it]];
-						for (var et = 0, rt = 0, ot = 0, nt = 0, st = 0, it = 0; it < j; it++) et += q[it] * tt[it]._$fL, rt += q[it] * tt[it]._$gL, ot += q[it] * tt[it]._$B0, nt += q[it] * tt[it]._$z0, st += q[it] * tt[it]._$qT;
-						e._$Yr._$fL = et, e._$Yr._$gL = rt, e._$Yr._$B0 = ot, e._$Yr._$z0 = nt, e._$Yr._$qT = st
-					}
-					var _ = this._$Y0[n[0]];
-					e._$Yr.reflectX = _.reflectX, e._$Yr.reflectY = _.reflectY
-				}
-			}, z.prototype._$2b = function(t, i) {
-				this != i._$GT() && console.log("### assert!! ### ");
-				var e = i;
-				if (e._$hS(!0), this._$32()) {
-					var r = this.getTargetBaseDataID();
-					if (e._$8r == I._$ur && (e._$8r = t.getBaseDataIndex(r)), e._$8r < 0) at._$so && _._$li("_$L _$0P _$G :: %s", r), e._$hS(!1);
-					else {
-						var o = t.getBaseData(e._$8r);
-						if (null != o) {
-							var n = t._$q2(e._$8r),
-								s = z._$Xo;
-							s[0] = e._$Yr._$fL, s[1] = e._$Yr._$gL;
-							var a = z._$io;
-							a[0] = 0, a[1] = -.1;
-							n._$GT().getType() == I._$c2 ? a[1] = -10 : a[1] = -.1;
-							var h = z._$0o;
-							this._$Jr(t, o, n, s, a, h);
-							var l = Lt._$92(a, h);
-							o._$nb(t, n, s, s, 1, 0, 2), e._$Wr._$fL = s[0], e._$Wr._$gL = s[1], e._$Wr._$B0 = e._$Yr._$B0, e._$Wr._$z0 = e._$Yr._$z0, e._$Wr._$qT = e._$Yr._$qT - l * Lt._$NS;
-							var $ = n.getTotalScale();
-							e.setTotalScale_notForClient($ * e._$Wr._$B0);
-							var u = n.getTotalOpacity();
-							e.setTotalOpacity(u * e.getInterpolatedOpacity()), e._$Wr.reflectX = e._$Yr.reflectX, e._$Wr.reflectY = e._$Yr.reflectY, e._$hS(n._$yo())
-						} else e._$hS(!1)
-					}
-				} else e.setTotalScale_notForClient(e._$Yr._$B0), e.setTotalOpacity(e.getInterpolatedOpacity())
-			}, z.prototype._$nb = function(t, i, e, r, o, n, s) {
-				this != i._$GT() && console.log("### assert!! ### ");
-				for (var _, a, h = i, l = null != h._$Wr ? h._$Wr : h._$Yr, $ = Math.sin(Lt._$bS * l._$qT), u = Math.cos(Lt._$bS * l._$qT), p = h.getTotalScale(), f = l.reflectX ? -1 : 1, c = l.reflectY ? -1 : 1, d = u * p * f, g = -$ * p * c, y = $ * p * f, m = u * p * c, T = l._$fL, P = l._$gL, S = o * s, v = n; v < S; v += s) _ = e[v], a = e[v + 1], r[v] = d * _ + g * a + T, r[v + 1] = y * _ + m * a + P
-			}, z.prototype._$Jr = function(t, i, e, r, o, n) {
-				i != e._$GT() && console.log("### assert!! ### ");
-				var s = z._$Lo;
-				z._$Lo[0] = r[0], z._$Lo[1] = r[1], i._$nb(t, e, s, s, 1, 0, 2);
-				for (var _ = z._$To, a = z._$Po, h = 1, l = 0; l < 10; l++) {
-					if (a[0] = r[0] + h * o[0], a[1] = r[1] + h * o[1], i._$nb(t, e, a, _, 1, 0, 2), _[0] -= s[0], _[1] -= s[1], 0 != _[0] || 0 != _[1]) return n[0] = _[0], void(n[1] = _[1]);
-					if (a[0] = r[0] - h * o[0], a[1] = r[1] - h * o[1], i._$nb(t, e, a, _, 1, 0, 2), _[0] -= s[0], _[1] -= s[1], 0 != _[0] || 0 != _[1]) return _[0] = -_[0], _[1] = -_[1], n[0] = _[0], void(n[1] = _[1]);
-					h *= .1
-				}
-				at._$so && console.log("_$L0 to transform _$SP\n")
-			}, H.prototype = new _t, W.prototype = new M, W._$ur = -2, W._$ES = 500, W._$wb = 2, W._$8S = 3, W._$os = 4, W._$52 = W._$ES, W._$R2 = W._$ES, W._$Sb = function(t) {
-				for (var i = t.length - 1; i >= 0; --i) {
-					var e = t[i];
-					e < W._$52 ? W._$52 = e : e > W._$R2 && (W._$R2 = e)
-				}
-			}, W._$or = function() {
-				return W._$52
-			}, W._$Pr = function() {
-				return W._$R2
-			}, W.prototype._$F0 = function(t) {
-				this._$gP = t._$nP(), this._$dr = t._$nP(), this._$GS = t._$nP(), this._$qb = t._$6L(), this._$Lb = t._$cS(), this._$mS = t._$Tb(), t.getFormatVersion() >= G._$T7 ? (this.clipID = t._$nP(), this.clipIDList = this.convertClipIDForV2_11(this.clipID)) : this.clipIDList = null, W._$Sb(this._$Lb)
-			}, W.prototype.getClipIDList = function() {
-				return this.clipIDList
-			}, W.prototype._$Nr = function(t, i) {
-				if (i._$IS[0] = !1, i._$Us = v._$Z2(t, this._$GS, i._$IS, this._$Lb), at._$Zs);
-				else if (i._$IS[0]) return;
-				i._$7s = v._$br(t, this._$GS, i._$IS, this._$mS)
-			}, W.prototype._$2b = function(t) {}, W.prototype.getDrawDataID = function() {
-				return this._$gP
-			}, W.prototype._$j2 = function(t) {
-				this._$gP = t
-			}, W.prototype.getOpacity = function(t, i) {
-				return i._$7s
-			}, W.prototype._$zS = function(t, i) {
-				return i._$Us
-			}, W.prototype.getTargetBaseDataID = function() {
-				return this._$dr
-			}, W.prototype._$gs = function(t) {
-				this._$dr = t
-			}, W.prototype._$32 = function() {
-				return null != this._$dr && this._$dr != yt._$2o()
-			}, W.prototype.getType = function() {}, j._$42 = 0, j.prototype._$1b = function() {
-				return this._$3S
-			}, j.prototype.getDrawDataList = function() {
-				return this._$aS
-			}, j.prototype._$F0 = function(t) {
-				this._$NL = t._$nP(), this._$aS = t._$nP(), this._$3S = t._$nP()
-			}, j.prototype._$kr = function(t) {
-				t._$Zo(this._$3S), t._$xo(this._$aS), this._$3S = null, this._$aS = null
-			}, q.prototype = new i, q.loadModel = function(t) {
-				var e = new q;
-				return i._$62(e, t), e
-			}, q.loadModel = function(t) {
-				var e = new q;
-				return i._$62(e, t), e
-			}, q._$to = function() {
-				return new q
-			}, q._$er = function(t) {
-				var i = new _$5("../_$_r/_$t0/_$Ri/_$_P._$d");
-				if (0 == i.exists()) throw new _$ls("_$t0 _$_ _$6 _$Ui :: " + i._$PL());
-				for (var e = ["../_$_r/_$t0/_$Ri/_$_P.512/_$CP._$1", "../_$_r/_$t0/_$Ri/_$_P.512/_$vP._$1", "../_$_r/_$t0/_$Ri/_$_P.512/_$EP._$1", "../_$_r/_$t0/_$Ri/_$_P.512/_$pP._$1"], r = q.loadModel(i._$3b()), o = 0; o < e.length; o++) {
-					var n = new _$5(e[o]);
-					if (0 == n.exists()) throw new _$ls("_$t0 _$_ _$6 _$Ui :: " + n._$PL());
-					r.setTexture(o, _$nL._$_o(t, n._$3b()))
-				}
-				return r
-			}, q.prototype.setGL = function(t) {
-				this._$zo.setGL(t)
-			}, q.prototype.setTransform = function(t) {
-				this._$zo.setTransform(t)
-			}, q.prototype.draw = function() {
-				this._$5S.draw(this._$zo)
-			}, q.prototype._$K2 = function() {
-				this._$zo._$K2()
-			}, q.prototype.setTexture = function(t, i) {
-				null == this._$zo && _._$li("_$Yi for QT _$ki / _$XS() is _$6 _$ui!!"), this._$zo.setTexture(t, i)
-			}, q.prototype.setTexture = function(t, i) {
-				null == this._$zo && _._$li("_$Yi for QT _$ki / _$XS() is _$6 _$ui!!"), this._$zo.setTexture(t, i)
-			}, q.prototype._$Rs = function() {
-				return this._$zo._$Rs()
-			}, q.prototype._$Ds = function(t) {
-				this._$zo._$Ds(t)
-			}, q.prototype.getDrawParam = function() {
-				return this._$zo
-			}, J.prototype = new s, J._$cs = "VISIBLE:", J._$ar = "LAYOUT:", J.MTN_PREFIX_FADEIN = "FADEIN:", J.MTN_PREFIX_FADEOUT = "FADEOUT:", J._$Co = 0, J._$1T = 1, J.loadMotion = function(t) {
-				var i = k._$C(t);
-				return J.loadMotion(i)
-			}, J.loadMotion = function(t) {
-				t instanceof ArrayBuffer && (t = new DataView(t));
-				var i = new J,
-					e = [0],
-					r = t.byteLength;
-				i._$yT = 0;
-				for (var o = 0; o < r; ++o) {
-					var n = Q(t, o),
-						s = n.charCodeAt(0);
-					if ("\n" != n && "\r" != n) if ("#" != n) if ("$" != n) {
-						if (97 <= s && s <= 122 || 65 <= s && s <= 90 || "_" == n) {
-							for (var _ = o, a = -1; o < r && ("\r" != (n = Q(t, o)) && "\n" != n); ++o) if ("=" == n) {
-								a = o;
-								break
-							}
-							if (a >= 0) {
-								var h = new B;
-								O.startsWith(t, _, J._$cs) ? (h._$RP = B._$hs, h._$4P = O.createString(t, _, a - _)) : O.startsWith(t, _, J._$ar) ? (h._$4P = O.createString(t, _ + 7, a - _ - 7), O.startsWith(t, _ + 7, "ANCHOR_X") ? h._$RP = B._$xs : O.startsWith(t, _ + 7, "ANCHOR_Y") ? h._$RP = B._$us : O.startsWith(t, _ + 7, "SCALE_X") ? h._$RP = B._$qs : O.startsWith(t, _ + 7, "SCALE_Y") ? h._$RP = B._$Ys : O.startsWith(t, _ + 7, "X") ? h._$RP = B._$ws : O.startsWith(t, _ + 7, "Y") && (h._$RP = B._$Ns)) : (h._$RP = B._$Fr, h._$4P = O.createString(t, _, a - _)), i.motions.push(h);
-								var l = 0,
-									$ = [];
-								for (o = a + 1; o < r && ("\r" != (n = Q(t, o)) && "\n" != n); ++o) if ("," != n && " " != n && "\t" != n) {
-									var u = O._$LS(t, r, o, e);
-									if (e[0] > 0) {
-										$.push(u), l++;
-										var p = e[0];
-										if (p < o) {
-											console.log("_$n0 _$hi . @Live2DMotion loadMotion()\n");
-											break
-										}
-										o = p - 1
-									}
-								}
-								h._$I0 = new Float32Array($), l > i._$yT && (i._$yT = l)
-							}
-						}
-					} else {
-						for (var _ = o, a = -1; o < r && ("\r" != (n = Q(t, o)) && "\n" != n); ++o) if ("=" == n) {
-							a = o;
-							break
-						}
-						var f = !1;
-						if (a >= 0) for (a == _ + 4 && "f" == Q(t, _ + 1) && "p" == Q(t, _ + 2) && "s" == Q(t, _ + 3) && (f = !0), o = a + 1; o < r && ("\r" != (n = Q(t, o)) && "\n" != n); ++o) if ("," != n && " " != n && "\t" != n) {
-							var u = O._$LS(t, r, o, e);
-							e[0] > 0 && f && 5 < u && u < 121 && (i._$D0 = u), o = e[0]
-						}
-						for (; o < r && ("\n" != Q(t, o) && "\r" != Q(t, o)); ++o);
-					} else for (; o < r && ("\n" != Q(t, o) && "\r" != Q(t, o)); ++o);
-				}
-				return i._$rr = 1e3 * i._$yT / i._$D0 | 0, i
-			}, J.prototype.getDurationMSec = function() {
-				return this._$E ? -1 : this._$rr
-			}, J.prototype.getLoopDurationMSec = function() {
-				return this._$rr
-			}, J.prototype.dump = function() {
-				for (var t = 0; t < this.motions.length; t++) {
-					var i = this.motions[t];
-					console.log("_$wL[%s] [%d]. ", i._$4P, i._$I0.length);
-					for (var e = 0; e < i._$I0.length && e < 10; e++) console.log("%5.2f ,", i._$I0[e]);
-					console.log("\n")
-				}
-			}, J.prototype.updateParamExe = function(t, i, e, r) {
-				for (var o = i - r._$z2, n = o * this._$D0 / 1e3, s = 0 | n, _ = n - s, a = 0; a < this.motions.length; a++) {
-					var h = this.motions[a],
-						l = h._$I0.length,
-						$ = h._$4P;
-					if (h._$RP == B._$hs) {
-						var u = h._$I0[s >= l ? l - 1 : s];
-						t.setParamFloat($, u)
-					} else if (B._$ws <= h._$RP && h._$RP <= B._$Ys);
-					else {
-						var p, f = t.getParamIndex($),
-							c = t.getModelContext(),
-							d = c.getParamMax(f),
-							g = c.getParamMin(f),
-							y = .4 * (d - g),
-							m = c.getParamFloat(f),
-							T = h._$I0[s >= l ? l - 1 : s],
-							P = h._$I0[s + 1 >= l ? l - 1 : s + 1];
-						p = T < P && P - T > y || T > P && T - P > y ? T : T + (P - T) * _;
-						var S = m + (p - m) * e;
-						t.setParamFloat($, S)
-					}
-				}
-				s >= this._$yT && (this._$E ? (r._$z2 = i, this.loopFadeIn && (r._$bs = i)) : r._$9L = !0), this._$eP = e
-			}, J.prototype._$r0 = function() {
-				return this._$E
-			}, J.prototype._$aL = function(t) {
-				this._$E = t
-			}, J.prototype._$S0 = function() {
-				return this._$D0
-			}, J.prototype._$U0 = function(t) {
-				this._$D0 = t
-			}, J.prototype.isLoopFadeIn = function() {
-				return this.loopFadeIn
-			}, J.prototype.setLoopFadeIn = function(t) {
-				this.loopFadeIn = t
-			}, N.prototype.clear = function() {
-				this.size = 0
-			}, N.prototype.add = function(t) {
-				if (this._$P.length <= this.size) {
-					var i = new Float32Array(2 * this.size);
-					w._$jT(this._$P, 0, i, 0, this.size), this._$P = i
-				}
-				this._$P[this.size++] = t
-			}, N.prototype._$BL = function() {
-				var t = new Float32Array(this.size);
-				return w._$jT(this._$P, 0, t, 0, this.size), t
-			}, B._$Fr = 0, B._$hs = 1, B._$ws = 100, B._$Ns = 101, B._$xs = 102, B._$us = 103, B._$qs = 104, B._$Ys = 105, Z.prototype = new I, Z._$gT = new Array, Z.prototype._$zP = function() {
-				this._$GS = new D, this._$GS._$zP()
-			}, Z.prototype._$F0 = function(t) {
-				I.prototype._$F0.call(this, t), this._$A = t._$6L(), this._$o = t._$6L(), this._$GS = t._$nP(), this._$Eo = t._$nP(), I.prototype.readV2_opacity.call(this, t)
-			}, Z.prototype.init = function(t) {
-				var i = new K(this),
-					e = (this._$o + 1) * (this._$A + 1);
-				return null != i._$Cr && (i._$Cr = null), i._$Cr = new Float32Array(2 * e), null != i._$hr && (i._$hr = null), this._$32() ? i._$hr = new Float32Array(2 * e) : i._$hr = null, i
-			}, Z.prototype._$Nr = function(t, i) {
-				var e = i;
-				if (this._$GS._$Ur(t)) {
-					var r = this._$VT(),
-						o = Z._$gT;
-					o[0] = !1, v._$Vr(t, this._$GS, o, r, this._$Eo, e._$Cr, 0, 2), i._$Ib(o[0]), this.interpolateOpacity(t, this._$GS, i, o)
-				}
-			}, Z.prototype._$2b = function(t, i) {
-				var e = i;
-				if (e._$hS(!0), this._$32()) {
-					var r = this.getTargetBaseDataID();
-					if (e._$8r == I._$ur && (e._$8r = t.getBaseDataIndex(r)), e._$8r < 0) at._$so && _._$li("_$L _$0P _$G :: %s", r), e._$hS(!1);
-					else {
-						var o = t.getBaseData(e._$8r),
-							n = t._$q2(e._$8r);
-						if (null != o && n._$yo()) {
-							var s = n.getTotalScale();
-							e.setTotalScale_notForClient(s);
-							var a = n.getTotalOpacity();
-							e.setTotalOpacity(a * e.getInterpolatedOpacity()), o._$nb(t, n, e._$Cr, e._$hr, this._$VT(), 0, 2), e._$hS(!0)
-						} else e._$hS(!1)
-					}
-				} else e.setTotalOpacity(e.getInterpolatedOpacity())
-			}, Z.prototype._$nb = function(t, i, e, r, o, n, s) {
-				var _ = i,
-					a = null != _._$hr ? _._$hr : _._$Cr;
-				Z.transformPoints_sdk2(e, r, o, n, s, a, this._$o, this._$A)
-			}, Z.transformPoints_sdk2 = function(i, e, r, o, n, s, _, a) {
-				for (var h, l, $, u = r * n, p = 0, f = 0, c = 0, d = 0, g = 0, y = 0, m = !1, T = o; T < u; T += n) {
-					var P, S, v, L;
-					if (v = i[T], L = i[T + 1], P = v * _, S = L * a, P < 0 || S < 0 || _ <= P || a <= S) {
-						var M = _ + 1;
-						if (!m) {
-							m = !0, p = .25 * (s[2 * (0 + 0 * M)] + s[2 * (_ + 0 * M)] + s[2 * (0 + a * M)] + s[2 * (_ + a * M)]), f = .25 * (s[2 * (0 + 0 * M) + 1] + s[2 * (_ + 0 * M) + 1] + s[2 * (0 + a * M) + 1] + s[2 * (_ + a * M) + 1]);
-							var E = s[2 * (_ + a * M)] - s[2 * (0 + 0 * M)],
-								A = s[2 * (_ + a * M) + 1] - s[2 * (0 + 0 * M) + 1],
-								I = s[2 * (_ + 0 * M)] - s[2 * (0 + a * M)],
-								w = s[2 * (_ + 0 * M) + 1] - s[2 * (0 + a * M) + 1];
-							c = .5 * (E + I), d = .5 * (A + w), g = .5 * (E - I), y = .5 * (A - w), p -= .5 * (c + g), f -= .5 * (d + y)
-						}
-						if (-2 < v && v < 3 && -2 < L && L < 3) if (v <= 0) if (L <= 0) {
-							var x = s[2 * (0 + 0 * M)],
-								O = s[2 * (0 + 0 * M) + 1],
-								D = p - 2 * c,
-								R = f - 2 * d,
-								b = p - 2 * g,
-								F = f - 2 * y,
-								C = p - 2 * c - 2 * g,
-								N = f - 2 * d - 2 * y,
-								B = .5 * (v - -2),
-								U = .5 * (L - -2);
-							B + U <= 1 ? (e[T] = C + (b - C) * B + (D - C) * U, e[T + 1] = N + (F - N) * B + (R - N) * U) : (e[T] = x + (D - x) * (1 - B) + (b - x) * (1 - U), e[T + 1] = O + (R - O) * (1 - B) + (F - O) * (1 - U))
-						} else if (L >= 1) {
-							var b = s[2 * (0 + a * M)],
-								F = s[2 * (0 + a * M) + 1],
-								C = p - 2 * c + 1 * g,
-								N = f - 2 * d + 1 * y,
-								x = p + 3 * g,
-								O = f + 3 * y,
-								D = p - 2 * c + 3 * g,
-								R = f - 2 * d + 3 * y,
-								B = .5 * (v - -2),
-								U = .5 * (L - 1);
-							B + U <= 1 ? (e[T] = C + (b - C) * B + (D - C) * U, e[T + 1] = N + (F - N) * B + (R - N) * U) : (e[T] = x + (D - x) * (1 - B) + (b - x) * (1 - U), e[T + 1] = O + (R - O) * (1 - B) + (F - O) * (1 - U))
-						} else {
-							var G = 0 | S;
-							G == a && (G = a - 1);
-							var B = .5 * (v - -2),
-								U = S - G,
-								Y = G / a,
-								k = (G + 1) / a,
-								b = s[2 * (0 + G * M)],
-								F = s[2 * (0 + G * M) + 1],
-								x = s[2 * (0 + (G + 1) * M)],
-								O = s[2 * (0 + (G + 1) * M) + 1],
-								C = p - 2 * c + Y * g,
-								N = f - 2 * d + Y * y,
-								D = p - 2 * c + k * g,
-								R = f - 2 * d + k * y;
-							B + U <= 1 ? (e[T] = C + (b - C) * B + (D - C) * U, e[T + 1] = N + (F - N) * B + (R - N) * U) : (e[T] = x + (D - x) * (1 - B) + (b - x) * (1 - U), e[T + 1] = O + (R - O) * (1 - B) + (F - O) * (1 - U))
-						} else if (1 <= v) if (L <= 0) {
-							var D = s[2 * (_ + 0 * M)],
-								R = s[2 * (_ + 0 * M) + 1],
-								x = p + 3 * c,
-								O = f + 3 * d,
-								C = p + 1 * c - 2 * g,
-								N = f + 1 * d - 2 * y,
-								b = p + 3 * c - 2 * g,
-								F = f + 3 * d - 2 * y,
-								B = .5 * (v - 1),
-								U = .5 * (L - -2);
-							B + U <= 1 ? (e[T] = C + (b - C) * B + (D - C) * U, e[T + 1] = N + (F - N) * B + (R - N) * U) : (e[T] = x + (D - x) * (1 - B) + (b - x) * (1 - U), e[T + 1] = O + (R - O) * (1 - B) + (F - O) * (1 - U))
-						} else if (L >= 1) {
-							var C = s[2 * (_ + a * M)],
-								N = s[2 * (_ + a * M) + 1],
-								b = p + 3 * c + 1 * g,
-								F = f + 3 * d + 1 * y,
-								D = p + 1 * c + 3 * g,
-								R = f + 1 * d + 3 * y,
-								x = p + 3 * c + 3 * g,
-								O = f + 3 * d + 3 * y,
-								B = .5 * (v - 1),
-								U = .5 * (L - 1);
-							B + U <= 1 ? (e[T] = C + (b - C) * B + (D - C) * U, e[T + 1] = N + (F - N) * B + (R - N) * U) : (e[T] = x + (D - x) * (1 - B) + (b - x) * (1 - U), e[T + 1] = O + (R - O) * (1 - B) + (F - O) * (1 - U))
-						} else {
-							var G = 0 | S;
-							G == a && (G = a - 1);
-							var B = .5 * (v - 1),
-								U = S - G,
-								Y = G / a,
-								k = (G + 1) / a,
-								C = s[2 * (_ + G * M)],
-								N = s[2 * (_ + G * M) + 1],
-								D = s[2 * (_ + (G + 1) * M)],
-								R = s[2 * (_ + (G + 1) * M) + 1],
-								b = p + 3 * c + Y * g,
-								F = f + 3 * d + Y * y,
-								x = p + 3 * c + k * g,
-								O = f + 3 * d + k * y;
-							B + U <= 1 ? (e[T] = C + (b - C) * B + (D - C) * U, e[T + 1] = N + (F - N) * B + (R - N) * U) : (e[T] = x + (D - x) * (1 - B) + (b - x) * (1 - U), e[T + 1] = O + (R - O) * (1 - B) + (F - O) * (1 - U))
-						} else if (L <= 0) {
-							var V = 0 | P;
-							V == _ && (V = _ - 1);
-							var B = P - V,
-								U = .5 * (L - -2),
-								X = V / _,
-								z = (V + 1) / _,
-								D = s[2 * (V + 0 * M)],
-								R = s[2 * (V + 0 * M) + 1],
-								x = s[2 * (V + 1 + 0 * M)],
-								O = s[2 * (V + 1 + 0 * M) + 1],
-								C = p + X * c - 2 * g,
-								N = f + X * d - 2 * y,
-								b = p + z * c - 2 * g,
-								F = f + z * d - 2 * y;
-							B + U <= 1 ? (e[T] = C + (b - C) * B + (D - C) * U, e[T + 1] = N + (F - N) * B + (R - N) * U) : (e[T] = x + (D - x) * (1 - B) + (b - x) * (1 - U), e[T + 1] = O + (R - O) * (1 - B) + (F - O) * (1 - U))
-						} else if (L >= 1) {
-							var V = 0 | P;
-							V == _ && (V = _ - 1);
-							var B = P - V,
-								U = .5 * (L - 1),
-								X = V / _,
-								z = (V + 1) / _,
-								C = s[2 * (V + a * M)],
-								N = s[2 * (V + a * M) + 1],
-								b = s[2 * (V + 1 + a * M)],
-								F = s[2 * (V + 1 + a * M) + 1],
-								D = p + X * c + 3 * g,
-								R = f + X * d + 3 * y,
-								x = p + z * c + 3 * g,
-								O = f + z * d + 3 * y;
-							B + U <= 1 ? (e[T] = C + (b - C) * B + (D - C) * U, e[T + 1] = N + (F - N) * B + (R - N) * U) : (e[T] = x + (D - x) * (1 - B) + (b - x) * (1 - U), e[T + 1] = O + (R - O) * (1 - B) + (F - O) * (1 - U))
-						} else t.err.printf("_$li calc : %.4f , %.4f\t\t\t\t\t@@BDBoxGrid\n", v, L);
-						else e[T] = p + v * c + L * g, e[T + 1] = f + v * d + L * y
-					} else l = P - (0 | P), $ = S - (0 | S), h = 2 * ((0 | P) + (0 | S) * (_ + 1)), l + $ < 1 ? (e[T] = s[h] * (1 - l - $) + s[h + 2] * l + s[h + 2 * (_ + 1)] * $, e[T + 1] = s[h + 1] * (1 - l - $) + s[h + 3] * l + s[h + 2 * (_ + 1) + 1] * $) : (e[T] = s[h + 2 * (_ + 1) + 2] * (l - 1 + $) + s[h + 2 * (_ + 1)] * (1 - l) + s[h + 2] * (1 - $), e[T + 1] = s[h + 2 * (_ + 1) + 3] * (l - 1 + $) + s[h + 2 * (_ + 1) + 1] * (1 - l) + s[h + 3] * (1 - $))
-				}
-			}, Z.prototype.transformPoints_sdk1 = function(t, i, e, r, o, n, s) {
-				for (var _, a, h, l, $, u, p, f = i, c = this._$o, d = this._$A, g = o * s, y = null != f._$hr ? f._$hr : f._$Cr, m = n; m < g; m += s) at._$ts ? (_ = e[m], a = e[m + 1], _ < 0 ? _ = 0 : _ > 1 && (_ = 1), a < 0 ? a = 0 : a > 1 && (a = 1), _ *= c, a *= d, h = 0 | _, l = 0 | a, h > c - 1 && (h = c - 1), l > d - 1 && (l = d - 1), u = _ - h, p = a - l, $ = 2 * (h + l * (c + 1))) : (_ = e[m] * c, a = e[m + 1] * d, u = _ - (0 | _), p = a - (0 | a), $ = 2 * ((0 | _) + (0 | a) * (c + 1))), u + p < 1 ? (r[m] = y[$] * (1 - u - p) + y[$ + 2] * u + y[$ + 2 * (c + 1)] * p, r[m + 1] = y[$ + 1] * (1 - u - p) + y[$ + 3] * u + y[$ + 2 * (c + 1) + 1] * p) : (r[m] = y[$ + 2 * (c + 1) + 2] * (u - 1 + p) + y[$ + 2 * (c + 1)] * (1 - u) + y[$ + 2] * (1 - p), r[m + 1] = y[$ + 2 * (c + 1) + 3] * (u - 1 + p) + y[$ + 2 * (c + 1) + 1] * (1 - u) + y[$ + 3] * (1 - p))
-			}, Z.prototype._$VT = function() {
-				return (this._$o + 1) * (this._$A + 1)
-			}, Z.prototype.getType = function() {
-				return I._$_b
-			}, K.prototype = new _t, tt._$42 = 0, tt.prototype._$zP = function() {
-				this._$3S = new Array, this._$aS = new Array
-			}, tt.prototype._$F0 = function(t) {
-				this._$g0 = t._$8L(), this.visible = t._$8L(), this._$NL = t._$nP(), this._$3S = t._$nP(), this._$aS = t._$nP()
-			}, tt.prototype.init = function(t) {
-				var i = new it(this);
-				return i.setPartsOpacity(this.isVisible() ? 1 : 0), i
-			}, tt.prototype._$6o = function(t) {
-				if (null == this._$3S) throw new Error("_$3S _$6 _$Wo@_$6o");
-				this._$3S.push(t)
-			}, tt.prototype._$3o = function(t) {
-				if (null == this._$aS) throw new Error("_$aS _$6 _$Wo@_$3o");
-				this._$aS.push(t)
-			}, tt.prototype._$Zo = function(t) {
-				this._$3S = t
-			}, tt.prototype._$xo = function(t) {
-				this._$aS = t
-			}, tt.prototype.isVisible = function() {
-				return this.visible
-			}, tt.prototype._$uL = function() {
-				return this._$g0
-			}, tt.prototype._$KP = function(t) {
-				this.visible = t
-			}, tt.prototype._$ET = function(t) {
-				this._$g0 = t
-			}, tt.prototype.getBaseData = function() {
-				return this._$3S
-			}, tt.prototype.getDrawData = function() {
-				return this._$aS
-			}, tt.prototype._$p2 = function() {
-				return this._$NL
-			}, tt.prototype._$ob = function(t) {
-				this._$NL = t
-			}, tt.prototype.getPartsID = function() {
-				return this._$NL
-			}, tt.prototype._$MP = function(t) {
-				this._$NL = t
-			}, it.prototype = new $, it.prototype.getPartsOpacity = function() {
-				return this._$VS
-			}, it.prototype.setPartsOpacity = function(t) {
-				this._$VS = t
-			}, et._$L7 = function() {
-				u._$27(), yt._$27(), b._$27(), l._$27()
-			}, et.prototype.toString = function() {
-				return this.id
-			}, rt.prototype._$F0 = function(t) {}, ot.prototype._$1s = function() {
-				return this._$4S
-			}, ot.prototype._$zP = function() {
-				this._$4S = new Array
-			}, ot.prototype._$F0 = function(t) {
-				this._$4S = t._$nP()
-			}, ot.prototype._$Ks = function(t) {
-				this._$4S.push(t)
-			}, nt.tr = new gt, nt._$50 = new gt, nt._$Ti = new Array(0, 0), nt._$Pi = new Array(0, 0), nt._$B = new Array(0, 0), nt.prototype._$lP = function(t, i, e, r) {
-				this.viewport = new Array(t, i, e, r)
-			}, nt.prototype._$bL = function() {
-				this.context.save();
-				var t = this.viewport;
-				null != t && (this.context.beginPath(), this.context._$Li(t[0], t[1], t[2], t[3]), this.context.clip())
-			}, nt.prototype._$ei = function() {
-				this.context.restore()
-			}, nt.prototype.drawElements = function(t, i, e, r, o, n, s, a) {
-				try {
-					o != this._$Qo && (this._$Qo = o, this.context.globalAlpha = o);
-					for (var h = i.length, l = t.width, $ = t.height, u = this.context, p = this._$xP, f = this._$uP, c = this._$6r, d = this._$3r, g = nt.tr, y = nt._$Ti, m = nt._$Pi, T = nt._$B, P = 0; P < h; P += 3) {
-						u.save();
-						var S = i[P],
-							v = i[P + 1],
-							L = i[P + 2],
-							M = p + c * e[2 * S],
-							E = f + d * e[2 * S + 1],
-							A = p + c * e[2 * v],
-							I = f + d * e[2 * v + 1],
-							w = p + c * e[2 * L],
-							x = f + d * e[2 * L + 1];
-						s && (s._$PS(M, E, T), M = T[0], E = T[1], s._$PS(A, I, T), A = T[0], I = T[1], s._$PS(w, x, T), w = T[0], x = T[1]);
-						var O = l * r[2 * S],
-							D = $ - $ * r[2 * S + 1],
-							R = l * r[2 * v],
-							b = $ - $ * r[2 * v + 1],
-							F = l * r[2 * L],
-							C = $ - $ * r[2 * L + 1],
-							N = Math.atan2(b - D, R - O),
-							B = Math.atan2(I - E, A - M),
-							U = A - M,
-							G = I - E,
-							Y = Math.sqrt(U * U + G * G),
-							k = R - O,
-							V = b - D,
-							X = Math.sqrt(k * k + V * V),
-							z = Y / X;
-						It._$ni(F, C, O, D, R - O, b - D, -(b - D), R - O, y), It._$ni(w, x, M, E, A - M, I - E, -(I - E), A - M, m);
-						var H = (m[0] - y[0]) / y[1],
-							W = Math.min(O, R, F),
-							j = Math.max(O, R, F),
-							q = Math.min(D, b, C),
-							J = Math.max(D, b, C),
-							Q = Math.floor(W),
-							Z = Math.floor(q),
-							K = Math.ceil(j),
-							tt = Math.ceil(J);
-						g.identity(), g.translate(M, E), g.rotate(B), g.scale(1, m[1] / y[1]), g.shear(H, 0), g.scale(z, z), g.rotate(-N), g.translate(-O, -D), g.setContext(u);
-						if (n || (n = 1.2), at.IGNORE_EXPAND && (n = 0), at.USE_CACHED_POLYGON_IMAGE) {
-							var it = a._$e0;
-							if (it.gl_cacheImage = it.gl_cacheImage || {}, !it.gl_cacheImage[P]) {
-								var et = nt.createCanvas(K - Q, tt - Z);
-								at.DEBUG_DATA.LDGL_CANVAS_MB = at.DEBUG_DATA.LDGL_CANVAS_MB || 0, at.DEBUG_DATA.LDGL_CANVAS_MB += (K - Q) * (tt - Z) * 4;
-								var rt = et.getContext("2d");
-								rt.translate(-Q, -Z), nt.clip(rt, g, n, Y, O, D, R, b, F, C, M, E, A, I, w, x), rt.drawImage(t, 0, 0), it.gl_cacheImage[P] = {
-									cacheCanvas: et,
-									cacheContext: rt
-								}
-							}
-							u.drawImage(it.gl_cacheImage[P].cacheCanvas, Q, Z)
-						} else at.IGNORE_CLIP || nt.clip(u, g, n, Y, O, D, R, b, F, C, M, E, A, I, w, x), at.USE_ADJUST_TRANSLATION && (W = 0, j = l, q = 0, J = $), u.drawImage(t, W, q, j - W, J - q, W, q, j - W, J - q);
-						u.restore()
-					}
-				} catch (t) {
-					_._$Rb(t)
-				}
-			}, nt.clip = function(t, i, e, r, o, n, s, _, a, h, l, $, u, p, f, c) {
-				e > .02 ? nt.expandClip(t, i, e, r, l, $, u, p, f, c) : nt.clipWithTransform(t, null, o, n, s, _, a, h)
-			}, nt.expandClip = function(t, i, e, r, o, n, s, _, a, h) {
-				var l = s - o,
-					$ = _ - n,
-					u = a - o,
-					p = h - n,
-					f = l * p - $ * u > 0 ? e : -e,
-					c = -$,
-					d = l,
-					g = a - s,
-					y = h - _,
-					m = -y,
-					T = g,
-					P = Math.sqrt(g * g + y * y),
-					S = -p,
-					v = u,
-					L = Math.sqrt(u * u + p * p),
-					M = o - f * c / r,
-					E = n - f * d / r,
-					A = s - f * c / r,
-					I = _ - f * d / r,
-					w = s - f * m / P,
-					x = _ - f * T / P,
-					O = a - f * m / P,
-					D = h - f * T / P,
-					R = o + f * S / L,
-					b = n + f * v / L,
-					F = a + f * S / L,
-					C = h + f * v / L,
-					N = nt._$50;
-				return null != i._$P2(N) && (nt.clipWithTransform(t, N, M, E, A, I, w, x, O, D, F, C, R, b), !0)
-			}, nt.clipWithTransform = function(t, i, e, r, o, n, s, a) {
-				if (arguments.length < 7) return void _._$li("err : @LDGL.clip()");
-				if (!(arguments[1] instanceof gt)) return void _._$li("err : a[0] is _$6 LDTransform @LDGL.clip()");
-				var h = nt._$B,
-					l = i,
-					$ = arguments;
-				if (t.beginPath(), l) {
-					l._$PS($[2], $[3], h), t.moveTo(h[0], h[1]);
-					for (var u = 4; u < $.length; u += 2) l._$PS($[u], $[u + 1], h), t.lineTo(h[0], h[1])
-				} else {
-					t.moveTo($[2], $[3]);
-					for (var u = 4; u < $.length; u += 2) t.lineTo($[u], $[u + 1])
-				}
-				t.clip()
-			}, nt.createCanvas = function(t, i) {
-				var e = document.createElement("canvas");
-				return e.setAttribute("width", t), e.setAttribute("height", i), e || _._$li("err : " + e), e
-			}, nt.dumpValues = function() {
-				for (var t = "", i = 0; i < arguments.length; i++) t += "[" + i + "]= " + arguments[i].toFixed(3) + " , ";
-				console.log(t)
-			}, st.prototype._$F0 = function(t) {
-				this._$TT = t._$_T(), this._$LT = t._$_T(), this._$FS = t._$_T(), this._$wL = t._$nP()
-			}, st.prototype.getMinValue = function() {
-				return this._$TT
-			}, st.prototype.getMaxValue = function() {
-				return this._$LT
-			}, st.prototype.getDefaultValue = function() {
-				return this._$FS
-			}, st.prototype.getParamID = function() {
-				return this._$wL
-			}, _t.prototype._$yo = function() {
-				return this._$AT && !this._$JS
-			}, _t.prototype._$hS = function(t) {
-				this._$AT = t
-			}, _t.prototype._$GT = function() {
-				return this._$e0
-			}, _t.prototype._$l2 = function(t) {
-				this._$IP = t
-			}, _t.prototype.getPartsIndex = function() {
-				return this._$IP
-			}, _t.prototype._$x2 = function() {
-				return this._$JS
-			}, _t.prototype._$Ib = function(t) {
-				this._$JS = t
-			}, _t.prototype.getTotalScale = function() {
-				return this.totalScale
-			}, _t.prototype.setTotalScale_notForClient = function(t) {
-				this.totalScale = t
-			}, _t.prototype.getInterpolatedOpacity = function() {
-				return this._$7s
-			}, _t.prototype.setInterpolatedOpacity = function(t) {
-				this._$7s = t
-			}, _t.prototype.getTotalOpacity = function(t) {
-				return this.totalOpacity
-			}, _t.prototype.setTotalOpacity = function(t) {
-				this.totalOpacity = t
-			}, at._$2s = "2.1.00_1", at._$Kr = 201001e3, at._$sP = !0, at._$so = !0, at._$cb = !1, at._$3T = !0, at._$Ts = !0, at._$fb = !0, at._$ts = !0, at.L2D_DEFORMER_EXTEND = !0, at._$Wb = !1;
-			at._$yr = !1, at._$Zs = !1, at.L2D_NO_ERROR = 0, at._$i7 = 1e3, at._$9s = 1001, at._$es = 1100, at._$r7 = 2e3, at._$07 = 2001, at._$b7 = 2002, at._$H7 = 4e3, at.L2D_COLOR_BLEND_MODE_MULT = 0, at.L2D_COLOR_BLEND_MODE_ADD = 1, at.L2D_COLOR_BLEND_MODE_INTERPOLATE = 2, at._$6b = !0, at._$cT = 0, at.clippingMaskBufferSize = 256, at.glContext = new Array, at.frameBuffers = new Array, at.fTexture = new Array, at.IGNORE_CLIP = !1, at.IGNORE_EXPAND = !1, at.EXPAND_W = 2, at.USE_ADJUST_TRANSLATION = !0, at.USE_CANVAS_TRANSFORM = !0, at.USE_CACHED_POLYGON_IMAGE = !1, at.DEBUG_DATA = {}, at.PROFILE_IOS_SPEED = {
-				PROFILE_NAME: "iOS Speed",
-				USE_ADJUST_TRANSLATION: !0,
-				USE_CACHED_POLYGON_IMAGE: !0,
-				EXPAND_W: 4
-			}, at.PROFILE_IOS_QUALITY = {
-				PROFILE_NAME: "iOS HiQ",
-				USE_ADJUST_TRANSLATION: !0,
-				USE_CACHED_POLYGON_IMAGE: !1,
-				EXPAND_W: 2
-			}, at.PROFILE_IOS_DEFAULT = at.PROFILE_IOS_QUALITY, at.PROFILE_ANDROID = {
-				PROFILE_NAME: "Android",
-				USE_ADJUST_TRANSLATION: !1,
-				USE_CACHED_POLYGON_IMAGE: !1,
-				EXPAND_W: 2
-			}, at.PROFILE_DESKTOP = {
-				PROFILE_NAME: "Desktop",
-				USE_ADJUST_TRANSLATION: !1,
-				USE_CACHED_POLYGON_IMAGE: !1,
-				EXPAND_W: 2
-			}, at.initProfile = function() {
-				Et.isIOS() ? at.setupProfile(at.PROFILE_IOS_DEFAULT) : Et.isAndroid() ? at.setupProfile(at.PROFILE_ANDROID) : at.setupProfile(at.PROFILE_DESKTOP)
-			}, at.setupProfile = function(t, i) {
-				if ("number" == typeof t) switch (t) {
-				case 9901:
-					t = at.PROFILE_IOS_SPEED;
-					break;
-				case 9902:
-					t = at.PROFILE_IOS_QUALITY;
-					break;
-				case 9903:
-					t = at.PROFILE_IOS_DEFAULT;
-					break;
-				case 9904:
-					t = at.PROFILE_ANDROID;
-					break;
-				case 9905:
-					t = at.PROFILE_DESKTOP;
-					break;
-				default:
-					alert("profile _$6 _$Ui : " + t)
-				}
-				arguments.length < 2 && (i = !0), i && console.log("profile : " + t.PROFILE_NAME);
-				for (var e in t) at[e] = t[e], i && console.log("  [" + e + "] = " + t[e])
-			}, at.init = function() {
-				if (at._$6b) {
-					console.log("Live2D %s", at._$2s), at._$6b = !1;
-					at.initProfile()
-				}
-			}, at.getVersionStr = function() {
-				return at._$2s
-			}, at.getVersionNo = function() {
-				return at._$Kr
-			}, at._$sT = function(t) {
-				at._$cT = t
-			}, at.getError = function() {
-				var t = at._$cT;
-				return at._$cT = 0, t
-			}, at.dispose = function() {
-				at.glContext = [], at.frameBuffers = [], at.fTexture = []
-			}, at.setGL = function(t, i) {
-				var e = i || 0;
-				at.glContext[e] = t
-			}, at.getGL = function(t) {
-				return at.glContext[t]
-			}, at.setClippingMaskBufferSize = function(t) {
-				at.clippingMaskBufferSize = t
-			}, at.getClippingMaskBufferSize = function() {
-				return at.clippingMaskBufferSize
-			}, at.deleteBuffer = function(t) {
-				at.getGL(t).deleteFramebuffer(at.frameBuffers[t].framebuffer), delete at.frameBuffers[t], delete at.glContext[t]
-			}, ht._$r2 = function(t) {
-				return t < 0 ? 0 : t > 1 ? 1 : .5 - .5 * Math.cos(t * Lt.PI_F)
-			}, lt._$fr = -1, lt.prototype.toString = function() {
-				return this._$ib
-			}, $t.prototype = new W, $t._$42 = 0, $t._$Os = 30, $t._$ms = 0, $t._$ns = 1, $t._$_s = 2, $t._$gT = new Array, $t.prototype._$_S = function(t) {
-				this._$LP = t
-			}, $t.prototype.getTextureNo = function() {
-				return this._$LP
-			}, $t.prototype._$ZL = function() {
-				return this._$Qi
-			}, $t.prototype._$H2 = function() {
-				return this._$JP
-			}, $t.prototype.getNumPoints = function() {
-				return this._$d0
-			}, $t.prototype.getType = function() {
-				return W._$wb
-			}, $t.prototype._$B2 = function(t, i, e) {
-				var r = i,
-					o = null != r._$hr ? r._$hr : r._$Cr;
-				switch (U._$do) {
-				default:
-				case U._$Ms:
-					throw new Error("_$L _$ro ");
-				case U._$Qs:
-					for (var n = this._$d0 - 1; n >= 0; --n) o[n * U._$No + 4] = e
-				}
-			}, $t.prototype._$zP = function() {
-				this._$GS = new D, this._$GS._$zP()
-			}, $t.prototype._$F0 = function(t) {
-				W.prototype._$F0.call(this, t), this._$LP = t._$6L(), this._$d0 = t._$6L(), this._$Yo = t._$6L();
-				var i = t._$nP();
-				this._$BP = new Int16Array(3 * this._$Yo);
-				for (var e = 3 * this._$Yo - 1; e >= 0; --e) this._$BP[e] = i[e];
-				if (this._$Eo = t._$nP(), this._$Qi = t._$nP(), t.getFormatVersion() >= G._$s7) {
-					if (this._$JP = t._$6L(), 0 != this._$JP) {
-						if (0 != (1 & this._$JP)) {
-							var r = t._$6L();
-							null == this._$5P && (this._$5P = new Object), this._$5P._$Hb = parseInt(r)
-						}
-						0 != (this._$JP & $t._$Os) ? this._$6s = (this._$JP & $t._$Os) >> 1 : this._$6s = $t._$ms, 0 != (32 & this._$JP) && (this.culling = !1)
-					}
-				} else this._$JP = 0
-			}, $t.prototype.init = function(t) {
-				var i = new ut(this),
-					e = this._$d0 * U._$No,
-					r = this._$32();
-				switch (null != i._$Cr && (i._$Cr = null), i._$Cr = new Float32Array(e), null != i._$hr && (i._$hr = null), i._$hr = r ? new Float32Array(e) : null, U._$do) {
-				default:
-				case U._$Ms:
-					if (U._$Ls) for (var o = this._$d0 - 1; o >= 0; --o) {
-						var n = o << 1;
-						this._$Qi[n + 1] = 1 - this._$Qi[n + 1]
-					}
-					break;
-				case U._$Qs:
-					for (var o = this._$d0 - 1; o >= 0; --o) {
-						var n = o << 1,
-							s = o * U._$No,
-							_ = this._$Qi[n],
-							a = this._$Qi[n + 1];
-						i._$Cr[s] = _, i._$Cr[s + 1] = a, i._$Cr[s + 4] = 0, r && (i._$hr[s] = _, i._$hr[s + 1] = a, i._$hr[s + 4] = 0)
-					}
-				}
-				return i
-			}, $t.prototype._$Nr = function(t, i) {
-				var e = i;
-				if (this != e._$GT() && console.log("### assert!! ### "), this._$GS._$Ur(t) && (W.prototype._$Nr.call(this, t, e), !e._$IS[0])) {
-					var r = $t._$gT;
-					r[0] = !1, v._$Vr(t, this._$GS, r, this._$d0, this._$Eo, e._$Cr, U._$i2, U._$No)
-				}
-			}, $t.prototype._$2b = function(t, i) {
-				try {
-					this != i._$GT() && console.log("### assert!! ### ");
-					var e = !1;
-					i._$IS[0] && (e = !0);
-					var r = i;
-					if (!e && (W.prototype._$2b.call(this, t), this._$32())) {
-						var o = this.getTargetBaseDataID();
-						if (r._$8r == W._$ur && (r._$8r = t.getBaseDataIndex(o)), r._$8r < 0) at._$so && _._$li("_$L _$0P _$G :: %s", o);
-						else {
-							var n = t.getBaseData(r._$8r),
-								s = t._$q2(r._$8r);
-							null == n || s._$x2() ? r._$AT = !1 : (n._$nb(t, s, r._$Cr, r._$hr, this._$d0, U._$i2, U._$No), r._$AT = !0), r.baseOpacity = s.getTotalOpacity()
-						}
-					}
-				} catch (t) {
-					throw t
-				}
-			}, $t.prototype.draw = function(t, i, e) {
-				if (this != e._$GT() && console.log("### assert!! ### "), !e._$IS[0]) {
-					var r = e,
-						o = this._$LP;
-					o < 0 && (o = 1);
-					var n = this.getOpacity(i, r) * e._$VS * e.baseOpacity,
-						s = null != r._$hr ? r._$hr : r._$Cr;
-					t.setClipBufPre_clipContextForDraw(e.clipBufPre_clipContext), t._$WP(this.culling), t._$Uo(o, 3 * this._$Yo, this._$BP, s, this._$Qi, n, this._$6s, r)
-				}
-			}, $t.prototype.dump = function() {
-				console.log("  _$yi( %d ) , _$d0( %d ) , _$Yo( %d ) \n", this._$LP, this._$d0, this._$Yo), console.log("  _$Oi _$di = { ");
-				for (var t = 0; t < this._$BP.length; t++) console.log("%5d ,", this._$BP[t]);
-				console.log("\n  _$5i _$30");
-				for (var t = 0; t < this._$Eo.length; t++) {
-					console.log("\n    _$30[%d] = ", t);
-					for (var i = this._$Eo[t], e = 0; e < i.length; e++) console.log("%6.2f, ", i[e])
-				}
-				console.log("\n")
-			}, $t.prototype._$72 = function(t) {
-				return null == this._$5P ? null : this._$5P[t]
-			}, $t.prototype.getIndexArray = function() {
-				return this._$BP
-			}, ut.prototype = new Mt, ut.prototype.getTransformedPoints = function() {
-				return null != this._$hr ? this._$hr : this._$Cr
-			}, pt.prototype._$HT = function(t) {
-				this.x = t.x, this.y = t.y
-			}, pt.prototype._$HT = function(t, i) {
-				this.x = t, this.y = i
-			}, ft.prototype = new i, ft.loadModel = function(t) {
-				var e = new ft;
-				return i._$62(e, t), e
-			}, ft.loadModel = function(t, e) {
-				var r = e || 0,
-					o = new ft(r);
-				return i._$62(o, t), o
-			}, ft._$to = function() {
-				return new ft
-			}, ft._$er = function(t) {
-				var i = new _$5("../_$_r/_$t0/_$Ri/_$_P._$d");
-				if (0 == i.exists()) throw new _$ls("_$t0 _$_ _$6 _$Ui :: " + i._$PL());
-				for (var e = ["../_$_r/_$t0/_$Ri/_$_P.512/_$CP._$1", "../_$_r/_$t0/_$Ri/_$_P.512/_$vP._$1", "../_$_r/_$t0/_$Ri/_$_P.512/_$EP._$1", "../_$_r/_$t0/_$Ri/_$_P.512/_$pP._$1"], r = ft.loadModel(i._$3b()), o = 0; o < e.length; o++) {
-					var n = new _$5(e[o]);
-					if (0 == n.exists()) throw new _$ls("_$t0 _$_ _$6 _$Ui :: " + n._$PL());
-					r.setTexture(o, _$nL._$_o(t, n._$3b()))
-				}
-				return r
-			}, ft.prototype.setGL = function(t) {
-				at.setGL(t)
-			}, ft.prototype.setTransform = function(t) {
-				this.drawParamWebGL.setTransform(t)
-			}, ft.prototype.update = function() {
-				this._$5S.update(), this._$5S.preDraw(this.drawParamWebGL)
-			}, ft.prototype.draw = function() {
-				this._$5S.draw(this.drawParamWebGL)
-			}, ft.prototype._$K2 = function() {
-				this.drawParamWebGL._$K2()
-			}, ft.prototype.setTexture = function(t, i) {
-				null == this.drawParamWebGL && _._$li("_$Yi for QT _$ki / _$XS() is _$6 _$ui!!"), this.drawParamWebGL.setTexture(t, i)
-			}, ft.prototype.setTexture = function(t, i) {
-				null == this.drawParamWebGL && _._$li("_$Yi for QT _$ki / _$XS() is _$6 _$ui!!"), this.drawParamWebGL.setTexture(t, i)
-			}, ft.prototype._$Rs = function() {
-				return this.drawParamWebGL._$Rs()
-			}, ft.prototype._$Ds = function(t) {
-				this.drawParamWebGL._$Ds(t)
-			}, ft.prototype.getDrawParam = function() {
-				return this.drawParamWebGL
-			}, ft.prototype.setMatrix = function(t) {
-				this.drawParamWebGL.setMatrix(t)
-			}, ft.prototype.setPremultipliedAlpha = function(t) {
-				this.drawParamWebGL.setPremultipliedAlpha(t)
-			}, ft.prototype.isPremultipliedAlpha = function() {
-				return this.drawParamWebGL.isPremultipliedAlpha()
-			}, ft.prototype.setAnisotropy = function(t) {
-				this.drawParamWebGL.setAnisotropy(t)
-			}, ft.prototype.getAnisotropy = function() {
-				return this.drawParamWebGL.getAnisotropy()
-			}, ct.prototype._$tb = function() {
-				return this.motions
-			}, ct.prototype.startMotion = function(t, i) {
-				for (var e = null, r = this.motions.length, o = 0; o < r; ++o) null != (e = this.motions[o]) && (e._$qS(e._$w0.getFadeOut()), this._$eb && _._$Ji("MotionQueueManager[size:%2d]->startMotion() / start _$K _$3 (m%d)\n", r, e._$sr));
-				if (null == t) return -1;
-				e = new dt, e._$w0 = t, this.motions.push(e);
-				var n = e._$sr;
-				return this._$eb && _._$Ji("MotionQueueManager[size:%2d]->startMotion() / new _$w0 (m%d)\n", r, n), n
-			}, ct.prototype.updateParam = function(t) {
-				try {
-					for (var i = !1, e = 0; e < this.motions.length; e++) {
-						var r = this.motions[e];
-						if (null != r) {
-							var o = r._$w0;
-							null != o ? (o.updateParam(t, r), i = !0, r.isFinished() && (this._$eb && _._$Ji("MotionQueueManager[size:%2d]->updateParam() / _$T0 _$w0 (m%d)\n", this.motions.length - 1, r._$sr), this.motions.splice(e, 1), e--)) : (this.motions = this.motions.splice(e, 1), e--)
-						} else this.motions.splice(e, 1), e--
-					}
-					return i
-				} catch (t) {
-					return _._$li(t), !0
-				}
-			}, ct.prototype.isFinished = function(t) {
-				if (arguments.length >= 1) {
-					for (var i = 0; i < this.motions.length; i++) {
-						var e = this.motions[i];
-						if (null != e && (e._$sr == t && !e.isFinished())) return !1
-					}
-					return !0
-				}
-				for (var i = 0; i < this.motions.length; i++) {
-					var e = this.motions[i];
-					if (null != e) {
-						if (null != e._$w0) {
-							if (!e.isFinished()) return !1
-						} else this.motions.splice(i, 1), i--
-					} else this.motions.splice(i, 1), i--
-				}
-				return !0
-			}, ct.prototype.stopAllMotions = function() {
-				for (var t = 0; t < this.motions.length; t++) {
-					var i = this.motions[t];
-					if (null != i) {
-						i._$w0;
-						this.motions.splice(t, 1), t--
-					} else this.motions.splice(t, 1), t--
-				}
-			}, ct.prototype._$Zr = function(t) {
-				this._$eb = t
-			}, ct.prototype._$e = function() {
-				console.log("-- _$R --\n");
-				for (var t = 0; t < this.motions.length; t++) {
-					var i = this.motions[t],
-						e = i._$w0;
-					console.log("MotionQueueEnt[%d] :: %s\n", this.motions.length, e.toString())
-				}
-			}, dt._$Gs = 0, dt.prototype.isFinished = function() {
-				return this._$9L
-			}, dt.prototype._$qS = function(t) {
-				var i = w.getUserTimeMSec(),
-					e = i + t;
-				(this._$Do < 0 || e < this._$Do) && (this._$Do = e)
-			}, dt.prototype._$Bs = function() {
-				return this._$sr
-			}, gt.prototype.setContext = function(t) {
-				var i = this.m;
-				t.transform(i[0], i[1], i[3], i[4], i[6], i[7])
-			}, gt.prototype.toString = function() {
-				for (var t = "LDTransform { ", i = 0; i < 9; i++) t += this.m[i].toFixed(2) + " ,";
-				return t += " }"
-			}, gt.prototype.identity = function() {
-				var t = this.m;
-				t[0] = t[4] = t[8] = 1, t[1] = t[2] = t[3] = t[5] = t[6] = t[7] = 0
-			}, gt.prototype._$PS = function(t, i, e) {
-				null == e && (e = new Array(0, 0));
-				var r = this.m;
-				return e[0] = r[0] * t + r[3] * i + r[6], e[1] = r[1] * t + r[4] * i + r[7], e
-			}, gt.prototype._$P2 = function(t) {
-				t || (t = new gt);
-				var i = this.m,
-					e = i[0],
-					r = i[1],
-					o = i[2],
-					n = i[3],
-					s = i[4],
-					_ = i[5],
-					a = i[6],
-					h = i[7],
-					l = i[8],
-					$ = e * s * l + r * _ * a + o * n * h - e * _ * h - o * s * a - r * n * l;
-				if (0 == $) return null;
-				var u = 1 / $;
-				return t.m[0] = u * (s * l - h * _), t.m[1] = u * (h * o - r * l), t.m[2] = u * (r * _ - s * o), t.m[3] = u * (a * _ - n * l), t.m[4] = u * (e * l - a * o), t.m[5] = u * (n * o - e * _), t.m[6] = u * (n * h - a * s), t.m[7] = u * (a * r - e * h), t.m[8] = u * (e * s - n * r), t
-			}, gt.prototype.transform = function(t, i, e) {
-				null == e && (e = new Array(0, 0));
-				var r = this.m;
-				return e[0] = r[0] * t + r[3] * i + r[6], e[1] = r[1] * t + r[4] * i + r[7], e
-			}, gt.prototype.translate = function(t, i) {
-				var e = this.m;
-				e[6] = e[0] * t + e[3] * i + e[6], e[7] = e[1] * t + e[4] * i + e[7], e[8] = e[2] * t + e[5] * i + e[8]
-			}, gt.prototype.scale = function(t, i) {
-				var e = this.m;
-				e[0] *= t, e[1] *= t, e[2] *= t, e[3] *= i, e[4] *= i, e[5] *= i
-			}, gt.prototype.shear = function(t, i) {
-				var e = this.m,
-					r = e[0] + e[3] * i,
-					o = e[1] + e[4] * i,
-					n = e[2] + e[5] * i;
-				e[3] = e[0] * t + e[3], e[4] = e[1] * t + e[4], e[5] = e[2] * t + e[5], e[0] = r, e[1] = o, e[2] = n
-			}, gt.prototype.rotate = function(t) {
-				var i = this.m,
-					e = Math.cos(t),
-					r = Math.sin(t),
-					o = i[0] * e + i[3] * r,
-					n = i[1] * e + i[4] * r,
-					s = i[2] * e + i[5] * r;
-				i[3] = -i[0] * r + i[3] * e, i[4] = -i[1] * r + i[4] * e, i[5] = -i[2] * r + i[5] * e, i[0] = o, i[1] = n, i[2] = s
-			}, gt.prototype.concatenate = function(t) {
-				var i = this.m,
-					e = t.m,
-					r = i[0] * e[0] + i[3] * e[1] + i[6] * e[2],
-					o = i[1] * e[0] + i[4] * e[1] + i[7] * e[2],
-					n = i[2] * e[0] + i[5] * e[1] + i[8] * e[2],
-					s = i[0] * e[3] + i[3] * e[4] + i[6] * e[5],
-					_ = i[1] * e[3] + i[4] * e[4] + i[7] * e[5],
-					a = i[2] * e[3] + i[5] * e[4] + i[8] * e[5],
-					h = i[0] * e[6] + i[3] * e[7] + i[6] * e[8],
-					l = i[1] * e[6] + i[4] * e[7] + i[7] * e[8],
-					$ = i[2] * e[6] + i[5] * e[7] + i[8] * e[8];
-				i[0] = r, i[1] = o, i[2] = n, i[3] = s, i[4] = _, i[5] = a, i[6] = h, i[7] = l, i[8] = $
-			}, yt.prototype = new et, yt._$eT = null, yt._$tP = new Object, yt._$2o = function() {
-				return null == yt._$eT && (yt._$eT = yt.getID("DST_BASE")), yt._$eT
-			}, yt._$27 = function() {
-				yt._$tP.clear(), yt._$eT = null
-			}, yt.getID = function(t) {
-				var i = yt._$tP[t];
-				return null == i && (i = new yt(t), yt._$tP[t] = i), i
-			}, yt.prototype._$3s = function() {
-				return new yt
-			}, mt.prototype = new E, mt._$9r = function(t) {
-				return new Float32Array(t)
-			}, mt._$vb = function(t) {
-				return new Int16Array(t)
-			}, mt._$cr = function(t, i) {
-				return null == t || t._$yL() < i.length ? (t = mt._$9r(2 * i.length), t.put(i), t._$oT(0)) : (t.clear(), t.put(i), t._$oT(0)), t
-			}, mt._$mb = function(t, i) {
-				return null == t || t._$yL() < i.length ? (t = mt._$vb(2 * i.length), t.put(i), t._$oT(0)) : (t.clear(), t.put(i), t._$oT(0)), t
-			}, mt._$Hs = function() {
-				return this._$Gr
-			}, mt._$as = function(t) {
-				this._$Gr = t
-			}, mt.prototype.getGL = function() {
-				return this.gl
-			}, mt.prototype.setGL = function(t) {
-				this.gl = t
-			}, mt.prototype.setTransform = function(t) {
-				this.transform = t
-			}, mt.prototype._$ZT = function() {
-				var t = this.gl;
-				this.firstDraw && (this.initShader(), this.firstDraw = !1, this.anisotropyExt = t.getExtension("EXT_texture_filter_anisotropic") || t.getExtension("WEBKIT_EXT_texture_filter_anisotropic") || t.getExtension("MOZ_EXT_texture_filter_anisotropic"), this.anisotropyExt && (this.maxAnisotropy = t.getParameter(this.anisotropyExt.MAX_TEXTURE_MAX_ANISOTROPY_EXT))), t.disable(t.SCISSOR_TEST), t.disable(t.STENCIL_TEST), t.disable(t.DEPTH_TEST), t.frontFace(t.CW), t.enable(t.BLEND), t.colorMask(1, 1, 1, 1), t.bindBuffer(t.ARRAY_BUFFER, null), t.bindBuffer(t.ELEMENT_ARRAY_BUFFER, null)
-			}, mt.prototype._$Uo = function(t, i, e, r, o, n, s, _) {
-				if (!(n < .01 && null == this.clipBufPre_clipContextMask)) {
-					var a = (n > .9 && at.EXPAND_W, this.gl);
-					if (null == this.gl) throw new Error("gl is null");
-					var h = 1 * this._$C0 * n,
-						l = 1 * this._$tT * n,
-						$ = 1 * this._$WL * n,
-						u = this._$lT * n;
-					if (null != this.clipBufPre_clipContextMask) {
-						a.frontFace(a.CCW), a.useProgram(this.shaderProgram), this._$vS = Tt(a, this._$vS, r), this._$no = Pt(a, this._$no, e), a.enableVertexAttribArray(this.a_position_Loc), a.vertexAttribPointer(this.a_position_Loc, 2, a.FLOAT, !1, 0, 0), this._$NT = Tt(a, this._$NT, o), a.activeTexture(a.TEXTURE1), a.bindTexture(a.TEXTURE_2D, this.textures[t]), a.uniform1i(this.s_texture0_Loc, 1), a.enableVertexAttribArray(this.a_texCoord_Loc), a.vertexAttribPointer(this.a_texCoord_Loc, 2, a.FLOAT, !1, 0, 0), a.uniformMatrix4fv(this.u_matrix_Loc, !1, this.getClipBufPre_clipContextMask().matrixForMask);
-						var p = this.getClipBufPre_clipContextMask().layoutChannelNo,
-							f = this.getChannelFlagAsColor(p);
-						a.uniform4f(this.u_channelFlag, f.r, f.g, f.b, f.a);
-						var c = this.getClipBufPre_clipContextMask().layoutBounds;
-						a.uniform4f(this.u_baseColor_Loc, 2 * c.x - 1, 2 * c.y - 1, 2 * c._$EL() - 1, 2 * c._$5T() - 1), a.uniform1i(this.u_maskFlag_Loc, !0)
-					} else if (null != this.getClipBufPre_clipContextDraw()) {
-						a.useProgram(this.shaderProgramOff), this._$vS = Tt(a, this._$vS, r), this._$no = Pt(a, this._$no, e), a.enableVertexAttribArray(this.a_position_Loc_Off), a.vertexAttribPointer(this.a_position_Loc_Off, 2, a.FLOAT, !1, 0, 0), this._$NT = Tt(a, this._$NT, o), a.activeTexture(a.TEXTURE1), a.bindTexture(a.TEXTURE_2D, this.textures[t]), a.uniform1i(this.s_texture0_Loc_Off, 1), a.enableVertexAttribArray(this.a_texCoord_Loc_Off), a.vertexAttribPointer(this.a_texCoord_Loc_Off, 2, a.FLOAT, !1, 0, 0), a.uniformMatrix4fv(this.u_clipMatrix_Loc_Off, !1, this.getClipBufPre_clipContextDraw().matrixForDraw), a.uniformMatrix4fv(this.u_matrix_Loc_Off, !1, this.matrix4x4), a.activeTexture(a.TEXTURE2), a.bindTexture(a.TEXTURE_2D, at.fTexture[this.glno]), a.uniform1i(this.s_texture1_Loc_Off, 2);
-						var p = this.getClipBufPre_clipContextDraw().layoutChannelNo,
-							f = this.getChannelFlagAsColor(p);
-						a.uniform4f(this.u_channelFlag_Loc_Off, f.r, f.g, f.b, f.a), a.uniform4f(this.u_baseColor_Loc_Off, h, l, $, u)
-					} else a.useProgram(this.shaderProgram), this._$vS = Tt(a, this._$vS, r), this._$no = Pt(a, this._$no, e), a.enableVertexAttribArray(this.a_position_Loc), a.vertexAttribPointer(this.a_position_Loc, 2, a.FLOAT, !1, 0, 0), this._$NT = Tt(a, this._$NT, o), a.activeTexture(a.TEXTURE1), a.bindTexture(a.TEXTURE_2D, this.textures[t]), a.uniform1i(this.s_texture0_Loc, 1), a.enableVertexAttribArray(this.a_texCoord_Loc), a.vertexAttribPointer(this.a_texCoord_Loc, 2, a.FLOAT, !1, 0, 0), a.uniformMatrix4fv(this.u_matrix_Loc, !1, this.matrix4x4), a.uniform4f(this.u_baseColor_Loc, h, l, $, u), a.uniform1i(this.u_maskFlag_Loc, !1);
-					this.culling ? this.gl.enable(a.CULL_FACE) : this.gl.disable(a.CULL_FACE), this.gl.enable(a.BLEND);
-					var d, g, y, m;
-					if (null != this.clipBufPre_clipContextMask) d = a.ONE, g = a.ONE_MINUS_SRC_ALPHA, y = a.ONE, m = a.ONE_MINUS_SRC_ALPHA;
-					else switch (s) {
-					case $t._$ms:
-						d = a.ONE, g = a.ONE_MINUS_SRC_ALPHA, y = a.ONE, m = a.ONE_MINUS_SRC_ALPHA;
-						break;
-					case $t._$ns:
-						d = a.ONE, g = a.ONE, y = a.ZERO, m = a.ONE;
-						break;
-					case $t._$_s:
-						d = a.DST_COLOR, g = a.ONE_MINUS_SRC_ALPHA, y = a.ZERO, m = a.ONE
-					}
-					a.blendEquationSeparate(a.FUNC_ADD, a.FUNC_ADD), a.blendFuncSeparate(d, g, y, m), this.anisotropyExt && a.texParameteri(a.TEXTURE_2D, this.anisotropyExt.TEXTURE_MAX_ANISOTROPY_EXT, this.maxAnisotropy);
-					var T = e.length;
-					a.drawElements(a.TRIANGLES, T, a.UNSIGNED_SHORT, 0), a.bindTexture(a.TEXTURE_2D, null)
-				}
-			}, mt.prototype._$Rs = function() {
-				throw new Error("_$Rs")
-			}, mt.prototype._$Ds = function(t) {
-				throw new Error("_$Ds")
-			}, mt.prototype._$K2 = function() {
-				for (var t = 0; t < this.textures.length; t++) {
-					0 != this.textures[t] && (this.gl._$K2(1, this.textures, t), this.textures[t] = null)
-				}
-			}, mt.prototype.setTexture = function(t, i) {
-				this.textures[t] = i
-			}, mt.prototype.initShader = function() {
-				var t = this.gl;
-				this.loadShaders2(), this.a_position_Loc = t.getAttribLocation(this.shaderProgram, "a_position"), this.a_texCoord_Loc = t.getAttribLocation(this.shaderProgram, "a_texCoord"), this.u_matrix_Loc = t.getUniformLocation(this.shaderProgram, "u_mvpMatrix"), this.s_texture0_Loc = t.getUniformLocation(this.shaderProgram, "s_texture0"), this.u_channelFlag = t.getUniformLocation(this.shaderProgram, "u_channelFlag"), this.u_baseColor_Loc = t.getUniformLocation(this.shaderProgram, "u_baseColor"), this.u_maskFlag_Loc = t.getUniformLocation(this.shaderProgram, "u_maskFlag"), this.a_position_Loc_Off = t.getAttribLocation(this.shaderProgramOff, "a_position"), this.a_texCoord_Loc_Off = t.getAttribLocation(this.shaderProgramOff, "a_texCoord"), this.u_matrix_Loc_Off = t.getUniformLocation(this.shaderProgramOff, "u_mvpMatrix"), this.u_clipMatrix_Loc_Off = t.getUniformLocation(this.shaderProgramOff, "u_ClipMatrix"), this.s_texture0_Loc_Off = t.getUniformLocation(this.shaderProgramOff, "s_texture0"), this.s_texture1_Loc_Off = t.getUniformLocation(this.shaderProgramOff, "s_texture1"), this.u_channelFlag_Loc_Off = t.getUniformLocation(this.shaderProgramOff, "u_channelFlag"), this.u_baseColor_Loc_Off = t.getUniformLocation(this.shaderProgramOff, "u_baseColor")
-			}, mt.prototype.disposeShader = function() {
-				var t = this.gl;
-				this.shaderProgram && (t.deleteProgram(this.shaderProgram), this.shaderProgram = null), this.shaderProgramOff && (t.deleteProgram(this.shaderProgramOff), this.shaderProgramOff = null)
-			}, mt.prototype.compileShader = function(t, i) {
-				var e = this.gl,
-					r = i,
-					o = e.createShader(t);
-				if (null == o) return _._$Ji("_$L0 to create shader"), null;
-				if (e.shaderSource(o, r), e.compileShader(o), !e.getShaderParameter(o, e.COMPILE_STATUS)) {
-					var n = e.getShaderInfoLog(o);
-					return _._$Ji("_$L0 to compile shader : " + n), e.deleteShader(o), null
-				}
-				return o
-			}, mt.prototype.loadShaders2 = function() {
-				var t = this.gl;
-				if (this.shaderProgram = t.createProgram(), !this.shaderProgram) return !1;
-				if (this.shaderProgramOff = t.createProgram(), !this.shaderProgramOff) return !1;
-				if (this.vertShader = this.compileShader(t.VERTEX_SHADER, "attribute vec4     a_position;attribute vec2     a_texCoord;varying vec2       v_texCoord;varying vec4       v_ClipPos;uniform mat4       u_mvpMatrix;void main(){    gl_Position = u_mvpMatrix * a_position;    v_ClipPos = u_mvpMatrix * a_position;    v_texCoord = a_texCoord;}"), !this.vertShader) return _._$Ji("Vertex shader compile _$li!"), !1;
-				if (this.vertShaderOff = this.compileShader(t.VERTEX_SHADER, "attribute vec4     a_position;attribute vec2     a_texCoord;varying vec2       v_texCoord;varying vec4       v_ClipPos;uniform mat4       u_mvpMatrix;uniform mat4       u_ClipMatrix;void main(){    gl_Position = u_mvpMatrix * a_position;    v_ClipPos = u_ClipMatrix * a_position;    v_texCoord = a_texCoord ;}"), !this.vertShaderOff) return _._$Ji("OffVertex shader compile _$li!"), !1;
-				if (this.fragShader = this.compileShader(t.FRAGMENT_SHADER, "precision mediump float;varying vec2       v_texCoord;varying vec4       v_ClipPos;uniform sampler2D  s_texture0;uniform vec4       u_channelFlag;uniform vec4       u_baseColor;uniform bool       u_maskFlag;void main(){    vec4 smpColor;     if(u_maskFlag){        float isInside =             step(u_baseColor.x, v_ClipPos.x/v_ClipPos.w)          * step(u_baseColor.y, v_ClipPos.y/v_ClipPos.w)          * step(v_ClipPos.x/v_ClipPos.w, u_baseColor.z)          * step(v_ClipPos.y/v_ClipPos.w, u_baseColor.w);        smpColor = u_channelFlag * texture2D(s_texture0 , v_texCoord).a * isInside;    }else{        smpColor = texture2D(s_texture0 , v_texCoord) * u_baseColor;    }    gl_FragColor = smpColor;}"), !this.fragShader) return _._$Ji("Fragment shader compile _$li!"), !1;
-				if (this.fragShaderOff = this.compileShader(t.FRAGMENT_SHADER, "precision mediump float ;varying vec2       v_texCoord;varying vec4       v_ClipPos;uniform sampler2D  s_texture0;uniform sampler2D  s_texture1;uniform vec4       u_channelFlag;uniform vec4       u_baseColor ;void main(){    vec4 col_formask = texture2D(s_texture0, v_texCoord) * u_baseColor;    vec4 clipMask = texture2D(s_texture1, v_ClipPos.xy / v_ClipPos.w) * u_channelFlag;    float maskVal = clipMask.r + clipMask.g + clipMask.b + clipMask.a;    col_formask = col_formask * maskVal;    gl_FragColor = col_formask;}"), !this.fragShaderOff) return _._$Ji("OffFragment shader compile _$li!"), !1;
-				if (t.attachShader(this.shaderProgram, this.vertShader), t.attachShader(this.shaderProgram, this.fragShader), t.attachShader(this.shaderProgramOff, this.vertShaderOff), t.attachShader(this.shaderProgramOff, this.fragShaderOff), t.linkProgram(this.shaderProgram), t.linkProgram(this.shaderProgramOff), !t.getProgramParameter(this.shaderProgram, t.LINK_STATUS)) {
-					var i = t.getProgramInfoLog(this.shaderProgram);
-					return _._$Ji("_$L0 to link program: " + i), this.vertShader && (t.deleteShader(this.vertShader), this.vertShader = 0), this.fragShader && (t.deleteShader(this.fragShader), this.fragShader = 0), this.shaderProgram && (t.deleteProgram(this.shaderProgram), this.shaderProgram = 0), this.vertShaderOff && (t.deleteShader(this.vertShaderOff), this.vertShaderOff = 0), this.fragShaderOff && (t.deleteShader(this.fragShaderOff), this.fragShaderOff = 0), this.shaderProgramOff && (t.deleteProgram(this.shaderProgramOff), this.shaderProgramOff = 0), !1
-				}
-				return !0
-			}, mt.prototype.createFramebuffer = function() {
-				var t = this.gl,
-					i = at.clippingMaskBufferSize,
-					e = t.createFramebuffer();
-				t.bindFramebuffer(t.FRAMEBUFFER, e);
-				var r = t.createRenderbuffer();
-				t.bindRenderbuffer(t.RENDERBUFFER, r), t.renderbufferStorage(t.RENDERBUFFER, t.RGBA4, i, i), t.framebufferRenderbuffer(t.FRAMEBUFFER, t.COLOR_ATTACHMENT0, t.RENDERBUFFER, r);
-				var o = t.createTexture();
-				return t.bindTexture(t.TEXTURE_2D, o), t.texImage2D(t.TEXTURE_2D, 0, t.RGBA, i, i, 0, t.RGBA, t.UNSIGNED_BYTE, null), t.texParameteri(t.TEXTURE_2D, t.TEXTURE_MIN_FILTER, t.LINEAR), t.texParameteri(t.TEXTURE_2D, t.TEXTURE_MAG_FILTER, t.LINEAR), t.texParameteri(t.TEXTURE_2D, t.TEXTURE_WRAP_S, t.CLAMP_TO_EDGE), t.texParameteri(t.TEXTURE_2D, t.TEXTURE_WRAP_T, t.CLAMP_TO_EDGE), t.framebufferTexture2D(t.FRAMEBUFFER, t.COLOR_ATTACHMENT0, t.TEXTURE_2D, o, 0), t.bindTexture(t.TEXTURE_2D, null), t.bindRenderbuffer(t.RENDERBUFFER, null), t.bindFramebuffer(t.FRAMEBUFFER, null), at.fTexture[this.glno] = o, {
-					framebuffer: e,
-					renderbuffer: r,
-					texture: at.fTexture[this.glno]
-				}
-			}, St.prototype._$fP = function() {
-				var t, i, e, r = this._$ST();
-				if (0 == (128 & r)) return 255 & r;
-				if (0 == (128 & (t = this._$ST()))) return (127 & r) << 7 | 127 & t;
-				if (0 == (128 & (i = this._$ST()))) return (127 & r) << 14 | (127 & t) << 7 | 255 & i;
-				if (0 == (128 & (e = this._$ST()))) return (127 & r) << 21 | (127 & t) << 14 | (127 & i) << 7 | 255 & e;
-				throw new lt("_$L _$0P  _")
-			}, St.prototype.getFormatVersion = function() {
-				return this._$S2
-			}, St.prototype._$gr = function(t) {
-				this._$S2 = t
-			}, St.prototype._$3L = function() {
-				return this._$fP()
-			}, St.prototype._$mP = function() {
-				return this._$zT(), this._$F += 8, this._$T.getFloat64(this._$F - 8)
-			}, St.prototype._$_T = function() {
-				return this._$zT(), this._$F += 4, this._$T.getFloat32(this._$F - 4)
-			}, St.prototype._$6L = function() {
-				return this._$zT(), this._$F += 4, this._$T.getInt32(this._$F - 4)
-			}, St.prototype._$ST = function() {
-				return this._$zT(), this._$T.getInt8(this._$F++)
-			}, St.prototype._$9T = function() {
-				return this._$zT(), this._$F += 2, this._$T.getInt16(this._$F - 2)
-			}, St.prototype._$2T = function() {
-				throw this._$zT(), this._$F += 8, new lt("_$L _$q read long")
-			}, St.prototype._$po = function() {
-				return this._$zT(), 0 != this._$T.getInt8(this._$F++)
-			};
-			var xt = !0;
-			St.prototype._$bT = function() {
-				this._$zT();
-				var t = this._$3L(),
-					i = null;
-				if (xt) try {
-					var e = new ArrayBuffer(2 * t);
-					i = new Uint16Array(e);
-					for (var r = 0; r < t; ++r) i[r] = this._$T.getUint8(this._$F++);
-					return String.fromCharCode.apply(null, i)
-				} catch (t) {
-					xt = !1
-				}
-				try {
-					var o = new Array;
-					if (null == i) for (var r = 0; r < t; ++r) o[r] = this._$T.getUint8(this._$F++);
-					else for (var r = 0; r < t; ++r) o[r] = i[r];
-					return String.fromCharCode.apply(null, o)
-				} catch (t) {
-					console.log("read utf8 / _$rT _$L0 !! : " + t)
-				}
-			}, St.prototype._$cS = function() {
-				this._$zT();
-				for (var t = this._$3L(), i = new Int32Array(t), e = 0; e < t; e++) i[e] = this._$T.getInt32(this._$F), this._$F += 4;
-				return i
-			}, St.prototype._$Tb = function() {
-				this._$zT();
-				for (var t = this._$3L(), i = new Float32Array(t), e = 0; e < t; e++) i[e] = this._$T.getFloat32(this._$F), this._$F += 4;
-				return i
-			}, St.prototype._$5b = function() {
-				this._$zT();
-				for (var t = this._$3L(), i = new Float64Array(t), e = 0; e < t; e++) i[e] = this._$T.getFloat64(this._$F), this._$F += 8;
-				return i
-			}, St.prototype._$nP = function() {
-				return this._$Jb(-1)
-			}, St.prototype._$Jb = function(t) {
-				if (this._$zT(), t < 0 && (t = this._$3L()), t == G._$7P) {
-					var i = this._$6L();
-					if (0 <= i && i < this._$Ko.length) return this._$Ko[i];
-					throw new lt("_$sL _$4i @_$m0")
-				}
-				var e = this._$4b(t);
-				return this._$Ko.push(e), e
-			}, St.prototype._$4b = function(t) {
-				if (0 == t) return null;
-				if (50 == t) {
-					var i = this._$bT(),
-						e = b.getID(i);
-					return e
-				}
-				if (51 == t) {
-					var i = this._$bT(),
-						e = yt.getID(i);
-					return e
-				}
-				if (134 == t) {
-					var i = this._$bT(),
-						e = l.getID(i);
-					return e
-				}
-				if (60 == t) {
-					var i = this._$bT(),
-						e = u.getID(i);
-					return e
-				}
-				if (t >= 48) {
-					var r = G._$9o(t);
-					return null != r ? (r._$F0(this), r) : null
-				}
-				switch (t) {
-				case 1:
-					return this._$bT();
-				case 10:
-					return new n(this._$6L(), !0);
-				case 11:
-					return new S(this._$mP(), this._$mP(), this._$mP(), this._$mP());
-				case 12:
-					return new S(this._$_T(), this._$_T(), this._$_T(), this._$_T());
-				case 13:
-					return new L(this._$mP(), this._$mP());
-				case 14:
-					return new L(this._$_T(), this._$_T());
-				case 15:
-					for (var o = this._$3L(), e = new Array(o), s = 0; s < o; s++) e[s] = this._$nP();
-					return e;
-				case 17:
-					var e = new F(this._$mP(), this._$mP(), this._$mP(), this._$mP(), this._$mP(), this._$mP());
-					return e;
-				case 21:
-					return new h(this._$6L(), this._$6L(), this._$6L(), this._$6L());
-				case 22:
-					return new pt(this._$6L(), this._$6L());
-				case 23:
-					throw new Error("_$L _$ro ");
-				case 16:
-				case 25:
-					return this._$cS();
-				case 26:
-					return this._$5b();
-				case 27:
-					return this._$Tb();
-				case 2:
-				case 3:
-				case 4:
-				case 5:
-				case 6:
-				case 7:
-				case 8:
-				case 9:
-				case 18:
-				case 19:
-				case 20:
-				case 24:
-				case 28:
-					throw new lt("_$6 _$q : _$nP() of 2-9 ,18,19,20,24,28 : " + t);
-				default:
-					throw new lt("_$6 _$q : _$nP() NO _$i : " + t)
-				}
-			}, St.prototype._$8L = function() {
-				return 0 == this._$hL ? this._$v0 = this._$ST() : 8 == this._$hL && (this._$v0 = this._$ST(), this._$hL = 0), 1 == (this._$v0 >> 7 - this._$hL++ & 1)
-			}, St.prototype._$zT = function() {
-				0 != this._$hL && (this._$hL = 0)
-			}, vt.prototype._$wP = function(t, i, e) {
-				for (var r = 0; r < e; r++) {
-					for (var o = 0; o < i; o++) {
-						var n = 2 * (o + r * i);
-						console.log("(% 7.3f , % 7.3f) , ", t[n], t[n + 1])
-					}
-					console.log("\n")
-				}
-				console.log("\n")
-			}, Lt._$2S = Math.PI / 180, Lt._$bS = Math.PI / 180, Lt._$wS = 180 / Math.PI, Lt._$NS = 180 / Math.PI, Lt.PI_F = Math.PI, Lt._$kT = [0, .012368, .024734, .037097, .049454, .061803, .074143, .086471, .098786, .111087, .12337, .135634, .147877, .160098, .172295, .184465, .196606, .208718, .220798, .232844, .244854, .256827, .268761, .280654, .292503, .304308, .316066, .327776, .339436, .351044, .362598, .374097, .385538, .396921, .408243, .419502, .430697, .441826, .452888, .463881, .474802, .485651, .496425, .507124, .517745, .528287, .538748, .549126, .559421, .56963, .579752, .589785, .599728, .609579, .619337, .629, .638567, .648036, .657406, .666676, .675843, .684908, .693867, .70272, .711466, .720103, .72863, .737045, .745348, .753536, .76161, .769566, .777405, .785125, .792725, .800204, .807561, .814793, .821901, .828884, .835739, .842467, .849066, .855535, .861873, .868079, .874153, .880093, .885898, .891567, .897101, .902497, .907754, .912873, .917853, .922692, .92739, .931946, .936359, .940629, .944755, .948737, .952574, .956265, .959809, .963207, .966457, .96956, .972514, .97532, .977976, .980482, .982839, .985045, .987101, .989006, .990759, .992361, .993811, .995109, .996254, .997248, .998088, .998776, .999312, .999694, .999924, 1], Lt._$92 = function(t, i) {
-				var e = Math.atan2(t[1], t[0]),
-					r = Math.atan2(i[1], i[0]);
-				return Lt._$tS(e, r)
-			}, Lt._$tS = function(t, i) {
-				for (var e = t - i; e < -Math.PI;) e += 2 * Math.PI;
-				for (; e > Math.PI;) e -= 2 * Math.PI;
-				return e
-			}, Lt._$9 = function(t) {
-				return Math.sin(t)
-			}, Lt.fcos = function(t) {
-				return Math.cos(t)
-			}, Mt.prototype._$u2 = function() {
-				return this._$IS[0]
-			}, Mt.prototype._$yo = function() {
-				return this._$AT && !this._$IS[0]
-			}, Mt.prototype._$GT = function() {
-				return this._$e0
-			}, Et._$W2 = 0, Et.SYSTEM_INFO = null, Et.USER_AGENT = navigator.userAgent, Et.isIPhone = function() {
-				return Et.SYSTEM_INFO || Et.setup(), Et.SYSTEM_INFO._isIPhone
-			}, Et.isIOS = function() {
-				return Et.SYSTEM_INFO || Et.setup(), Et.SYSTEM_INFO._isIPhone || Et.SYSTEM_INFO._isIPad
-			}, Et.isAndroid = function() {
-				return Et.SYSTEM_INFO || Et.setup(), Et.SYSTEM_INFO._isAndroid
-			}, Et.getOSVersion = function() {
-				return Et.SYSTEM_INFO || Et.setup(), Et.SYSTEM_INFO.version
-			}, Et.getOS = function() {
-				return Et.SYSTEM_INFO || Et.setup(), Et.SYSTEM_INFO._isIPhone || Et.SYSTEM_INFO._isIPad ? "iOS" : Et.SYSTEM_INFO._isAndroid ? "Android" : "_$Q0 OS"
-			}, Et.setup = function() {
-				function t(t, i) {
-					for (var e = t.substring(i).split(/[ _,;\.]/), r = 0, o = 0; o <= 2 && !isNaN(e[o]); o++) {
-						var n = parseInt(e[o]);
-						if (n < 0 || n > 999) {
-							_._$li("err : " + n + " @UtHtml5.setup()"), r = 0;
-							break
-						}
-						r += n * Math.pow(1e3, 2 - o)
-					}
-					return r
-				}
-				var i, e = Et.USER_AGENT,
-					r = Et.SYSTEM_INFO = {
-						userAgent: e
-					};
-				if ((i = e.indexOf("iPhone OS ")) >= 0) r.os = "iPhone", r._isIPhone = !0, r.version = t(e, i + "iPhone OS ".length);
-				else if ((i = e.indexOf("iPad")) >= 0) {
-					if ((i = e.indexOf("CPU OS")) < 0) return void _._$li(" err : " + e + " @UtHtml5.setup()");
-					r.os = "iPad", r._isIPad = !0, r.version = t(e, i + "CPU OS ".length)
-				} else(i = e.indexOf("Android")) >= 0 ? (r.os = "Android", r._isAndroid = !0, r.version = t(e, i + "Android ".length)) : (r.os = "-", r.version = -1)
-			}, window.UtSystem = w, window.UtDebug = _, window.LDTransform = gt, window.LDGL = nt, window.Live2D = at, window.Live2DModelWebGL = ft, window.Live2DModelJS = q, window.Live2DMotion = J, window.MotionQueueManager = ct, window.PhysicsHair = f, window.AMotion = s, window.PartsDataID = l, window.DrawDataID = b, window.BaseDataID = yt, window.ParamID = u, at.init();
-			var At = !1
-		}()
-	}).call(i, e(7))
-}, function(t, i) {
-	t.exports = {
-		import: function() {
-			throw new Error("System.import cannot be used indirectly")
-		}
-	}
-}, function(t, i, e) {
-	"use strict";
-
-	function r(t) {
-		return t && t.__esModule ? t : {
-		default:
-			t
-		}
-	}
-	function o() {
-		this.models = [], this.count = -1, this.reloadFlg = !1, Live2D.init(), n.Live2DFramework.setPlatformManager(new _.
-	default)
-	}
-	Object.defineProperty(i, "__esModule", {
-		value: !0
-	}), i.
-default = o;
-	var n = e(0),
-		s = e(9),
-		_ = r(s),
-		a = e(10),
-		h = r(a),
-		l = e(1),
-		$ = r(l);
-	o.prototype.createModel = function() {
-		var t = new h.
-	default;
-		return this.models.push(t), t
-	}, o.prototype.changeModel = function(t, i) {
-		if (this.reloadFlg) {
-			this.reloadFlg = !1;
-			this.releaseModel(0, t), this.createModel(), this.models[0].load(t, i)
-		}
-	}, o.prototype.getModel = function(t) {
-		return t >= this.models.length ? null : this.models[t]
-	}, o.prototype.releaseModel = function(t, i) {
-		this.models.length <= t || (this.models[t].release(i), delete this.models[t], this.models.splice(t, 1))
-	}, o.prototype.numModels = function() {
-		return this.models.length
-	}, o.prototype.setDrag = function(t, i) {
-		for (var e = 0; e < this.models.length; e++) this.models[e].setDrag(t, i)
-	}, o.prototype.maxScaleEvent = function() {
-		$.
-	default.DEBUG_LOG && console.log("Max scale event.");
-		for (var t = 0; t < this.models.length; t++) this.models[t].startRandomMotion($.
-	default.MOTION_GROUP_PINCH_IN, $.
-	default.PRIORITY_NORMAL)
-	}, o.prototype.minScaleEvent = function() {
-		$.
-	default.DEBUG_LOG && console.log("Min scale event.");
-		for (var t = 0; t < this.models.length; t++) this.models[t].startRandomMotion($.
-	default.MOTION_GROUP_PINCH_OUT, $.
-	default.PRIORITY_NORMAL)
-	}, o.prototype.tapEvent = function(t, i) {
-		$.
-	default.DEBUG_LOG && console.log("tapEvent view x:" + t + " y:" + i);
-		for (var e = 0; e < this.models.length; e++) this.models[e].hitTest($.
-	default.HIT_AREA_HEAD, t, i) ? ($.
-	default.DEBUG_LOG && console.log("Tap face."), this.models[e].setRandomExpression()):
-		this.models[e].hitTest($.
-	default.HIT_AREA_BODY, t, i) ? ($.
-	default.DEBUG_LOG && console.log("Tap body. models[" + e + "]"), this.models[e].startRandomMotion($.
-	default.MOTION_GROUP_TAP_BODY, $.
-	default.PRIORITY_NORMAL)) : this.models[e].hitTestCustom("head", t, i) ? ($.
-	default.DEBUG_LOG && console.log("Tap face."), this.models[e].startRandomMotion($.
-	default.MOTION_GROUP_FLICK_HEAD, $.
-	default.PRIORITY_NORMAL)) : this.models[e].hitTestCustom("body", t, i) && ($.
-	default.DEBUG_LOG && console.log("Tap body. models[" + e + "]"), this.models[e].startRandomMotion($.
-	default.MOTION_GROUP_TAP_BODY, $.
-	default.PRIORITY_NORMAL));
-		return !0
-	}
-}, function(t, i, e) {
-	"use strict";
-
-	function r() {}
-	Object.defineProperty(i, "__esModule", {
-		value: !0
-	}), i.
-default = r;
-	var o = e(2);
-	var requestCache = {};
-	r.prototype.loadBytes = function(t, i) {
-		// Cache identical requests to reduce the number of requests
-		if (requestCache[t] !== undefined) {
-			i(requestCache[t]);
-			return;
-		}
-		var e = new XMLHttpRequest;
-		e.open("GET", t, !0), e.responseType = "arraybuffer", e.onload = function() {
-			switch (e.status) {
-			case 200:
-				requestCache[t] = e.response;
-				i(e.response);
-				break;
-			default:
-				console.error("Failed to load (" + e.status + ") : " + t)
-			}
-		}, e.send(null)
-	}, r.prototype.loadString = function(t) {
-		this.loadBytes(t, function(t) {
-			return t
-		})
-	}, r.prototype.loadLive2DModel = function(t, i) {
-		var e = null;
-		this.loadBytes(t, function(t) {
-			e = Live2DModelWebGL.loadModel(t), i(e)
-		})
-	}, r.prototype.loadTexture = function(t, i, e, r) {
-		var n = new Image;
-		n.crossOrigin = "Anonymous", n.src = e;
-		n.onload = function() {
-			var e = (0, o.getContext)(),
-				s = e.createTexture();
-			if (!s) return console.error("Failed to generate gl texture name."), -1;
-			0 == t.isPremultipliedAlpha() && e.pixelStorei(e.UNPACK_PREMULTIPLY_ALPHA_WEBGL, 1), e.pixelStorei(e.UNPACK_FLIP_Y_WEBGL, 1), e.activeTexture(e.TEXTURE0), e.bindTexture(e.TEXTURE_2D, s), e.texImage2D(e.TEXTURE_2D, 0, e.RGBA, e.RGBA, e.UNSIGNED_BYTE, n), e.texParameteri(e.TEXTURE_2D, e.TEXTURE_MAG_FILTER, e.LINEAR), e.texParameteri(e.TEXTURE_2D, e.TEXTURE_MIN_FILTER, e.LINEAR_MIPMAP_NEAREST), e.generateMipmap(e.TEXTURE_2D), t.setTexture(i, s), s = null, "function" == typeof r && r()
-		}, n.onerror = function() {
-			console.error("Failed to load image : " + e)
-		}
-	}, r.prototype.jsonParseFromBytes = function(t) {
-		var i, e = new Uint8Array(t, 0, 3);
-		return i = 239 == e[0] && 187 == e[1] && 191 == e[2] ? String.fromCharCode.apply(null, new Uint8Array(t, 3)) : String.fromCharCode.apply(null, new Uint8Array(t)), JSON.parse(i)
-	}, r.prototype.log = function(t) {}
-}, function(t, i, e) {
-	"use strict";
-
-	function r(t) {
-		return t && t.__esModule ? t : {
-		default:
-			t
-		}
-	}
-	function o() {
-		n.L2DBaseModel.prototype.constructor.call(this), this.modelHomeDir = "", this.modelSetting = null, this.tmpMatrix = []
-	}
-	Object.defineProperty(i, "__esModule", {
-		value: !0
-	}), i.
-default = o;
-	var n = e(0),
-		s = e(11),
-		_ = r(s),
-		a = e(1),
-		h = r(a),
-		l = e(3),
-		$ = r(l);
-	o.prototype = new n.L2DBaseModel, o.prototype.load = function(t, i, e) {
-		this.setUpdating(!0), this.setInitialized(!1), this.modelHomeDir = i.substring(0, i.lastIndexOf("/") + 1), this.modelSetting = new _.
-	default;
-		var r = this;
-		this.modelSetting.loadModelSetting(i, function() {
-			var t = r.modelHomeDir + r.modelSetting.getModelFile();
-			r.loadModelData(t, function(t) {
-				for (var i = 0; i < r.modelSetting.getTextureNum(); i++) {
-					if (/^https?:\/\/|^\/\//i.test(r.modelSetting.getTextureFile(i))) var o = r.modelSetting.getTextureFile(i);
-					else var o = r.modelHomeDir + r.modelSetting.getTextureFile(i);
-					r.loadTexture(i, o, function() {
-						if (r.isTexLoaded) {
-							if (r.modelSetting.getExpressionNum() > 0) {
-								r.expressions = {};
-								for (var t = 0; t < r.modelSetting.getExpressionNum(); t++) {
-									var i = r.modelSetting.getExpressionName(t),
-										o = r.modelHomeDir + r.modelSetting.getExpressionFile(t);
-									r.loadExpression(i, o)
-								}
-							} else r.expressionManager = null, r.expressions = {};
-							if (r.eyeBlink, null != r.modelSetting.getPhysicsFile() ? r.loadPhysics(r.modelHomeDir + r.modelSetting.getPhysicsFile()) : r.physics = null, null != r.modelSetting.getPoseFile() ? r.loadPose(r.modelHomeDir + r.modelSetting.getPoseFile(), function() {
-								r.pose.updateParam(r.live2DModel)
-							}) : r.pose = null, null != r.modelSetting.getLayout()) {
-								var n = r.modelSetting.getLayout();
-								null != n.width && r.modelMatrix.setWidth(n.width), null != n.height && r.modelMatrix.setHeight(n.height), null != n.x && r.modelMatrix.setX(n.x), null != n.y && r.modelMatrix.setY(n.y), null != n.center_x && r.modelMatrix.centerX(n.center_x), null != n.center_y && r.modelMatrix.centerY(n.center_y), null != n.top && r.modelMatrix.top(n.top), null != n.bottom && r.modelMatrix.bottom(n.bottom), null != n.left && r.modelMatrix.left(n.left), null != n.right && r.modelMatrix.right(n.right)
-							}
-							if (null != r.modelSetting.getHitAreasCustom()) {
-								var s = r.modelSetting.getHitAreasCustom();
-								null != s.head_x && (h.
-							default.hit_areas_custom_head_x = s.head_x), null != s.head_y && (h.
-							default.hit_areas_custom_head_y = s.head_y), null != s.body_x && (h.
-							default.hit_areas_custom_body_x = s.body_x), null != s.body_y && (h.
-							default.hit_areas_custom_body_y = s.body_y)
-							}
-							for (var t = 0; t < r.modelSetting.getInitParamNum(); t++) r.live2DModel.setParamFloat(r.modelSetting.getInitParamID(t), r.modelSetting.getInitParamValue(t));
-							for (var t = 0; t < r.modelSetting.getInitPartsVisibleNum(); t++) r.live2DModel.setPartsOpacity(r.modelSetting.getInitPartsVisibleID(t), r.modelSetting.getInitPartsVisibleValue(t));
-							r.live2DModel.saveParam(), r.preloadMotionGroup(h.
-						default.MOTION_GROUP_IDLE), r.preloadMotionGroup(h.
-						default.MOTION_GROUP_SLEEPY), r.mainMotionManager.stopAllMotions(), r.setUpdating(!1), r.setInitialized(!0), "function" == typeof e && e()
-						}
-					})
-				}
-			})
-		})
-	}, o.prototype.release = function(t) {
-		var i = n.Live2DFramework.getPlatformManager();
-		t.deleteTexture(i.texture)
-	}, o.prototype.preloadMotionGroup = function(t) {
-		for (var i = this, e = 0; e < this.modelSetting.getMotionNum(t); e++) {
-			var r = this.modelSetting.getMotionFile(t, e);
-			this.loadMotion(r, this.modelHomeDir + r, function(r) {
-				r.setFadeIn(i.modelSetting.getMotionFadeIn(t, e)), r.setFadeOut(i.modelSetting.getMotionFadeOut(t, e))
-			})
-		}
-	}, o.prototype.update = function() {
-		if (null == this.live2DModel) return void(h.
-	default.DEBUG_LOG && console.error("Failed to update."));
-		var t = UtSystem.getUserTimeMSec() - this.startTimeMSec,
-			i = t / 1e3,
-			e = 2 * i * Math.PI;
-		if (this.mainMotionManager.isFinished()) {
-			"1" === sessionStorage.getItem("Sleepy") ? this.startRandomMotion(h.
-		default.MOTION_GROUP_SLEEPY, h.
-		default.PRIORITY_SLEEPY) : this.startRandomMotion(h.
-		default.MOTION_GROUP_IDLE, h.
-		default.PRIORITY_IDLE)
-		}
-		this.live2DModel.loadParam(), this.mainMotionManager.updateParam(this.live2DModel) || null != this.eyeBlink && this.eyeBlink.updateParam(this.live2DModel), this.live2DModel.saveParam(), null == this.expressionManager || null == this.expressions || this.expressionManager.isFinished() || this.expressionManager.updateParam(this.live2DModel), this.live2DModel.addToParamFloat("PARAM_ANGLE_X", 30 * this.dragX, 1), this.live2DModel.addToParamFloat("PARAM_ANGLE_Y", 30 * this.dragY, 1), this.live2DModel.addToParamFloat("PARAM_ANGLE_Z", this.dragX * this.dragY * -30, 1), this.live2DModel.addToParamFloat("PARAM_BODY_ANGLE_X", 10 * this.dragX, 1), this.live2DModel.addToParamFloat("PARAM_EYE_BALL_X", this.dragX, 1), this.live2DModel.addToParamFloat("PARAM_EYE_BALL_Y", this.dragY, 1), this.live2DModel.addToParamFloat("PARAM_ANGLE_X", Number(15 * Math.sin(e / 6.5345)), .5), this.live2DModel.addToParamFloat("PARAM_ANGLE_Y", Number(8 * Math.sin(e / 3.5345)), .5), this.live2DModel.addToParamFloat("PARAM_ANGLE_Z", Number(10 * Math.sin(e / 5.5345)), .5), this.live2DModel.addToParamFloat("PARAM_BODY_ANGLE_X", Number(4 * Math.sin(e / 15.5345)), .5), this.live2DModel.setParamFloat("PARAM_BREATH", Number(.5 + .5 * Math.sin(e / 3.2345)), 1), null != this.physics && this.physics.updateParam(this.live2DModel), null == this.lipSync && this.live2DModel.setParamFloat("PARAM_MOUTH_OPEN_Y", this.lipSyncValue), null != this.pose && this.pose.updateParam(this.live2DModel), this.live2DModel.update()
-	}, o.prototype.setRandomExpression = function() {
-		var t = [];
-		for (var i in this.expressions) t.push(i);
-		var e = parseInt(Math.random() * t.length);
-		this.setExpression(t[e])
-	}, o.prototype.startRandomMotion = function(t, i) {
-		var e = this.modelSetting.getMotionNum(t),
-			r = parseInt(Math.random() * e);
-		this.startMotion(t, r, i)
-	}, o.prototype.startMotion = function(t, i, e) {
-		var r = this.modelSetting.getMotionFile(t, i);
-		if (null == r || "" == r) return void(h.
-	default.DEBUG_LOG && console.error("Failed to motion."));
-		if (e == h.
-	default.PRIORITY_FORCE) this.mainMotionManager.setReservePriority(e);
-		else if (!this.mainMotionManager.reserveMotion(e)) return void(h.
-	default.DEBUG_LOG && console.log("Motion is running."));
-		var o, n = this;
-		null == this.motions[t] ? this.loadMotion(null, this.modelHomeDir + r, function(r) {
-			o = r, n.setFadeInFadeOut(t, i, e, o)
-		}) : (o = this.motions[t], n.setFadeInFadeOut(t, i, e, o))
-	}, o.prototype.setFadeInFadeOut = function(t, i, e, r) {
-		var o = this.modelSetting.getMotionFile(t, i);
-		if (r.setFadeIn(this.modelSetting.getMotionFadeIn(t, i)), r.setFadeOut(this.modelSetting.getMotionFadeOut(t, i)), h.
-	default.DEBUG_LOG && console.log("Start motion : " + o), null == this.modelSetting.getMotionSound(t, i)) this.mainMotionManager.startMotionPrio(r, e);
-		else {
-			var n = this.modelSetting.getMotionSound(t, i),
-				s = document.createElement("audio");
-			s.src = this.modelHomeDir + n, h.
-		default.DEBUG_LOG && console.log("Start sound : " + n), s.play(), this.mainMotionManager.startMotionPrio(r, e)
-		}
-	}, o.prototype.setExpression = function(t) {
-		var i = this.expressions[t];
-		h.
-	default.DEBUG_LOG && console.log("Expression : " + t), this.expressionManager.startMotion(i, !1)
-	}, o.prototype.draw = function(t) {
-		$.
-	default.push(), $.
-	default.multMatrix(this.modelMatrix.getArray()), this.tmpMatrix = $.
-	default.getMatrix(), this.live2DModel.setMatrix(this.tmpMatrix), this.live2DModel.draw(), $.
-	default.pop()
-	}, o.prototype.hitTest = function(t, i, e) {
-		for (var r = this.modelSetting.getHitAreaNum(), o = 0; o < r; o++) if (t == this.modelSetting.getHitAreaName(o)) {
-			var n = this.modelSetting.getHitAreaID(o);
-			return this.hitTestSimple(n, i, e)
-		}
-		return !1
-	}, o.prototype.hitTestCustom = function(t, i, e) {
-		return "head" == t ? this.hitTestSimpleCustom(h.
-	default.hit_areas_custom_head_x, h.
-	default.hit_areas_custom_head_y, i, e) : "body" == t && this.hitTestSimpleCustom(h.
-	default.hit_areas_custom_body_x, h.
-	default.hit_areas_custom_body_y, i, e)
-	}
-}, function(t, i, e) {
-	"use strict";
-
-	function r() {
-		this.NAME = "name", this.ID = "id", this.MODEL = "model", this.TEXTURES = "textures", this.HIT_AREAS = "hit_areas", this.PHYSICS = "physics", this.POSE = "pose", this.EXPRESSIONS = "expressions", this.MOTION_GROUPS = "motions", this.SOUND = "sound", this.FADE_IN = "fade_in", this.FADE_OUT = "fade_out", this.LAYOUT = "layout", this.HIT_AREAS_CUSTOM = "hit_areas_custom", this.INIT_PARAM = "init_param", this.INIT_PARTS_VISIBLE = "init_parts_visible", this.VALUE = "val", this.FILE = "file", this.json = {}
-	}
-	Object.defineProperty(i, "__esModule", {
-		value: !0
-	}), i.
-default = r;
-	var o = e(0);
-	r.prototype.loadModelSetting = function(t, i) {
-		var e = this;
-		o.Live2DFramework.getPlatformManager().loadBytes(t, function(t) {
-			var r = String.fromCharCode.apply(null, new Uint8Array(t));
-			e.json = JSON.parse(r), i()
-		})
-	}, r.prototype.getTextureFile = function(t) {
-		return null == this.json[this.TEXTURES] || null == this.json[this.TEXTURES][t] ? null : this.json[this.TEXTURES][t]
-	}, r.prototype.getModelFile = function() {
-		return this.json[this.MODEL]
-	}, r.prototype.getTextureNum = function() {
-		return null == this.json[this.TEXTURES] ? 0 : this.json[this.TEXTURES].length
-	}, r.prototype.getHitAreaNum = function() {
-		return null == this.json[this.HIT_AREAS] ? 0 : this.json[this.HIT_AREAS].length
-	}, r.prototype.getHitAreaID = function(t) {
-		return null == this.json[this.HIT_AREAS] || null == this.json[this.HIT_AREAS][t] ? null : this.json[this.HIT_AREAS][t][this.ID]
-	}, r.prototype.getHitAreaName = function(t) {
-		return null == this.json[this.HIT_AREAS] || null == this.json[this.HIT_AREAS][t] ? null : this.json[this.HIT_AREAS][t][this.NAME]
-	}, r.prototype.getPhysicsFile = function() {
-		return this.json[this.PHYSICS]
-	}, r.prototype.getPoseFile = function() {
-		return this.json[this.POSE]
-	}, r.prototype.getExpressionNum = function() {
-		return null == this.json[this.EXPRESSIONS] ? 0 : this.json[this.EXPRESSIONS].length
-	}, r.prototype.getExpressionFile = function(t) {
-		return null == this.json[this.EXPRESSIONS] ? null : this.json[this.EXPRESSIONS][t][this.FILE]
-	}, r.prototype.getExpressionName = function(t) {
-		return null == this.json[this.EXPRESSIONS] ? null : this.json[this.EXPRESSIONS][t][this.NAME]
-	}, r.prototype.getLayout = function() {
-		return this.json[this.LAYOUT]
-	}, r.prototype.getHitAreasCustom = function() {
-		return this.json[this.HIT_AREAS_CUSTOM]
-	}, r.prototype.getInitParamNum = function() {
-		return null == this.json[this.INIT_PARAM] ? 0 : this.json[this.INIT_PARAM].length
-	}, r.prototype.getMotionNum = function(t) {
-		return null == this.json[this.MOTION_GROUPS] || null == this.json[this.MOTION_GROUPS][t] ? 0 : this.json[this.MOTION_GROUPS][t].length
-	}, r.prototype.getMotionFile = function(t, i) {
-		return null == this.json[this.MOTION_GROUPS] || null == this.json[this.MOTION_GROUPS][t] || null == this.json[this.MOTION_GROUPS][t][i] ? null : this.json[this.MOTION_GROUPS][t][i][this.FILE]
-	}, r.prototype.getMotionSound = function(t, i) {
-		return null == this.json[this.MOTION_GROUPS] || null == this.json[this.MOTION_GROUPS][t] || null == this.json[this.MOTION_GROUPS][t][i] || null == this.json[this.MOTION_GROUPS][t][i][this.SOUND] ? null : this.json[this.MOTION_GROUPS][t][i][this.SOUND]
-	}, r.prototype.getMotionFadeIn = function(t, i) {
-		return null == this.json[this.MOTION_GROUPS] || null == this.json[this.MOTION_GROUPS][t] || null == this.json[this.MOTION_GROUPS][t][i] || null == this.json[this.MOTION_GROUPS][t][i][this.FADE_IN] ? 1e3 : this.json[this.MOTION_GROUPS][t][i][this.FADE_IN]
-	}, r.prototype.getMotionFadeOut = function(t, i) {
-		return null == this.json[this.MOTION_GROUPS] || null == this.json[this.MOTION_GROUPS][t] || null == this.json[this.MOTION_GROUPS][t][i] || null == this.json[this.MOTION_GROUPS][t][i][this.FADE_OUT] ? 1e3 : this.json[this.MOTION_GROUPS][t][i][this.FADE_OUT]
-	}, r.prototype.getInitParamID = function(t) {
-		return null == this.json[this.INIT_PARAM] || null == this.json[this.INIT_PARAM][t] ? null : this.json[this.INIT_PARAM][t][this.ID]
-	}, r.prototype.getInitParamValue = function(t) {
-		return null == this.json[this.INIT_PARAM] || null == this.json[this.INIT_PARAM][t] ? NaN : this.json[this.INIT_PARAM][t][this.VALUE]
-	}, r.prototype.getInitPartsVisibleNum = function() {
-		return null == this.json[this.INIT_PARTS_VISIBLE] ? 0 : this.json[this.INIT_PARTS_VISIBLE].length
-	}, r.prototype.getInitPartsVisibleID = function(t) {
-		return null == this.json[this.INIT_PARTS_VISIBLE] || null == this.json[this.INIT_PARTS_VISIBLE][t] ? null : this.json[this.INIT_PARTS_VISIBLE][t][this.ID]
-	}, r.prototype.getInitPartsVisibleValue = function(t) {
-		return null == this.json[this.INIT_PARTS_VISIBLE] || null == this.json[this.INIT_PARTS_VISIBLE][t] ? NaN : this.json[this.INIT_PARTS_VISIBLE][t][this.VALUE]
-	}
-}]);
-//# sourceMappingURL=live2d.js.map
diff --git a/spaces/DEBO-PROJECT/DEBO-V1/bots/normal_debate.py b/spaces/DEBO-PROJECT/DEBO-V1/bots/normal_debate.py
deleted file mode 100644
index 69c12729a593cfc58e95926de7653111f8cf1bc3..0000000000000000000000000000000000000000
--- a/spaces/DEBO-PROJECT/DEBO-V1/bots/normal_debate.py
+++ /dev/null
@@ -1,397 +0,0 @@
-import re
-import random
-from langchain.prompts import PromptTemplate
-from modules.gpt_modules import gpt_call
-
-def erase_start_word_and_after(text, start_word):
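-    # Remove start_word and everything after it from text
-    # (used below to trim the other debaters' lines out of a GPT response)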
-    pattern = re.compile(re.escape(start_word) + '.*')
-    return re.sub(pattern, '', text)
-
-def nomal_debator(prompt, history, debate_subject, bot_role, history_num):
-    # Explain the debate rules (first turn only)
-    if history_num == 0:
-        print("history_num", history_num)
-
-        user_role = ""
-        bot_response = ""
-
-        debate_role = [
-            "first debater for the pro side", 
-            "first debater for the con side", 
-            "second debater for the pro side",
-            "second debater for the con side"
-        ]
-
-        # Pick the user's debate role at random
-        user_debate_role = random.choice(debate_role)
-        # The roles the user did not get become the bot's roles
-        bot_debate_role_list = [role for role in debate_role if role != user_debate_role]
-
-        print("user_debate_role", user_debate_role)
-        print("bot_debate_role_list", bot_debate_role_list)
-
-        debate_preset = "\n".join([
-            "Debate Rules: ",
-            "1) This debate will be divided into two teams, pro and con, with two debates on each team.",
-            "2) The order of speaking is: first debater for the pro side, first debater for the con side, second debater for the pro side, second debater for the con side.",
-            "3) Answer logically with an introduction, body, and conclusion.\n", #add this one.
-            "User debate role: " + user_debate_role,
-            "Bot debate roles: " + ", ".join(bot_debate_role_list) + "\n",
-            "Debate subject: " + debate_subject
-        ])
-
-        # If the user is the first speaker, we need to get the user's input before generating anything
-        if user_debate_role == debate_role[0]:
-            #print("user_debate_role", user_debate_role)
-            bot_preset = "\n".join([
-                debate_preset + "\n",
-                "It's your turn! Write your opinion!"
-            ])
-            bot_response = bot_preset
-            print("bot_response", bot_response)
-            #return bot_response
-        
-        # If the user is the second speaker, the bot generates the first speaker's response and then waits for the user's answer
-        elif user_debate_role == debate_role[1]:
-
-            bot_preset = "\n".join([
-                debate_preset,
-            ])
-
-            first_prompt_template = PromptTemplate(
-                input_variables=["prompt"],
-                template="\n".join([
-                    bot_preset, #persona
-                    "{prompt}",
-                    "Only say " + debate_role[0] + "\'s opinion after \':\'. Do not write " + debate_role[1] + "\'s " + "opinions, " + debate_role[2] + "\'s " + "opinions and " + debate_role[3] + "\'s " + "opinions.",
-                    debate_role[0] + ": "
-                    ])
-            )
-            first_bot_prompt = first_prompt_template.format(
-                prompt=""
-            )
-            first_response = gpt_call(first_bot_prompt)
-
-            # preprocess
-            # if first_response contain the first debater for the con side's opinion, remove it.
-            first_response = erase_start_word_and_after(first_response, debate_role[1])
-            first_response = erase_start_word_and_after(first_response, debate_role[2])
-            first_response = erase_start_word_and_after(first_response, debate_role[3])
-
-            #first_response = re.sub(debate_role[1] + ":.*", "", first_response)
-
-            bot_response = "\n".join([
-                bot_preset + "\n",
-                "-----------------------------------------------------------------",
-                "[First debater for the pro side]: " + "\n" + first_response + "\n",
-                "-----------------------------------------------------------------",
-                "It's your turn! Write your opinion!"
-            ])
-
-        # If the user is the third speaker, the bot generates the first and second speakers' responses and then waits for the user's answer
-        elif user_debate_role == debate_role[2]:
-
-            bot_preset = "\n".join([
-                debate_preset,
-            ])
-            # first
-            first_prompt_template = PromptTemplate(
-                input_variables=["prompt"],
-                template="\n".join([
-                    bot_preset, #persona
-                    "{prompt}",
-                    debate_role[0] + ": ",
-                    ])
-            )
-            first_bot_prompt = first_prompt_template.format(
-                prompt=""
-            )
-            first_response = gpt_call(first_bot_prompt)
-
-            # second
-            second_prompt_template = PromptTemplate(
-                input_variables=["first_prompt"],
-                template="\n".join([
-                    bot_preset, #persona
-                    "Only say " + debate_role[1] + "\'s opinion after \':\'. Do not write " + debate_role[0] + "\'s " + "opinions, " + debate_role[2] + "\'s " + "opinions and " + debate_role[3] + "\'s " + "opinions.",
-                    debate_role[0] + ": " + "{first_prompt}",
-                    debate_role[1] + ": "
-                    ])
-            )
-            second_bot_prompt = second_prompt_template.format(
-                first_prompt=first_response
-            )
-            second_response = gpt_call(second_bot_prompt)
-
-            # preprocess
-            # If first_response contains any later debaters' opinions, remove them.
-            first_response = erase_start_word_and_after(first_response, debate_role[1])
-            first_response = erase_start_word_and_after(first_response, debate_role[2])
-            first_response = erase_start_word_and_after(first_response, debate_role[3])
-            # If second_response contains any later debaters' opinions, remove them.
-            #second_response = re.sub(debate_role[2] + ":.*", "", second_response)
-            second_response = erase_start_word_and_after(second_response, debate_role[2])
-            second_response = erase_start_word_and_after(second_response, debate_role[3])
-
-            bot_response = "\n".join([
-                bot_preset + "\n",
-                "-----------------------------------------------------------------",
-                "[First debater for the pro side]: " + "\n" + first_response + "\n",
-                "-----------------------------------------------------------------",
-                "[First debater for the con side]: " + "\n" + second_response + "\n",
-                "-----------------------------------------------------------------",
-                "It's your turn! Write your opinion!"
-            ])
-
-
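-        # If the user speaks fourth, the bot generates the first, second, and third speakers'
-        # responses and then waits for the user's reply.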
-        elif user_debate_role == debate_role[3]:
-
-            bot_preset = "\n".join([
-                debate_preset,
-            ])
-
-            # first
-            first_prompt_template = PromptTemplate(
-                input_variables=["prompt"],
-                template="\n".join([
-                    bot_preset, #persona
-                    "{prompt}",
-                    debate_role[0] + ": ",
-                    ])
-            )
-            first_bot_prompt = first_prompt_template.format(
-                prompt=""
-            )
-            first_response = gpt_call(first_bot_prompt)
-
-            # second
-            second_prompt_template = PromptTemplate(
-                input_variables=["first_prompt"],
-                template="\n".join([
-                    bot_preset, #persona
-                    "Only say " + debate_role[1] + "'s opinion after \':\'. Do not write " + debate_role[0] + "\'s " + "opinions, " + debate_role[2] + "\'s " + "opinions and " + debate_role[3] + "\'s " + "opinions.",
-                    debate_role[0] + ": " + "{first_prompt}",
-                    debate_role[1] + ": "
-                    ])
-            )
-            second_bot_prompt = second_prompt_template.format(
-                first_prompt=first_response
-            )
-            second_response = gpt_call(second_bot_prompt)
-
-            # third
-            third_prompt_template = PromptTemplate(
-                input_variables=["first_prompt", "second_prompt"],
-                template="\n".join([
-                    bot_preset, #persona
-                    "Only say " + debate_role[2] + "\'s opinion after \':\'. Do not write " + debate_role[0] + "\'s " + "opinions, " + debate_role[1] + "\'s " + "opinions and " + debate_role[3] + "\'s " + "opinions.",
-                    debate_role[0] + ": " + "{first_prompt}",
-                    debate_role[1] + ": " + "{second_prompt}",
-                    debate_role[2] + ": "
-                    ])
-            )
-            third_bot_prompt = third_prompt_template.format(
-                first_prompt=first_response,
-                second_prompt=second_response
-            )
-            third_response = gpt_call(third_bot_prompt)
-
-            # preprocess
-            # If first_response contains any later debaters' opinions, remove them.
-            first_response = erase_start_word_and_after(first_response, debate_role[1])
-            first_response = erase_start_word_and_after(first_response, debate_role[2])
-            first_response = erase_start_word_and_after(first_response, debate_role[3])
-            # If second_response contains any later debaters' opinions, remove them.
-            #second_response = re.sub(debate_role[2] + ":.*", "", second_response)
-            second_response = erase_start_word_and_after(second_response, debate_role[2])
-            second_response = erase_start_word_and_after(second_response, debate_role[3])
-            # If third_response contains the last debater's opinion, remove it.
-            third_response = erase_start_word_and_after(third_response, debate_role[3])
-            #third_response = re.sub(debate_role[3] + ":.*", "", third_response)
-
-            bot_response = "\n".join([
-                bot_preset + "\n",
-                "-----------------------------------------------------------------",
-                "[First debater for the pro side]: " + "\n" + first_response + "\n",
-                "-----------------------------------------------------------------",
-                "[First debater for the con side]: " + "\n" + second_response + "\n",
-                "-----------------------------------------------------------------",
-                "[Second debater for the pro side]: " + "\n" + third_response + "\n",
-                "-----------------------------------------------------------------",
-                "It's your turn! Write your opinion!"
-            ])
-        else:
-            pass
-
-    # Answer and Ask Judgement.
-    if history_num == 1:
-
-        debate_role = [
-            "first debater for the pro side", 
-            "first debater for the con side", 
-            "second debater for the pro side",
-            "second debater for the con side"
-        ]
-
-        print("history1: ", history)
-
-        # If the user answered first, the bot gives the 2nd, 3rd, and 4th responses and then asks whether to judge the debate.
-        if "User debate role: first debater for the pro side" in history:
-
-            # second
-            second_prompt_template = PromptTemplate(
-                input_variables=["prompt"],
-                template="\n".join([
-                    history,
-                    "User: {prompt}",
-                    debate_role[1] + ": "
-                    ])
-            )
-            second_bot_prompt = second_prompt_template.format(
-                prompt=prompt
-            )
-            second_response = gpt_call(second_bot_prompt)
-
-
-            # third
-            third_prompt_template = PromptTemplate(
-                input_variables=["prompt"],
-                template="\n".join([
-                    history,
-                    "User: {prompt}",
-                    debate_role[1] + ": " + second_response,
-                    debate_role[2] + ": "
-                    ])
-            )
-            third_bot_prompt = third_prompt_template.format(
-                prompt=prompt
-            )
-            third_response = gpt_call(third_bot_prompt)
-
-            # fourth
-            fourth_prompt_template = PromptTemplate(
-                input_variables=["prompt"],
-                template="\n".join([
-                    history,
-                    "User: {prompt}",
-                    debate_role[1] + ": " + second_response,
-                    debate_role[2] + ": " + third_response,
-                    debate_role[3] + ": "
-                    ])
-            )
-            fourth_bot_prompt = fourth_prompt_template.format(
-                prompt=prompt
-            )
-            fourth_response = gpt_call(fourth_bot_prompt)
-
-            ask_judgement = "Do you want to be the judge of this debate? (If so, type anything.)"
-            bot_response = "\n".join([
-                "[First debater for the con side]: " + "\n" + second_response + "\n",
-                "-----------------------------------------------------------------",
-                "[Second debater for the pro side]: " + "\n" + third_response + "\n",
-                "-----------------------------------------------------------------",
-                "[Second debater for the con side]: " + "\n" + fourth_response + "\n",
-                "-----------------------------------------------------------------",
-                ask_judgement
-            ])
-
-        # If the user answered second, the bot gives the 3rd and 4th responses and then asks whether to judge the debate.
-        elif "User debate role: first debater for the con side" in history:
-
-            # third
-            third_prompt_template = PromptTemplate(
-                input_variables=["prompt"],
-                template="\n".join([
-                    history,
-                    "User: {prompt}",
-                    debate_role[2] + ": "
-                    ])
-            )
-            third_bot_prompt = third_prompt_template.format(
-                prompt=prompt
-            )
-            third_response = gpt_call(third_bot_prompt)
-
-            # fourth
-            fourth_prompt_template = PromptTemplate(
-                input_variables=["prompt"],
-                template="\n".join([
-                    history,
-                    "User: {prompt}",
-                    debate_role[2] + ": " + third_response,
-                    debate_role[3] + ": "
-                    ])
-            )
-            fourth_bot_prompt = fourth_prompt_template.format(
-                prompt=prompt
-            )
-            fourth_response = gpt_call(fourth_bot_prompt)
-
-            # ask_judgement
-            ask_judgement = "Do you want to be the judge of this debate? (If so, type anything.)"
-            bot_response = "\n".join([
-                "[Second debater for the pro side]: " + "\n" + third_response + "\n",
-                "-----------------------------------------------------------------",
-                "[Second debater for the con side]: " + "\n" + fourth_response + "\n",
-                "-----------------------------------------------------------------",
-                ask_judgement
-            ])
-
-        # If the user answered third, the bot gives the 4th response and then asks whether to judge the debate.
-        elif "User debate role: second debater for the pro side" in history:
-
-            fourth_prompt_template = PromptTemplate(
-                input_variables=["prompt"],
-                template="\n".join([
-                    history,
-                    "User: {prompt}",
-                    debate_role[3] + ": "
-                    ])
-            )
-            fourth_bot_prompt = fourth_prompt_template.format(
-                prompt=prompt
-            )
-            fourth_response = gpt_call(fourth_bot_prompt)
-
-
-
-            ask_judgement = "Do you want to be the judge of this debate? (If so, type anything.)"
-            bot_response = "\n".join([
-                "[Second debater for the con side]: " + "\n" + fourth_response + "\n",
-                "-----------------------------------------------------------------",
-                ask_judgement
-            ])
-
-        # If the user answered fourth, just ask right away whether to judge the debate.
-        elif "User debate role: second debater for the con side" in history:
-            ask_judgement = "Do you want to be the judge of this debate? (If so, type anything.)"
-            bot_response = ask_judgement
-        else:
-            pass
-
-    # Judgement.
-    if history_num == 2:
-        judgement_word_list = "\n".join([
-            "!!Instruction!",
-            "You are now the judge of this debate. Evaluate the debate according to the rules below.",
-            "Rule 1. Decide between the pro and con teams.",
-            "Rule 2. Summarize the debate as a whole and what each debater said.",
-            "Rule 3. For each debater, explain what was persuasive and what made the difference between winning and losing.",
-        ])
-
-        judgement_prompt_template = PromptTemplate(
-            input_variables=["prompt"],
-            template="\n".join([
-                history,
-                "{prompt}",
-                judgement_word_list,
-                "Judgement: "
-                ])
-        )
-        judgement_bot_prompt = judgement_prompt_template.format(
-                prompt=""
-        )
-        judgement_response = gpt_call(judgement_bot_prompt)
-
-        bot_response = "\n".join([
-                "[Judgement]: " + "\n" + judgement_response + "\n",
-            ])
-        
-    return bot_response
\ No newline at end of file
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/ImageFile.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/ImageFile.py
deleted file mode 100644
index 8e4f7dfb2c8854ee3a1f65efd6535732df1764aa..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/ImageFile.py
+++ /dev/null
@@ -1,773 +0,0 @@
-#
-# The Python Imaging Library.
-# $Id$
-#
-# base class for image file handlers
-#
-# history:
-# 1995-09-09 fl   Created
-# 1996-03-11 fl   Fixed load mechanism.
-# 1996-04-15 fl   Added pcx/xbm decoders.
-# 1996-04-30 fl   Added encoders.
-# 1996-12-14 fl   Added load helpers
-# 1997-01-11 fl   Use encode_to_file where possible
-# 1997-08-27 fl   Flush output in _save
-# 1998-03-05 fl   Use memory mapping for some modes
-# 1999-02-04 fl   Use memory mapping also for "I;16" and "I;16B"
-# 1999-05-31 fl   Added image parser
-# 2000-10-12 fl   Set readonly flag on memory-mapped images
-# 2002-03-20 fl   Use better messages for common decoder errors
-# 2003-04-21 fl   Fall back on mmap/map_buffer if map is not available
-# 2003-10-30 fl   Added StubImageFile class
-# 2004-02-25 fl   Made incremental parser more robust
-#
-# Copyright (c) 1997-2004 by Secret Labs AB
-# Copyright (c) 1995-2004 by Fredrik Lundh
-#
-# See the README file for information on usage and redistribution.
-#
-
-import io
-import itertools
-import struct
-import sys
-
-from . import Image
-from ._util import is_path
-
-MAXBLOCK = 65536
-
-SAFEBLOCK = 1024 * 1024
-
-LOAD_TRUNCATED_IMAGES = False
-"""Whether or not to load truncated image files. User code may change this."""
-
-ERRORS = {
-    -1: "image buffer overrun error",
-    -2: "decoding error",
-    -3: "unknown error",
-    -8: "bad configuration",
-    -9: "out of memory error",
-}
-"""
-Dict of known error codes returned from :meth:`.PyDecoder.decode`,
-:meth:`.PyEncoder.encode`, :meth:`.PyEncoder.encode_to_pyfd` and
-:meth:`.PyEncoder.encode_to_file`.
-"""
-
-
-#
-# --------------------------------------------------------------------
-# Helpers
-
-
-def raise_oserror(error):
-    try:
-        msg = Image.core.getcodecstatus(error)
-    except AttributeError:
-        msg = ERRORS.get(error)
-    if not msg:
-        msg = f"decoder error {error}"
-    msg += " when reading image file"
-    raise OSError(msg)
-
-
-def _tilesort(t):
-    # sort on offset
-    return t[2]
-
-
-#
-# --------------------------------------------------------------------
-# ImageFile base class
-
-
-class ImageFile(Image.Image):
-    """Base class for image file format handlers."""
-
-    def __init__(self, fp=None, filename=None):
-        super().__init__()
-
-        self._min_frame = 0
-
-        self.custom_mimetype = None
-
-        self.tile = None
-        """ A list of tile descriptors, or ``None`` """
-
-        self.readonly = 1  # until we know better
-
-        self.decoderconfig = ()
-        self.decodermaxblock = MAXBLOCK
-
-        if is_path(fp):
-            # filename
-            self.fp = open(fp, "rb")
-            self.filename = fp
-            self._exclusive_fp = True
-        else:
-            # stream
-            self.fp = fp
-            self.filename = filename
-            # can be overridden
-            self._exclusive_fp = None
-
-        try:
-            try:
-                self._open()
-            except (
-                IndexError,  # end of data
-                TypeError,  # end of data (ord)
-                KeyError,  # unsupported mode
-                EOFError,  # got header but not the first frame
-                struct.error,
-            ) as v:
-                raise SyntaxError(v) from v
-
-            if not self.mode or self.size[0] <= 0 or self.size[1] <= 0:
-                msg = "not identified by this driver"
-                raise SyntaxError(msg)
-        except BaseException:
-            # close the file only if we have opened it in this constructor
-            if self._exclusive_fp:
-                self.fp.close()
-            raise
-
-    def get_format_mimetype(self):
-        if self.custom_mimetype:
-            return self.custom_mimetype
-        if self.format is not None:
-            return Image.MIME.get(self.format.upper())
-
-    def __setstate__(self, state):
-        self.tile = []
-        super().__setstate__(state)
-
-    def verify(self):
-        """Check file integrity"""
-
-        # raise exception if something's wrong.  must be called
-        # directly after open, and closes file when finished.
-        if self._exclusive_fp:
-            self.fp.close()
-        self.fp = None
-
-    def load(self):
-        """Load image data based on tile list"""
-
-        if self.tile is None:
-            msg = "cannot load this image"
-            raise OSError(msg)
-
-        pixel = Image.Image.load(self)
-        if not self.tile:
-            return pixel
-
-        self.map = None
-        use_mmap = self.filename and len(self.tile) == 1
-        # As of pypy 2.1.0, memory mapping was failing here.
-        use_mmap = use_mmap and not hasattr(sys, "pypy_version_info")
-
-        readonly = 0
-
-        # look for read/seek overrides
-        try:
-            read = self.load_read
-            # don't use mmap if there are custom read/seek functions
-            use_mmap = False
-        except AttributeError:
-            read = self.fp.read
-
-        try:
-            seek = self.load_seek
-            use_mmap = False
-        except AttributeError:
-            seek = self.fp.seek
-
-        if use_mmap:
-            # try memory mapping
-            decoder_name, extents, offset, args = self.tile[0]
-            if (
-                decoder_name == "raw"
-                and len(args) >= 3
-                and args[0] == self.mode
-                and args[0] in Image._MAPMODES
-            ):
-                try:
-                    # use mmap, if possible
-                    import mmap
-
-                    with open(self.filename) as fp:
-                        self.map = mmap.mmap(fp.fileno(), 0, access=mmap.ACCESS_READ)
-                    if offset + self.size[1] * args[1] > self.map.size():
-                        # buffer is not large enough
-                        raise OSError
-                    self.im = Image.core.map_buffer(
-                        self.map, self.size, decoder_name, offset, args
-                    )
-                    readonly = 1
-                    # After trashing self.im,
-                    # we might need to reload the palette data.
-                    if self.palette:
-                        self.palette.dirty = 1
-                except (AttributeError, OSError, ImportError):
-                    self.map = None
-
-        self.load_prepare()
-        err_code = -3  # initialize to unknown error
-        if not self.map:
-            # sort tiles in file order
-            self.tile.sort(key=_tilesort)
-
-            try:
-                # FIXME: This is a hack to handle TIFF's JpegTables tag.
-                prefix = self.tile_prefix
-            except AttributeError:
-                prefix = b""
-
-            # Remove consecutive duplicates that only differ by their offset
-            self.tile = [
-                list(tiles)[-1]
-                for _, tiles in itertools.groupby(
-                    self.tile, lambda tile: (tile[0], tile[1], tile[3])
-                )
-            ]
-            for decoder_name, extents, offset, args in self.tile:
-                seek(offset)
-                decoder = Image._getdecoder(
-                    self.mode, decoder_name, args, self.decoderconfig
-                )
-                try:
-                    decoder.setimage(self.im, extents)
-                    if decoder.pulls_fd:
-                        decoder.setfd(self.fp)
-                        err_code = decoder.decode(b"")[1]
-                    else:
-                        b = prefix
-                        while True:
-                            try:
-                                s = read(self.decodermaxblock)
-                            except (IndexError, struct.error) as e:
-                                # truncated png/gif
-                                if LOAD_TRUNCATED_IMAGES:
-                                    break
-                                else:
-                                    msg = "image file is truncated"
-                                    raise OSError(msg) from e
-
-                            if not s:  # truncated jpeg
-                                if LOAD_TRUNCATED_IMAGES:
-                                    break
-                                else:
-                                    msg = (
-                                        "image file is truncated "
-                                        f"({len(b)} bytes not processed)"
-                                    )
-                                    raise OSError(msg)
-
-                            b = b + s
-                            n, err_code = decoder.decode(b)
-                            if n < 0:
-                                break
-                            b = b[n:]
-                finally:
-                    # Need to cleanup here to prevent leaks
-                    decoder.cleanup()
-
-        self.tile = []
-        self.readonly = readonly
-
-        self.load_end()
-
-        if self._exclusive_fp and self._close_exclusive_fp_after_loading:
-            self.fp.close()
-        self.fp = None
-
-        if not self.map and not LOAD_TRUNCATED_IMAGES and err_code < 0:
-            # still raised if decoder fails to return anything
-            raise_oserror(err_code)
-
-        return Image.Image.load(self)
-
-    def load_prepare(self):
-        # create image memory if necessary
-        if not self.im or self.im.mode != self.mode or self.im.size != self.size:
-            self.im = Image.core.new(self.mode, self.size)
-        # create palette (optional)
-        if self.mode == "P":
-            Image.Image.load(self)
-
-    def load_end(self):
-        # may be overridden
-        pass
-
-    # may be defined for contained formats
-    # def load_seek(self, pos):
-    #     pass
-
-    # may be defined for blocked formats (e.g. PNG)
-    # def load_read(self, bytes):
-    #     pass
-
-    def _seek_check(self, frame):
-        if (
-            frame < self._min_frame
-            # Only check upper limit on frames if additional seek operations
-            # are not required to do so
-            or (
-                not (hasattr(self, "_n_frames") and self._n_frames is None)
-                and frame >= self.n_frames + self._min_frame
-            )
-        ):
-            msg = "attempt to seek outside sequence"
-            raise EOFError(msg)
-
-        return self.tell() != frame
-
-
-class StubImageFile(ImageFile):
-    """
-    Base class for stub image loaders.
-
-    A stub loader is an image loader that can identify files of a
-    certain format, but relies on external code to load the file.
-    """
-
-    def _open(self):
-        msg = "StubImageFile subclass must implement _open"
-        raise NotImplementedError(msg)
-
-    def load(self):
-        loader = self._load()
-        if loader is None:
-            msg = f"cannot find loader for this {self.format} file"
-            raise OSError(msg)
-        image = loader.load(self)
-        assert image is not None
-        # become the other object (!)
-        self.__class__ = image.__class__
-        self.__dict__ = image.__dict__
-        return image.load()
-
-    def _load(self):
-        """(Hook) Find actual image loader."""
-        msg = "StubImageFile subclass must implement _load"
-        raise NotImplementedError(msg)
-
-
-class Parser:
-    """
-    Incremental image parser.  This class implements the standard
-    feed/close consumer interface.
-    """
-
-    incremental = None
-    image = None
-    data = None
-    decoder = None
-    offset = 0
-    finished = 0
-
-    def reset(self):
-        """
-        (Consumer) Reset the parser.  Note that you can only call this
-        method immediately after you've created a parser; parser
-        instances cannot be reused.
-        """
-        assert self.data is None, "cannot reuse parsers"
-
-    def feed(self, data):
-        """
-        (Consumer) Feed data to the parser.
-
-        :param data: A string buffer.
-        :exception OSError: If the parser failed to parse the image file.
-        """
-        # collect data
-
-        if self.finished:
-            return
-
-        if self.data is None:
-            self.data = data
-        else:
-            self.data = self.data + data
-
-        # parse what we have
-        if self.decoder:
-            if self.offset > 0:
-                # skip header
-                skip = min(len(self.data), self.offset)
-                self.data = self.data[skip:]
-                self.offset = self.offset - skip
-                if self.offset > 0 or not self.data:
-                    return
-
-            n, e = self.decoder.decode(self.data)
-
-            if n < 0:
-                # end of stream
-                self.data = None
-                self.finished = 1
-                if e < 0:
-                    # decoding error
-                    self.image = None
-                    raise_oserror(e)
-                else:
-                    # end of image
-                    return
-            self.data = self.data[n:]
-
-        elif self.image:
-            # if we end up here with no decoder, this file cannot
-            # be incrementally parsed.  wait until we've gotten all
-            # available data
-            pass
-
-        else:
-            # attempt to open this file
-            try:
-                with io.BytesIO(self.data) as fp:
-                    im = Image.open(fp)
-            except OSError:
-                # traceback.print_exc()
-                pass  # not enough data
-            else:
-                flag = hasattr(im, "load_seek") or hasattr(im, "load_read")
-                if flag or len(im.tile) != 1:
-                    # custom load code, or multiple tiles
-                    self.decode = None
-                else:
-                    # initialize decoder
-                    im.load_prepare()
-                    d, e, o, a = im.tile[0]
-                    im.tile = []
-                    self.decoder = Image._getdecoder(im.mode, d, a, im.decoderconfig)
-                    self.decoder.setimage(im.im, e)
-
-                    # calculate decoder offset
-                    self.offset = o
-                    if self.offset <= len(self.data):
-                        self.data = self.data[self.offset :]
-                        self.offset = 0
-
-                self.image = im
-
-    def __enter__(self):
-        return self
-
-    def __exit__(self, *args):
-        self.close()
-
-    def close(self):
-        """
-        (Consumer) Close the stream.
-
-        :returns: An image object.
-        :exception OSError: If the parser failed to parse the image file either
-                            because it cannot be identified or cannot be
-                            decoded.
-        """
-        # finish decoding
-        if self.decoder:
-            # get rid of what's left in the buffers
-            self.feed(b"")
-            self.data = self.decoder = None
-            if not self.finished:
-                msg = "image was incomplete"
-                raise OSError(msg)
-        if not self.image:
-            msg = "cannot parse this image"
-            raise OSError(msg)
-        if self.data:
-            # incremental parsing not possible; reopen the file
-            # now that we have all data
-            with io.BytesIO(self.data) as fp:
-                try:
-                    self.image = Image.open(fp)
-                finally:
-                    self.image.load()
-        return self.image
-
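-# Illustrative usage of the incremental Parser above; "image.png" is a placeholder path,
-# not a file referenced anywhere else in this module:
-#
-#     parser = Parser()
-#     with open("image.png", "rb") as f:
-#         while True:
-#             chunk = f.read(1024)
-#             if not chunk:
-#                 break
-#             parser.feed(chunk)
-#     im = parser.close()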
-
-# --------------------------------------------------------------------
-
-
-def _save(im, fp, tile, bufsize=0):
-    """Helper to save image based on tile list
-
-    :param im: Image object.
-    :param fp: File object.
-    :param tile: Tile list.
-    :param bufsize: Optional buffer size
-    """
-
-    im.load()
-    if not hasattr(im, "encoderconfig"):
-        im.encoderconfig = ()
-    tile.sort(key=_tilesort)
-    # FIXME: make MAXBLOCK a configuration parameter
-    # It would be great if we could have the encoder specify what it needs
-    # But, it would need at least the image size in most cases. RawEncode is
-    # a tricky case.
-    bufsize = max(MAXBLOCK, bufsize, im.size[0] * 4)  # see RawEncode.c
-    try:
-        fh = fp.fileno()
-        fp.flush()
-        _encode_tile(im, fp, tile, bufsize, fh)
-    except (AttributeError, io.UnsupportedOperation) as exc:
-        _encode_tile(im, fp, tile, bufsize, None, exc)
-    if hasattr(fp, "flush"):
-        fp.flush()
-
-
-def _encode_tile(im, fp, tile, bufsize, fh, exc=None):
-    for e, b, o, a in tile:
-        if o > 0:
-            fp.seek(o)
-        encoder = Image._getencoder(im.mode, e, a, im.encoderconfig)
-        try:
-            encoder.setimage(im.im, b)
-            if encoder.pushes_fd:
-                encoder.setfd(fp)
-                errcode = encoder.encode_to_pyfd()[1]
-            else:
-                if exc:
-                    # compress to Python file-compatible object
-                    while True:
-                        errcode, data = encoder.encode(bufsize)[1:]
-                        fp.write(data)
-                        if errcode:
-                            break
-                else:
-                    # slight speedup: compress to real file object
-                    errcode = encoder.encode_to_file(fh, bufsize)
-            if errcode < 0:
-                msg = f"encoder error {errcode} when writing image file"
-                raise OSError(msg) from exc
-        finally:
-            encoder.cleanup()
-
-
-def _safe_read(fp, size):
-    """
-    Reads large blocks in a safe way.  Unlike fp.read(n), this function
-    doesn't trust the user.  If the requested size is larger than
-    SAFEBLOCK, the file is read block by block.
-
-    :param fp: File handle.  Must implement a read method.
-    :param size: Number of bytes to read.
-    :returns: A string containing size bytes of data.
-
-    Raises an OSError if the file is truncated and the read cannot be completed
-
-    """
-    if size <= 0:
-        return b""
-    if size <= SAFEBLOCK:
-        data = fp.read(size)
-        if len(data) < size:
-            msg = "Truncated File Read"
-            raise OSError(msg)
-        return data
-    data = []
-    remaining_size = size
-    while remaining_size > 0:
-        block = fp.read(min(remaining_size, SAFEBLOCK))
-        if not block:
-            break
-        data.append(block)
-        remaining_size -= len(block)
-    if sum(len(d) for d in data) < size:
-        msg = "Truncated File Read"
-        raise OSError(msg)
-    return b"".join(data)
-
-
-class PyCodecState:
-    def __init__(self):
-        self.xsize = 0
-        self.ysize = 0
-        self.xoff = 0
-        self.yoff = 0
-
-    def extents(self):
-        return self.xoff, self.yoff, self.xoff + self.xsize, self.yoff + self.ysize
-
-
-class PyCodec:
-    def __init__(self, mode, *args):
-        self.im = None
-        self.state = PyCodecState()
-        self.fd = None
-        self.mode = mode
-        self.init(args)
-
-    def init(self, args):
-        """
-        Override to perform codec specific initialization
-
-        :param args: Array of args items from the tile entry
-        :returns: None
-        """
-        self.args = args
-
-    def cleanup(self):
-        """
-        Override to perform codec specific cleanup
-
-        :returns: None
-        """
-        pass
-
-    def setfd(self, fd):
-        """
-        Called from ImageFile to set the Python file-like object
-
-        :param fd: A Python file-like object
-        :returns: None
-        """
-        self.fd = fd
-
-    def setimage(self, im, extents=None):
-        """
-        Called from ImageFile to set the core output image for the codec
-
-        :param im: A core image object
-        :param extents: a 4 tuple of (x0, y0, x1, y1) defining the rectangle
-            for this tile
-        :returns: None
-        """
-
-        # following c code
-        self.im = im
-
-        if extents:
-            (x0, y0, x1, y1) = extents
-        else:
-            (x0, y0, x1, y1) = (0, 0, 0, 0)
-
-        if x0 == 0 and x1 == 0:
-            self.state.xsize, self.state.ysize = self.im.size
-        else:
-            self.state.xoff = x0
-            self.state.yoff = y0
-            self.state.xsize = x1 - x0
-            self.state.ysize = y1 - y0
-
-        if self.state.xsize <= 0 or self.state.ysize <= 0:
-            msg = "Size cannot be negative"
-            raise ValueError(msg)
-
-        if (
-            self.state.xsize + self.state.xoff > self.im.size[0]
-            or self.state.ysize + self.state.yoff > self.im.size[1]
-        ):
-            msg = "Tile cannot extend outside image"
-            raise ValueError(msg)
-
-
-class PyDecoder(PyCodec):
-    """
-    Python implementation of a format decoder. Override this class and
-    add the decoding logic in the :meth:`decode` method.
-
-    See :ref:`Writing Your Own File Codec in Python`
-    """
-
-    _pulls_fd = False
-
-    @property
-    def pulls_fd(self):
-        return self._pulls_fd
-
-    def decode(self, buffer):
-        """
-        Override to perform the decoding process.
-
-        :param buffer: A bytes object with the data to be decoded.
-        :returns: A tuple of ``(bytes consumed, errcode)``.
-            If finished with decoding return -1 for the bytes consumed.
-            Err codes are from :data:`.ImageFile.ERRORS`.
-        """
-        raise NotImplementedError()
-
-    def set_as_raw(self, data, rawmode=None):
-        """
-        Convenience method to set the internal image from a stream of raw data
-
-        :param data: Bytes to be set
-        :param rawmode: The rawmode to be used for the decoder.
-            If not specified, it will default to the mode of the image
-        :returns: None
-        """
-
-        if not rawmode:
-            rawmode = self.mode
-        d = Image._getdecoder(self.mode, "raw", rawmode)
-        d.setimage(self.im, self.state.extents())
-        s = d.decode(data)
-
-        if s[0] >= 0:
-            msg = "not enough image data"
-            raise ValueError(msg)
-        if s[1] != 0:
-            msg = "cannot decode image data"
-            raise ValueError(msg)
-
-
-class PyEncoder(PyCodec):
-    """
-    Python implementation of a format encoder. Override this class and
-    add the encoding logic in the :meth:`encode` method.
-
-    See :ref:`Writing Your Own File Codec in Python`
-    """
-
-    _pushes_fd = False
-
-    @property
-    def pushes_fd(self):
-        return self._pushes_fd
-
-    def encode(self, bufsize):
-        """
-        Override to perform the encoding process.
-
-        :param bufsize: Buffer size.
-        :returns: A tuple of ``(bytes encoded, errcode, bytes)``.
-            If finished with encoding return 1 for the error code.
-            Err codes are from :data:`.ImageFile.ERRORS`.
-        """
-        raise NotImplementedError()
-
-    def encode_to_pyfd(self):
-        """
-        If ``pushes_fd`` is ``True``, then this method will be used,
-        and ``encode()`` will only be called once.
-
-        :returns: A tuple of ``(bytes consumed, errcode)``.
-            Err codes are from :data:`.ImageFile.ERRORS`.
-        """
-        if not self.pushes_fd:
-            return 0, -8  # bad configuration
-        bytes_consumed, errcode, data = self.encode(0)
-        if data:
-            self.fd.write(data)
-        return bytes_consumed, errcode
-
-    def encode_to_file(self, fh, bufsize):
-        """
-        :param fh: File handle.
-        :param bufsize: Buffer size.
-
-        :returns: If finished successfully, return 0.
-            Otherwise, return an error code. Err codes are from
-            :data:`.ImageFile.ERRORS`.
-        """
-        errcode = 0
-        while errcode == 0:
-            status, errcode, buf = self.encode(bufsize)
-            if status > 0:
-                fh.write(buf[status:])
-        return errcode
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/contourpy/util/data.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/contourpy/util/data.py
deleted file mode 100644
index e6ba9a976c2aa4cabbf0a6031400f0d910b59ac3..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/contourpy/util/data.py
+++ /dev/null
@@ -1,78 +0,0 @@
-from __future__ import annotations
-
-from typing import TYPE_CHECKING, Any
-
-import numpy as np
-
-if TYPE_CHECKING:
-    from contourpy._contourpy import CoordinateArray
-
-
-def simple(
-    shape: tuple[int, int], want_mask: bool = False,
-) -> tuple[CoordinateArray, CoordinateArray, CoordinateArray | np.ma.MaskedArray[Any, Any]]:
-    """Return simple test data consisting of the sum of two gaussians.
-
-    Args:
-        shape (tuple(int, int)): 2D shape of data to return.
-        want_mask (bool, optional): Whether test data should be masked or not, default ``False``.
-
-    Return:
-        Tuple of 3 arrays: ``x``, ``y``, ``z`` test data, ``z`` will be masked if
-        ``want_mask=True``.
-    """
-    ny, nx = shape
-    x = np.arange(nx, dtype=np.float64)
-    y = np.arange(ny, dtype=np.float64)
-    x, y = np.meshgrid(x, y)
-
-    xscale = nx - 1.0
-    yscale = ny - 1.0
-
-    # z is sum of 2D gaussians.
-    amp = np.asarray([1.0, -1.0, 0.8, -0.9, 0.7])
-    mid = np.asarray([[0.4, 0.2], [0.3, 0.8], [0.9, 0.75], [0.7, 0.3], [0.05, 0.7]])
-    width = np.asarray([0.4, 0.2, 0.2, 0.2, 0.1])
-
-    z = np.zeros_like(x)
-    for i in range(len(amp)):
-        z += amp[i]*np.exp(-((x/xscale - mid[i, 0])**2 + (y/yscale - mid[i, 1])**2) / width[i]**2)
-
-    if want_mask:
-        mask = np.logical_or(
-            ((x/xscale - 1.0)**2 / 0.2 + (y/yscale - 0.0)**2 / 0.1) < 1.0,
-            ((x/xscale - 0.2)**2 / 0.02 + (y/yscale - 0.45)**2 / 0.08) < 1.0,
-        )
-        z = np.ma.array(z, mask=mask)  # type: ignore[no-untyped-call]
-
-    return x, y, z
-
-
-def random(
-    shape: tuple[int, int], seed: int = 2187, mask_fraction: float = 0.0,
-) -> tuple[CoordinateArray, CoordinateArray, CoordinateArray | np.ma.MaskedArray[Any, Any]]:
-    """Return random test data.
-
-    Args:
-        shape (tuple(int, int)): 2D shape of data to return.
-        seed (int, optional): Seed for random number generator, default 2187.
-        mask_fraction (float, optional): Fraction of elements to mask, default 0.
-
-    Return:
-        Tuple of 3 arrays: ``x``, ``y``, ``z`` test data, ``z`` will be masked if
-        ``mask_fraction`` is greater than zero.
-    """
-    ny, nx = shape
-    x = np.arange(nx, dtype=np.float64)
-    y = np.arange(ny, dtype=np.float64)
-    x, y = np.meshgrid(x, y)
-
-    rng = np.random.default_rng(seed)
-    z = rng.uniform(size=shape)
-
-    if mask_fraction > 0.0:
-        mask_fraction = min(mask_fraction, 0.99)
-        mask = rng.uniform(size=shape) < mask_fraction
-        z = np.ma.array(z, mask=mask)  # type: ignore[no-untyped-call]
-
-    return x, y, z
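-
-
-# Illustrative usage sketch; contour_generator is assumed to be contourpy's top-level
-# factory, and the shape/level values below are arbitrary examples:
-#
-#     from contourpy import contour_generator
-#
-#     x, y, z = simple((40, 50), want_mask=True)
-#     cont_gen = contour_generator(x, y, z)
-#     lines = cont_gen.lines(0.5)  # contour lines at z == 0.5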
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/misc/xmlReader.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/misc/xmlReader.py
deleted file mode 100644
index d8e502f141e9cb5df6ea11352b565c9a9cd4aa3d..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/misc/xmlReader.py
+++ /dev/null
@@ -1,188 +0,0 @@
-from fontTools import ttLib
-from fontTools.misc.textTools import safeEval
-from fontTools.ttLib.tables.DefaultTable import DefaultTable
-import sys
-import os
-import logging
-
-
-log = logging.getLogger(__name__)
-
-
-class TTXParseError(Exception):
-    pass
-
-
-BUFSIZE = 0x4000
-
-
-class XMLReader(object):
-    def __init__(
-        self, fileOrPath, ttFont, progress=None, quiet=None, contentOnly=False
-    ):
-        if fileOrPath == "-":
-            fileOrPath = sys.stdin
-        if not hasattr(fileOrPath, "read"):
-            self.file = open(fileOrPath, "rb")
-            self._closeStream = True
-        else:
-            # assume readable file object
-            self.file = fileOrPath
-            self._closeStream = False
-        self.ttFont = ttFont
-        self.progress = progress
-        if quiet is not None:
-            from fontTools.misc.loggingTools import deprecateArgument
-
-            deprecateArgument("quiet", "configure logging instead")
-            self.quiet = quiet
-        self.root = None
-        self.contentStack = []
-        self.contentOnly = contentOnly
-        self.stackSize = 0
-
-    def read(self, rootless=False):
-        if rootless:
-            self.stackSize += 1
-        if self.progress:
-            self.file.seek(0, 2)
-            fileSize = self.file.tell()
-            self.progress.set(0, fileSize // 100 or 1)
-            self.file.seek(0)
-        self._parseFile(self.file)
-        if self._closeStream:
-            self.close()
-        if rootless:
-            self.stackSize -= 1
-
-    def close(self):
-        self.file.close()
-
-    def _parseFile(self, file):
-        from xml.parsers.expat import ParserCreate
-
-        parser = ParserCreate()
-        parser.StartElementHandler = self._startElementHandler
-        parser.EndElementHandler = self._endElementHandler
-        parser.CharacterDataHandler = self._characterDataHandler
-
-        pos = 0
-        while True:
-            chunk = file.read(BUFSIZE)
-            if not chunk:
-                parser.Parse(chunk, 1)
-                break
-            pos = pos + len(chunk)
-            if self.progress:
-                self.progress.set(pos // 100)
-            parser.Parse(chunk, 0)
-
-    def _startElementHandler(self, name, attrs):
-        if self.stackSize == 1 and self.contentOnly:
-            # We already know the table we're parsing, skip
-            # parsing the table tag and continue to
-            # stack '2' which begins parsing content
-            self.contentStack.append([])
-            self.stackSize = 2
-            return
-        stackSize = self.stackSize
-        self.stackSize = stackSize + 1
-        subFile = attrs.get("src")
-        if subFile is not None:
-            if hasattr(self.file, "name"):
-                # if file has a name, get its parent directory
-                dirname = os.path.dirname(self.file.name)
-            else:
-                # else fall back to using the current working directory
-                dirname = os.getcwd()
-            subFile = os.path.join(dirname, subFile)
-        if not stackSize:
-            if name != "ttFont":
-                raise TTXParseError("illegal root tag: %s" % name)
-            if self.ttFont.reader is None and not self.ttFont.tables:
-                sfntVersion = attrs.get("sfntVersion")
-                if sfntVersion is not None:
-                    if len(sfntVersion) != 4:
-                        sfntVersion = safeEval('"' + sfntVersion + '"')
-                    self.ttFont.sfntVersion = sfntVersion
-            self.contentStack.append([])
-        elif stackSize == 1:
-            if subFile is not None:
-                subReader = XMLReader(subFile, self.ttFont, self.progress)
-                subReader.read()
-                self.contentStack.append([])
-                return
-            tag = ttLib.xmlToTag(name)
-            msg = "Parsing '%s' table..." % tag
-            if self.progress:
-                self.progress.setLabel(msg)
-            log.info(msg)
-            if tag == "GlyphOrder":
-                tableClass = ttLib.GlyphOrder
-            elif "ERROR" in attrs or ("raw" in attrs and safeEval(attrs["raw"])):
-                tableClass = DefaultTable
-            else:
-                tableClass = ttLib.getTableClass(tag)
-                if tableClass is None:
-                    tableClass = DefaultTable
-            if tag == "loca" and tag in self.ttFont:
-                # Special-case the 'loca' table as we need the
-                #    original if the 'glyf' table isn't recompiled.
-                self.currentTable = self.ttFont[tag]
-            else:
-                self.currentTable = tableClass(tag)
-                self.ttFont[tag] = self.currentTable
-            self.contentStack.append([])
-        elif stackSize == 2 and subFile is not None:
-            subReader = XMLReader(subFile, self.ttFont, self.progress, contentOnly=True)
-            subReader.read()
-            self.contentStack.append([])
-            self.root = subReader.root
-        elif stackSize == 2:
-            self.contentStack.append([])
-            self.root = (name, attrs, self.contentStack[-1])
-        else:
-            l = []
-            self.contentStack[-1].append((name, attrs, l))
-            self.contentStack.append(l)
-
-    def _characterDataHandler(self, data):
-        if self.stackSize > 1:
-            # parser parses in chunks, so we may get multiple calls
-            # for the same text node; thus we need to append the data
-            # to the last item in the content stack:
-            # https://github.com/fonttools/fonttools/issues/2614
-            if (
-                data != "\n"
-                and self.contentStack[-1]
-                and isinstance(self.contentStack[-1][-1], str)
-                and self.contentStack[-1][-1] != "\n"
-            ):
-                self.contentStack[-1][-1] += data
-            else:
-                self.contentStack[-1].append(data)
-
-    def _endElementHandler(self, name):
-        self.stackSize = self.stackSize - 1
-        del self.contentStack[-1]
-        if not self.contentOnly:
-            if self.stackSize == 1:
-                self.root = None
-            elif self.stackSize == 2:
-                name, attrs, content = self.root
-                self.currentTable.fromXML(name, attrs, content, self.ttFont)
-                self.root = None
-
-
-class ProgressPrinter(object):
-    def __init__(self, title, maxval=100):
-        print(title)
-
-    def set(self, val, maxval=None):
-        pass
-
-    def increment(self, val=1):
-        pass
-
-    def setLabel(self, text):
-        print(text)
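-
-
-# Illustrative usage sketch; "font.ttx" is a placeholder path:
-#
-#     from fontTools.ttLib import TTFont
-#
-#     font = TTFont()
-#     XMLReader("font.ttx", font).read()  # populates font's tables from the TTX file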
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fsspec/transaction.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fsspec/transaction.py
deleted file mode 100644
index df98353d5754fc6b82a6d06d80b87e45ed698f1f..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fsspec/transaction.py
+++ /dev/null
@@ -1,81 +0,0 @@
-class Transaction(object):
-    """Filesystem transaction write context
-
-    Gathers files for deferred commit or discard, so that several write
-    operations can be finalized semi-atomically. This works by having this
-    instance as the ``.transaction`` attribute of the given filesystem
-    """
-
-    def __init__(self, fs):
-        """
-        Parameters
-        ----------
-        fs: FileSystem instance
-        """
-        self.fs = fs
-        self.files = []
-
-    def __enter__(self):
-        self.start()
-
-    def __exit__(self, exc_type, exc_val, exc_tb):
-        """End transaction and commit, if exit is not due to exception"""
-        # only commit if there was no exception
-        self.complete(commit=exc_type is None)
-        self.fs._intrans = False
-        self.fs._transaction = None
-
-    def start(self):
-        """Start a transaction on this FileSystem"""
-        self.files = []  # clean up after previous failed completions
-        self.fs._intrans = True
-
-    def complete(self, commit=True):
-        """Finish transaction: commit or discard all deferred files"""
-        for f in self.files:
-            if commit:
-                f.commit()
-            else:
-                f.discard()
-        self.files = []
-        self.fs._intrans = False
-
-
-class FileActor(object):
-    def __init__(self):
-        self.files = []
-
-    def commit(self):
-        for f in self.files:
-            f.commit()
-        self.files.clear()
-
-    def discard(self):
-        for f in self.files:
-            f.discard()
-        self.files.clear()
-
-    def append(self, f):
-        self.files.append(f)
-
-
-class DaskTransaction(Transaction):
-    def __init__(self, fs):
-        """
-        Parameters
-        ----------
-        fs: FileSystem instance
-        """
-        import distributed
-
-        super().__init__(fs)
-        client = distributed.default_client()
-        self.files = client.submit(FileActor, actor=True).result()
-
-    def complete(self, commit=True):
-        """Finish transaction: commit or discard all deferred files"""
-        if commit:
-            self.files.commit().result()
-        else:
-            self.files.discard().result()
-        self.fs._intrans = False
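-
-
-# Illustrative usage sketch; as the class docstring notes, a Transaction instance is exposed
-# as the filesystem's ``.transaction`` attribute ("file" and the path below are placeholders):
-#
-#     import fsspec
-#
-#     fs = fsspec.filesystem("file")
-#     with fs.transaction:
-#         with fs.open("/tmp/out.txt", "wb") as f:
-#             f.write(b"committed only if the block exits without an exception")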
diff --git a/spaces/Designstanic/meta-llama-Llama-2-7b-chat-hf/app.py b/spaces/Designstanic/meta-llama-Llama-2-7b-chat-hf/app.py
deleted file mode 100644
index f635fb44d8233c0a1be00d0721017eb1bb0ddc7d..0000000000000000000000000000000000000000
--- a/spaces/Designstanic/meta-llama-Llama-2-7b-chat-hf/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/meta-llama/Llama-2-7b-chat-hf").launch()
\ No newline at end of file
diff --git a/spaces/EAraid12/LoRA-DreamBooth-Training-UI/inference.py b/spaces/EAraid12/LoRA-DreamBooth-Training-UI/inference.py
deleted file mode 100644
index ce0f2b08df75e6d62f06c4119f1dc859930de032..0000000000000000000000000000000000000000
--- a/spaces/EAraid12/LoRA-DreamBooth-Training-UI/inference.py
+++ /dev/null
@@ -1,94 +0,0 @@
-from __future__ import annotations
-
-import gc
-import pathlib
-
-import gradio as gr
-import PIL.Image
-import torch
-from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
-from huggingface_hub import ModelCard
-
-
-class InferencePipeline:
-    def __init__(self, hf_token: str | None = None):
-        self.hf_token = hf_token
-        self.pipe = None
-        self.device = torch.device(
-            'cuda:0' if torch.cuda.is_available() else 'cpu')
-        self.lora_model_id = None
-        self.base_model_id = None
-
-    def clear(self) -> None:
-        self.lora_model_id = None
-        self.base_model_id = None
-        del self.pipe
-        self.pipe = None
-        torch.cuda.empty_cache()
-        gc.collect()
-
-    @staticmethod
-    def check_if_model_is_local(lora_model_id: str) -> bool:
-        return pathlib.Path(lora_model_id).exists()
-
-    @staticmethod
-    def get_model_card(model_id: str,
-                       hf_token: str | None = None) -> ModelCard:
-        if InferencePipeline.check_if_model_is_local(model_id):
-            card_path = (pathlib.Path(model_id) / 'README.md').as_posix()
-        else:
-            card_path = model_id
-        return ModelCard.load(card_path, token=hf_token)
-
-    @staticmethod
-    def get_base_model_info(lora_model_id: str,
-                            hf_token: str | None = None) -> str:
-        card = InferencePipeline.get_model_card(lora_model_id, hf_token)
-        return card.data.base_model
-
-    def load_pipe(self, lora_model_id: str) -> None:
-        if lora_model_id == self.lora_model_id:
-            return
-        base_model_id = self.get_base_model_info(lora_model_id, self.hf_token)
-        if base_model_id != self.base_model_id:
-            if self.device.type == 'cpu':
-                pipe = DiffusionPipeline.from_pretrained(
-                    base_model_id, use_auth_token=self.hf_token)
-            else:
-                pipe = DiffusionPipeline.from_pretrained(
-                    base_model_id,
-                    torch_dtype=torch.float16,
-                    use_auth_token=self.hf_token)
-                pipe = pipe.to(self.device)
-            pipe.scheduler = DPMSolverMultistepScheduler.from_config(
-                pipe.scheduler.config)
-            self.pipe = pipe
-        self.pipe.unet.load_attn_procs(  # type: ignore
-            lora_model_id, use_auth_token=self.hf_token)
-
-        self.lora_model_id = lora_model_id  # type: ignore
-        self.base_model_id = base_model_id  # type: ignore
-
-    def run(
-        self,
-        lora_model_id: str,
-        prompt: str,
-        lora_scale: float,
-        seed: int,
-        n_steps: int,
-        guidance_scale: float,
-    ) -> PIL.Image.Image:
-        if not torch.cuda.is_available():
-            raise gr.Error('CUDA is not available.')
-
-        self.load_pipe(lora_model_id)
-
-        generator = torch.Generator(device=self.device).manual_seed(seed)
-        out = self.pipe(
-            prompt,
-            num_inference_steps=n_steps,
-            guidance_scale=guidance_scale,
-            generator=generator,
-            cross_attention_kwargs={'scale': lora_scale},
-        )  # type: ignore
-        return out.images[0]
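-
-
-# Illustrative usage sketch (run() requires a CUDA device); the model id, prompt, and
-# numeric values below are placeholders:
-#
-#     pipe = InferencePipeline(hf_token=None)
-#     image = pipe.run(
-#         lora_model_id="someone/example-lora",
-#         prompt="a photo of sks dog in a bucket",
-#         lora_scale=0.8,
-#         seed=0,
-#         n_steps=25,
-#         guidance_scale=7.5,
-#     )
-#     image.save("out.png")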
diff --git a/spaces/EleutherAI/magma/magma/language_model.py b/spaces/EleutherAI/magma/magma/language_model.py
deleted file mode 100644
index 1902540067123fb88d4608fa3278e1babfe9fbff..0000000000000000000000000000000000000000
--- a/spaces/EleutherAI/magma/magma/language_model.py
+++ /dev/null
@@ -1,45 +0,0 @@
-import torch
-from transformers import GPTNeoForCausalLM, AutoConfig, GPT2LMHeadModel
-from .utils import print_main
-from pathlib import Path
-from transformers.modeling_utils import no_init_weights
-
-LANGUAGE_MODELS = [
-    "gptj",
-]
-
-
-def gptj_config():
-    config = AutoConfig.from_pretrained("EleutherAI/gpt-neo-2.7B")
-    config.attention_layers = ["global"] * 28
-    config.attention_types = [["global"], 28]
-    config.num_layers = 28
-    config.num_heads = 16
-    config.hidden_size = 256 * config.num_heads
-    config.vocab_size = 50400
-    config.rotary = True
-    config.rotary_dim = 64
-    config.jax = True
-    config.gradient_checkpointing = True
-    return config
-
-
-def get_gptj(
-    gradient_checkpointing: bool = True,
-    from_pretrained=False,
-) -> torch.nn.Module:
-    """
-    Loads GPTJ language model from HF
-    """
-    print_main("Loading GPTJ language model...")
-    config = gptj_config()
-    config.gradient_checkpointing = gradient_checkpointing
-    if gradient_checkpointing:
-        config.use_cache = False
-    config.model_device = "cpu"
-    if from_pretrained:
-        raise NotImplementedError("GPTJ pretrained not implemented")
-    else:
-        with no_init_weights():
-            model = GPTNeoForCausalLM(config=config)
-    return model
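-
-
-# Illustrative usage sketch; the batch and sequence sizes are arbitrary (note the model is
-# built with uninitialized weights unless a checkpoint is loaded separately):
-#
-#     model = get_gptj(gradient_checkpointing=False)
-#     input_ids = torch.zeros((1, 8), dtype=torch.long)
-#     logits = model(input_ids).logits  # shape: (1, 8, config.vocab_size)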
diff --git a/spaces/EsoCode/text-generation-webui/modules/utils.py b/spaces/EsoCode/text-generation-webui/modules/utils.py
deleted file mode 100644
index 1535ecdc065307c2c443f592a9ad23d6777cb1aa..0000000000000000000000000000000000000000
--- a/spaces/EsoCode/text-generation-webui/modules/utils.py
+++ /dev/null
@@ -1,113 +0,0 @@
-import os
-import re
-from datetime import datetime
-from pathlib import Path
-
-from modules import shared
-from modules.logging_colors import logger
-
-
-def save_file(fname, contents):
-    if fname == '':
-        logger.error('File name is empty!')
-        return
-
-    root_folder = Path(__file__).resolve().parent.parent
-    abs_path = Path(fname).resolve()
-    rel_path = abs_path.relative_to(root_folder)
-    if rel_path.parts[0] == '..':
-        logger.error(f'Invalid file path: {fname}')
-        return
-
-    with open(abs_path, 'w', encoding='utf-8') as f:
-        f.write(contents)
-
-    logger.info(f'Saved {abs_path}.')
-
-
-def delete_file(fname):
-    if fname == '':
-        logger.error('File name is empty!')
-        return
-
-    root_folder = Path(__file__).resolve().parent.parent
-    abs_path = Path(fname).resolve()
-    rel_path = abs_path.relative_to(root_folder)
-    if rel_path.parts[0] == '..':
-        logger.error(f'Invalid file path: {fname}')
-        return
-
-    if abs_path.exists():
-        abs_path.unlink()
-        logger.info(f'Deleted {fname}.')
-
-
-def current_time():
-    return f"{datetime.now().strftime('%Y-%m-%d-%H%M%S')}"
-
-
-def atoi(text):
-    return int(text) if text.isdigit() else text.lower()
-
-
-# Replace multiple string pairs in a string
-def replace_all(text, dic):
-    for i, j in dic.items():
-        text = text.replace(i, j)
-
-    return text
-
-
-def natural_keys(text):
-    return [atoi(c) for c in re.split(r'(\d+)', text)]
-
-
-def get_available_models():
-    if shared.args.flexgen:
-        return sorted([re.sub('-np$', '', item.name) for item in list(Path(f'{shared.args.model_dir}/').glob('*')) if item.name.endswith('-np')], key=natural_keys)
-    else:
-        return sorted([re.sub(r'\.pth$', '', item.name) for item in list(Path(f'{shared.args.model_dir}/').glob('*')) if not item.name.endswith(('.txt', '-np', '.pt', '.json', '.yaml'))], key=natural_keys)
-
-
-def get_available_presets():
-    return sorted(set((k.stem for k in Path('presets').glob('*.yaml'))), key=natural_keys)
-
-
-def get_available_prompts():
-    prompts = []
-    files = set((k.stem for k in Path('prompts').glob('*.txt')))
-    prompts += sorted([k for k in files if re.match('^[0-9]', k)], key=natural_keys, reverse=True)
-    prompts += sorted([k for k in files if re.match('^[^0-9]', k)], key=natural_keys)
-    prompts += ['Instruct-' + k for k in get_available_instruction_templates() if k != 'None']
-    prompts += ['None']
-    return prompts
-
-
-def get_available_characters():
-    paths = (x for x in Path('characters').iterdir() if x.suffix in ('.json', '.yaml', '.yml'))
-    return ['None'] + sorted(set((k.stem for k in paths if k.stem != "instruction-following")), key=natural_keys)
-
-
-def get_available_instruction_templates():
-    path = "characters/instruction-following"
-    paths = []
-    if os.path.exists(path):
-        paths = (x for x in Path(path).iterdir() if x.suffix in ('.json', '.yaml', '.yml'))
-
-    return ['None'] + sorted(set((k.stem for k in paths)), key=natural_keys)
-
-
-def get_available_extensions():
-    return sorted(set(map(lambda x: x.parts[1], Path('extensions').glob('*/script.py'))), key=natural_keys)
-
-
-def get_available_loras():
-    return sorted([item.name for item in list(Path(shared.args.lora_dir).glob('*')) if not item.name.endswith(('.txt', '-np', '.pt', '.json'))], key=natural_keys)
-
-
-def get_datasets(path: str, ext: str):
-    return ['None'] + sorted(set([k.stem for k in Path(path).glob(f'*.{ext}') if k.stem != 'put-trainer-datasets-here']), key=natural_keys)
-
-
-def get_available_chat_styles():
-    return sorted(set(('-'.join(k.stem.split('-')[1:]) for k in Path('css').glob('chat_style*.css'))), key=natural_keys)
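
Most of the `get_available_*` helpers above differ only in which directory they scan; the shared ingredient is `natural_keys`, which makes `sorted` order names the way humans expect (numeric parts compare as numbers). A self-contained illustration of that behaviour, copying just the two helpers so no webui modules are needed:

```python
# Standalone copy of the natural-sort helpers above, for illustration only.
import re

def atoi(text):
    return int(text) if text.isdigit() else text.lower()

def natural_keys(text):
    # Splits "model-10" into ["model-", 10, ""] so the number compares numerically.
    return [atoi(c) for c in re.split(r'(\d+)', text)]

names = ["model-10", "model-2", "Model-1"]
print(sorted(names))                    # ['Model-1', 'model-10', 'model-2'] (lexicographic)
print(sorted(names, key=natural_keys))  # ['Model-1', 'model-2', 'model-10'] (natural order)
```
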
diff --git a/spaces/FER-Universe/FER-Benchmarking/autotrain-diffusion-emotion-facial-expression-recognition-40429105176/README.md b/spaces/FER-Universe/FER-Benchmarking/autotrain-diffusion-emotion-facial-expression-recognition-40429105176/README.md
deleted file mode 100644
index e82a89a3ce2f028c928477e2e37c775496d700ab..0000000000000000000000000000000000000000
--- a/spaces/FER-Universe/FER-Benchmarking/autotrain-diffusion-emotion-facial-expression-recognition-40429105176/README.md
+++ /dev/null
@@ -1,37 +0,0 @@
----
-tags:
-- autotrain
-- vision
-- image-classification
-datasets:
-- kdhht2334/autotrain-data-diffusion-emotion-facial-expression-recognition
-widget:
-- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
-  example_title: Tiger
-- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
-  example_title: Teapot
-- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
-  example_title: Palace
-co2_eq_emissions:
-  emissions: 0.8103871386449576
----
-
-# Model Trained Using AutoTrain
-
-- Problem type: Multi-class Classification
-- Model ID: 40429105176
-- CO2 Emissions (in grams): 0.8104
-
-## Validation Metrics
-
-- Loss: 0.490
-- Accuracy: 0.847
-- Macro F1: 0.779
-- Micro F1: 0.847
-- Weighted F1: 0.843
-- Macro Precision: 0.784
-- Micro Precision: 0.847
-- Weighted Precision: 0.850
-- Macro Recall: 0.792
-- Micro Recall: 0.847
-- Weighted Recall: 0.847
\ No newline at end of file
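
Since this card describes an AutoTrain image-classification model, the usual way to query it is through the `transformers` pipeline API. A hedged sketch; the repository ID is an assumption derived from the surrounding folder name, and the image path is a placeholder:

```python
# Inference sketch for the AutoTrain image classifier described in this card.
# MODEL_ID is inferred from the folder name above and may need adjusting.
from transformers import pipeline

MODEL_ID = "kdhht2334/autotrain-diffusion-emotion-facial-expression-recognition-40429105176"

classifier = pipeline("image-classification", model=MODEL_ID)
predictions = classifier("face.jpg")  # path or URL to an input image (placeholder)
for pred in predictions:
    print(f"{pred['label']}: {pred['score']:.3f}")
```
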
diff --git a/spaces/Fengbinbin/gpt-academic/docs/README_EN.md b/spaces/Fengbinbin/gpt-academic/docs/README_EN.md
deleted file mode 100644
index db214f5327b8cdcd84ed1c57390c3b24ba83d78f..0000000000000000000000000000000000000000
--- a/spaces/Fengbinbin/gpt-academic/docs/README_EN.md
+++ /dev/null
@@ -1,291 +0,0 @@
-> **Note**
->
-> This English README is automatically generated by the markdown translation plugin in this project, and may not be 100% correct.
->
-
-#  ChatGPT Academic Optimization
-
-**If you like this project, please give it a Star. If you've come up with more useful academic shortcuts or functional plugins, feel free to open an issue or pull request. We also have a [README in English](docs/README_EN.md) translated by this project itself.**
-
-> **Note**
->
-> 1. Please note that only **functions highlighted in red** support reading files; some functions are located in the **dropdown menu** of the plugin area. Additionally, new plugin PRs are welcome and are handled with the **highest priority**!
->
-> 2. The functionality of each file in this project is detailed in the project's self-translation report [`self_analysis.md`](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A). As versions iterate, you can also click the relevant function plugin at any time to have GPT regenerate the project's self-analysis report. Frequently asked questions are summarized in the [`wiki`](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98).
-> 
-
-
-
-
-Function | Description
---- | ---
-One-Click Polish | Supports one-click polishing and finding grammar errors in academic papers.
-One-Key Translation Between Chinese and English | One-click translation between Chinese and English.
-One-Key Code Interpretation | Can correctly display and interpret code.
-[Custom Shortcut Keys](https://www.bilibili.com/video/BV14s4y1E7jN) | Supports custom shortcut keys.
-[Configure Proxy Server](https://www.bilibili.com/video/BV1rc411W7Dr) | Supports configuring proxy servers.
-Modular Design | Supports custom high-order function plugins, and plugins support [hot updates](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97).
-[Self-programming Analysis](https://www.bilibili.com/video/BV1cj411A7VW) | [Function Plugin] [One-Key Read](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A) analyzes the source code of this project.
-[Program Analysis](https://www.bilibili.com/video/BV1cj411A7VW) | [Function Plugin] One-click analysis of the project tree of other Python/C/C++/Java/Lua/... projects
-Read the Paper | [Function Plugin] One-click interpretation of the full text of a LaTeX paper and generation of its abstract
-LaTeX Full-Text Translation, Proofreading | [Function Plugin] One-click translation or proofreading of LaTeX papers.
-Batch Comment Generation | [Function Plugin] One-click batch generation of function comments
-Chat Analysis Report Generation | [Function Plugin] Automatically generates a summary report after running
-[Arxiv Assistant](https://www.bilibili.com/video/BV1LM4y1279X) | [Function Plugin] Enter an arxiv article URL to translate the abstract and download the PDF with one click
-[Full-Text Translation of PDF Papers](https://www.bilibili.com/video/BV1KT411x7Wn) | [Function Plugin] Extracts the title & abstract of a PDF paper and translates the full text (multithreaded)
-[Google Scholar Integration Assistant](https://www.bilibili.com/video/BV19L411U7ia) | [Function Plugin] Given any Google Scholar search page URL, lets GPT help you pick interesting articles
-Formula / Picture / Table Display | Displays both the TeX form and the rendered form of formulas at the same time, with formula and code highlighting
-Multithreaded Function Plugin Support | Supports multithreaded calls to ChatGPT for one-click processing of large volumes of text or programs
-Dark Gradio [Theme](https://github.com/binary-husky/chatgpt_academic/issues/173) | Add ```/?__dark-theme=true``` at the end of the browser URL to switch to the dark theme
-[Multiple LLM Models](https://www.bilibili.com/video/BV1wT411p7yf) support, [API2D](https://api2d.com/) interface support | It must feel nice to be served by GPT-3.5, GPT-4, and [Tsinghua ChatGLM](https://github.com/THUDM/ChatGLM-6B) at the same time!
-Huggingface [Online Experience](https://huggingface.co/spaces/qingxu98/gpt-academic) (no proxy required) | After logging in to Huggingface, copy [this space](https://huggingface.co/spaces/qingxu98/gpt-academic)
-... | ...
-
-
-- New interface (switch between "left-right layout" and "up-down layout" by modifying the LAYOUT option in config.py)
-
-
-- All buttons are dynamically generated by reading functional.py and can add custom functionality at will, freeing up clipboard
-
-
-- Proofreading / correcting
-
-
-- If the output contains formulas, it will be displayed in both the tex form and the rendering form at the same time, which is convenient for copying and reading
-
-
-- Don't want to read the project code? Just take the whole project to chatgpt
-
-
-- Multiple major language model mixing calls (ChatGLM + OpenAI-GPT3.5 + [API2D](https://api2d.com/)-GPT4)
-
- -Multiple major language model mixing call [huggingface beta version](https://huggingface.co/spaces/qingxu98/academic-chatgpt-beta) (the huggingface version does not support chatglm) - - ---- - -## Installation-Method 1: Run directly (Windows, Linux or MacOS) - -1. Download project -```sh -git clone https://github.com/binary-husky/chatgpt_academic.git -cd chatgpt_academic -``` - -2. Configure API_KEY and proxy settings - - -In `config.py`, configure the overseas Proxy and OpenAI API KEY as follows: -``` -1. If you are in China, you need to set up an overseas proxy to use the OpenAI API smoothly. Please read config.py carefully for setup details (1. Modify USE_PROXY to True; 2. Modify proxies according to the instructions). -2. Configure the OpenAI API KEY. You need to register and obtain an API KEY on the OpenAI website. Once you get the API KEY, you can configure it in the config.py file. -3. Issues related to proxy networks (network timeouts, proxy failures) are summarized at https://github.com/binary-husky/chatgpt_academic/issues/1 -``` -(P.S. When the program runs, it will first check whether there is a private configuration file named `config_private.py` and use the same-name configuration in `config.py` to overwrite it. Therefore, if you can understand our configuration reading logic, we strongly recommend that you create a new configuration file named `config_private.py` next to `config.py` and transfer (copy) the configuration in `config.py` to` config_private.py`. `config_private.py` is not controlled by git and can make your privacy information more secure.)) - - -3. Install dependencies -```sh -# (Option One) Recommended -python -m pip install -r requirements.txt - -# (Option Two) If you use anaconda, the steps are similar: -# (Option Two.1) conda create -n gptac_venv python=3.11 -# (Option Two.2) conda activate gptac_venv -# (Option Two.3) python -m pip install -r requirements.txt - -# Note: Use official pip source or Ali pip source. Other pip sources (such as some university pips) may have problems, and temporary replacement methods are as follows: -# python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/ -``` - -If you need to support Tsinghua ChatGLM, you need to install more dependencies (if you are not familiar with python or your computer configuration is not good, we recommend not to try): -```sh -python -m pip install -r request_llm/requirements_chatglm.txt -``` - -4. Run -```sh -python main.py -``` - -5. Test function plugins -``` -- Test Python project analysis - In the input area, enter `./crazy_functions/test_project/python/dqn`, and then click "Analyze the entire Python project" -- Test self-code interpretation - Click "[Multithreading Demo] Interpretation of This Project Itself (Source Code Interpretation)" -- Test experimental function template function (requires gpt to answer what happened today in history). You can use this function as a template to implement more complex functions. - Click "[Function Plugin Template Demo] Today in History" -- There are more functions to choose from in the function plugin area drop-down menu. -``` - -## Installation-Method 2: Use Docker (Linux) - -1. ChatGPT only (recommended for most people) -``` sh -# download project -git clone https://github.com/binary-husky/chatgpt_academic.git -cd chatgpt_academic -# configure overseas Proxy and OpenAI API KEY -Edit config.py with any text editor -# Install -docker build -t gpt-academic . 
-# Run -docker run --rm -it --net=host gpt-academic - -# Test function plug-in -## Test function plugin template function (requires gpt to answer what happened today in history). You can use this function as a template to implement more complex functions. -Click "[Function Plugin Template Demo] Today in History" -## Test Abstract Writing for Latex Projects -Enter ./crazy_functions/test_project/latex/attention in the input area, and then click "Read Tex Paper and Write Abstract" -## Test Python Project Analysis -Enter ./crazy_functions/test_project/python/dqn in the input area and click "Analyze the entire Python project." - -More functions are available in the function plugin area drop-down menu. -``` - -2. ChatGPT+ChatGLM (requires strong familiarity with docker + strong computer configuration) - -``` sh -# Modify dockerfile -cd docs && nano Dockerfile+ChatGLM -# How to build | 如何构建 (Dockerfile+ChatGLM在docs路径下,请先cd docs) -docker build -t gpt-academic --network=host -f Dockerfile+ChatGLM . -# How to run | 如何运行 (1) 直接运行: -docker run --rm -it --net=host --gpus=all gpt-academic -# How to run | 如何运行 (2) 我想运行之前进容器做一些调整: -docker run --rm -it --net=host --gpus=all gpt-academic bash -``` - - -## Installation-Method 3: Other Deployment Methods - -1. Remote Cloud Server Deployment -Please visit [Deployment Wiki-1] (https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97) - -2. Use WSL2 (Windows Subsystem for Linux) -Please visit [Deployment Wiki-2](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2) - - -## Installation-Proxy Configuration -### Method 1: Conventional method -[Configure Proxy](https://github.com/binary-husky/chatgpt_academic/issues/1) - -### Method Two: Step-by-step tutorial for newcomers -[Step-by-step tutorial for newcomers](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BB%A3%E7%90%86%E8%BD%AF%E4%BB%B6%E9%97%AE%E9%A2%98%E7%9A%84%E6%96%B0%E6%89%8B%E8%A7%A3%E5%86%B3%E6%96%B9%E6%B3%95%EF%BC%88%E6%96%B9%E6%B3%95%E5%8F%AA%E9%80%82%E7%94%A8%E4%BA%8E%E6%96%B0%E6%89%8B%EF%BC%89) - ---- - -## Customizing Convenient Buttons (Customizing Academic Shortcuts) -Open `core_functional.py` with any text editor and add an item as follows, then restart the program (if the button has been successfully added and visible, both the prefix and suffix support hot modification without the need to restart the program to take effect). For example: -``` -"Super English to Chinese translation": { - # Prefix, which will be added before your input. For example, to describe your requirements, such as translation, code interpretation, polishing, etc. - "Prefix": "Please translate the following content into Chinese and use a markdown table to interpret the proprietary terms in the text one by one:\n\n", - - # Suffix, which will be added after your input. For example, combined with the prefix, you can put your input content in quotes. - "Suffix": "", -}, -``` -
-
----
-
-## Some Function Displays
-
-### Image Display:
-
-You are a professional academic paper translator.
-
-### If a program can understand and analyze itself:
-
-### Analysis of any Python/Cpp project:
-
-### One-click reading comprehension and summary generation of Latex papers
-
-### Automatic report generation
-
-### Modular functional design
-
-### Source code translation to English
-
- -## Todo and version planning: -- version 3.2+ (todo): Function plugin supports more parameter interfaces -- version 3.1: Support for inquiring multiple GPT models at the same time! Support for api2d, support for multiple apikeys load balancing -- version 3.0: Support for chatglm and other small llms -- version 2.6: Refactored the plugin structure, improved interactivity, added more plugins -- version 2.5: Self-updating, solves the problem of text being too long and token overflowing when summarizing large project source code -- version 2.4: (1) Added PDF full text translation function; (2) Added function to switch input area position; (3) Added vertical layout option; (4) Multi-threaded function plugin optimization. -- version 2.3: Enhanced multi-threaded interactivity -- version 2.2: Function plugin supports hot reloading -- version 2.1: Foldable layout -- version 2.0: Introduction of modular function plugins -- version 1.0: Basic functions - -## Reference and learning - -``` -The code design of this project has referenced many other excellent projects, including: - -# Reference project 1: Borrowed many tips from ChuanhuChatGPT -https://github.com/GaiZhenbiao/ChuanhuChatGPT - -# Reference project 2: Tsinghua ChatGLM-6B: -https://github.com/THUDM/ChatGLM-6B -``` - diff --git a/spaces/FridaZuley/RVC_HFKawaii/infer/modules/uvr5/preprocess.py b/spaces/FridaZuley/RVC_HFKawaii/infer/modules/uvr5/preprocess.py deleted file mode 100644 index 19f11110ea822eeb140fb885c600536290a1adff..0000000000000000000000000000000000000000 --- a/spaces/FridaZuley/RVC_HFKawaii/infer/modules/uvr5/preprocess.py +++ /dev/null @@ -1,346 +0,0 @@ -import os -import logging - -logger = logging.getLogger(__name__) - -import librosa -import numpy as np -import soundfile as sf -import torch - -from infer.lib.uvr5_pack.lib_v5 import nets_61968KB as Nets -from infer.lib.uvr5_pack.lib_v5 import spec_utils -from infer.lib.uvr5_pack.lib_v5.model_param_init import ModelParameters -from infer.lib.uvr5_pack.lib_v5.nets_new import CascadedNet -from infer.lib.uvr5_pack.utils import inference - - -class AudioPre: - def __init__(self, agg, model_path, device, is_half): - self.model_path = model_path - self.device = device - self.data = { - # Processing Options - "postprocess": False, - "tta": False, - # Constants - "window_size": 512, - "agg": agg, - "high_end_process": "mirroring", - } - mp = ModelParameters("infer/lib/uvr5_pack/lib_v5/modelparams/4band_v2.json") - model = Nets.CascadedASPPNet(mp.param["bins"] * 2) - cpk = torch.load(model_path, map_location="cpu") - model.load_state_dict(cpk) - model.eval() - if is_half: - model = model.half().to(device) - else: - model = model.to(device) - - self.mp = mp - self.model = model - - def _path_audio_(self, music_file, ins_root=None, vocal_root=None, format="flac"): - if ins_root is None and vocal_root is None: - return "No save root." 
- name = os.path.basename(music_file) - if ins_root is not None: - os.makedirs(ins_root, exist_ok=True) - if vocal_root is not None: - os.makedirs(vocal_root, exist_ok=True) - X_wave, y_wave, X_spec_s, y_spec_s = {}, {}, {}, {} - bands_n = len(self.mp.param["band"]) - # print(bands_n) - for d in range(bands_n, 0, -1): - bp = self.mp.param["band"][d] - if d == bands_n: # high-end band - ( - X_wave[d], - _, - ) = librosa.core.load( # 理论上librosa读取可能对某些音频有bug,应该上ffmpeg读取,但是太麻烦了弃坑 - music_file, - bp["sr"], - False, - dtype=np.float32, - res_type=bp["res_type"], - ) - if X_wave[d].ndim == 1: - X_wave[d] = np.asfortranarray([X_wave[d], X_wave[d]]) - else: # lower bands - X_wave[d] = librosa.core.resample( - X_wave[d + 1], - self.mp.param["band"][d + 1]["sr"], - bp["sr"], - res_type=bp["res_type"], - ) - # Stft of wave source - X_spec_s[d] = spec_utils.wave_to_spectrogram_mt( - X_wave[d], - bp["hl"], - bp["n_fft"], - self.mp.param["mid_side"], - self.mp.param["mid_side_b2"], - self.mp.param["reverse"], - ) - # pdb.set_trace() - if d == bands_n and self.data["high_end_process"] != "none": - input_high_end_h = (bp["n_fft"] // 2 - bp["crop_stop"]) + ( - self.mp.param["pre_filter_stop"] - self.mp.param["pre_filter_start"] - ) - input_high_end = X_spec_s[d][ - :, bp["n_fft"] // 2 - input_high_end_h : bp["n_fft"] // 2, : - ] - - X_spec_m = spec_utils.combine_spectrograms(X_spec_s, self.mp) - aggresive_set = float(self.data["agg"] / 100) - aggressiveness = { - "value": aggresive_set, - "split_bin": self.mp.param["band"][1]["crop_stop"], - } - with torch.no_grad(): - pred, X_mag, X_phase = inference( - X_spec_m, self.device, self.model, aggressiveness, self.data - ) - # Postprocess - if self.data["postprocess"]: - pred_inv = np.clip(X_mag - pred, 0, np.inf) - pred = spec_utils.mask_silence(pred, pred_inv) - y_spec_m = pred * X_phase - v_spec_m = X_spec_m - y_spec_m - - if ins_root is not None: - if self.data["high_end_process"].startswith("mirroring"): - input_high_end_ = spec_utils.mirroring( - self.data["high_end_process"], y_spec_m, input_high_end, self.mp - ) - wav_instrument = spec_utils.cmb_spectrogram_to_wave( - y_spec_m, self.mp, input_high_end_h, input_high_end_ - ) - else: - wav_instrument = spec_utils.cmb_spectrogram_to_wave(y_spec_m, self.mp) - logger.info("%s instruments done" % name) - if format in ["wav", "flac"]: - sf.write( - os.path.join( - ins_root, - "instrument_{}_{}.{}".format(name, self.data["agg"], format), - ), - (np.array(wav_instrument) * 32768).astype("int16"), - self.mp.param["sr"], - ) # - else: - path = os.path.join( - ins_root, "instrument_{}_{}.wav".format(name, self.data["agg"]) - ) - sf.write( - path, - (np.array(wav_instrument) * 32768).astype("int16"), - self.mp.param["sr"], - ) - if os.path.exists(path): - os.system( - "ffmpeg -i %s -vn %s -q:a 2 -y" - % (path, path[:-4] + ".%s" % format) - ) - if vocal_root is not None: - if self.data["high_end_process"].startswith("mirroring"): - input_high_end_ = spec_utils.mirroring( - self.data["high_end_process"], v_spec_m, input_high_end, self.mp - ) - wav_vocals = spec_utils.cmb_spectrogram_to_wave( - v_spec_m, self.mp, input_high_end_h, input_high_end_ - ) - else: - wav_vocals = spec_utils.cmb_spectrogram_to_wave(v_spec_m, self.mp) - logger.info("%s vocals done" % name) - if format in ["wav", "flac"]: - sf.write( - os.path.join( - vocal_root, - "vocal_{}_{}.{}".format(name, self.data["agg"], format), - ), - (np.array(wav_vocals) * 32768).astype("int16"), - self.mp.param["sr"], - ) - else: - path = os.path.join( - vocal_root, 
"vocal_{}_{}.wav".format(name, self.data["agg"]) - ) - sf.write( - path, - (np.array(wav_vocals) * 32768).astype("int16"), - self.mp.param["sr"], - ) - if os.path.exists(path): - os.system( - "ffmpeg -i %s -vn %s -q:a 2 -y" - % (path, path[:-4] + ".%s" % format) - ) - - -class AudioPreDeEcho: - def __init__(self, agg, model_path, device, is_half): - self.model_path = model_path - self.device = device - self.data = { - # Processing Options - "postprocess": False, - "tta": False, - # Constants - "window_size": 512, - "agg": agg, - "high_end_process": "mirroring", - } - mp = ModelParameters("infer/lib/uvr5_pack/lib_v5/modelparams/4band_v3.json") - nout = 64 if "DeReverb" in model_path else 48 - model = CascadedNet(mp.param["bins"] * 2, nout) - cpk = torch.load(model_path, map_location="cpu") - model.load_state_dict(cpk) - model.eval() - if is_half: - model = model.half().to(device) - else: - model = model.to(device) - - self.mp = mp - self.model = model - - def _path_audio_( - self, music_file, vocal_root=None, ins_root=None, format="flac" - ): # 3个VR模型vocal和ins是反的 - if ins_root is None and vocal_root is None: - return "No save root." - name = os.path.basename(music_file) - if ins_root is not None: - os.makedirs(ins_root, exist_ok=True) - if vocal_root is not None: - os.makedirs(vocal_root, exist_ok=True) - X_wave, y_wave, X_spec_s, y_spec_s = {}, {}, {}, {} - bands_n = len(self.mp.param["band"]) - # print(bands_n) - for d in range(bands_n, 0, -1): - bp = self.mp.param["band"][d] - if d == bands_n: # high-end band - ( - X_wave[d], - _, - ) = librosa.core.load( # 理论上librosa读取可能对某些音频有bug,应该上ffmpeg读取,但是太麻烦了弃坑 - music_file, - bp["sr"], - False, - dtype=np.float32, - res_type=bp["res_type"], - ) - if X_wave[d].ndim == 1: - X_wave[d] = np.asfortranarray([X_wave[d], X_wave[d]]) - else: # lower bands - X_wave[d] = librosa.core.resample( - X_wave[d + 1], - self.mp.param["band"][d + 1]["sr"], - bp["sr"], - res_type=bp["res_type"], - ) - # Stft of wave source - X_spec_s[d] = spec_utils.wave_to_spectrogram_mt( - X_wave[d], - bp["hl"], - bp["n_fft"], - self.mp.param["mid_side"], - self.mp.param["mid_side_b2"], - self.mp.param["reverse"], - ) - # pdb.set_trace() - if d == bands_n and self.data["high_end_process"] != "none": - input_high_end_h = (bp["n_fft"] // 2 - bp["crop_stop"]) + ( - self.mp.param["pre_filter_stop"] - self.mp.param["pre_filter_start"] - ) - input_high_end = X_spec_s[d][ - :, bp["n_fft"] // 2 - input_high_end_h : bp["n_fft"] // 2, : - ] - - X_spec_m = spec_utils.combine_spectrograms(X_spec_s, self.mp) - aggresive_set = float(self.data["agg"] / 100) - aggressiveness = { - "value": aggresive_set, - "split_bin": self.mp.param["band"][1]["crop_stop"], - } - with torch.no_grad(): - pred, X_mag, X_phase = inference( - X_spec_m, self.device, self.model, aggressiveness, self.data - ) - # Postprocess - if self.data["postprocess"]: - pred_inv = np.clip(X_mag - pred, 0, np.inf) - pred = spec_utils.mask_silence(pred, pred_inv) - y_spec_m = pred * X_phase - v_spec_m = X_spec_m - y_spec_m - - if ins_root is not None: - if self.data["high_end_process"].startswith("mirroring"): - input_high_end_ = spec_utils.mirroring( - self.data["high_end_process"], y_spec_m, input_high_end, self.mp - ) - wav_instrument = spec_utils.cmb_spectrogram_to_wave( - y_spec_m, self.mp, input_high_end_h, input_high_end_ - ) - else: - wav_instrument = spec_utils.cmb_spectrogram_to_wave(y_spec_m, self.mp) - logger.info("%s instruments done" % name) - if format in ["wav", "flac"]: - sf.write( - os.path.join( - ins_root, - 
"instrument_{}_{}.{}".format(name, self.data["agg"], format), - ), - (np.array(wav_instrument) * 32768).astype("int16"), - self.mp.param["sr"], - ) # - else: - path = os.path.join( - ins_root, "instrument_{}_{}.wav".format(name, self.data["agg"]) - ) - sf.write( - path, - (np.array(wav_instrument) * 32768).astype("int16"), - self.mp.param["sr"], - ) - if os.path.exists(path): - os.system( - "ffmpeg -i %s -vn %s -q:a 2 -y" - % (path, path[:-4] + ".%s" % format) - ) - if vocal_root is not None: - if self.data["high_end_process"].startswith("mirroring"): - input_high_end_ = spec_utils.mirroring( - self.data["high_end_process"], v_spec_m, input_high_end, self.mp - ) - wav_vocals = spec_utils.cmb_spectrogram_to_wave( - v_spec_m, self.mp, input_high_end_h, input_high_end_ - ) - else: - wav_vocals = spec_utils.cmb_spectrogram_to_wave(v_spec_m, self.mp) - logger.info("%s vocals done" % name) - if format in ["wav", "flac"]: - sf.write( - os.path.join( - vocal_root, - "vocal_{}_{}.{}".format(name, self.data["agg"], format), - ), - (np.array(wav_vocals) * 32768).astype("int16"), - self.mp.param["sr"], - ) - else: - path = os.path.join( - vocal_root, "vocal_{}_{}.wav".format(name, self.data["agg"]) - ) - sf.write( - path, - (np.array(wav_vocals) * 32768).astype("int16"), - self.mp.param["sr"], - ) - if os.path.exists(path): - os.system( - "ffmpeg -i %s -vn %s -q:a 2 -y" - % (path, path[:-4] + ".%s" % format) - ) diff --git a/spaces/FridaZuley/RVC_HFKawaii/julius/bands.py b/spaces/FridaZuley/RVC_HFKawaii/julius/bands.py deleted file mode 100644 index ef2162440b69e960770aa7bf81b9aaec48a63243..0000000000000000000000000000000000000000 --- a/spaces/FridaZuley/RVC_HFKawaii/julius/bands.py +++ /dev/null @@ -1,119 +0,0 @@ -# File under the MIT license, see https://github.com/adefossez/julius/LICENSE for details. -# Author: adefossez, 2020 -""" -Decomposition of a signal over frequency bands in the waveform domain. -""" -from typing import Optional, Sequence -import torch - -from .core import mel_frequencies -from .lowpass import LowPassFilters -from .utils import simple_repr - - -class SplitBands(torch.nn.Module): - """ - Decomposes a signal over the given frequency bands in the waveform domain using - a cascade of low pass filters as implemented by `julius.lowpass.LowPassFilters`. - You can either specify explicitely the frequency cutoffs, or just the number of bands, - in which case the frequency cutoffs will be spread out evenly in mel scale. - - Args: - sample_rate (float): Sample rate of the input signal in Hz. - n_bands (int or None): number of bands, when not giving them explictely with `cutoffs`. - In that case, the cutoff frequencies will be evenly spaced in mel-space. - cutoffs (list[float] or None): list of frequency cutoffs in Hz. - pad (bool): if True, appropriately pad the input with zero over the edge. If `stride=1`, - the output will have the same length as the input. - zeros (float): Number of zero crossings to keep. See `LowPassFilters` for more informations. - fft (bool or None): See `LowPassFilters` for more info. - - ..note:: - The sum of all the bands will always be the input signal. - - ..warning:: - Unlike `julius.lowpass.LowPassFilters`, the cutoffs frequencies must be provided in Hz along - with the sample rate. - - Shape: - - - Input: `[*, T]` - - Output: `[B, *, T']`, with `T'=T` if `pad` is True. 
- If `n_bands` was provided, `B = n_bands` otherwise `B = len(cutoffs) + 1` - - >>> bands = SplitBands(sample_rate=128, n_bands=10) - >>> x = torch.randn(6, 4, 1024) - >>> list(bands(x).shape) - [10, 6, 4, 1024] - """ - - def __init__(self, sample_rate: float, n_bands: Optional[int] = None, - cutoffs: Optional[Sequence[float]] = None, pad: bool = True, - zeros: float = 8, fft: Optional[bool] = None): - super().__init__() - if (cutoffs is None) + (n_bands is None) != 1: - raise ValueError("You must provide either n_bands, or cutoffs, but not boths.") - - self.sample_rate = sample_rate - self.n_bands = n_bands - self._cutoffs = list(cutoffs) if cutoffs is not None else None - self.pad = pad - self.zeros = zeros - self.fft = fft - - if cutoffs is None: - if n_bands is None: - raise ValueError("You must provide one of n_bands or cutoffs.") - if not n_bands >= 1: - raise ValueError(f"n_bands must be greater than one (got {n_bands})") - cutoffs = mel_frequencies(n_bands + 1, 0, sample_rate / 2)[1:-1] - else: - if max(cutoffs) > 0.5 * sample_rate: - raise ValueError("A cutoff above sample_rate/2 does not make sense.") - if len(cutoffs) > 0: - self.lowpass = LowPassFilters( - [c / sample_rate for c in cutoffs], pad=pad, zeros=zeros, fft=fft) - else: - # Here I cannot make both TorchScript and MyPy happy. - # I miss the good old times, before all this madness was created. - self.lowpass = None # type: ignore - - def forward(self, input): - if self.lowpass is None: - return input[None] - lows = self.lowpass(input) - low = lows[0] - bands = [low] - for low_and_band in lows[1:]: - # Get a bandpass filter by substracting lowpasses - band = low_and_band - low - bands.append(band) - low = low_and_band - # Last band is whatever is left in the signal - bands.append(input - low) - return torch.stack(bands) - - @property - def cutoffs(self): - if self._cutoffs is not None: - return self._cutoffs - elif self.lowpass is not None: - return [c * self.sample_rate for c in self.lowpass.cutoffs] - else: - return [] - - def __repr__(self): - return simple_repr(self, overrides={"cutoffs": self._cutoffs}) - - -def split_bands(signal: torch.Tensor, sample_rate: float, n_bands: Optional[int] = None, - cutoffs: Optional[Sequence[float]] = None, pad: bool = True, - zeros: float = 8, fft: Optional[bool] = None): - """ - Functional version of `SplitBands`, refer to this class for more information. - - >>> x = torch.randn(6, 4, 1024) - >>> list(split_bands(x, sample_rate=64, cutoffs=[12, 24]).shape) - [3, 6, 4, 1024] - """ - return SplitBands(sample_rate, n_bands, cutoffs, pad, zeros, fft).to(signal)(signal) diff --git a/spaces/GMFTBY/PandaGPT/model/ImageBind/models/imagebind_model.py b/spaces/GMFTBY/PandaGPT/model/ImageBind/models/imagebind_model.py deleted file mode 100644 index ba1981e8790b98131e2a89388142a79c6de94628..0000000000000000000000000000000000000000 --- a/spaces/GMFTBY/PandaGPT/model/ImageBind/models/imagebind_model.py +++ /dev/null @@ -1,521 +0,0 @@ -#!/usr/bin/env python3 -# Portions Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
- - -import os -import urllib -from functools import partial -from types import SimpleNamespace - -import torch -import torch.nn as nn - -from .helpers import ( - EinOpsRearrange, - LearnableLogitScaling, - Normalize, - SelectElement, - SelectEOSAndProject, -) -from .multimodal_preprocessors import ( - AudioPreprocessor, - IMUPreprocessor, - PadIm2Video, - PatchEmbedGeneric, - RGBDTPreprocessor, - SpatioTemporalPosEmbeddingHelper, - TextPreprocessor, - ThermalPreprocessor, -) - -from .transformer import MultiheadAttention, SimpleTransformer - - -ModalityType = SimpleNamespace( - VISION="vision", - TEXT="text", - AUDIO="audio", - THERMAL="thermal", - DEPTH="depth", - IMU="imu", -) - - -class ImageBindModel(nn.Module): - def __init__( - self, - video_frames=2, - kernel_size=(2, 14, 14), - audio_kernel_size=16, - audio_stride=10, - out_embed_dim=768, - vision_embed_dim=1024, - vision_num_blocks=24, - vision_num_heads=16, - audio_embed_dim=768, - audio_num_blocks=12, - audio_num_heads=12, - audio_num_mel_bins=128, - audio_target_len=204, - audio_drop_path=0.1, - text_embed_dim=768, - text_num_blocks=12, - text_num_heads=12, - depth_embed_dim=384, - depth_kernel_size=16, - depth_num_blocks=12, - depth_num_heads=8, - depth_drop_path=0.0, - thermal_embed_dim=768, - thermal_kernel_size=16, - thermal_num_blocks=12, - thermal_num_heads=12, - thermal_drop_path=0.0, - imu_embed_dim=512, - imu_kernel_size=8, - imu_num_blocks=6, - imu_num_heads=8, - imu_drop_path=0.7, - ): - super().__init__() - - self.modality_preprocessors = self._create_modality_preprocessors( - video_frames, - vision_embed_dim, - kernel_size, - text_embed_dim, - audio_embed_dim, - audio_kernel_size, - audio_stride, - audio_num_mel_bins, - audio_target_len, - depth_embed_dim, - depth_kernel_size, - thermal_embed_dim, - thermal_kernel_size, - imu_embed_dim, - ) - - self.modality_trunks = self._create_modality_trunks( - vision_embed_dim, - vision_num_blocks, - vision_num_heads, - text_embed_dim, - text_num_blocks, - text_num_heads, - audio_embed_dim, - audio_num_blocks, - audio_num_heads, - audio_drop_path, - depth_embed_dim, - depth_num_blocks, - depth_num_heads, - depth_drop_path, - thermal_embed_dim, - thermal_num_blocks, - thermal_num_heads, - thermal_drop_path, - imu_embed_dim, - imu_num_blocks, - imu_num_heads, - imu_drop_path, - ) - - self.modality_heads = self._create_modality_heads( - out_embed_dim, - vision_embed_dim, - text_embed_dim, - audio_embed_dim, - depth_embed_dim, - thermal_embed_dim, - imu_embed_dim, - ) - - self.modality_postprocessors = self._create_modality_postprocessors( - out_embed_dim - ) - - def _create_modality_preprocessors( - self, - video_frames=2, - vision_embed_dim=1024, - kernel_size=(2, 14, 14), - text_embed_dim=768, - audio_embed_dim=768, - audio_kernel_size=16, - audio_stride=10, - audio_num_mel_bins=128, - audio_target_len=204, - depth_embed_dim=768, - depth_kernel_size=16, - thermal_embed_dim=768, - thermal_kernel_size=16, - imu_embed_dim=512, - ): - rgbt_stem = PatchEmbedGeneric( - proj_stem=[ - PadIm2Video(pad_type="repeat", ntimes=2), - nn.Conv3d( - in_channels=3, - kernel_size=kernel_size, - out_channels=vision_embed_dim, - stride=kernel_size, - bias=False, - ), - ] - ) - rgbt_preprocessor = RGBDTPreprocessor( - img_size=[3, video_frames, 224, 224], - num_cls_tokens=1, - pos_embed_fn=partial(SpatioTemporalPosEmbeddingHelper, learnable=True), - rgbt_stem=rgbt_stem, - depth_stem=None, - ) - - text_preprocessor = TextPreprocessor( - context_length=77, - vocab_size=49408, - 
embed_dim=text_embed_dim, - causal_masking=True, - ) - - audio_stem = PatchEmbedGeneric( - proj_stem=[ - nn.Conv2d( - in_channels=1, - kernel_size=audio_kernel_size, - stride=audio_stride, - out_channels=audio_embed_dim, - bias=False, - ), - ], - norm_layer=nn.LayerNorm(normalized_shape=audio_embed_dim), - ) - audio_preprocessor = AudioPreprocessor( - img_size=[1, audio_num_mel_bins, audio_target_len], - num_cls_tokens=1, - pos_embed_fn=partial(SpatioTemporalPosEmbeddingHelper, learnable=True), - audio_stem=audio_stem, - ) - - depth_stem = PatchEmbedGeneric( - [ - nn.Conv2d( - kernel_size=depth_kernel_size, - in_channels=1, - out_channels=depth_embed_dim, - stride=depth_kernel_size, - bias=False, - ), - ], - norm_layer=nn.LayerNorm(normalized_shape=depth_embed_dim), - ) - - depth_preprocessor = RGBDTPreprocessor( - img_size=[1, 224, 224], - num_cls_tokens=1, - pos_embed_fn=partial(SpatioTemporalPosEmbeddingHelper, learnable=True), - rgbt_stem=None, - depth_stem=depth_stem, - ) - - thermal_stem = PatchEmbedGeneric( - [ - nn.Conv2d( - kernel_size=thermal_kernel_size, - in_channels=1, - out_channels=thermal_embed_dim, - stride=thermal_kernel_size, - bias=False, - ), - ], - norm_layer=nn.LayerNorm(normalized_shape=thermal_embed_dim), - ) - thermal_preprocessor = ThermalPreprocessor( - img_size=[1, 224, 224], - num_cls_tokens=1, - pos_embed_fn=partial(SpatioTemporalPosEmbeddingHelper, learnable=True), - thermal_stem=thermal_stem, - ) - - imu_stem = PatchEmbedGeneric( - [ - nn.Linear( - in_features=48, - out_features=imu_embed_dim, - bias=False, - ), - ], - norm_layer=nn.LayerNorm(normalized_shape=imu_embed_dim), - ) - - imu_preprocessor = IMUPreprocessor( - img_size=[6, 2000], - num_cls_tokens=1, - kernel_size=8, - embed_dim=imu_embed_dim, - pos_embed_fn=partial(SpatioTemporalPosEmbeddingHelper, learnable=True), - imu_stem=imu_stem, - ) - - modality_preprocessors = { - ModalityType.VISION: rgbt_preprocessor, - ModalityType.TEXT: text_preprocessor, - ModalityType.AUDIO: audio_preprocessor, - ModalityType.DEPTH: depth_preprocessor, - ModalityType.THERMAL: thermal_preprocessor, - ModalityType.IMU: imu_preprocessor, - } - - return nn.ModuleDict(modality_preprocessors) - - def _create_modality_trunks( - self, - vision_embed_dim=1024, - vision_num_blocks=24, - vision_num_heads=16, - text_embed_dim=768, - text_num_blocks=12, - text_num_heads=12, - audio_embed_dim=768, - audio_num_blocks=12, - audio_num_heads=12, - audio_drop_path=0.0, - depth_embed_dim=768, - depth_num_blocks=12, - depth_num_heads=12, - depth_drop_path=0.0, - thermal_embed_dim=768, - thermal_num_blocks=12, - thermal_num_heads=12, - thermal_drop_path=0.0, - imu_embed_dim=512, - imu_num_blocks=6, - imu_num_heads=8, - imu_drop_path=0.7, - ): - def instantiate_trunk( - embed_dim, num_blocks, num_heads, pre_transformer_ln, add_bias_kv, drop_path - ): - return SimpleTransformer( - embed_dim=embed_dim, - num_blocks=num_blocks, - ffn_dropout_rate=0.0, - drop_path_rate=drop_path, - attn_target=partial( - MultiheadAttention, - embed_dim=embed_dim, - num_heads=num_heads, - bias=True, - add_bias_kv=add_bias_kv, - ), - pre_transformer_layer=nn.Sequential( - nn.LayerNorm(embed_dim, eps=1e-6) - if pre_transformer_ln - else nn.Identity(), - EinOpsRearrange("b l d -> l b d"), - ), - post_transformer_layer=EinOpsRearrange("l b d -> b l d"), - ) - - modality_trunks = {} - modality_trunks[ModalityType.VISION] = instantiate_trunk( - vision_embed_dim, - vision_num_blocks, - vision_num_heads, - pre_transformer_ln=True, - add_bias_kv=False, - drop_path=0.0, 
- ) - modality_trunks[ModalityType.TEXT] = instantiate_trunk( - text_embed_dim, - text_num_blocks, - text_num_heads, - pre_transformer_ln=False, - add_bias_kv=False, - drop_path=0.0, - ) - modality_trunks[ModalityType.AUDIO] = instantiate_trunk( - audio_embed_dim, - audio_num_blocks, - audio_num_heads, - pre_transformer_ln=False, - add_bias_kv=True, - drop_path=audio_drop_path, - ) - modality_trunks[ModalityType.DEPTH] = instantiate_trunk( - depth_embed_dim, - depth_num_blocks, - depth_num_heads, - pre_transformer_ln=False, - add_bias_kv=True, - drop_path=depth_drop_path, - ) - modality_trunks[ModalityType.THERMAL] = instantiate_trunk( - thermal_embed_dim, - thermal_num_blocks, - thermal_num_heads, - pre_transformer_ln=False, - add_bias_kv=True, - drop_path=thermal_drop_path, - ) - modality_trunks[ModalityType.IMU] = instantiate_trunk( - imu_embed_dim, - imu_num_blocks, - imu_num_heads, - pre_transformer_ln=False, - add_bias_kv=True, - drop_path=imu_drop_path, - ) - - return nn.ModuleDict(modality_trunks) - - def _create_modality_heads( - self, - out_embed_dim, - vision_embed_dim, - text_embed_dim, - audio_embed_dim, - depth_embed_dim, - thermal_embed_dim, - imu_embed_dim, - ): - modality_heads = {} - - modality_heads[ModalityType.VISION] = nn.Sequential( - nn.LayerNorm(normalized_shape=vision_embed_dim, eps=1e-6), - SelectElement(index=0), - nn.Linear(vision_embed_dim, out_embed_dim, bias=False), - ) - - modality_heads[ModalityType.TEXT] = SelectEOSAndProject( - proj=nn.Sequential( - nn.LayerNorm(normalized_shape=text_embed_dim, eps=1e-6), - nn.Linear(text_embed_dim, out_embed_dim, bias=False), - ) - ) - - modality_heads[ModalityType.AUDIO] = nn.Sequential( - nn.LayerNorm(normalized_shape=audio_embed_dim, eps=1e-6), - SelectElement(index=0), - nn.Linear(audio_embed_dim, out_embed_dim, bias=False), - ) - - modality_heads[ModalityType.DEPTH] = nn.Sequential( - nn.LayerNorm(normalized_shape=depth_embed_dim, eps=1e-6), - SelectElement(index=0), - nn.Linear(depth_embed_dim, out_embed_dim, bias=False), - ) - - modality_heads[ModalityType.THERMAL] = nn.Sequential( - nn.LayerNorm(normalized_shape=thermal_embed_dim, eps=1e-6), - SelectElement(index=0), - nn.Linear(thermal_embed_dim, out_embed_dim, bias=False), - ) - - modality_heads[ModalityType.IMU] = nn.Sequential( - nn.LayerNorm(normalized_shape=imu_embed_dim, eps=1e-6), - SelectElement(index=0), - nn.Dropout(p=0.5), - nn.Linear(imu_embed_dim, out_embed_dim, bias=False), - ) - - return nn.ModuleDict(modality_heads) - - def _create_modality_postprocessors(self, out_embed_dim): - modality_postprocessors = {} - - modality_postprocessors[ModalityType.VISION] = Normalize(dim=-1) - modality_postprocessors[ModalityType.TEXT] = nn.Sequential( - Normalize(dim=-1), LearnableLogitScaling(learnable=True) - ) - modality_postprocessors[ModalityType.AUDIO] = nn.Sequential( - Normalize(dim=-1), - LearnableLogitScaling(logit_scale_init=20.0, learnable=False), - ) - modality_postprocessors[ModalityType.DEPTH] = nn.Sequential( - Normalize(dim=-1), - LearnableLogitScaling(logit_scale_init=5.0, learnable=False), - ) - modality_postprocessors[ModalityType.THERMAL] = nn.Sequential( - Normalize(dim=-1), - LearnableLogitScaling(logit_scale_init=10.0, learnable=False), - ) - modality_postprocessors[ModalityType.IMU] = nn.Sequential( - Normalize(dim=-1), - LearnableLogitScaling(logit_scale_init=5.0, learnable=False), - ) - return nn.ModuleDict(modality_postprocessors) - - def forward(self, inputs): - outputs = {} - for modality_key, modality_value in inputs.items(): - 
reduce_list = ( - modality_value.ndim >= 5 - ) # Audio and Video inputs consist of multiple clips - if reduce_list: - B, S = modality_value.shape[:2] - modality_value = modality_value.reshape( - B * S, *modality_value.shape[2:] - ) - - if modality_value is not None: - modality_value = self.modality_preprocessors[modality_key]( - **{modality_key: modality_value} - ) - trunk_inputs = modality_value["trunk"] - head_inputs = modality_value["head"] - modality_value = self.modality_trunks[modality_key](**trunk_inputs) - modality_value = self.modality_heads[modality_key]( - modality_value, **head_inputs - ) - if modality_key in [ModalityType.AUDIO]: - modality_value = self.modality_postprocessors[modality_key][0]( - modality_value - ) - else: - modality_value = self.modality_postprocessors[modality_key]( - modality_value - ) - - if reduce_list: - modality_value = modality_value.reshape(B, S, -1) - modality_value = modality_value.mean(dim=1) - - outputs[modality_key] = modality_value - - return outputs - - -def imagebind_huge(pretrained=False, store_path=r'.checkpoints'): - model = ImageBindModel( - vision_embed_dim=1280, - vision_num_blocks=32, - vision_num_heads=16, - text_embed_dim=1024, - text_num_blocks=24, - text_num_heads=16, - out_embed_dim=1024, - audio_drop_path=0.1, - imu_drop_path=0.7, - ) - - if pretrained: - if not os.path.exists("{}/imagebind_huge.pth".format(store_path)): - print( - "Downloading imagebind weights to {}/imagebind_huge.pth ...".format(store_path) - ) - os.makedirs(store_path, exist_ok=True) - torch.hub.download_url_to_file( - "https://dl.fbaipublicfiles.com/imagebind/imagebind_huge.pth", - "{}/imagebind_huge.pth".format(store_path), - progress=True, - ) - - model.load_state_dict(torch.load("{}/imagebind_huge.pth".format(store_path))) - - return model, 1024 diff --git a/spaces/GaenKoki/voicevox/test/test_mora_to_text.py b/spaces/GaenKoki/voicevox/test/test_mora_to_text.py deleted file mode 100644 index 691681dd1b202731eb5dde45e083b4d6c7526743..0000000000000000000000000000000000000000 --- a/spaces/GaenKoki/voicevox/test/test_mora_to_text.py +++ /dev/null @@ -1,29 +0,0 @@ -from unittest import TestCase - -# TODO: import from voicevox_engine.synthesis_engine.mora -from voicevox_engine.synthesis_engine.synthesis_engine_base import mora_to_text - - -class TestMoraToText(TestCase): - def test_voice(self): - self.assertEqual(mora_to_text("a"), "ア") - self.assertEqual(mora_to_text("i"), "イ") - self.assertEqual(mora_to_text("ka"), "カ") - self.assertEqual(mora_to_text("N"), "ン") - self.assertEqual(mora_to_text("cl"), "ッ") - self.assertEqual(mora_to_text("gye"), "ギェ") - self.assertEqual(mora_to_text("ye"), "イェ") - self.assertEqual(mora_to_text("wo"), "ウォ") - - def test_unvoice(self): - self.assertEqual(mora_to_text("A"), "ア") - self.assertEqual(mora_to_text("I"), "イ") - self.assertEqual(mora_to_text("kA"), "カ") - self.assertEqual(mora_to_text("gyE"), "ギェ") - self.assertEqual(mora_to_text("yE"), "イェ") - self.assertEqual(mora_to_text("wO"), "ウォ") - - def test_invalid_mora(self): - """変なモーラが来ても例外を投げない""" - self.assertEqual(mora_to_text("x"), "x") - self.assertEqual(mora_to_text(""), "") diff --git a/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/multicolor_block_bridge.py b/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/multicolor_block_bridge.py deleted file mode 100644 index 8c40472513c445e02bdac4ab4f45160f25961481..0000000000000000000000000000000000000000 --- a/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/multicolor_block_bridge.py +++ /dev/null @@ -1,73 +0,0 @@ -import numpy 
as np -import os -import pybullet as p -import random -from cliport.tasks import primitives -from cliport.tasks.grippers import Spatula -from cliport.tasks.task import Task -from cliport.utils import utils -import numpy as np -from cliport.tasks.task import Task -from cliport.utils import utils - -class MulticolorBlockBridge(Task): - """Build a bridge by stacking three red, three blue, and three green blocks on a pallet. - Arrange in a sequence from left to right: red, blue, and green. - Then, place three cylinders of corresponding colors on top of the stacked blocks, forming a bridge. - The cylinders should roll from the top block to the pallet, creating a challenge of precision and control.""" - - def __init__(self): - super().__init__() - self.max_steps = 20 - self.lang_template = "Build a bridge by stacking three red, three blue, and three green blocks on a pallet. Arrange in a sequence from left to right: red, blue, and green. Then, place three cylinders of corresponding colors on top of the stacked blocks, forming a bridge." - self.task_completed_desc = "done building the bridge." - self.additional_reset() - - def reset(self, env): - super().reset(env) - - # Add pallet. - # x, y, z dimensions for the asset size - pallet_size = (0.15, 0.15, 0.01) - pallet_urdf = 'pallet/pallet.urdf' - pallet_pose = self.get_random_pose(env, pallet_size) - env.add_object(pallet_urdf, pallet_pose, 'fixed') - - # Add blocks. - # x, y, z dimensions for the asset size - block_size = (0.04, 0.04, 0.04) - block_urdf = 'block/block.urdf' - block_colors = [utils.COLORS['red'], utils.COLORS['blue'], utils.COLORS['green']] - blocks = [] - for i in range(9): # 3 blocks of each color - block_pose = self.get_random_pose(env, block_size) - block_id = env.add_object(block_urdf, block_pose, color=block_colors[i // 3]) - blocks.append(block_id) - - # Add cylinders. - # x, y, z dimensions for the asset size - cylinder_size = (0.04, 0.04, 0.04) - cylinder_template = 'cylinder/cylinder-template.urdf' - cylinders = [] - for i in range(3): # 1 cylinder of each color - cylinder_pose = self.get_random_pose(env, cylinder_size) - replace = {'DIM': cylinder_size, 'HALF': (cylinder_size[0] / 2, cylinder_size[1] / 2, cylinder_size[2] / 2)} - cylinder_urdf = self.fill_template(cylinder_template, replace) - cylinder_id = env.add_object(cylinder_urdf, cylinder_pose, color=block_colors[i]) - cylinders.append(cylinder_id) - - # Associate placement locations for goals. - place_pos = [(0, -0.05, 0.03), (0, 0, 0.03), (0, 0.05, 0.03)] - targs = [(utils.apply(pallet_pose, i), pallet_pose[1]) for i in place_pos] - - # Goal: blocks are stacked on the pallet in the order red, blue, green. - for i in range(9): - self.add_goal(objs=[blocks[i]], matches=np.ones((1, 1)), targ_poses=[targs[i // 3]], replace=False, - rotations=True, metric='pose', params=None, step_max_reward=1 / 9, symmetries=[np.pi/2], - language_goal=self.lang_template) - - # Goal: cylinders are placed on top of the stacked blocks. 
- for i in range(3): - self.add_goal(objs=[cylinders[i]], matches=np.ones((1, 1)), targ_poses=[targs[i]], replace=False, - rotations=True, metric='pose', params=None, step_max_reward=1 / 3, symmetries=[np.pi/2], - language_goal=self.lang_template) \ No newline at end of file diff --git a/spaces/GipAdonimus/Real-Time-Voice-Cloning/toolbox/utterance.py b/spaces/GipAdonimus/Real-Time-Voice-Cloning/toolbox/utterance.py deleted file mode 100644 index 844c8a2adb0c8eba2992eaf5ea357d7add3c1896..0000000000000000000000000000000000000000 --- a/spaces/GipAdonimus/Real-Time-Voice-Cloning/toolbox/utterance.py +++ /dev/null @@ -1,5 +0,0 @@ -from collections import namedtuple - -Utterance = namedtuple("Utterance", "name speaker_name wav spec embed partial_embeds synth") -Utterance.__eq__ = lambda x, y: x.name == y.name -Utterance.__hash__ = lambda x: hash(x.name) diff --git a/spaces/Godrose0728/Aisound02/attentions.py b/spaces/Godrose0728/Aisound02/attentions.py deleted file mode 100644 index 86bc73b5fe98cc7b443e9078553920346c996707..0000000000000000000000000000000000000000 --- a/spaces/Godrose0728/Aisound02/attentions.py +++ /dev/null @@ -1,300 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -from modules import LayerNorm - - -class Encoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, 
proximal_bias=proximal_bias, proximal_init=proximal_init)) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert t_s == t_t, "Relative attention is only available for self-attention." 
- key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert t_s == t_t, "Local attention is only available for self-attention." - block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s) - output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings) - output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]])) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]])) - - # Reshape and slice out the padded elements. 
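- # Worked shape example (assuming l = 3): x [b, h, 3, 5] -> pad last dim -> [b, h, 3, 6]
- # -> flatten -> [b, h, 18] -> pad by l-1 -> [b, h, 20] -> view -> [b, h, 4, 5]
- # -> slice [:, :, :3, 2:] -> [b, h, 3, 3], i.e. absolute-position scores of shape [b, h, l, l].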
- x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]])) - x_flat = x.view([batch, heads, length**2 + length*(length -1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. - Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/Gradio-Blocks/Leaderboard/app.py b/spaces/Gradio-Blocks/Leaderboard/app.py deleted file mode 100644 index 6dd08ee2d2edde831d0fcd37836b369a0d3abf70..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/Leaderboard/app.py +++ /dev/null @@ -1,40 +0,0 @@ -import requests -import pandas as pd -import gradio as gr -from huggingface_hub.hf_api import SpaceInfo - - - -path = f"https://huggingface.co/api/spaces" - - - -def get_blocks_party_spaces(): - r = requests.get(path) - d = r.json() - spaces = [SpaceInfo(**x) for x in d] - blocks_spaces = {} - for i in range(0,len(spaces)): - if spaces[i].id.split('/')[0] == 'Gradio-Blocks' and hasattr(spaces[i], 'likes') and spaces[i].id != 'Gradio-Blocks/Leaderboard' and spaces[i].id != 'Gradio-Blocks/README': - blocks_spaces[spaces[i].id]=spaces[i].likes - df = pd.DataFrame( - [{"Spaces_Name": Spaces, "likes": likes} for Spaces,likes in blocks_spaces.items()]) - df = df.sort_values(by=['likes'],ascending=False) - return df - - -block = gr.Blocks() - -with block: - gr.Markdown("""Leaderboard for the most 
popular Blocks Event Spaces. To learn more and join, see Blocks Party Event""") - with gr.Tabs(): - with gr.TabItem("Blocks Party Leaderboard"): - with gr.Row(): - data = gr.outputs.Dataframe(type="pandas") - with gr.Row(): - data_run = gr.Button("Refresh") - data_run.click(get_blocks_party_spaces, inputs=None, outputs=data) - - block.load(get_blocks_party_spaces, inputs=None, outputs=data) -block.launch() - diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/cascade_rcnn/cascade_mask_rcnn_r50_caffe_fpn_1x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/cascade_rcnn/cascade_mask_rcnn_r50_caffe_fpn_1x_coco.py deleted file mode 100644 index b371ed757bf7dd95ef9ecfc2e609ca5ab03795d6..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/cascade_rcnn/cascade_mask_rcnn_r50_caffe_fpn_1x_coco.py +++ /dev/null @@ -1,38 +0,0 @@ -_base_ = ['./cascade_mask_rcnn_r50_fpn_1x_coco.py'] - -model = dict( - pretrained='open-mmlab://detectron2/resnet50_caffe', - backbone=dict( - norm_cfg=dict(requires_grad=False), norm_eval=True, style='caffe')) - -img_norm_cfg = dict( - mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True, with_mask=True), - dict(type='Resize', img_scale=(1333, 800), keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1333, 800), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - train=dict(pipeline=train_pipeline), - val=dict(pipeline=test_pipeline), - test=dict(pipeline=test_pipeline)) diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/deepfashion/README.md b/spaces/Gradio-Blocks/uniformer_image_detection/configs/deepfashion/README.md deleted file mode 100644 index c182bea0f2924a4d96bca6ea15eebeb36fce8027..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/deepfashion/README.md +++ /dev/null @@ -1,56 +0,0 @@ -# DeepFashion - -[DATASET] - -[MMFashion](https://github.com/open-mmlab/mmfashion) develops "fashion parsing and segmentation" module -based on the dataset -[DeepFashion-Inshop](https://drive.google.com/drive/folders/0B7EVK8r0v71pVDZFQXRsMDZCX1E?usp=sharing). -Its annotation follows COCO style. -To use it, you need to first download the data. Note that we only use "img_highres" in this task. 
-The file tree should be like this: - -```sh -mmdetection -├── mmdet -├── tools -├── configs -├── data -│ ├── DeepFashion -│ │ ├── In-shop -│ │ ├── Anno -│ │ │   ├── segmentation -│ │ │   | ├── DeepFashion_segmentation_train.json -│ │ │   | ├── DeepFashion_segmentation_query.json -│ │ │   | ├── DeepFashion_segmentation_gallery.json -│ │ │   ├── list_bbox_inshop.txt -│ │ │   ├── list_description_inshop.json -│ │ │   ├── list_item_inshop.txt -│ │ │   └── list_landmarks_inshop.txt -│ │ ├── Eval -│ │ │ └── list_eval_partition.txt -│ │ ├── Img -│ │ │ ├── img -│ │ │ │ ├──XXX.jpg -│ │ │ ├── img_highres -│ │ │ └── ├──XXX.jpg - -``` - -After that you can train the Mask RCNN r50 on DeepFashion-In-shop dataset by launching training with the `mask_rcnn_r50_fpn_1x.py` config -or creating your own config file. - -``` -@inproceedings{liuLQWTcvpr16DeepFashion, - author = {Liu, Ziwei and Luo, Ping and Qiu, Shi and Wang, Xiaogang and Tang, Xiaoou}, - title = {DeepFashion: Powering Robust Clothes Recognition and Retrieval with Rich Annotations}, - booktitle = {Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR)}, - month = {June}, - year = {2016} -} -``` - -## Model Zoo - -| Backbone | Model type | Dataset | bbox detection Average Precision | segmentation Average Precision | Config | Download (Google) | -| :---------: | :----------: | :-----------------: | :--------------------------------: | :----------------------------: | :---------:| :-------------------------: | -| ResNet50 | Mask RCNN | DeepFashion-In-shop | 0.599 | 0.584 |[config](https://github.com/open-mmlab/mmdetection/blob/master/configs/deepfashion/mask_rcnn_r50_fpn_15e_deepfashion.py)| [model](https://drive.google.com/open?id=1q6zF7J6Gb-FFgM87oIORIt6uBozaXp5r) | [log](https://drive.google.com/file/d/1qTK4Dr4FFLa9fkdI6UVko408gkrfTRLP/view?usp=sharing) | diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/core/export/pytorch2onnx.py b/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/core/export/pytorch2onnx.py deleted file mode 100644 index 809a817e67446b3c0c7894dcefb3c4bbc29afb7e..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/core/export/pytorch2onnx.py +++ /dev/null @@ -1,154 +0,0 @@ -from functools import partial - -import mmcv -import numpy as np -import torch -from mmcv.runner import load_checkpoint - - -def generate_inputs_and_wrap_model(config_path, - checkpoint_path, - input_config, - cfg_options=None): - """Prepare sample input and wrap model for ONNX export. - - The ONNX export API only accept args, and all inputs should be - torch.Tensor or corresponding types (such as tuple of tensor). - So we should call this function before exporting. This function will: - - 1. generate corresponding inputs which are used to execute the model. - 2. Wrap the model's forward function. - - For example, the MMDet models' forward function has a parameter - ``return_loss:bool``. As we want to set it as False while export API - supports neither bool type or kwargs. So we have to replace the forward - like: ``model.forward = partial(model.forward, return_loss=False)`` - - Args: - config_path (str): the OpenMMLab config for the model we want to - export to ONNX - checkpoint_path (str): Path to the corresponding checkpoint - input_config (dict): the exactly data in this dict depends on the - framework. For MMSeg, we can just declare the input shape, - and generate the dummy data accordingly. 
However, for MMDet, - we may pass the real img path, or the NMS will return None - as there is no legal bbox. - - Returns: - tuple: (model, tensor_data) wrapped model which can be called by \ - model(*tensor_data) and a list of inputs which are used to execute \ - the model while exporting. - """ - - model = build_model_from_cfg( - config_path, checkpoint_path, cfg_options=cfg_options) - one_img, one_meta = preprocess_example_input(input_config) - tensor_data = [one_img] - model.forward = partial( - model.forward, img_metas=[[one_meta]], return_loss=False) - - # pytorch has some bug in pytorch1.3, we have to fix it - # by replacing these existing op - opset_version = 11 - # put the import within the function thus it will not cause import error - # when not using this function - try: - from mmcv.onnx.symbolic import register_extra_symbolics - except ModuleNotFoundError: - raise NotImplementedError('please update mmcv to version>=v1.0.4') - register_extra_symbolics(opset_version) - - return model, tensor_data - - -def build_model_from_cfg(config_path, checkpoint_path, cfg_options=None): - """Build a model from config and load the given checkpoint. - - Args: - config_path (str): the OpenMMLab config for the model we want to - export to ONNX - checkpoint_path (str): Path to the corresponding checkpoint - - Returns: - torch.nn.Module: the built model - """ - from mmdet.models import build_detector - - cfg = mmcv.Config.fromfile(config_path) - if cfg_options is not None: - cfg.merge_from_dict(cfg_options) - # import modules from string list. - if cfg.get('custom_imports', None): - from mmcv.utils import import_modules_from_strings - import_modules_from_strings(**cfg['custom_imports']) - # set cudnn_benchmark - if cfg.get('cudnn_benchmark', False): - torch.backends.cudnn.benchmark = True - cfg.model.pretrained = None - cfg.data.test.test_mode = True - - # build the model - cfg.model.train_cfg = None - model = build_detector(cfg.model, test_cfg=cfg.get('test_cfg')) - load_checkpoint(model, checkpoint_path, map_location='cpu') - model.cpu().eval() - return model - - -def preprocess_example_input(input_config): - """Prepare an example input image for ``generate_inputs_and_wrap_model``. - - Args: - input_config (dict): customized config describing the example input. - - Returns: - tuple: (one_img, one_meta), tensor of the example input image and \ - meta information for the example input image. 
- - Examples: - >>> from mmdet.core.export import preprocess_example_input - >>> input_config = { - >>> 'input_shape': (1,3,224,224), - >>> 'input_path': 'demo/demo.jpg', - >>> 'normalize_cfg': { - >>> 'mean': (123.675, 116.28, 103.53), - >>> 'std': (58.395, 57.12, 57.375) - >>> } - >>> } - >>> one_img, one_meta = preprocess_example_input(input_config) - >>> print(one_img.shape) - torch.Size([1, 3, 224, 224]) - >>> print(one_meta) - {'img_shape': (224, 224, 3), - 'ori_shape': (224, 224, 3), - 'pad_shape': (224, 224, 3), - 'filename': '.png', - 'scale_factor': 1.0, - 'flip': False} - """ - input_path = input_config['input_path'] - input_shape = input_config['input_shape'] - one_img = mmcv.imread(input_path) - one_img = mmcv.imresize(one_img, input_shape[2:][::-1]) - show_img = one_img.copy() - if 'normalize_cfg' in input_config.keys(): - normalize_cfg = input_config['normalize_cfg'] - mean = np.array(normalize_cfg['mean'], dtype=np.float32) - std = np.array(normalize_cfg['std'], dtype=np.float32) - to_rgb = normalize_cfg.get('to_rgb', True) - one_img = mmcv.imnormalize(one_img, mean, std, to_rgb=to_rgb) - one_img = one_img.transpose(2, 0, 1) - one_img = torch.from_numpy(one_img).unsqueeze(0).float().requires_grad_( - True) - (_, C, H, W) = input_shape - one_meta = { - 'img_shape': (H, W, C), - 'ori_shape': (H, W, C), - 'pad_shape': (H, W, C), - 'filename': '.png', - 'scale_factor': 1.0, - 'flip': False, - 'show_img': show_img, - } - - return one_img, one_meta diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/dense_heads/paa_head.py b/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/dense_heads/paa_head.py deleted file mode 100644 index e067b0121cf8b8230c0c9c6b8cfd41f56be4e298..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/dense_heads/paa_head.py +++ /dev/null @@ -1,671 +0,0 @@ -import numpy as np -import torch -from mmcv.runner import force_fp32 - -from mmdet.core import multi_apply, multiclass_nms -from mmdet.core.bbox.iou_calculators import bbox_overlaps -from mmdet.models import HEADS -from mmdet.models.dense_heads import ATSSHead - -EPS = 1e-12 -try: - import sklearn.mixture as skm -except ImportError: - skm = None - - -def levels_to_images(mlvl_tensor): - """Concat multi-level feature maps by image. - - [feature_level0, feature_level1...] -> [feature_image0, feature_image1...] - Convert the shape of each element in mlvl_tensor from (N, C, H, W) to - (N, H*W , C), then split the element to N elements with shape (H*W, C), and - concat elements in same image of all level along first dimension. - - Args: - mlvl_tensor (list[torch.Tensor]): list of Tensor which collect from - corresponding level. Each element is of shape (N, C, H, W) - - Returns: - list[torch.Tensor]: A list that contains N tensors and each tensor is - of shape (num_elements, C) - """ - batch_size = mlvl_tensor[0].size(0) - batch_list = [[] for _ in range(batch_size)] - channels = mlvl_tensor[0].size(1) - for t in mlvl_tensor: - t = t.permute(0, 2, 3, 1) - t = t.view(batch_size, -1, channels).contiguous() - for img in range(batch_size): - batch_list[img].append(t[img]) - return [torch.cat(item, 0) for item in batch_list] - - -@HEADS.register_module() -class PAAHead(ATSSHead): - """Head of PAAAssignment: Probabilistic Anchor Assignment with IoU - Prediction for Object Detection. - - Code is modified from the `official github repo - `_. - - More details can be found in the `paper - `_ . 
- - Args: - topk (int): Select topk samples with smallest loss in - each level. - score_voting (bool): Whether to use score voting in post-process. - covariance_type : String describing the type of covariance parameters - to be used in :class:`sklearn.mixture.GaussianMixture`. - It must be one of: - - - 'full': each component has its own general covariance matrix - - 'tied': all components share the same general covariance matrix - - 'diag': each component has its own diagonal covariance matrix - - 'spherical': each component has its own single variance - Default: 'diag'. From 'full' to 'spherical', the gmm fitting - process is faster yet the performance could be influenced. For most - cases, 'diag' should be a good choice. - """ - - def __init__(self, - *args, - topk=9, - score_voting=True, - covariance_type='diag', - **kwargs): - # topk used in paa reassign process - self.topk = topk - self.with_score_voting = score_voting - self.covariance_type = covariance_type - super(PAAHead, self).__init__(*args, **kwargs) - - @force_fp32(apply_to=('cls_scores', 'bbox_preds', 'iou_preds')) - def loss(self, - cls_scores, - bbox_preds, - iou_preds, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute losses of the head. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level - Has shape (N, num_anchors * num_classes, H, W) - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level with shape (N, num_anchors * 4, H, W) - iou_preds (list[Tensor]): iou_preds for each scale - level with shape (N, num_anchors * 1, H, W) - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (list[Tensor] | None): Specify which bounding - boxes can be ignored when are computing the loss. - - Returns: - dict[str, Tensor]: A dictionary of loss gmm_assignment. 
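- In practice the returned dict contains ``loss_cls``, ``loss_bbox`` and ``loss_iou``.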
- """ - - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - assert len(featmap_sizes) == self.anchor_generator.num_levels - - device = cls_scores[0].device - anchor_list, valid_flag_list = self.get_anchors( - featmap_sizes, img_metas, device=device) - label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1 - cls_reg_targets = self.get_targets( - anchor_list, - valid_flag_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - label_channels=label_channels, - ) - (labels, labels_weight, bboxes_target, bboxes_weight, pos_inds, - pos_gt_index) = cls_reg_targets - cls_scores = levels_to_images(cls_scores) - cls_scores = [ - item.reshape(-1, self.cls_out_channels) for item in cls_scores - ] - bbox_preds = levels_to_images(bbox_preds) - bbox_preds = [item.reshape(-1, 4) for item in bbox_preds] - iou_preds = levels_to_images(iou_preds) - iou_preds = [item.reshape(-1, 1) for item in iou_preds] - pos_losses_list, = multi_apply(self.get_pos_loss, anchor_list, - cls_scores, bbox_preds, labels, - labels_weight, bboxes_target, - bboxes_weight, pos_inds) - - with torch.no_grad(): - reassign_labels, reassign_label_weight, \ - reassign_bbox_weights, num_pos = multi_apply( - self.paa_reassign, - pos_losses_list, - labels, - labels_weight, - bboxes_weight, - pos_inds, - pos_gt_index, - anchor_list) - num_pos = sum(num_pos) - # convert all tensor list to a flatten tensor - cls_scores = torch.cat(cls_scores, 0).view(-1, cls_scores[0].size(-1)) - bbox_preds = torch.cat(bbox_preds, 0).view(-1, bbox_preds[0].size(-1)) - iou_preds = torch.cat(iou_preds, 0).view(-1, iou_preds[0].size(-1)) - labels = torch.cat(reassign_labels, 0).view(-1) - flatten_anchors = torch.cat( - [torch.cat(item, 0) for item in anchor_list]) - labels_weight = torch.cat(reassign_label_weight, 0).view(-1) - bboxes_target = torch.cat(bboxes_target, - 0).view(-1, bboxes_target[0].size(-1)) - - pos_inds_flatten = ((labels >= 0) - & - (labels < self.num_classes)).nonzero().reshape(-1) - - losses_cls = self.loss_cls( - cls_scores, - labels, - labels_weight, - avg_factor=max(num_pos, len(img_metas))) # avoid num_pos=0 - if num_pos: - pos_bbox_pred = self.bbox_coder.decode( - flatten_anchors[pos_inds_flatten], - bbox_preds[pos_inds_flatten]) - pos_bbox_target = bboxes_target[pos_inds_flatten] - iou_target = bbox_overlaps( - pos_bbox_pred.detach(), pos_bbox_target, is_aligned=True) - losses_iou = self.loss_centerness( - iou_preds[pos_inds_flatten], - iou_target.unsqueeze(-1), - avg_factor=num_pos) - losses_bbox = self.loss_bbox( - pos_bbox_pred, - pos_bbox_target, - iou_target.clamp(min=EPS), - avg_factor=iou_target.sum()) - else: - losses_iou = iou_preds.sum() * 0 - losses_bbox = bbox_preds.sum() * 0 - - return dict( - loss_cls=losses_cls, loss_bbox=losses_bbox, loss_iou=losses_iou) - - def get_pos_loss(self, anchors, cls_score, bbox_pred, label, label_weight, - bbox_target, bbox_weight, pos_inds): - """Calculate loss of all potential positive samples obtained from first - match process. - - Args: - anchors (list[Tensor]): Anchors of each scale. - cls_score (Tensor): Box scores of single image with shape - (num_anchors, num_classes) - bbox_pred (Tensor): Box energies / deltas of single image - with shape (num_anchors, 4) - label (Tensor): classification target of each anchor with - shape (num_anchors,) - label_weight (Tensor): Classification loss weight of each - anchor with shape (num_anchors). 
- bbox_target (dict): Regression target of each anchor with - shape (num_anchors, 4). - bbox_weight (Tensor): Bbox weight of each anchor with shape - (num_anchors, 4). - pos_inds (Tensor): Index of all positive samples got from - first assign process. - - Returns: - Tensor: Losses of all positive samples in single image. - """ - if not len(pos_inds): - return cls_score.new([]), - anchors_all_level = torch.cat(anchors, 0) - pos_scores = cls_score[pos_inds] - pos_bbox_pred = bbox_pred[pos_inds] - pos_label = label[pos_inds] - pos_label_weight = label_weight[pos_inds] - pos_bbox_target = bbox_target[pos_inds] - pos_bbox_weight = bbox_weight[pos_inds] - pos_anchors = anchors_all_level[pos_inds] - pos_bbox_pred = self.bbox_coder.decode(pos_anchors, pos_bbox_pred) - - # to keep loss dimension - loss_cls = self.loss_cls( - pos_scores, - pos_label, - pos_label_weight, - avg_factor=self.loss_cls.loss_weight, - reduction_override='none') - - loss_bbox = self.loss_bbox( - pos_bbox_pred, - pos_bbox_target, - pos_bbox_weight, - avg_factor=self.loss_cls.loss_weight, - reduction_override='none') - - loss_cls = loss_cls.sum(-1) - pos_loss = loss_bbox + loss_cls - return pos_loss, - - def paa_reassign(self, pos_losses, label, label_weight, bbox_weight, - pos_inds, pos_gt_inds, anchors): - """Fit loss to GMM distribution and separate positive, ignore, negative - samples again with GMM model. - - Args: - pos_losses (Tensor): Losses of all positive samples in - single image. - label (Tensor): classification target of each anchor with - shape (num_anchors,) - label_weight (Tensor): Classification loss weight of each - anchor with shape (num_anchors). - bbox_weight (Tensor): Bbox weight of each anchor with shape - (num_anchors, 4). - pos_inds (Tensor): Index of all positive samples got from - first assign process. - pos_gt_inds (Tensor): Gt_index of all positive samples got - from first assign process. - anchors (list[Tensor]): Anchors of each scale. - - Returns: - tuple: Usually returns a tuple containing learning targets. - - - label (Tensor): classification target of each anchor after - paa assign, with shape (num_anchors,) - - label_weight (Tensor): Classification loss weight of each - anchor after paa assign, with shape (num_anchors). - - bbox_weight (Tensor): Bbox weight of each anchor with shape - (num_anchors, 4). - - num_pos (int): The number of positive samples after paa - assign. 
- """ - if not len(pos_inds): - return label, label_weight, bbox_weight, 0 - label = label.clone() - label_weight = label_weight.clone() - bbox_weight = bbox_weight.clone() - num_gt = pos_gt_inds.max() + 1 - num_level = len(anchors) - num_anchors_each_level = [item.size(0) for item in anchors] - num_anchors_each_level.insert(0, 0) - inds_level_interval = np.cumsum(num_anchors_each_level) - pos_level_mask = [] - for i in range(num_level): - mask = (pos_inds >= inds_level_interval[i]) & ( - pos_inds < inds_level_interval[i + 1]) - pos_level_mask.append(mask) - pos_inds_after_paa = [label.new_tensor([])] - ignore_inds_after_paa = [label.new_tensor([])] - for gt_ind in range(num_gt): - pos_inds_gmm = [] - pos_loss_gmm = [] - gt_mask = pos_gt_inds == gt_ind - for level in range(num_level): - level_mask = pos_level_mask[level] - level_gt_mask = level_mask & gt_mask - value, topk_inds = pos_losses[level_gt_mask].topk( - min(level_gt_mask.sum(), self.topk), largest=False) - pos_inds_gmm.append(pos_inds[level_gt_mask][topk_inds]) - pos_loss_gmm.append(value) - pos_inds_gmm = torch.cat(pos_inds_gmm) - pos_loss_gmm = torch.cat(pos_loss_gmm) - # fix gmm need at least two sample - if len(pos_inds_gmm) < 2: - continue - device = pos_inds_gmm.device - pos_loss_gmm, sort_inds = pos_loss_gmm.sort() - pos_inds_gmm = pos_inds_gmm[sort_inds] - pos_loss_gmm = pos_loss_gmm.view(-1, 1).cpu().numpy() - min_loss, max_loss = pos_loss_gmm.min(), pos_loss_gmm.max() - means_init = np.array([min_loss, max_loss]).reshape(2, 1) - weights_init = np.array([0.5, 0.5]) - precisions_init = np.array([1.0, 1.0]).reshape(2, 1, 1) # full - if self.covariance_type == 'spherical': - precisions_init = precisions_init.reshape(2) - elif self.covariance_type == 'diag': - precisions_init = precisions_init.reshape(2, 1) - elif self.covariance_type == 'tied': - precisions_init = np.array([[1.0]]) - if skm is None: - raise ImportError('Please run "pip install sklearn" ' - 'to install sklearn first.') - gmm = skm.GaussianMixture( - 2, - weights_init=weights_init, - means_init=means_init, - precisions_init=precisions_init, - covariance_type=self.covariance_type) - gmm.fit(pos_loss_gmm) - gmm_assignment = gmm.predict(pos_loss_gmm) - scores = gmm.score_samples(pos_loss_gmm) - gmm_assignment = torch.from_numpy(gmm_assignment).to(device) - scores = torch.from_numpy(scores).to(device) - - pos_inds_temp, ignore_inds_temp = self.gmm_separation_scheme( - gmm_assignment, scores, pos_inds_gmm) - pos_inds_after_paa.append(pos_inds_temp) - ignore_inds_after_paa.append(ignore_inds_temp) - - pos_inds_after_paa = torch.cat(pos_inds_after_paa) - ignore_inds_after_paa = torch.cat(ignore_inds_after_paa) - reassign_mask = (pos_inds.unsqueeze(1) != pos_inds_after_paa).all(1) - reassign_ids = pos_inds[reassign_mask] - label[reassign_ids] = self.num_classes - label_weight[ignore_inds_after_paa] = 0 - bbox_weight[reassign_ids] = 0 - num_pos = len(pos_inds_after_paa) - return label, label_weight, bbox_weight, num_pos - - def gmm_separation_scheme(self, gmm_assignment, scores, pos_inds_gmm): - """A general separation scheme for gmm model. - - It separates a GMM distribution of candidate samples into three - parts, 0 1 and uncertain areas, and you can implement other - separation schemes by rewriting this function. - - Args: - gmm_assignment (Tensor): The prediction of GMM which is of shape - (num_samples,). The 0/1 value indicates the distribution - that each sample comes from. - scores (Tensor): The probability of sample coming from the - fit GMM distribution. 
The tensor is of shape (num_samples,). - pos_inds_gmm (Tensor): All the indexes of samples which are used - to fit GMM model. The tensor is of shape (num_samples,) - - Returns: - tuple[Tensor]: The indices of positive and ignored samples. - - - pos_inds_temp (Tensor): Indices of positive samples. - - ignore_inds_temp (Tensor): Indices of ignore samples. - """ - # The implementation is (c) in Fig.3 in origin paper instead of (b). - # You can refer to issues such as - # https://github.com/kkhoot/PAA/issues/8 and - # https://github.com/kkhoot/PAA/issues/9. - fgs = gmm_assignment == 0 - pos_inds_temp = fgs.new_tensor([], dtype=torch.long) - ignore_inds_temp = fgs.new_tensor([], dtype=torch.long) - if fgs.nonzero().numel(): - _, pos_thr_ind = scores[fgs].topk(1) - pos_inds_temp = pos_inds_gmm[fgs][:pos_thr_ind + 1] - ignore_inds_temp = pos_inds_gmm.new_tensor([]) - return pos_inds_temp, ignore_inds_temp - - def get_targets( - self, - anchor_list, - valid_flag_list, - gt_bboxes_list, - img_metas, - gt_bboxes_ignore_list=None, - gt_labels_list=None, - label_channels=1, - unmap_outputs=True, - ): - """Get targets for PAA head. - - This method is almost the same as `AnchorHead.get_targets()`. We direct - return the results from _get_targets_single instead map it to levels - by images_to_levels function. - - Args: - anchor_list (list[list[Tensor]]): Multi level anchors of each - image. The outer list indicates images, and the inner list - corresponds to feature levels of the image. Each element of - the inner list is a tensor of shape (num_anchors, 4). - valid_flag_list (list[list[Tensor]]): Multi level valid flags of - each image. The outer list indicates images, and the inner list - corresponds to feature levels of the image. Each element of - the inner list is a tensor of shape (num_anchors, ) - gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image. - img_metas (list[dict]): Meta info of each image. - gt_bboxes_ignore_list (list[Tensor]): Ground truth bboxes to be - ignored. - gt_labels_list (list[Tensor]): Ground truth labels of each box. - label_channels (int): Channel of label. - unmap_outputs (bool): Whether to map outputs back to the original - set of anchors. - - Returns: - tuple: Usually returns a tuple containing learning targets. - - - labels (list[Tensor]): Labels of all anchors, each with - shape (num_anchors,). - - label_weights (list[Tensor]): Label weights of all anchor. - each with shape (num_anchors,). - - bbox_targets (list[Tensor]): BBox targets of all anchors. - each with shape (num_anchors, 4). - - bbox_weights (list[Tensor]): BBox weights of all anchors. - each with shape (num_anchors, 4). - - pos_inds (list[Tensor]): Contains all index of positive - sample in all anchor. - - gt_inds (list[Tensor]): Contains all gt_index of positive - sample in all anchor. 
- """ - - num_imgs = len(img_metas) - assert len(anchor_list) == len(valid_flag_list) == num_imgs - concat_anchor_list = [] - concat_valid_flag_list = [] - for i in range(num_imgs): - assert len(anchor_list[i]) == len(valid_flag_list[i]) - concat_anchor_list.append(torch.cat(anchor_list[i])) - concat_valid_flag_list.append(torch.cat(valid_flag_list[i])) - - # compute targets for each image - if gt_bboxes_ignore_list is None: - gt_bboxes_ignore_list = [None for _ in range(num_imgs)] - if gt_labels_list is None: - gt_labels_list = [None for _ in range(num_imgs)] - results = multi_apply( - self._get_targets_single, - concat_anchor_list, - concat_valid_flag_list, - gt_bboxes_list, - gt_bboxes_ignore_list, - gt_labels_list, - img_metas, - label_channels=label_channels, - unmap_outputs=unmap_outputs) - - (labels, label_weights, bbox_targets, bbox_weights, valid_pos_inds, - valid_neg_inds, sampling_result) = results - - # Due to valid flag of anchors, we have to calculate the real pos_inds - # in origin anchor set. - pos_inds = [] - for i, single_labels in enumerate(labels): - pos_mask = (0 <= single_labels) & ( - single_labels < self.num_classes) - pos_inds.append(pos_mask.nonzero().view(-1)) - - gt_inds = [item.pos_assigned_gt_inds for item in sampling_result] - return (labels, label_weights, bbox_targets, bbox_weights, pos_inds, - gt_inds) - - def _get_targets_single(self, - flat_anchors, - valid_flags, - gt_bboxes, - gt_bboxes_ignore, - gt_labels, - img_meta, - label_channels=1, - unmap_outputs=True): - """Compute regression and classification targets for anchors in a - single image. - - This method is same as `AnchorHead._get_targets_single()`. - """ - assert unmap_outputs, 'We must map outputs back to the original' \ - 'set of anchors in PAAhead' - return super(ATSSHead, self)._get_targets_single( - flat_anchors, - valid_flags, - gt_bboxes, - gt_bboxes_ignore, - gt_labels, - img_meta, - label_channels=1, - unmap_outputs=True) - - def _get_bboxes(self, - cls_scores, - bbox_preds, - iou_preds, - mlvl_anchors, - img_shapes, - scale_factors, - cfg, - rescale=False, - with_nms=True): - """Transform outputs for a single batch item into labeled boxes. - - This method is almost same as `ATSSHead._get_bboxes()`. - We use sqrt(iou_preds * cls_scores) in NMS process instead of just - cls_scores. Besides, score voting is used when `` score_voting`` - is set to True. 
- """ - assert with_nms, 'PAA only supports "with_nms=True" now' - assert len(cls_scores) == len(bbox_preds) == len(mlvl_anchors) - batch_size = cls_scores[0].shape[0] - - mlvl_bboxes = [] - mlvl_scores = [] - mlvl_iou_preds = [] - for cls_score, bbox_pred, iou_preds, anchors in zip( - cls_scores, bbox_preds, iou_preds, mlvl_anchors): - assert cls_score.size()[-2:] == bbox_pred.size()[-2:] - - scores = cls_score.permute(0, 2, 3, 1).reshape( - batch_size, -1, self.cls_out_channels).sigmoid() - bbox_pred = bbox_pred.permute(0, 2, 3, - 1).reshape(batch_size, -1, 4) - iou_preds = iou_preds.permute(0, 2, 3, 1).reshape(batch_size, - -1).sigmoid() - - nms_pre = cfg.get('nms_pre', -1) - if nms_pre > 0 and scores.shape[1] > nms_pre: - max_scores, _ = (scores * iou_preds[..., None]).sqrt().max(-1) - _, topk_inds = max_scores.topk(nms_pre) - batch_inds = torch.arange(batch_size).view( - -1, 1).expand_as(topk_inds).long() - anchors = anchors[topk_inds, :] - bbox_pred = bbox_pred[batch_inds, topk_inds, :] - scores = scores[batch_inds, topk_inds, :] - iou_preds = iou_preds[batch_inds, topk_inds] - else: - anchors = anchors.expand_as(bbox_pred) - - bboxes = self.bbox_coder.decode( - anchors, bbox_pred, max_shape=img_shapes) - mlvl_bboxes.append(bboxes) - mlvl_scores.append(scores) - mlvl_iou_preds.append(iou_preds) - - batch_mlvl_bboxes = torch.cat(mlvl_bboxes, dim=1) - if rescale: - batch_mlvl_bboxes /= batch_mlvl_bboxes.new_tensor( - scale_factors).unsqueeze(1) - batch_mlvl_scores = torch.cat(mlvl_scores, dim=1) - # Add a dummy background class to the backend when using sigmoid - # remind that we set FG labels to [0, num_class-1] since mmdet v2.0 - # BG cat_id: num_class - padding = batch_mlvl_scores.new_zeros(batch_size, - batch_mlvl_scores.shape[1], 1) - batch_mlvl_scores = torch.cat([batch_mlvl_scores, padding], dim=-1) - batch_mlvl_iou_preds = torch.cat(mlvl_iou_preds, dim=1) - batch_mlvl_nms_scores = (batch_mlvl_scores * - batch_mlvl_iou_preds[..., None]).sqrt() - - det_results = [] - for (mlvl_bboxes, mlvl_scores) in zip(batch_mlvl_bboxes, - batch_mlvl_nms_scores): - det_bbox, det_label = multiclass_nms( - mlvl_bboxes, - mlvl_scores, - cfg.score_thr, - cfg.nms, - cfg.max_per_img, - score_factors=None) - if self.with_score_voting and len(det_bbox) > 0: - det_bbox, det_label = self.score_voting( - det_bbox, det_label, mlvl_bboxes, mlvl_scores, - cfg.score_thr) - det_results.append(tuple([det_bbox, det_label])) - - return det_results - - def score_voting(self, det_bboxes, det_labels, mlvl_bboxes, - mlvl_nms_scores, score_thr): - """Implementation of score voting method works on each remaining boxes - after NMS procedure. - - Args: - det_bboxes (Tensor): Remaining boxes after NMS procedure, - with shape (k, 5), each dimension means - (x1, y1, x2, y2, score). - det_labels (Tensor): The label of remaining boxes, with shape - (k, 1),Labels are 0-based. - mlvl_bboxes (Tensor): All boxes before the NMS procedure, - with shape (num_anchors,4). - mlvl_nms_scores (Tensor): The scores of all boxes which is used - in the NMS procedure, with shape (num_anchors, num_class) - mlvl_iou_preds (Tensor): The predictions of IOU of all boxes - before the NMS procedure, with shape (num_anchors, 1) - score_thr (float): The score threshold of bboxes. - - Returns: - tuple: Usually returns a tuple containing voting results. - - - det_bboxes_voted (Tensor): Remaining boxes after - score voting procedure, with shape (k, 5), each - dimension means (x1, y1, x2, y2, score). 
- - det_labels_voted (Tensor): Label of remaining bboxes - after voting, with shape (num_anchors,). - """ - candidate_mask = mlvl_nms_scores > score_thr - candidate_mask_nonzeros = candidate_mask.nonzero() - candidate_inds = candidate_mask_nonzeros[:, 0] - candidate_labels = candidate_mask_nonzeros[:, 1] - candidate_bboxes = mlvl_bboxes[candidate_inds] - candidate_scores = mlvl_nms_scores[candidate_mask] - det_bboxes_voted = [] - det_labels_voted = [] - for cls in range(self.cls_out_channels): - candidate_cls_mask = candidate_labels == cls - if not candidate_cls_mask.any(): - continue - candidate_cls_scores = candidate_scores[candidate_cls_mask] - candidate_cls_bboxes = candidate_bboxes[candidate_cls_mask] - det_cls_mask = det_labels == cls - det_cls_bboxes = det_bboxes[det_cls_mask].view( - -1, det_bboxes.size(-1)) - det_candidate_ious = bbox_overlaps(det_cls_bboxes[:, :4], - candidate_cls_bboxes) - for det_ind in range(len(det_cls_bboxes)): - single_det_ious = det_candidate_ious[det_ind] - pos_ious_mask = single_det_ious > 0.01 - pos_ious = single_det_ious[pos_ious_mask] - pos_bboxes = candidate_cls_bboxes[pos_ious_mask] - pos_scores = candidate_cls_scores[pos_ious_mask] - pis = (torch.exp(-(1 - pos_ious)**2 / 0.025) * - pos_scores)[:, None] - voted_box = torch.sum( - pis * pos_bboxes, dim=0) / torch.sum( - pis, dim=0) - voted_score = det_cls_bboxes[det_ind][-1:][None, :] - det_bboxes_voted.append( - torch.cat((voted_box[None, :], voted_score), dim=1)) - det_labels_voted.append(cls) - - det_bboxes_voted = torch.cat(det_bboxes_voted, dim=0) - det_labels_voted = det_labels.new_tensor(det_labels_voted) - return det_bboxes_voted, det_labels_voted diff --git a/spaces/GrandaddyShmax/MusicGen_Plus/audiocraft/models/lm.py b/spaces/GrandaddyShmax/MusicGen_Plus/audiocraft/models/lm.py deleted file mode 100644 index c8aad8f06797eef3293605056e1de14d07c56c2a..0000000000000000000000000000000000000000 --- a/spaces/GrandaddyShmax/MusicGen_Plus/audiocraft/models/lm.py +++ /dev/null @@ -1,527 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from dataclasses import dataclass -from functools import partial -import logging -import math -import typing as tp - -import torch -from torch import nn - -from ..utils import utils -from ..modules.streaming import StreamingModule, State -from ..modules.transformer import StreamingTransformer, create_norm_fn -from ..modules.conditioners import ( - ConditionFuser, - ClassifierFreeGuidanceDropout, - AttributeDropout, - ConditioningProvider, - ConditioningAttributes, - ConditionType, -) -from ..modules.codebooks_patterns import CodebooksPatternProvider -from ..modules.activations import get_activation_fn - - -logger = logging.getLogger(__name__) -ConditionTensors = tp.Dict[str, ConditionType] -CFGConditions = tp.Union[ConditionTensors, tp.Tuple[ConditionTensors, ConditionTensors]] - - -def get_init_fn(method: str, input_dim: int, init_depth: tp.Optional[int] = None): - """LM layer initialization. - Inspired from xlformers: https://github.com/fairinternal/xlformers - - Args: - method (str): Method name for init function. Valid options are: - 'gaussian', 'uniform'. - input_dim (int): Input dimension of the initialized module. - init_depth (Optional[int]): Optional init depth value used to rescale - the standard deviation if defined. 
- """ - # Compute std - std = 1 / math.sqrt(input_dim) - # Rescale with depth - if init_depth is not None: - std = std / math.sqrt(2 * init_depth) - - if method == 'gaussian': - return partial( - torch.nn.init.trunc_normal_, mean=0.0, std=std, a=-3 * std, b=3 * std - ) - elif method == 'uniform': - bound = math.sqrt(3) * std # ensure the standard deviation is `std` - return partial(torch.nn.init.uniform_, a=-bound, b=bound) - else: - raise ValueError("Unsupported layer initialization method") - - -def init_layer(m: nn.Module, - method: str, - init_depth: tp.Optional[int] = None, - zero_bias_init: bool = False): - """Wrapper around ``get_init_fn`` for proper initialization of LM modules. - - Args: - m (nn.Module): Module to initialize. - method (str): Method name for the init function. - init_depth (Optional[int]): Optional init depth value used to rescale - the standard deviation if defined. - zero_bias_init (bool): Whether to initialize the bias to 0 or not. - """ - if isinstance(m, nn.Linear): - init_fn = get_init_fn(method, m.in_features, init_depth=init_depth) - if m.weight.device.type == 'cpu' and m.weight.dtype == torch.float16: - weight = m.weight.float() - init_fn(weight) - m.weight.data[:] = weight.half() - else: - init_fn(m.weight) - if zero_bias_init and m.bias is not None: - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.Embedding): - init_fn = get_init_fn(method, m.embedding_dim, init_depth=None) - if m.weight.device.type == 'cpu' and m.weight.dtype == torch.float16: - weight = m.weight.float() - init_fn(weight) - m.weight.data[:] = weight.half() - else: - init_fn(m.weight) - - -class ScaledEmbedding(nn.Embedding): - """Boost learning rate for embeddings (with `scale`). - """ - def __init__(self, *args, lr=None, **kwargs): - super().__init__(*args, **kwargs) - self.lr = lr - - def make_optim_group(self): - group = {"params": list(self.parameters())} - if self.lr is not None: - group["lr"] = self.lr - return group - - -@dataclass -class LMOutput: - # The logits are already re-aligned with the input codes - # hence no extra shift is required, e.g. when computing CE - logits: torch.Tensor # [B, K, T, card] - mask: torch.Tensor # [B, K, T] - - -class LMModel(StreamingModule): - """Transformer-based language model on multiple streams of codes. - - Args: - pattern_provider (CodebooksPatternProvider): Pattern provider for codebook interleaving. - condition_provider (MusicConditioningProvider): Conditioning provider from metadata. - fuser (ConditionFuser): Fuser handling the fusing of conditions with language model input. - n_q (int): Number of parallel streams to model. - card (int): Cardinality, vocabulary size. - dim (int): Dimension of the transformer encoder. - num_heads (int): Number of heads for the transformer encoder. - hidden_scale (int): Scale for hidden feed forward dimension of the transformer encoder. - norm (str): Normalization method. - norm_first (bool): Use pre-norm instead of post-norm. - emb_lr (Optional[float]): Embedding-specific learning rate. - bias_proj (bool): Use bias for output projections. - weight_init (Optional[str]): Method for weight initialization. - depthwise_init (Optional[str]): Method for depthwise weight initialization. - zero_bias_init (bool): If true and bias in Linears, initialize bias to zeros. - cfg_dropout (float): Classifier-free guidance dropout. - cfg_coef (float): Classifier-free guidance coefficient. - attribute_dropout (dict): Attribute dropout probabilities. 
- two_step_cfg (bool): Whether to run classifier free-guidance with 2 distinct steps. - **kwargs: Additional parameters for the transformer encoder. - """ - def __init__(self, pattern_provider: CodebooksPatternProvider, condition_provider: ConditioningProvider, - fuser: ConditionFuser, n_q: int = 8, card: int = 1024, dim: int = 128, num_heads: int = 8, - hidden_scale: int = 4, norm: str = 'layer_norm', norm_first: bool = False, - emb_lr: tp.Optional[float] = None, bias_proj: bool = True, - weight_init: tp.Optional[str] = None, depthwise_init: tp.Optional[str] = None, - zero_bias_init: bool = False, cfg_dropout: float = 0, cfg_coef: float = 1.0, - attribute_dropout: tp.Dict[str, tp.Dict[str, float]] = {}, two_step_cfg: bool = False, - **kwargs): - super().__init__() - self.cfg_coef = cfg_coef - self.cfg_dropout = ClassifierFreeGuidanceDropout(p=cfg_dropout) - self.att_dropout = AttributeDropout(p=attribute_dropout) - self.condition_provider = condition_provider - self.fuser = fuser - self.card = card - embed_dim = self.card + 1 - self.n_q = n_q - self.dim = dim - self.pattern_provider = pattern_provider - self.two_step_cfg = two_step_cfg - self.emb = nn.ModuleList([ScaledEmbedding(embed_dim, dim, lr=emb_lr) for _ in range(n_q)]) - if 'activation' in kwargs: - kwargs['activation'] = get_activation_fn(kwargs['activation']) - self.transformer = StreamingTransformer( - d_model=dim, num_heads=num_heads, dim_feedforward=int(hidden_scale * dim), - norm=norm, norm_first=norm_first, **kwargs) - self.out_norm: tp.Optional[nn.Module] = None - if norm_first: - self.out_norm = create_norm_fn(norm, dim) - self.linears = nn.ModuleList([nn.Linear(dim, self.card, bias=bias_proj) for _ in range(n_q)]) - self._init_weights(weight_init, depthwise_init, zero_bias_init) - self._fsdp: tp.Optional[nn.Module] - self.__dict__['_fsdp'] = None - - def _init_weights(self, weight_init: tp.Optional[str], depthwise_init: tp.Optional[str], zero_bias_init: bool): - """Initialization of the transformer module weights. - - Args: - weight_init (Optional[str]): Weight initialization strategy. See ``get_init_fn`` for valid options. - depthwise_init (Optional[str]): Depwthwise initialization strategy. The following options are valid: - 'current' where the depth corresponds to the current layer index or 'global' where the total number - of layer is used as depth. If not set, no depthwise initialization strategy is used. - zero_bias_init (bool): Whether to initalize bias to zero or not. - """ - assert depthwise_init is None or depthwise_init in ['current', 'global'] - assert depthwise_init is None or weight_init is not None, \ - "If 'depthwise_init' is defined, a 'weight_init' method should be provided." 
- assert not zero_bias_init or weight_init is not None, \ - "If 'zero_bias_init', a 'weight_init' method should be provided" - - if weight_init is None: - return - - for emb_layer in self.emb: - init_layer(emb_layer, method=weight_init, init_depth=None, zero_bias_init=zero_bias_init) - - for layer_idx, tr_layer in enumerate(self.transformer.layers): - depth = None - if depthwise_init == 'current': - depth = layer_idx + 1 - elif depthwise_init == 'global': - depth = len(self.transformer.layers) - init_fn = partial(init_layer, method=weight_init, init_depth=depth, zero_bias_init=zero_bias_init) - tr_layer.apply(init_fn) - - for linear in self.linears: - init_layer(linear, method=weight_init, init_depth=None, zero_bias_init=zero_bias_init) - - @property - def special_token_id(self) -> int: - return self.card - - @property - def num_codebooks(self) -> int: - return self.n_q - - def forward(self, sequence: torch.Tensor, - conditions: tp.List[ConditioningAttributes], - condition_tensors: tp.Optional[ConditionTensors] = None) -> torch.Tensor: - """Apply language model on sequence and conditions. - Given a tensor of sequence of shape [B, K, S] with K the number of codebooks and - S the sequence steps, return the logits with shape [B, card, K, S]. - - Args: - indices (torch.Tensor): indices of the codes to model. - conditions (list[ConditioningAttributes]): conditionings to use when modeling - the given codes. Note that when evaluating multiple time with the same conditioning - you should pre-compute those and pass them as `condition_tensors`. - condition_tensors (dict[str, ConditionType] or None): pre-computed conditioning - tensors, see `conditions`. - Returns: - torch.Tensor: Logits. - """ - B, K, S = sequence.shape - assert K == self.num_codebooks, 'Sequence shape must match the specified number of codebooks' - input_ = sum([self.emb[k](sequence[:, k]) for k in range(K)]) - if condition_tensors is None: - assert not self._is_streaming, "Conditions tensors should be precomputed when streaming." - # apply dropout modules - conditions = self.cfg_dropout(conditions) - conditions = self.att_dropout(conditions) - tokenized = self.condition_provider.tokenize(conditions) - # encode conditions and fuse, both have a streaming cache to not recompute when generating. - condition_tensors = self.condition_provider(tokenized) - else: - assert not conditions, "Shouldn't pass both conditions and condition_tensors." - - input_, cross_attention_input = self.fuser(input_, condition_tensors) - - out = self.transformer(input_, cross_attention_src=cross_attention_input) - if self.out_norm: - out = self.out_norm(out) - logits = torch.stack([self.linears[k](out) for k in range(K)], dim=1) # [B, K, S, card] - - # remove the prefix from the model outputs - if len(self.fuser.fuse2cond['prepend']) > 0: - logits = logits[:, :, -S:] - - return logits # [B, K, S, card] - - def compute_predictions( - self, codes: torch.Tensor, - conditions: tp.List[ConditioningAttributes], - condition_tensors: tp.Optional[ConditionTensors] = None) -> LMOutput: - """Given an input tensor of codes [B, K, T] and list of conditions, runs the model - forward using the specified codes interleaving pattern. - - Args: - codes (torch.Tensor): Input codes of shape [B, K, T] with B the batch size, - K the number of codebooks and T the number of timesteps. - conditions (list[ConditioningAttributes]): conditionings to use when modeling - the given codes. 
Note that when evaluating multiple time with the same conditioning - you should pre-compute those and pass them as `condition_tensors`. - condition_tensors (dict[str, ConditionType] or None): pre-computed conditioning - tensors, see `conditions`. - Returns: - LMOutput: Language model outputs - logits (torch.Tensor) of shape [B, K, T, card] corresponding to the provided codes, - i.e. the first item corresponds to logits to predict the first code, meaning that - no additional shifting of codes and logits is required. - mask (torch.Tensor) of shape [B, K, T], mask over valid and invalid positions. - Given the specified interleaving strategies, parts of the logits and codes should - not be considered as valid predictions because of invalid context. - """ - B, K, T = codes.shape - codes = codes.contiguous() - # map codes [B, K, T] into pattern sequence [B, K, S] using special_token_id for masked tokens - pattern = self.pattern_provider.get_pattern(T) - sequence_codes, sequence_indexes, sequence_mask = pattern.build_pattern_sequence( - codes, self.special_token_id, keep_only_valid_steps=True - ) - # apply model on pattern sequence - model = self if self._fsdp is None else self._fsdp - logits = model(sequence_codes, conditions, condition_tensors) # [B, K, S, card] - # map back the logits on pattern sequence to logits on original codes: [B, K, S, card] -> [B, K, T, card] - # and provide the corresponding mask over invalid positions of tokens - logits = logits.permute(0, 3, 1, 2) # [B, card, K, S] - # note: we use nans as special token to make it obvious if we feed unexpected logits - logits, logits_indexes, logits_mask = pattern.revert_pattern_logits( - logits, float('nan'), keep_only_valid_steps=True - ) - logits = logits.permute(0, 2, 3, 1) # [B, K, T, card] - logits_mask = logits_mask[None, :, :].expand(B, -1, -1) # [K, T] -> [B, K, T] - return LMOutput(logits, logits_mask) - - def _sample_next_token(self, - sequence: torch.Tensor, - cfg_conditions: CFGConditions, - unconditional_state: State, - use_sampling: bool = False, - temp: float = 1.0, - top_k: int = 0, - top_p: float = 0.0, - cfg_coef: tp.Optional[float] = None) -> torch.Tensor: - """Sample next token from the model given a sequence and a set of conditions. The model supports - multiple sampling strategies (greedy sampling, softmax, top-k, top-p...). - - Args: - sequence (torch.Tensor): Current sequence of shape [B, K, S] - with K corresponding to the number of codebooks and S the number of sequence steps. - S = 1 in streaming mode, except for the first step that contains a bigger prompt. - condition_tensors (Dict[str, ConditionType): Set of conditions. If CFG is used, - should be twice the batch size, being the concatenation of the conditions + null conditions. - use_sampling (bool): Whether to use a sampling strategy or not. - temp (float): Sampling temperature. - top_k (int): K for "top-k" sampling. - top_p (float): P for "top-p" sampling. - cfg_coef (float): classifier free guidance coefficient - Returns: - next_token (torch.Tensor): Next token tensor of shape [B, K, 1]. 
- """ - B = sequence.shape[0] - cfg_coef = self.cfg_coef if cfg_coef is None else cfg_coef - model = self if self._fsdp is None else self._fsdp - if self.two_step_cfg and cfg_conditions != {}: - assert isinstance(cfg_conditions, tuple) - condition_tensors, null_condition_tensors = cfg_conditions - cond_logits = model(sequence, conditions=[], condition_tensors=condition_tensors) - state = self.get_streaming_state() - self.set_streaming_state(unconditional_state) - uncond_logits = model(sequence, conditions=[], condition_tensors=null_condition_tensors) - unconditional_state.update(self.get_streaming_state()) - self.set_streaming_state(state) - logits = uncond_logits + (cond_logits - uncond_logits) * self.cfg_coef - else: - assert isinstance(cfg_conditions, dict) - condition_tensors = cfg_conditions - if condition_tensors: - # Preparing for CFG, predicting both conditional and unconditional logits. - sequence = torch.cat([sequence, sequence], dim=0) - all_logits = model( - sequence, - conditions=[], condition_tensors=condition_tensors) - if condition_tensors: - cond_logits, uncond_logits = all_logits.split(B, dim=0) # [B, K, T, card] - logits = uncond_logits + (cond_logits - uncond_logits) * cfg_coef - else: - logits = all_logits - - logits = logits.permute(0, 1, 3, 2) # [B, K, card, T] - logits = logits[..., -1] # [B x K x card] - - # Apply softmax for sampling if temp > 0. Else, do greedy sampling to avoid zero division error. - if use_sampling and temp > 0.0: - probs = torch.softmax(logits / temp, dim=-1) - if top_p > 0.0: - next_token = utils.sample_top_p(probs, p=top_p) - elif top_k > 0: - next_token = utils.sample_top_k(probs, k=top_k) - else: - next_token = utils.multinomial(probs, num_samples=1) - else: - next_token = torch.argmax(logits, dim=-1, keepdim=True) - - return next_token - - @torch.no_grad() - def generate(self, - prompt: tp.Optional[torch.Tensor] = None, - conditions: tp.List[ConditioningAttributes] = [], - num_samples: tp.Optional[int] = None, - max_gen_len: int = 256, - use_sampling: bool = True, - temp: float = 1.0, - top_k: int = 250, - top_p: float = 0.0, - cfg_coef: tp.Optional[float] = None, - two_step_cfg: bool = False, - remove_prompts: bool = False, - check: bool = False, - callback: tp.Optional[tp.Callable[[int, int], None]] = None) -> torch.Tensor: - """Generate tokens sampling from the model given a prompt or unconditionally. Generation can - be perform in a greedy fashion or using sampling with top K and top P strategies. - - Args: - prompt (Optional[torch.Tensor]): Prompt tokens of shape [B, K, T]. - conditions_tensors (Dict[str, torch.Tensor]): Set of conditions or None. - num_samples (int or None): Number of samples to generate when no prompt and no conditions are given. - max_gen_len (int): Maximum generation length. - use_sampling (bool): Whether to use a sampling strategy or not. - temp (float): Sampling temperature. - top_k (int): K for "top-k" sampling. - top_p (float): P for "top-p" sampling. - remove_prompts (bool): Whether to remove prompts from generation or not. - Returns: - torch.Tensor: Generated tokens. - """ - assert not self.training, "generation shouldn't be used in training mode." - first_param = next(iter(self.parameters())) - device = first_param.device - - # Checking all input shapes are consistents. 
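- # The effective batch size is taken from `num_samples` when given, otherwise
- # inferred from the prompt batch dimension, then from the number of conditioning
- # attributes, and finally defaults to 1.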
-        possible_num_samples = []
-        if num_samples is not None:
-            possible_num_samples.append(num_samples)
-        elif prompt is not None:
-            possible_num_samples.append(prompt.shape[0])
-        elif conditions:
-            possible_num_samples.append(len(conditions))
-        else:
-            possible_num_samples.append(1)
-        assert all(x == possible_num_samples[0] for x in possible_num_samples), "Inconsistent input shapes"
-        num_samples = possible_num_samples[0]
-
-        # Below we create the set of conditions: one conditional and one unconditional
-        # to do that we merge the regular condition together with the null condition
-        # we then do 1 forward pass instead of 2.
-        # the reason for that is two-fold:
-        # 1. it is about x2 faster than doing 2 forward passes
-        # 2. avoid the streaming API treating the 2 passes as part of different time steps
-        # We also support doing two different passes, in particular to ensure that
-        # the padding structure is exactly the same between train and test.
-        # With a batch size of 1, this can be slower though.
-        cfg_conditions: CFGConditions
-        two_step_cfg = self.two_step_cfg if two_step_cfg is None else two_step_cfg
-        if conditions:
-            null_conditions = ClassifierFreeGuidanceDropout(p=1.0)(conditions)
-            if two_step_cfg:
-                cfg_conditions = (
-                    self.condition_provider(self.condition_provider.tokenize(conditions)),
-                    self.condition_provider(self.condition_provider.tokenize(null_conditions)),
-                )
-            else:
-                conditions = conditions + null_conditions
-                tokenized = self.condition_provider.tokenize(conditions)
-                cfg_conditions = self.condition_provider(tokenized)
-        else:
-            cfg_conditions = {}
-
-        if prompt is None:
-            assert num_samples > 0
-            prompt = torch.zeros((num_samples, self.num_codebooks, 0), dtype=torch.long, device=device)
-
-        B, K, T = prompt.shape
-        start_offset = T
-        assert start_offset < max_gen_len
-
-        pattern = self.pattern_provider.get_pattern(max_gen_len)
-        # this token is used as default value for codes that are not generated yet
-        unknown_token = -1
-
-        # we generate codes up to the max_gen_len that will be mapped to the pattern sequence
-        gen_codes = torch.full((B, K, max_gen_len), unknown_token, dtype=torch.long, device=device)
-        # filling the gen_codes with the prompt if needed
-        gen_codes[..., :start_offset] = prompt
-        # create the gen_sequence with proper interleaving from the pattern: [B, K, S]
-        gen_sequence, indexes, mask = pattern.build_pattern_sequence(gen_codes, self.special_token_id)
-        # retrieve the start_offset in the sequence:
-        # it is the first sequence step that contains the `start_offset` timestep
-        start_offset_sequence = pattern.get_first_step_with_timesteps(start_offset)
-        assert start_offset_sequence is not None
-
-        with self.streaming():
-            unconditional_state = self.get_streaming_state()
-            prev_offset = 0
-            gen_sequence_len = gen_sequence.shape[-1]  # gen_sequence shape is [B, K, S]
-            for offset in range(start_offset_sequence, gen_sequence_len):
-                # get current sequence (note that the streaming API is providing the caching over previous offsets)
-                curr_sequence = gen_sequence[..., prev_offset:offset]
-                curr_mask = mask[None, ..., prev_offset:offset].expand(B, -1, -1)
-                if check:
-                    # check coherence between mask and sequence
-                    assert (curr_sequence == torch.where(curr_mask, curr_sequence, self.special_token_id)).all()
-                    # should never happen as gen_sequence is filled progressively
-                    assert not (curr_sequence == unknown_token).any()
-                # sample next token from the model, next token shape is [B, K, 1]
-                next_token = self._sample_next_token(
-                    curr_sequence, cfg_conditions,
unconditional_state, use_sampling, temp, top_k, top_p, - cfg_coef=cfg_coef) - # ensure the tokens that should be masked are properly set to special_token_id - # as the model never output special_token_id - valid_mask = mask[..., offset:offset+1].expand(B, -1, -1) - next_token[~valid_mask] = self.special_token_id - # ensure we don't overwrite prompt tokens, we only write over unknown tokens - # (then mask tokens should be left as is as well, which is correct) - gen_sequence[..., offset:offset+1] = torch.where( - gen_sequence[..., offset:offset+1] == unknown_token, - next_token, gen_sequence[..., offset:offset+1] - ) - prev_offset = offset - if callback is not None: - callback(1 + offset - start_offset_sequence, gen_sequence_len - start_offset_sequence) - unconditional_state.clear() - - # ensure sequence has been entirely filled - assert not (gen_sequence == unknown_token).any() - # ensure gen_sequence pattern and mask are matching - # which means the gen_sequence is valid according to the pattern - assert ( - gen_sequence == torch.where(mask[None, ...].expand(B, -1, -1), gen_sequence, self.special_token_id) - ).all() - # get back the codes, trimming the prompt if needed and cutting potentially incomplete timesteps - out_codes, out_indexes, out_mask = pattern.revert_pattern_sequence(gen_sequence, special_token=unknown_token) - - # sanity checks over the returned codes and corresponding masks - assert (out_codes[..., :max_gen_len] != unknown_token).all() - assert (out_mask[..., :max_gen_len] == 1).all() - - out_start_offset = start_offset if remove_prompts else 0 - out_codes = out_codes[..., out_start_offset:max_gen_len] - - # ensure the returned codes are all valid - assert (out_codes >= 0).all() and (out_codes <= self.card).all() - return out_codes diff --git a/spaces/HIT-TMG/dialogue-bart-large-chinese-DuSinc/app.py b/spaces/HIT-TMG/dialogue-bart-large-chinese-DuSinc/app.py deleted file mode 100644 index 95a69e19ba022d63c50de9441a90a82247b24c55..0000000000000000000000000000000000000000 --- a/spaces/HIT-TMG/dialogue-bart-large-chinese-DuSinc/app.py +++ /dev/null @@ -1,70 +0,0 @@ -import gradio as gr -from typing import List, Optional -from transformers import BertTokenizer, BartForConditionalGeneration - -title = "HIT-TMG/dialogue-bart-large-chinese-DuSinc" -description = """ -This is a fine-tuned version of HIT-TMG/dialogue-bart-large-chinese on the DuSinc dataset. -But it only has chit-chat ability without knowledge since we haven't introduced knowledge retrieval interface yet.\n -See some details of model card at https://huggingface.co/HIT-TMG/dialogue-bart-large-chinese-DuSinc . \n\n -Besides starting the conversation from scratch, you can also input the whole dialogue history utterance by utterance seperated by '[SEP]'. 
\n -""" - - -tokenizer = BertTokenizer.from_pretrained("HIT-TMG/dialogue-bart-large-chinese-DuSinc") -model = BartForConditionalGeneration.from_pretrained("HIT-TMG/dialogue-bart-large-chinese-DuSinc") - -tokenizer.truncation_side = 'left' -max_length = 512 - -examples = [ - ["你有什么爱好吗"], - ["你好。[SEP]嘿嘿你好,请问你最近在忙什么呢?[SEP]我最近养了一只狗狗,我在训练它呢。"] -] - - -def chat_func(input_utterance: str, history: Optional[List[str]] = None): - if history is not None: - history.extend(input_utterance.split(tokenizer.sep_token)) - else: - history = input_utterance.split(tokenizer.sep_token) - - history_str = "[history] " + tokenizer.sep_token.join(history) - - input_ids = tokenizer(history_str, - return_tensors='pt', - truncation=True, - max_length=max_length, - ).input_ids - - output_ids = model.generate(input_ids, - max_new_tokens=30, - top_k=32, - num_beams=4, - repetition_penalty=1.2, - no_repeat_ngram_size=4)[0] - - response = tokenizer.decode(output_ids, skip_special_tokens=True) - - history.append(response) - - - if len(history) % 2 == 0: - display_utterances = [(history[i], history[i + 1]) for i in range(0, len(history) - 1, 2)] - else: - display_utterances = [("", history[0])] + [(history[i], history[i + 1]) for i in range(1, len(history) - 1, 2)] - - return display_utterances, history - - -demo = gr.Interface(fn=chat_func, - title=title, - description=description, - inputs=[gr.Textbox(lines=1, placeholder="Input current utterance"), "state"], - examples=examples, - outputs=["chatbot", "state"]) - - -if __name__ == "__main__": - demo.launch() - diff --git a/spaces/HaHaBill/LandShapes-Antarctica/netdissect/aceplotablate.py b/spaces/HaHaBill/LandShapes-Antarctica/netdissect/aceplotablate.py deleted file mode 100644 index 585195eaf973760a7d78e4da6539c343049141de..0000000000000000000000000000000000000000 --- a/spaces/HaHaBill/LandShapes-Antarctica/netdissect/aceplotablate.py +++ /dev/null @@ -1,54 +0,0 @@ -import os, sys, argparse, json, shutil -from matplotlib.backends.backend_agg import FigureCanvasAgg as FigureCanvas -from matplotlib.figure import Figure -from matplotlib.ticker import MaxNLocator -import matplotlib - -def main(): - parser = argparse.ArgumentParser(description='ACE optimization utility', - prog='python -m netdissect.aceoptimize') - parser.add_argument('--classname', type=str, default=None, - help='intervention classname') - parser.add_argument('--layer', type=str, default='layer4', - help='layer name') - parser.add_argument('--outdir', type=str, default=None, - help='dissection directory') - parser.add_argument('--metric', type=str, default=None, - help='experiment variant') - args = parser.parse_args() - - if args.metric is None: - args.metric = 'ace' - - run_command(args) - -def run_command(args): - fig = Figure(figsize=(4.5,3.5)) - FigureCanvas(fig) - ax = fig.add_subplot(111) - for metric in [args.metric, 'iou']: - jsonname = os.path.join(args.outdir, args.layer, 'fullablation', - '%s-%s.json' % (args.classname, metric)) - with open(jsonname) as f: - summary = json.load(f) - baseline = summary['baseline'] - effects = summary['ablation_effects'][:26] - norm_effects = [0] + [1.0 - e / baseline for e in effects] - ax.plot(norm_effects, label= - 'Units by ACE' if 'ace' in metric else 'Top units by IoU') - ax.set_title('Effect of ablating units for %s' % (args.classname)) - ax.grid(True) - ax.legend() - ax.set_ylabel('Portion of %s pixels removed' % args.classname) - ax.set_xlabel('Number of units ablated') - ax.set_ylim(0, 1.0) - ax.set_xlim(0, 25) - fig.tight_layout() - dirname = 
os.path.join(args.outdir, args.layer, 'fullablation') - fig.savefig(os.path.join(dirname, 'effect-%s-%s.png' % - (args.classname, args.metric))) - fig.savefig(os.path.join(dirname, 'effect-%s-%s.pdf' % - (args.classname, args.metric))) - -if __name__ == '__main__': - main() diff --git a/spaces/Haitangtangtangtang/AnimeBackgroundGAN/network/__init__.py b/spaces/Haitangtangtangtang/AnimeBackgroundGAN/network/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/clib/libbleu/libbleu.cpp b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/clib/libbleu/libbleu.cpp deleted file mode 100644 index 939d9e1174e398fa48c840009b592c753a67939a..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/clib/libbleu/libbleu.cpp +++ /dev/null @@ -1,157 +0,0 @@ -/** - * Copyright 2017-present, Facebook, Inc. - * All rights reserved. - * - * This source code is licensed under the license found in the - * LICENSE file in the root directory of this source tree. - */ - -#include -#include -#include -#include - -// NOLINTNEXTLINE -typedef struct { - size_t reflen; - size_t predlen; - size_t match1; - size_t count1; - size_t match2; - size_t count2; - size_t match3; - size_t count3; - size_t match4; - size_t count4; -} bleu_stat; - -// left trim (remove pad) -void bleu_ltrim(size_t* len, int** sent, int pad) { - size_t start = 0; - while (start < *len) { - if (*(*sent + start) != pad) { - break; - } - start++; - } - *sent += start; - *len -= start; -} - -// right trim remove (eos) -void bleu_rtrim(size_t* len, int** sent, int pad, int eos) { - size_t end = *len - 1; - while (end > 0) { - if (*(*sent + end) != eos && *(*sent + end) != pad) { - break; - } - end--; - } - *len = end + 1; -} - -// left and right trim -void bleu_trim(size_t* len, int** sent, int pad, int eos) { - bleu_ltrim(len, sent, pad); - bleu_rtrim(len, sent, pad, eos); -} - -size_t bleu_hash(int len, int* data) { - size_t h = 14695981039346656037ul; - size_t prime = 0x100000001b3; - char* b = (char*)data; - size_t blen = sizeof(int) * len; - - while (blen-- > 0) { - h ^= *b++; - h *= prime; - } - - return h; -} - -void bleu_addngram( - size_t* ntotal, - size_t* nmatch, - size_t n, - size_t reflen, - int* ref, - size_t predlen, - int* pred) { - if (predlen < n) { - return; - } - - predlen = predlen - n + 1; - (*ntotal) += predlen; - - if (reflen < n) { - return; - } - - reflen = reflen - n + 1; - - std::map count; - while (predlen > 0) { - size_t w = bleu_hash(n, pred++); - count[w]++; - predlen--; - } - - while (reflen > 0) { - size_t w = bleu_hash(n, ref++); - if (count[w] > 0) { - (*nmatch)++; - count[w] -= 1; - } - reflen--; - } -} - -extern "C" { - -#ifdef _WIN64 -__declspec(dllexport) -#endif - void bleu_zero_init(bleu_stat* stat) { - std::memset(stat, 0, sizeof(bleu_stat)); -} - -#ifdef _WIN64 -__declspec(dllexport) -#endif - void bleu_one_init(bleu_stat* stat) { - bleu_zero_init(stat); - stat->count1 = 0; - stat->count2 = 1; - stat->count3 = 1; - stat->count4 = 1; - stat->match1 = 0; - stat->match2 = 1; - stat->match3 = 1; - stat->match4 = 1; -} - -#ifdef _WIN64 -__declspec(dllexport) -#endif - void bleu_add( - bleu_stat* stat, - size_t reflen, - int* ref, - size_t predlen, - int* pred, - int pad, - int eos) { - - bleu_trim(&reflen, &ref, pad, eos); - bleu_trim(&predlen, &pred, pad, eos); - stat->reflen += reflen; - stat->predlen += predlen; - - 
bleu_addngram(&stat->count1, &stat->match1, 1, reflen, ref, predlen, pred); - bleu_addngram(&stat->count2, &stat->match2, 2, reflen, ref, predlen, pred); - bleu_addngram(&stat->count3, &stat->match3, 3, reflen, ref, predlen, pred); - bleu_addngram(&stat->count4, &stat->match4, 4, reflen, ref, predlen, pred); -} -} diff --git a/spaces/Harveenchadha/Hindi_TTS/vakyansh_tts/src/glow_tts/monotonic_align/monotonic_align/__init__.py b/spaces/Harveenchadha/Hindi_TTS/vakyansh_tts/src/glow_tts/monotonic_align/monotonic_align/__init__.py deleted file mode 100644 index 47a4dbf3177302af6b8e7d08b0b78343b1329efa..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/Hindi_TTS/vakyansh_tts/src/glow_tts/monotonic_align/monotonic_align/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -import pkg_resources - -__version__ = pkg_resources.get_distribution("monotonic_align").version - -from monotonic_align.mas import * diff --git a/spaces/Harveenchadha/oiTrans/subword-nmt/subword_nmt/subword_nmt.py b/spaces/Harveenchadha/oiTrans/subword-nmt/subword_nmt/subword_nmt.py deleted file mode 100644 index 29104f4d8029524a80d6fa649b69a8acec0b8abc..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/oiTrans/subword-nmt/subword_nmt/subword_nmt.py +++ /dev/null @@ -1,97 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- - -import io -import sys -import codecs -import argparse - -from .learn_bpe import learn_bpe -from .apply_bpe import BPE, read_vocabulary -from .get_vocab import get_vocab -from .learn_joint_bpe_and_vocab import learn_joint_bpe_and_vocab - -from .learn_bpe import create_parser as create_learn_bpe_parser -from .apply_bpe import create_parser as create_apply_bpe_parser -from .get_vocab import create_parser as create_get_vocab_parser -from .learn_joint_bpe_and_vocab import create_parser as create_learn_joint_bpe_and_vocab_parser - -# hack for python2/3 compatibility -argparse.open = io.open - -def main(): - parser = argparse.ArgumentParser( - formatter_class=argparse.RawTextHelpFormatter, - description="subword-nmt: unsupervised word segmentation for neural machine translation and text generation ") - subparsers = parser.add_subparsers(dest='command', - help="""command to run. Run one of the commands with '-h' for more info. - -learn-bpe: learn BPE merge operations on input text. -apply-bpe: apply given BPE operations to input text. -get-vocab: extract vocabulary and word frequencies from input text. 
-learn-joint-bpe-and-vocab: executes recommended workflow for joint BPE.""") - - learn_bpe_parser = create_learn_bpe_parser(subparsers) - apply_bpe_parser = create_apply_bpe_parser(subparsers) - get_vocab_parser = create_get_vocab_parser(subparsers) - learn_joint_bpe_and_vocab_parser = create_learn_joint_bpe_and_vocab_parser(subparsers) - - args = parser.parse_args() - - if args.command == 'learn-bpe': - # read/write files as UTF-8 - if args.input.name != '': - args.input = codecs.open(args.input.name, encoding='utf-8') - if args.output.name != '': - args.output = codecs.open(args.output.name, 'w', encoding='utf-8') - - learn_bpe(args.input, args.output, args.symbols, args.min_frequency, args.verbose, - is_dict=args.dict_input, total_symbols=args.total_symbols) - elif args.command == 'apply-bpe': - # read/write files as UTF-8 - args.codes = codecs.open(args.codes.name, encoding='utf-8') - if args.input.name != '': - args.input = codecs.open(args.input.name, encoding='utf-8') - if args.output.name != '': - args.output = codecs.open(args.output.name, 'w', encoding='utf-8') - if args.vocabulary: - args.vocabulary = codecs.open(args.vocabulary.name, encoding='utf-8') - - if args.vocabulary: - vocabulary = read_vocabulary(args.vocabulary, args.vocabulary_threshold) - else: - vocabulary = None - - if sys.version_info < (3, 0): - args.separator = args.separator.decode('UTF-8') - if args.glossaries: - args.glossaries = [g.decode('UTF-8') for g in args.glossaries] - - bpe = BPE(args.codes, args.merges, args.separator, vocabulary, args.glossaries) - - for line in args.input: - args.output.write(bpe.process_line(line, args.dropout)) - - elif args.command == 'get-vocab': - if args.input.name != '': - args.input = codecs.open(args.input.name, encoding='utf-8') - if args.output.name != '': - args.output = codecs.open(args.output.name, 'w', encoding='utf-8') - get_vocab(args.input, args.output) - elif args.command == 'learn-joint-bpe-and-vocab': - learn_joint_bpe_and_vocab(args) - if sys.version_info < (3, 0): - args.separator = args.separator.decode('UTF-8') - else: - raise Exception('Invalid command provided') - - -# python 2/3 compatibility -if sys.version_info < (3, 0): - sys.stderr = codecs.getwriter('UTF-8')(sys.stderr) - sys.stdout = codecs.getwriter('UTF-8')(sys.stdout) - sys.stdin = codecs.getreader('UTF-8')(sys.stdin) -else: - sys.stderr = codecs.getwriter('UTF-8')(sys.stderr.buffer) - sys.stdout = codecs.getwriter('UTF-8')(sys.stdout.buffer) - sys.stdin = codecs.getreader('UTF-8')(sys.stdin.buffer) diff --git a/spaces/ICML2022/OFA/fairseq/examples/speech_recognition/kaldi/kaldi_decoder.py b/spaces/ICML2022/OFA/fairseq/examples/speech_recognition/kaldi/kaldi_decoder.py deleted file mode 100644 index 5f62cc58ae8c0c5a3ba7d17713fedf0abc302942..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/speech_recognition/kaldi/kaldi_decoder.py +++ /dev/null @@ -1,244 +0,0 @@ -#!/usr/bin/env python3 - -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
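-
-# Illustrative usage sketch (an assumption added for clarity, not part of the
-# original file): build a config pointing at a compiled HLG graph and an output
-# word dictionary, then decode acoustic-model emissions. The paths below are
-# placeholders; `decode` returns one future per utterance.
-#
-#     cfg = KaldiDecoderConfig(hlg_graph_path="graph/HLG.fst", output_dict="words.txt")
-#     decoder = KaldiDecoder(cfg, beam=15, nbest=1)
-#     futures = decoder.decode(emissions, padding)
-#     hypos = [f.result() for f in futures]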
- -from concurrent.futures import ThreadPoolExecutor -import logging -from omegaconf import MISSING -import os -import torch -from typing import Optional -import warnings - - -from dataclasses import dataclass -from fairseq.dataclass import FairseqDataclass -from .kaldi_initializer import KaldiInitializerConfig, initalize_kaldi - - -logger = logging.getLogger(__name__) - - -@dataclass -class KaldiDecoderConfig(FairseqDataclass): - hlg_graph_path: Optional[str] = None - output_dict: str = MISSING - - kaldi_initializer_config: Optional[KaldiInitializerConfig] = None - - acoustic_scale: float = 0.5 - max_active: int = 10000 - beam_delta: float = 0.5 - hash_ratio: float = 2.0 - - is_lattice: bool = False - lattice_beam: float = 10.0 - prune_interval: int = 25 - determinize_lattice: bool = True - prune_scale: float = 0.1 - max_mem: int = 0 - phone_determinize: bool = True - word_determinize: bool = True - minimize: bool = True - - num_threads: int = 1 - - -class KaldiDecoder(object): - def __init__( - self, - cfg: KaldiDecoderConfig, - beam: int, - nbest: int = 1, - ): - try: - from kaldi.asr import FasterRecognizer, LatticeFasterRecognizer - from kaldi.base import set_verbose_level - from kaldi.decoder import ( - FasterDecoder, - FasterDecoderOptions, - LatticeFasterDecoder, - LatticeFasterDecoderOptions, - ) - from kaldi.lat.functions import DeterminizeLatticePhonePrunedOptions - from kaldi.fstext import read_fst_kaldi, SymbolTable - except: - warnings.warn( - "pykaldi is required for this functionality. Please install from https://github.com/pykaldi/pykaldi" - ) - - # set_verbose_level(2) - - self.acoustic_scale = cfg.acoustic_scale - self.nbest = nbest - - if cfg.hlg_graph_path is None: - assert ( - cfg.kaldi_initializer_config is not None - ), "Must provide hlg graph path or kaldi initializer config" - cfg.hlg_graph_path = initalize_kaldi(cfg.kaldi_initializer_config) - - assert os.path.exists(cfg.hlg_graph_path), cfg.hlg_graph_path - - if cfg.is_lattice: - self.dec_cls = LatticeFasterDecoder - opt_cls = LatticeFasterDecoderOptions - self.rec_cls = LatticeFasterRecognizer - else: - assert self.nbest == 1, "nbest > 1 requires lattice decoder" - self.dec_cls = FasterDecoder - opt_cls = FasterDecoderOptions - self.rec_cls = FasterRecognizer - - self.decoder_options = opt_cls() - self.decoder_options.beam = beam - self.decoder_options.max_active = cfg.max_active - self.decoder_options.beam_delta = cfg.beam_delta - self.decoder_options.hash_ratio = cfg.hash_ratio - - if cfg.is_lattice: - self.decoder_options.lattice_beam = cfg.lattice_beam - self.decoder_options.prune_interval = cfg.prune_interval - self.decoder_options.determinize_lattice = cfg.determinize_lattice - self.decoder_options.prune_scale = cfg.prune_scale - det_opts = DeterminizeLatticePhonePrunedOptions() - det_opts.max_mem = cfg.max_mem - det_opts.phone_determinize = cfg.phone_determinize - det_opts.word_determinize = cfg.word_determinize - det_opts.minimize = cfg.minimize - self.decoder_options.det_opts = det_opts - - self.output_symbols = {} - with open(cfg.output_dict, "r") as f: - for line in f: - items = line.rstrip().split() - assert len(items) == 2 - self.output_symbols[int(items[1])] = items[0] - - logger.info(f"Loading FST from {cfg.hlg_graph_path}") - self.fst = read_fst_kaldi(cfg.hlg_graph_path) - self.symbol_table = SymbolTable.read_text(cfg.output_dict) - - self.executor = ThreadPoolExecutor(max_workers=cfg.num_threads) - - def generate(self, models, sample, **unused): - """Generate a batch of inferences.""" - # 
model.forward normally channels prev_output_tokens into the decoder - # separately, but SequenceGenerator directly calls model.encoder - encoder_input = { - k: v for k, v in sample["net_input"].items() if k != "prev_output_tokens" - } - emissions, padding = self.get_emissions(models, encoder_input) - return self.decode(emissions, padding) - - def get_emissions(self, models, encoder_input): - """Run encoder and normalize emissions""" - model = models[0] - - all_encoder_out = [m(**encoder_input) for m in models] - - if len(all_encoder_out) > 1: - - if "encoder_out" in all_encoder_out[0]: - encoder_out = { - "encoder_out": sum(e["encoder_out"] for e in all_encoder_out) - / len(all_encoder_out), - "encoder_padding_mask": all_encoder_out[0]["encoder_padding_mask"], - } - padding = encoder_out["encoder_padding_mask"] - else: - encoder_out = { - "logits": sum(e["logits"] for e in all_encoder_out) - / len(all_encoder_out), - "padding_mask": all_encoder_out[0]["padding_mask"], - } - padding = encoder_out["padding_mask"] - else: - encoder_out = all_encoder_out[0] - padding = ( - encoder_out["padding_mask"] - if "padding_mask" in encoder_out - else encoder_out["encoder_padding_mask"] - ) - - if hasattr(model, "get_logits"): - emissions = model.get_logits(encoder_out, normalize=True) - else: - emissions = model.get_normalized_probs(encoder_out, log_probs=True) - - return ( - emissions.cpu().float().transpose(0, 1), - padding.cpu() if padding is not None and padding.any() else None, - ) - - def decode_one(self, logits, padding): - from kaldi.matrix import Matrix - - decoder = self.dec_cls(self.fst, self.decoder_options) - asr = self.rec_cls( - decoder, self.symbol_table, acoustic_scale=self.acoustic_scale - ) - - if padding is not None: - logits = logits[~padding] - - mat = Matrix(logits.numpy()) - - out = asr.decode(mat) - - if self.nbest > 1: - from kaldi.fstext import shortestpath - from kaldi.fstext.utils import ( - convert_compact_lattice_to_lattice, - convert_lattice_to_std, - convert_nbest_to_list, - get_linear_symbol_sequence, - ) - - lat = out["lattice"] - - sp = shortestpath(lat, nshortest=self.nbest) - - sp = convert_compact_lattice_to_lattice(sp) - sp = convert_lattice_to_std(sp) - seq = convert_nbest_to_list(sp) - - results = [] - for s in seq: - _, o, w = get_linear_symbol_sequence(s) - words = list(self.output_symbols[z] for z in o) - results.append( - { - "tokens": words, - "words": words, - "score": w.value, - "emissions": logits, - } - ) - return results - else: - words = out["text"].split() - return [ - { - "tokens": words, - "words": words, - "score": out["likelihood"], - "emissions": logits, - } - ] - - def decode(self, emissions, padding): - if padding is None: - padding = [None] * len(emissions) - - ret = list( - map( - lambda e, p: self.executor.submit(self.decode_one, e, p), - emissions, - padding, - ) - ) - return ret diff --git a/spaces/ICML2022/resefa/models/pggan_discriminator.py b/spaces/ICML2022/resefa/models/pggan_discriminator.py deleted file mode 100644 index 30b0868dd6a753ba7f2712c10b4f19708b67eee3..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/resefa/models/pggan_discriminator.py +++ /dev/null @@ -1,465 +0,0 @@ -# python3.7 -"""Contains the implementation of discriminator described in PGGAN. 
- -Paper: https://arxiv.org/pdf/1710.10196.pdf - -Official TensorFlow implementation: -https://github.com/tkarras/progressive_growing_of_gans -""" - -import numpy as np - -import torch -import torch.nn as nn -import torch.nn.functional as F - -__all__ = ['PGGANDiscriminator'] - -# Resolutions allowed. -_RESOLUTIONS_ALLOWED = [8, 16, 32, 64, 128, 256, 512, 1024] - -# Default gain factor for weight scaling. -_WSCALE_GAIN = np.sqrt(2.0) - -# pylint: disable=missing-function-docstring - -class PGGANDiscriminator(nn.Module): - """Defines the discriminator network in PGGAN. - - NOTE: The discriminator takes images with `RGB` channel order and pixel - range [-1, 1] as inputs. - - Settings for the network: - - (1) resolution: The resolution of the input image. - (2) init_res: Smallest resolution of the convolutional backbone. - (default: 4) - (3) image_channels: Number of channels of the input image. (default: 3) - (4) label_dim: Dimension of the additional label for conditional generation. - In one-hot conditioning case, it is equal to the number of classes. If - set to 0, conditioning training will be disabled. (default: 0) - (5) fused_scale: Whether to fused `conv2d` and `downsample` together, - resulting in `conv2d` with strides. (default: False) - (6) use_wscale: Whether to use weight scaling. (default: True) - (7) wscale_gain: The factor to control weight scaling. (default: sqrt(2.0)) - (8) mbstd_groups: Group size for the minibatch standard deviation layer. - `0` means disable. (default: 16) - (9) fmaps_base: Factor to control number of feature maps for each layer. - (default: 16 << 10) - (10) fmaps_max: Maximum number of feature maps in each layer. (default: 512) - (11) eps: A small value to avoid divide overflow. (default: 1e-8) - """ - - def __init__(self, - resolution, - init_res=4, - image_channels=3, - label_dim=0, - fused_scale=False, - use_wscale=True, - wscale_gain=np.sqrt(2.0), - mbstd_groups=16, - fmaps_base=16 << 10, - fmaps_max=512, - eps=1e-8): - """Initializes with basic settings. - - Raises: - ValueError: If the `resolution` is not supported. - """ - super().__init__() - - if resolution not in _RESOLUTIONS_ALLOWED: - raise ValueError(f'Invalid resolution: `{resolution}`!\n' - f'Resolutions allowed: {_RESOLUTIONS_ALLOWED}.') - - self.init_res = init_res - self.init_res_log2 = int(np.log2(self.init_res)) - self.resolution = resolution - self.final_res_log2 = int(np.log2(self.resolution)) - self.image_channels = image_channels - self.label_dim = label_dim - self.fused_scale = fused_scale - self.use_wscale = use_wscale - self.wscale_gain = wscale_gain - self.mbstd_groups = mbstd_groups - self.fmaps_base = fmaps_base - self.fmaps_max = fmaps_max - self.eps = eps - - # Level-of-details (used for progressive training). - self.register_buffer('lod', torch.zeros(())) - self.pth_to_tf_var_mapping = {'lod': 'lod'} - - for res_log2 in range(self.final_res_log2, self.init_res_log2 - 1, -1): - res = 2 ** res_log2 - in_channels = self.get_nf(res) - out_channels = self.get_nf(res // 2) - block_idx = self.final_res_log2 - res_log2 - - # Input convolution layer for each resolution. 
- self.add_module( - f'input{block_idx}', - ConvLayer(in_channels=self.image_channels, - out_channels=in_channels, - kernel_size=1, - add_bias=True, - downsample=False, - fused_scale=False, - use_wscale=use_wscale, - wscale_gain=wscale_gain, - activation_type='lrelu')) - self.pth_to_tf_var_mapping[f'input{block_idx}.weight'] = ( - f'FromRGB_lod{block_idx}/weight') - self.pth_to_tf_var_mapping[f'input{block_idx}.bias'] = ( - f'FromRGB_lod{block_idx}/bias') - - # Convolution block for each resolution (except the last one). - if res != self.init_res: - self.add_module( - f'layer{2 * block_idx}', - ConvLayer(in_channels=in_channels, - out_channels=in_channels, - kernel_size=3, - add_bias=True, - downsample=False, - fused_scale=False, - use_wscale=use_wscale, - wscale_gain=wscale_gain, - activation_type='lrelu')) - tf_layer0_name = 'Conv0' - self.add_module( - f'layer{2 * block_idx + 1}', - ConvLayer(in_channels=in_channels, - out_channels=out_channels, - kernel_size=3, - add_bias=True, - downsample=True, - fused_scale=fused_scale, - use_wscale=use_wscale, - wscale_gain=wscale_gain, - activation_type='lrelu')) - tf_layer1_name = 'Conv1_down' if fused_scale else 'Conv1' - - # Convolution block for last resolution. - else: - self.mbstd = MiniBatchSTDLayer(groups=mbstd_groups, eps=eps) - self.add_module( - f'layer{2 * block_idx}', - ConvLayer( - in_channels=in_channels + 1, - out_channels=in_channels, - kernel_size=3, - add_bias=True, - downsample=False, - fused_scale=False, - use_wscale=use_wscale, - wscale_gain=wscale_gain, - activation_type='lrelu')) - tf_layer0_name = 'Conv' - self.add_module( - f'layer{2 * block_idx + 1}', - DenseLayer(in_channels=in_channels * res * res, - out_channels=out_channels, - add_bias=True, - use_wscale=use_wscale, - wscale_gain=wscale_gain, - activation_type='lrelu')) - tf_layer1_name = 'Dense0' - - self.pth_to_tf_var_mapping[f'layer{2 * block_idx}.weight'] = ( - f'{res}x{res}/{tf_layer0_name}/weight') - self.pth_to_tf_var_mapping[f'layer{2 * block_idx}.bias'] = ( - f'{res}x{res}/{tf_layer0_name}/bias') - self.pth_to_tf_var_mapping[f'layer{2 * block_idx + 1}.weight'] = ( - f'{res}x{res}/{tf_layer1_name}/weight') - self.pth_to_tf_var_mapping[f'layer{2 * block_idx + 1}.bias'] = ( - f'{res}x{res}/{tf_layer1_name}/bias') - - # Final dense layer. 
- self.output = DenseLayer(in_channels=out_channels, - out_channels=1 + self.label_dim, - add_bias=True, - use_wscale=self.use_wscale, - wscale_gain=1.0, - activation_type='linear') - self.pth_to_tf_var_mapping['output.weight'] = ( - f'{res}x{res}/Dense1/weight') - self.pth_to_tf_var_mapping['output.bias'] = ( - f'{res}x{res}/Dense1/bias') - - def get_nf(self, res): - """Gets number of feature maps according to the given resolution.""" - return min(self.fmaps_base // res, self.fmaps_max) - - def forward(self, image, lod=None): - expected_shape = (self.image_channels, self.resolution, self.resolution) - if image.ndim != 4 or image.shape[1:] != expected_shape: - raise ValueError(f'The input tensor should be with shape ' - f'[batch_size, channel, height, width], where ' - f'`channel` equals to {self.image_channels}, ' - f'`height`, `width` equal to {self.resolution}!\n' - f'But `{image.shape}` is received!') - - lod = self.lod.item() if lod is None else lod - if lod + self.init_res_log2 > self.final_res_log2: - raise ValueError(f'Maximum level-of-details (lod) is ' - f'{self.final_res_log2 - self.init_res_log2}, ' - f'but `{lod}` is received!') - - lod = self.lod.item() - for res_log2 in range(self.final_res_log2, self.init_res_log2 - 1, -1): - block_idx = current_lod = self.final_res_log2 - res_log2 - if current_lod <= lod < current_lod + 1: - x = getattr(self, f'input{block_idx}')(image) - elif current_lod - 1 < lod < current_lod: - alpha = lod - np.floor(lod) - y = getattr(self, f'input{block_idx}')(image) - x = y * alpha + x * (1 - alpha) - if lod < current_lod + 1: - if res_log2 == self.init_res_log2: - x = self.mbstd(x) - x = getattr(self, f'layer{2 * block_idx}')(x) - x = getattr(self, f'layer{2 * block_idx + 1}')(x) - if lod > current_lod: - image = F.avg_pool2d( - image, kernel_size=2, stride=2, padding=0) - x = self.output(x) - - return {'score': x} - - -class MiniBatchSTDLayer(nn.Module): - """Implements the minibatch standard deviation layer.""" - - def __init__(self, groups, eps): - super().__init__() - self.groups = groups - self.eps = eps - - def extra_repr(self): - return f'groups={self.groups}, epsilon={self.eps}' - - def forward(self, x): - if self.groups <= 1: - return x - - N, C, H, W = x.shape - G = min(self.groups, N) # Number of groups. - - y = x.reshape(G, -1, C, H, W) # [GnCHW] - y = y - y.mean(dim=0) # [GnCHW] - y = y.square().mean(dim=0) # [nCHW] - y = (y + self.eps).sqrt() # [nCHW] - y = y.mean(dim=(1, 2, 3), keepdim=True) # [n111] - y = y.repeat(G, 1, H, W) # [N1HW] - x = torch.cat([x, y], dim=1) # [N(C+1)HW] - - return x - - -class DownsamplingLayer(nn.Module): - """Implements the downsampling layer. - - Basically, this layer can be used to downsample feature maps with average - pooling. - """ - - def __init__(self, scale_factor): - super().__init__() - self.scale_factor = scale_factor - - def extra_repr(self): - return f'factor={self.scale_factor}' - - def forward(self, x): - if self.scale_factor <= 1: - return x - return F.avg_pool2d(x, - kernel_size=self.scale_factor, - stride=self.scale_factor, - padding=0) - - -class ConvLayer(nn.Module): - """Implements the convolutional layer. - - Basically, this layer executes convolution, activation, and downsampling (if - needed) in sequence. - """ - - def __init__(self, - in_channels, - out_channels, - kernel_size, - add_bias, - downsample, - fused_scale, - use_wscale, - wscale_gain, - activation_type): - """Initializes with layer settings. - - Args: - in_channels: Number of channels of the input tensor. 
- out_channels: Number of channels of the output tensor. - kernel_size: Size of the convolutional kernels. - add_bias: Whether to add bias onto the convolutional result. - downsample: Whether to downsample the result after convolution. - fused_scale: Whether to fused `conv2d` and `downsample` together, - resulting in `conv2d` with strides. - use_wscale: Whether to use weight scaling. - wscale_gain: Gain factor for weight scaling. - activation_type: Type of activation. - """ - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.add_bias = add_bias - self.downsample = downsample - self.fused_scale = fused_scale - self.use_wscale = use_wscale - self.wscale_gain = wscale_gain - self.activation_type = activation_type - - if downsample and not fused_scale: - self.down = DownsamplingLayer(scale_factor=2) - else: - self.down = nn.Identity() - - if downsample and fused_scale: - self.use_stride = True - self.stride = 2 - self.padding = 1 - else: - self.use_stride = False - self.stride = 1 - self.padding = kernel_size // 2 - - weight_shape = (out_channels, in_channels, kernel_size, kernel_size) - fan_in = kernel_size * kernel_size * in_channels - wscale = wscale_gain / np.sqrt(fan_in) - if use_wscale: - self.weight = nn.Parameter(torch.randn(*weight_shape)) - self.wscale = wscale - else: - self.weight = nn.Parameter(torch.randn(*weight_shape) * wscale) - self.wscale = 1.0 - - if add_bias: - self.bias = nn.Parameter(torch.zeros(out_channels)) - else: - self.bias = None - - assert activation_type in ['linear', 'relu', 'lrelu'] - - def extra_repr(self): - return (f'in_ch={self.in_channels}, ' - f'out_ch={self.out_channels}, ' - f'ksize={self.kernel_size}, ' - f'wscale_gain={self.wscale_gain:.3f}, ' - f'bias={self.add_bias}, ' - f'downsample={self.scale_factor}, ' - f'fused_scale={self.fused_scale}, ' - f'act={self.activation_type}') - - def forward(self, x): - weight = self.weight - if self.wscale != 1.0: - weight = weight * self.wscale - - if self.use_stride: - weight = F.pad(weight, (1, 1, 1, 1, 0, 0, 0, 0), 'constant', 0.0) - weight = (weight[:, :, 1:, 1:] + weight[:, :, :-1, 1:] + - weight[:, :, 1:, :-1] + weight[:, :, :-1, :-1]) * 0.25 - x = F.conv2d(x, - weight=weight, - bias=self.bias, - stride=self.stride, - padding=self.padding) - - if self.activation_type == 'linear': - pass - elif self.activation_type == 'relu': - x = F.relu(x, inplace=True) - elif self.activation_type == 'lrelu': - x = F.leaky_relu(x, negative_slope=0.2, inplace=True) - else: - raise NotImplementedError(f'Not implemented activation type ' - f'`{self.activation_type}`!') - x = self.down(x) - - return x - - -class DenseLayer(nn.Module): - """Implements the dense layer.""" - - def __init__(self, - in_channels, - out_channels, - add_bias, - use_wscale, - wscale_gain, - activation_type): - """Initializes with layer settings. - - Args: - in_channels: Number of channels of the input tensor. - out_channels: Number of channels of the output tensor. - add_bias: Whether to add bias onto the fully-connected result. - use_wscale: Whether to use weight scaling. - wscale_gain: Gain factor for weight scaling. - activation_type: Type of activation. - - Raises: - NotImplementedError: If the `activation_type` is not supported. 
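-
-        Example:
-            A rough sketch of the equalized learning-rate ("wscale") trick this layer
-            applies when `use_wscale` is enabled; weights are stored at unit scale and
-            rescaled on every forward pass:
-
-                wscale = wscale_gain / np.sqrt(in_channels)
-                x = F.linear(x, weight=self.weight * wscale, bias=self.bias)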
- """ - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.add_bias = add_bias - self.use_wscale = use_wscale - self.wscale_gain = wscale_gain - self.activation_type = activation_type - - weight_shape = (out_channels, in_channels) - wscale = wscale_gain / np.sqrt(in_channels) - if use_wscale: - self.weight = nn.Parameter(torch.randn(*weight_shape)) - self.wscale = wscale - else: - self.weight = nn.Parameter(torch.randn(*weight_shape) * wscale) - self.wscale = 1.0 - - if add_bias: - self.bias = nn.Parameter(torch.zeros(out_channels)) - else: - self.bias = None - - assert activation_type in ['linear', 'relu', 'lrelu'] - - def forward(self, x): - if x.ndim != 2: - x = x.flatten(start_dim=1) - - weight = self.weight - if self.wscale != 1.0: - weight = weight * self.wscale - - x = F.linear(x, weight=weight, bias=self.bias) - - if self.activation_type == 'linear': - pass - elif self.activation_type == 'relu': - x = F.relu(x, inplace=True) - elif self.activation_type == 'lrelu': - x = F.leaky_relu(x, negative_slope=0.2, inplace=True) - else: - raise NotImplementedError(f'Not implemented activation type ' - f'`{self.activation_type}`!') - - return x - -# pylint: enable=missing-function-docstring diff --git a/spaces/Illumotion/Koboldcpp/examples/grammar-parser.h b/spaces/Illumotion/Koboldcpp/examples/grammar-parser.h deleted file mode 100644 index 9037d72728a42ed772f384f3d7ddcef01d0d15f5..0000000000000000000000000000000000000000 --- a/spaces/Illumotion/Koboldcpp/examples/grammar-parser.h +++ /dev/null @@ -1,29 +0,0 @@ -// Implements a parser for an extended Backus-Naur form (BNF), producing the -// binary context-free grammar format specified by llama.h. Supports character -// ranges, grouping, and repetition operators. 
As an example, a grammar for -// arithmetic might look like: -// -// root ::= expr -// expr ::= term ([-+*/] term)* -// term ::= num | "(" space expr ")" space -// num ::= [0-9]+ space -// space ::= [ \t\n]* - -#pragma once -#include "llama.h" -#include -#include -#include -#include - -namespace grammar_parser { - struct parse_state { - std::map symbol_ids; - std::vector> rules; - - std::vector c_rules(); - }; - - parse_state parse(const char * src); - void print_grammar(FILE * file, const parse_state & state); -} diff --git a/spaces/JUNGU/VToonify/vtoonify/model/stylegan/non_leaking.py b/spaces/JUNGU/VToonify/vtoonify/model/stylegan/non_leaking.py deleted file mode 100644 index d0447535fed22d3ad4ac719b2b5ac6b7c58e6435..0000000000000000000000000000000000000000 --- a/spaces/JUNGU/VToonify/vtoonify/model/stylegan/non_leaking.py +++ /dev/null @@ -1,469 +0,0 @@ -import math - -import torch -from torch import autograd -from torch.nn import functional as F -import numpy as np - -from model.stylegan.distributed import reduce_sum -from model.stylegan.op import upfirdn2d - - -class AdaptiveAugment: - def __init__(self, ada_aug_target, ada_aug_len, update_every, device): - self.ada_aug_target = ada_aug_target - self.ada_aug_len = ada_aug_len - self.update_every = update_every - - self.ada_update = 0 - self.ada_aug_buf = torch.tensor([0.0, 0.0], device=device) - self.r_t_stat = 0 - self.ada_aug_p = 0 - - @torch.no_grad() - def tune(self, real_pred): - self.ada_aug_buf += torch.tensor( - (torch.sign(real_pred).sum().item(), real_pred.shape[0]), - device=real_pred.device, - ) - self.ada_update += 1 - - if self.ada_update % self.update_every == 0: - self.ada_aug_buf = reduce_sum(self.ada_aug_buf) - pred_signs, n_pred = self.ada_aug_buf.tolist() - - self.r_t_stat = pred_signs / n_pred - - if self.r_t_stat > self.ada_aug_target: - sign = 1 - - else: - sign = -1 - - self.ada_aug_p += sign * n_pred / self.ada_aug_len - self.ada_aug_p = min(1, max(0, self.ada_aug_p)) - self.ada_aug_buf.mul_(0) - self.ada_update = 0 - - return self.ada_aug_p - - -SYM6 = ( - 0.015404109327027373, - 0.0034907120842174702, - -0.11799011114819057, - -0.048311742585633, - 0.4910559419267466, - 0.787641141030194, - 0.3379294217276218, - -0.07263752278646252, - -0.021060292512300564, - 0.04472490177066578, - 0.0017677118642428036, - -0.007800708325034148, -) - - -def translate_mat(t_x, t_y, device="cpu"): - batch = t_x.shape[0] - - mat = torch.eye(3, device=device).unsqueeze(0).repeat(batch, 1, 1) - translate = torch.stack((t_x, t_y), 1) - mat[:, :2, 2] = translate - - return mat - - -def rotate_mat(theta, device="cpu"): - batch = theta.shape[0] - - mat = torch.eye(3, device=device).unsqueeze(0).repeat(batch, 1, 1) - sin_t = torch.sin(theta) - cos_t = torch.cos(theta) - rot = torch.stack((cos_t, -sin_t, sin_t, cos_t), 1).view(batch, 2, 2) - mat[:, :2, :2] = rot - - return mat - - -def scale_mat(s_x, s_y, device="cpu"): - batch = s_x.shape[0] - - mat = torch.eye(3, device=device).unsqueeze(0).repeat(batch, 1, 1) - mat[:, 0, 0] = s_x - mat[:, 1, 1] = s_y - - return mat - - -def translate3d_mat(t_x, t_y, t_z): - batch = t_x.shape[0] - - mat = torch.eye(4).unsqueeze(0).repeat(batch, 1, 1) - translate = torch.stack((t_x, t_y, t_z), 1) - mat[:, :3, 3] = translate - - return mat - - -def rotate3d_mat(axis, theta): - batch = theta.shape[0] - - u_x, u_y, u_z = axis - - eye = torch.eye(3).unsqueeze(0) - cross = torch.tensor([(0, -u_z, u_y), (u_z, 0, -u_x), (-u_y, u_x, 0)]).unsqueeze(0) - outer = torch.tensor(axis) - outer = 
(outer.unsqueeze(1) * outer).unsqueeze(0) - - sin_t = torch.sin(theta).view(-1, 1, 1) - cos_t = torch.cos(theta).view(-1, 1, 1) - - rot = cos_t * eye + sin_t * cross + (1 - cos_t) * outer - - eye_4 = torch.eye(4).unsqueeze(0).repeat(batch, 1, 1) - eye_4[:, :3, :3] = rot - - return eye_4 - - -def scale3d_mat(s_x, s_y, s_z): - batch = s_x.shape[0] - - mat = torch.eye(4).unsqueeze(0).repeat(batch, 1, 1) - mat[:, 0, 0] = s_x - mat[:, 1, 1] = s_y - mat[:, 2, 2] = s_z - - return mat - - -def luma_flip_mat(axis, i): - batch = i.shape[0] - - eye = torch.eye(4).unsqueeze(0).repeat(batch, 1, 1) - axis = torch.tensor(axis + (0,)) - flip = 2 * torch.ger(axis, axis) * i.view(-1, 1, 1) - - return eye - flip - - -def saturation_mat(axis, i): - batch = i.shape[0] - - eye = torch.eye(4).unsqueeze(0).repeat(batch, 1, 1) - axis = torch.tensor(axis + (0,)) - axis = torch.ger(axis, axis) - saturate = axis + (eye - axis) * i.view(-1, 1, 1) - - return saturate - - -def lognormal_sample(size, mean=0, std=1, device="cpu"): - return torch.empty(size, device=device).log_normal_(mean=mean, std=std) - - -def category_sample(size, categories, device="cpu"): - category = torch.tensor(categories, device=device) - sample = torch.randint(high=len(categories), size=(size,), device=device) - - return category[sample] - - -def uniform_sample(size, low, high, device="cpu"): - return torch.empty(size, device=device).uniform_(low, high) - - -def normal_sample(size, mean=0, std=1, device="cpu"): - return torch.empty(size, device=device).normal_(mean, std) - - -def bernoulli_sample(size, p, device="cpu"): - return torch.empty(size, device=device).bernoulli_(p) - - -def random_mat_apply(p, transform, prev, eye, device="cpu"): - size = transform.shape[0] - select = bernoulli_sample(size, p, device=device).view(size, 1, 1) - select_transform = select * transform + (1 - select) * eye - - return select_transform @ prev - - -def sample_affine(p, size, height, width, device="cpu"): - G = torch.eye(3, device=device).unsqueeze(0).repeat(size, 1, 1) - eye = G - - # flip - param = category_sample(size, (0, 1)) - Gc = scale_mat(1 - 2.0 * param, torch.ones(size), device=device) - G = random_mat_apply(p, Gc, G, eye, device=device) - # print('flip', G, scale_mat(1 - 2.0 * param, torch.ones(size)), sep='\n') - - # 90 rotate - #param = category_sample(size, (0, 3)) - #Gc = rotate_mat(-math.pi / 2 * param, device=device) - #G = random_mat_apply(p, Gc, G, eye, device=device) - # print('90 rotate', G, rotate_mat(-math.pi / 2 * param), sep='\n') - - # integer translate - param = uniform_sample(size, -0.125, 0.125) - param_height = torch.round(param * height) / height - param_width = torch.round(param * width) / width - Gc = translate_mat(param_width, param_height, device=device) - G = random_mat_apply(p, Gc, G, eye, device=device) - # print('integer translate', G, translate_mat(param_width, param_height), sep='\n') - - # isotropic scale - param = lognormal_sample(size, std=0.2 * math.log(2)) - Gc = scale_mat(param, param, device=device) - G = random_mat_apply(p, Gc, G, eye, device=device) - # print('isotropic scale', G, scale_mat(param, param), sep='\n') - - p_rot = 1 - math.sqrt(1 - p) - - # pre-rotate - param = uniform_sample(size, -math.pi, math.pi) - Gc = rotate_mat(-param, device=device) - G = random_mat_apply(p_rot, Gc, G, eye, device=device) - # print('pre-rotate', G, rotate_mat(-param), sep='\n') - - # anisotropic scale - param = lognormal_sample(size, std=0.2 * math.log(2)) - Gc = scale_mat(param, 1 / param, device=device) - G = 
random_mat_apply(p, Gc, G, eye, device=device) - # print('anisotropic scale', G, scale_mat(param, 1 / param), sep='\n') - - # post-rotate - param = uniform_sample(size, -math.pi, math.pi) - Gc = rotate_mat(-param, device=device) - G = random_mat_apply(p_rot, Gc, G, eye, device=device) - # print('post-rotate', G, rotate_mat(-param), sep='\n') - - # fractional translate - param = normal_sample(size, std=0.125) - Gc = translate_mat(param, param, device=device) - G = random_mat_apply(p, Gc, G, eye, device=device) - # print('fractional translate', G, translate_mat(param, param), sep='\n') - - return G - - -def sample_color(p, size): - C = torch.eye(4).unsqueeze(0).repeat(size, 1, 1) - eye = C - axis_val = 1 / math.sqrt(3) - axis = (axis_val, axis_val, axis_val) - - # brightness - param = normal_sample(size, std=0.2) - Cc = translate3d_mat(param, param, param) - C = random_mat_apply(p, Cc, C, eye) - - # contrast - param = lognormal_sample(size, std=0.5 * math.log(2)) - Cc = scale3d_mat(param, param, param) - C = random_mat_apply(p, Cc, C, eye) - - # luma flip - param = category_sample(size, (0, 1)) - Cc = luma_flip_mat(axis, param) - C = random_mat_apply(p, Cc, C, eye) - - # hue rotation - param = uniform_sample(size, -math.pi, math.pi) - Cc = rotate3d_mat(axis, param) - C = random_mat_apply(p, Cc, C, eye) - - # saturation - param = lognormal_sample(size, std=1 * math.log(2)) - Cc = saturation_mat(axis, param) - C = random_mat_apply(p, Cc, C, eye) - - return C - - -def make_grid(shape, x0, x1, y0, y1, device): - n, c, h, w = shape - grid = torch.empty(n, h, w, 3, device=device) - grid[:, :, :, 0] = torch.linspace(x0, x1, w, device=device) - grid[:, :, :, 1] = torch.linspace(y0, y1, h, device=device).unsqueeze(-1) - grid[:, :, :, 2] = 1 - - return grid - - -def affine_grid(grid, mat): - n, h, w, _ = grid.shape - return (grid.view(n, h * w, 3) @ mat.transpose(1, 2)).view(n, h, w, 2) - - -def get_padding(G, height, width, kernel_size): - device = G.device - - cx = (width - 1) / 2 - cy = (height - 1) / 2 - cp = torch.tensor( - [(-cx, -cy, 1), (cx, -cy, 1), (cx, cy, 1), (-cx, cy, 1)], device=device - ) - cp = G @ cp.T - - pad_k = kernel_size // 4 - - pad = cp[:, :2, :].permute(1, 0, 2).flatten(1) - pad = torch.cat((-pad, pad)).max(1).values - pad = pad + torch.tensor([pad_k * 2 - cx, pad_k * 2 - cy] * 2, device=device) - pad = pad.max(torch.tensor([0, 0] * 2, device=device)) - pad = pad.min(torch.tensor([width - 1, height - 1] * 2, device=device)) - - pad_x1, pad_y1, pad_x2, pad_y2 = pad.ceil().to(torch.int32) - - return pad_x1, pad_x2, pad_y1, pad_y2 - - -def try_sample_affine_and_pad(img, p, kernel_size, G=None): - batch, _, height, width = img.shape - - G_try = G - - if G is None: - G_try = torch.inverse(sample_affine(p, batch, height, width)) - - pad_x1, pad_x2, pad_y1, pad_y2 = get_padding(G_try, height, width, kernel_size) - - img_pad = F.pad(img, (pad_x1, pad_x2, pad_y1, pad_y2), mode="reflect") - - return img_pad, G_try, (pad_x1, pad_x2, pad_y1, pad_y2) - - -class GridSampleForward(autograd.Function): - @staticmethod - def forward(ctx, input, grid): - out = F.grid_sample( - input, grid, mode="bilinear", padding_mode="zeros", align_corners=False - ) - ctx.save_for_backward(input, grid) - - return out - - @staticmethod - def backward(ctx, grad_output): - input, grid = ctx.saved_tensors - grad_input, grad_grid = GridSampleBackward.apply(grad_output, input, grid) - - return grad_input, grad_grid - - -class GridSampleBackward(autograd.Function): - @staticmethod - def forward(ctx, grad_output, 
input, grid): - op = torch._C._jit_get_operation("aten::grid_sampler_2d_backward") - grad_input, grad_grid = op(grad_output, input, grid, 0, 0, False) - ctx.save_for_backward(grid) - - return grad_input, grad_grid - - @staticmethod - def backward(ctx, grad_grad_input, grad_grad_grid): - grid, = ctx.saved_tensors - grad_grad_output = None - - if ctx.needs_input_grad[0]: - grad_grad_output = GridSampleForward.apply(grad_grad_input, grid) - - return grad_grad_output, None, None - - -grid_sample = GridSampleForward.apply - - -def scale_mat_single(s_x, s_y): - return torch.tensor(((s_x, 0, 0), (0, s_y, 0), (0, 0, 1)), dtype=torch.float32) - - -def translate_mat_single(t_x, t_y): - return torch.tensor(((1, 0, t_x), (0, 1, t_y), (0, 0, 1)), dtype=torch.float32) - - -def random_apply_affine(img, p, G=None, antialiasing_kernel=SYM6): - kernel = antialiasing_kernel - len_k = len(kernel) - - kernel = torch.as_tensor(kernel).to(img) - # kernel = torch.ger(kernel, kernel).to(img) - kernel_flip = torch.flip(kernel, (0,)) - - img_pad, G, (pad_x1, pad_x2, pad_y1, pad_y2) = try_sample_affine_and_pad( - img, p, len_k, G - ) - - G_inv = ( - translate_mat_single((pad_x1 - pad_x2).item() / 2, (pad_y1 - pad_y2).item() / 2) - @ G - ) - up_pad = ( - (len_k + 2 - 1) // 2, - (len_k - 2) // 2, - (len_k + 2 - 1) // 2, - (len_k - 2) // 2, - ) - img_2x = upfirdn2d(img_pad, kernel.unsqueeze(0), up=(2, 1), pad=(*up_pad[:2], 0, 0)) - img_2x = upfirdn2d(img_2x, kernel.unsqueeze(1), up=(1, 2), pad=(0, 0, *up_pad[2:])) - G_inv = scale_mat_single(2, 2) @ G_inv @ scale_mat_single(1 / 2, 1 / 2) - G_inv = translate_mat_single(-0.5, -0.5) @ G_inv @ translate_mat_single(0.5, 0.5) - batch_size, channel, height, width = img.shape - pad_k = len_k // 4 - shape = (batch_size, channel, (height + pad_k * 2) * 2, (width + pad_k * 2) * 2) - G_inv = ( - scale_mat_single(2 / img_2x.shape[3], 2 / img_2x.shape[2]) - @ G_inv - @ scale_mat_single(1 / (2 / shape[3]), 1 / (2 / shape[2])) - ) - grid = F.affine_grid(G_inv[:, :2, :].to(img_2x), shape, align_corners=False) - img_affine = grid_sample(img_2x, grid) - d_p = -pad_k * 2 - down_pad = ( - d_p + (len_k - 2 + 1) // 2, - d_p + (len_k - 2) // 2, - d_p + (len_k - 2 + 1) // 2, - d_p + (len_k - 2) // 2, - ) - img_down = upfirdn2d( - img_affine, kernel_flip.unsqueeze(0), down=(2, 1), pad=(*down_pad[:2], 0, 0) - ) - img_down = upfirdn2d( - img_down, kernel_flip.unsqueeze(1), down=(1, 2), pad=(0, 0, *down_pad[2:]) - ) - - return img_down, G - - -def apply_color(img, mat): - batch = img.shape[0] - img = img.permute(0, 2, 3, 1) - mat_mul = mat[:, :3, :3].transpose(1, 2).view(batch, 1, 3, 3) - mat_add = mat[:, :3, 3].view(batch, 1, 1, 3) - img = img @ mat_mul + mat_add - img = img.permute(0, 3, 1, 2) - - return img - - -def random_apply_color(img, p, C=None): - if C is None: - C = sample_color(p, img.shape[0]) - - img = apply_color(img, C.to(img)) - - return img, C - - -def augment(img, p, transform_matrix=(None, None)): - img, G = random_apply_affine(img, p, transform_matrix[0]) - if img.shape[1] == 3: - img, C = random_apply_color(img, p, transform_matrix[1]) - else: - tmp, C = random_apply_color(img[:,0:3], p, transform_matrix[1]) - img = torch.cat((tmp, img[:,3:]), dim=1) - - return img, (G, C) diff --git a/spaces/Jasonyoyo/CodeFormer/CodeFormer/basicsr/archs/vqgan_arch.py b/spaces/Jasonyoyo/CodeFormer/CodeFormer/basicsr/archs/vqgan_arch.py deleted file mode 100644 index f6dfcf4c9983b431f0a978701e5ddd9598faf381..0000000000000000000000000000000000000000 --- 
a/spaces/Jasonyoyo/CodeFormer/CodeFormer/basicsr/archs/vqgan_arch.py +++ /dev/null @@ -1,435 +0,0 @@ -''' -VQGAN code, adapted from the original created by the Unleashing Transformers authors: -https://github.com/samb-t/unleashing-transformers/blob/master/models/vqgan.py - -''' -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -import copy -from basicsr.utils import get_root_logger -from basicsr.utils.registry import ARCH_REGISTRY - -def normalize(in_channels): - return torch.nn.GroupNorm(num_groups=32, num_channels=in_channels, eps=1e-6, affine=True) - - -@torch.jit.script -def swish(x): - return x*torch.sigmoid(x) - - -# Define VQVAE classes -class VectorQuantizer(nn.Module): - def __init__(self, codebook_size, emb_dim, beta): - super(VectorQuantizer, self).__init__() - self.codebook_size = codebook_size # number of embeddings - self.emb_dim = emb_dim # dimension of embedding - self.beta = beta # commitment cost used in loss term, beta * ||z_e(x)-sg[e]||^2 - self.embedding = nn.Embedding(self.codebook_size, self.emb_dim) - self.embedding.weight.data.uniform_(-1.0 / self.codebook_size, 1.0 / self.codebook_size) - - def forward(self, z): - # reshape z -> (batch, height, width, channel) and flatten - z = z.permute(0, 2, 3, 1).contiguous() - z_flattened = z.view(-1, self.emb_dim) - - # distances from z to embeddings e_j (z - e)^2 = z^2 + e^2 - 2 e * z - d = (z_flattened ** 2).sum(dim=1, keepdim=True) + (self.embedding.weight**2).sum(1) - \ - 2 * torch.matmul(z_flattened, self.embedding.weight.t()) - - mean_distance = torch.mean(d) - # find closest encodings - # min_encoding_indices = torch.argmin(d, dim=1).unsqueeze(1) - min_encoding_scores, min_encoding_indices = torch.topk(d, 1, dim=1, largest=False) - # [0-1], higher score, higher confidence - min_encoding_scores = torch.exp(-min_encoding_scores/10) - - min_encodings = torch.zeros(min_encoding_indices.shape[0], self.codebook_size).to(z) - min_encodings.scatter_(1, min_encoding_indices, 1) - - # get quantized latent vectors - z_q = torch.matmul(min_encodings, self.embedding.weight).view(z.shape) - # compute loss for embedding - loss = torch.mean((z_q.detach()-z)**2) + self.beta * torch.mean((z_q - z.detach()) ** 2) - # preserve gradients - z_q = z + (z_q - z).detach() - - # perplexity - e_mean = torch.mean(min_encodings, dim=0) - perplexity = torch.exp(-torch.sum(e_mean * torch.log(e_mean + 1e-10))) - # reshape back to match original input shape - z_q = z_q.permute(0, 3, 1, 2).contiguous() - - return z_q, loss, { - "perplexity": perplexity, - "min_encodings": min_encodings, - "min_encoding_indices": min_encoding_indices, - "min_encoding_scores": min_encoding_scores, - "mean_distance": mean_distance - } - - def get_codebook_feat(self, indices, shape): - # input indices: batch*token_num -> (batch*token_num)*1 - # shape: batch, height, width, channel - indices = indices.view(-1,1) - min_encodings = torch.zeros(indices.shape[0], self.codebook_size).to(indices) - min_encodings.scatter_(1, indices, 1) - # get quantized latent vectors - z_q = torch.matmul(min_encodings.float(), self.embedding.weight) - - if shape is not None: # reshape back to match original input shape - z_q = z_q.view(shape).permute(0, 3, 1, 2).contiguous() - - return z_q - - -class GumbelQuantizer(nn.Module): - def __init__(self, codebook_size, emb_dim, num_hiddens, straight_through=False, kl_weight=5e-4, temp_init=1.0): - super().__init__() - self.codebook_size = codebook_size # number of embeddings - self.emb_dim = emb_dim # dimension 
of embedding - self.straight_through = straight_through - self.temperature = temp_init - self.kl_weight = kl_weight - self.proj = nn.Conv2d(num_hiddens, codebook_size, 1) # projects last encoder layer to quantized logits - self.embed = nn.Embedding(codebook_size, emb_dim) - - def forward(self, z): - hard = self.straight_through if self.training else True - - logits = self.proj(z) - - soft_one_hot = F.gumbel_softmax(logits, tau=self.temperature, dim=1, hard=hard) - - z_q = torch.einsum("b n h w, n d -> b d h w", soft_one_hot, self.embed.weight) - - # + kl divergence to the prior loss - qy = F.softmax(logits, dim=1) - diff = self.kl_weight * torch.sum(qy * torch.log(qy * self.codebook_size + 1e-10), dim=1).mean() - min_encoding_indices = soft_one_hot.argmax(dim=1) - - return z_q, diff, { - "min_encoding_indices": min_encoding_indices - } - - -class Downsample(nn.Module): - def __init__(self, in_channels): - super().__init__() - self.conv = torch.nn.Conv2d(in_channels, in_channels, kernel_size=3, stride=2, padding=0) - - def forward(self, x): - pad = (0, 1, 0, 1) - x = torch.nn.functional.pad(x, pad, mode="constant", value=0) - x = self.conv(x) - return x - - -class Upsample(nn.Module): - def __init__(self, in_channels): - super().__init__() - self.conv = nn.Conv2d(in_channels, in_channels, kernel_size=3, stride=1, padding=1) - - def forward(self, x): - x = F.interpolate(x, scale_factor=2.0, mode="nearest") - x = self.conv(x) - - return x - - -class ResBlock(nn.Module): - def __init__(self, in_channels, out_channels=None): - super(ResBlock, self).__init__() - self.in_channels = in_channels - self.out_channels = in_channels if out_channels is None else out_channels - self.norm1 = normalize(in_channels) - self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=1, padding=1) - self.norm2 = normalize(out_channels) - self.conv2 = nn.Conv2d(out_channels, out_channels, kernel_size=3, stride=1, padding=1) - if self.in_channels != self.out_channels: - self.conv_out = nn.Conv2d(in_channels, out_channels, kernel_size=1, stride=1, padding=0) - - def forward(self, x_in): - x = x_in - x = self.norm1(x) - x = swish(x) - x = self.conv1(x) - x = self.norm2(x) - x = swish(x) - x = self.conv2(x) - if self.in_channels != self.out_channels: - x_in = self.conv_out(x_in) - - return x + x_in - - -class AttnBlock(nn.Module): - def __init__(self, in_channels): - super().__init__() - self.in_channels = in_channels - - self.norm = normalize(in_channels) - self.q = torch.nn.Conv2d( - in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0 - ) - self.k = torch.nn.Conv2d( - in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0 - ) - self.v = torch.nn.Conv2d( - in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0 - ) - self.proj_out = torch.nn.Conv2d( - in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0 - ) - - def forward(self, x): - h_ = x - h_ = self.norm(h_) - q = self.q(h_) - k = self.k(h_) - v = self.v(h_) - - # compute attention - b, c, h, w = q.shape - q = q.reshape(b, c, h*w) - q = q.permute(0, 2, 1) - k = k.reshape(b, c, h*w) - w_ = torch.bmm(q, k) - w_ = w_ * (int(c)**(-0.5)) - w_ = F.softmax(w_, dim=2) - - # attend to values - v = v.reshape(b, c, h*w) - w_ = w_.permute(0, 2, 1) - h_ = torch.bmm(v, w_) - h_ = h_.reshape(b, c, h, w) - - h_ = self.proj_out(h_) - - return x+h_ - - -class Encoder(nn.Module): - def __init__(self, in_channels, nf, emb_dim, ch_mult, num_res_blocks, resolution, attn_resolutions): - super().__init__() - self.nf 
= nf - self.num_resolutions = len(ch_mult) - self.num_res_blocks = num_res_blocks - self.resolution = resolution - self.attn_resolutions = attn_resolutions - - curr_res = self.resolution - in_ch_mult = (1,)+tuple(ch_mult) - - blocks = [] - # initial convultion - blocks.append(nn.Conv2d(in_channels, nf, kernel_size=3, stride=1, padding=1)) - - # residual and downsampling blocks, with attention on smaller res (16x16) - for i in range(self.num_resolutions): - block_in_ch = nf * in_ch_mult[i] - block_out_ch = nf * ch_mult[i] - for _ in range(self.num_res_blocks): - blocks.append(ResBlock(block_in_ch, block_out_ch)) - block_in_ch = block_out_ch - if curr_res in attn_resolutions: - blocks.append(AttnBlock(block_in_ch)) - - if i != self.num_resolutions - 1: - blocks.append(Downsample(block_in_ch)) - curr_res = curr_res // 2 - - # non-local attention block - blocks.append(ResBlock(block_in_ch, block_in_ch)) - blocks.append(AttnBlock(block_in_ch)) - blocks.append(ResBlock(block_in_ch, block_in_ch)) - - # normalise and convert to latent size - blocks.append(normalize(block_in_ch)) - blocks.append(nn.Conv2d(block_in_ch, emb_dim, kernel_size=3, stride=1, padding=1)) - self.blocks = nn.ModuleList(blocks) - - def forward(self, x): - for block in self.blocks: - x = block(x) - - return x - - -class Generator(nn.Module): - def __init__(self, nf, emb_dim, ch_mult, res_blocks, img_size, attn_resolutions): - super().__init__() - self.nf = nf - self.ch_mult = ch_mult - self.num_resolutions = len(self.ch_mult) - self.num_res_blocks = res_blocks - self.resolution = img_size - self.attn_resolutions = attn_resolutions - self.in_channels = emb_dim - self.out_channels = 3 - block_in_ch = self.nf * self.ch_mult[-1] - curr_res = self.resolution // 2 ** (self.num_resolutions-1) - - blocks = [] - # initial conv - blocks.append(nn.Conv2d(self.in_channels, block_in_ch, kernel_size=3, stride=1, padding=1)) - - # non-local attention block - blocks.append(ResBlock(block_in_ch, block_in_ch)) - blocks.append(AttnBlock(block_in_ch)) - blocks.append(ResBlock(block_in_ch, block_in_ch)) - - for i in reversed(range(self.num_resolutions)): - block_out_ch = self.nf * self.ch_mult[i] - - for _ in range(self.num_res_blocks): - blocks.append(ResBlock(block_in_ch, block_out_ch)) - block_in_ch = block_out_ch - - if curr_res in self.attn_resolutions: - blocks.append(AttnBlock(block_in_ch)) - - if i != 0: - blocks.append(Upsample(block_in_ch)) - curr_res = curr_res * 2 - - blocks.append(normalize(block_in_ch)) - blocks.append(nn.Conv2d(block_in_ch, self.out_channels, kernel_size=3, stride=1, padding=1)) - - self.blocks = nn.ModuleList(blocks) - - - def forward(self, x): - for block in self.blocks: - x = block(x) - - return x - - -@ARCH_REGISTRY.register() -class VQAutoEncoder(nn.Module): - def __init__(self, img_size, nf, ch_mult, quantizer="nearest", res_blocks=2, attn_resolutions=[16], codebook_size=1024, emb_dim=256, - beta=0.25, gumbel_straight_through=False, gumbel_kl_weight=1e-8, model_path=None): - super().__init__() - logger = get_root_logger() - self.in_channels = 3 - self.nf = nf - self.n_blocks = res_blocks - self.codebook_size = codebook_size - self.embed_dim = emb_dim - self.ch_mult = ch_mult - self.resolution = img_size - self.attn_resolutions = attn_resolutions - self.quantizer_type = quantizer - self.encoder = Encoder( - self.in_channels, - self.nf, - self.embed_dim, - self.ch_mult, - self.n_blocks, - self.resolution, - self.attn_resolutions - ) - if self.quantizer_type == "nearest": - self.beta = beta #0.25 - self.quantize 
= VectorQuantizer(self.codebook_size, self.embed_dim, self.beta) - elif self.quantizer_type == "gumbel": - self.gumbel_num_hiddens = emb_dim - self.straight_through = gumbel_straight_through - self.kl_weight = gumbel_kl_weight - self.quantize = GumbelQuantizer( - self.codebook_size, - self.embed_dim, - self.gumbel_num_hiddens, - self.straight_through, - self.kl_weight - ) - self.generator = Generator( - self.nf, - self.embed_dim, - self.ch_mult, - self.n_blocks, - self.resolution, - self.attn_resolutions - ) - - if model_path is not None: - chkpt = torch.load(model_path, map_location='cpu') - if 'params_ema' in chkpt: - self.load_state_dict(torch.load(model_path, map_location='cpu')['params_ema']) - logger.info(f'vqgan is loaded from: {model_path} [params_ema]') - elif 'params' in chkpt: - self.load_state_dict(torch.load(model_path, map_location='cpu')['params']) - logger.info(f'vqgan is loaded from: {model_path} [params]') - else: - raise ValueError(f'Wrong params!') - - - def forward(self, x): - x = self.encoder(x) - quant, codebook_loss, quant_stats = self.quantize(x) - x = self.generator(quant) - return x, codebook_loss, quant_stats - - - -# patch based discriminator -@ARCH_REGISTRY.register() -class VQGANDiscriminator(nn.Module): - def __init__(self, nc=3, ndf=64, n_layers=4, model_path=None): - super().__init__() - - layers = [nn.Conv2d(nc, ndf, kernel_size=4, stride=2, padding=1), nn.LeakyReLU(0.2, True)] - ndf_mult = 1 - ndf_mult_prev = 1 - for n in range(1, n_layers): # gradually increase the number of filters - ndf_mult_prev = ndf_mult - ndf_mult = min(2 ** n, 8) - layers += [ - nn.Conv2d(ndf * ndf_mult_prev, ndf * ndf_mult, kernel_size=4, stride=2, padding=1, bias=False), - nn.BatchNorm2d(ndf * ndf_mult), - nn.LeakyReLU(0.2, True) - ] - - ndf_mult_prev = ndf_mult - ndf_mult = min(2 ** n_layers, 8) - - layers += [ - nn.Conv2d(ndf * ndf_mult_prev, ndf * ndf_mult, kernel_size=4, stride=1, padding=1, bias=False), - nn.BatchNorm2d(ndf * ndf_mult), - nn.LeakyReLU(0.2, True) - ] - - layers += [ - nn.Conv2d(ndf * ndf_mult, 1, kernel_size=4, stride=1, padding=1)] # output 1 channel prediction map - self.main = nn.Sequential(*layers) - - if model_path is not None: - chkpt = torch.load(model_path, map_location='cpu') - if 'params_d' in chkpt: - self.load_state_dict(torch.load(model_path, map_location='cpu')['params_d']) - elif 'params' in chkpt: - self.load_state_dict(torch.load(model_path, map_location='cpu')['params']) - else: - raise ValueError(f'Wrong params!') - - def forward(self, x): - return self.main(x) \ No newline at end of file diff --git a/spaces/JavierIA/gccopen/app.py b/spaces/JavierIA/gccopen/app.py deleted file mode 100644 index 64b18c6c3a76db1c3ffe0ded30e12bddc6310461..0000000000000000000000000000000000000000 --- a/spaces/JavierIA/gccopen/app.py +++ /dev/null @@ -1,196 +0,0 @@ -from turtle import title -import torch -import argparse -import gradio as gr -from PIL import Image -from numpy import random -from pathlib import Path -import os -import time -import torch.backends.cudnn as cudnn -from models.experimental import attempt_load -import cv2 -from utils.datasets import LoadStreams, LoadImages -from utils.general import check_img_size, check_requirements, check_imshow, non_max_suppression, apply_classifier,scale_coords, xyxy2xywh, strip_optimizer, set_logging, increment_path -from utils.plots import plot_one_box -from utils.torch_utils import select_device, load_classifier, time_synchronized, TracedModel - - -os.system('git clone https://github.com/WongKinYiu/yolov7') - 
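For orientation, a minimal round trip through the VQAutoEncoder defined in vqgan_arch.py above could look like the sketch below. The hyper-parameters are illustrative assumptions, not necessarily the configuration any shipped checkpoint was trained with, and it assumes torch and the basicsr utilities imported at the top of that file are importable.

import torch

# assumed, illustrative settings: 512x512 input, 6 resolution levels -> 16x16 latent grid
ae = VQAutoEncoder(
    img_size=512, nf=64, ch_mult=[1, 2, 2, 4, 4, 8],
    quantizer="nearest", codebook_size=1024, emb_dim=256,
)
x = torch.randn(1, 3, 512, 512)            # dummy RGB batch
recon, codebook_loss, quant_stats = ae(x)  # encoder -> VectorQuantizer -> generator
# recon has the input shape; quant_stats carries the codebook indices and perplexity
print(recon.shape, quant_stats["min_encoding_indices"].shape)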
-""" -Ejemplo de uso: -para la empresa GCC , la imagen debe contener cualquiera equipo contra acidentes personales - * Cascos de Seguridad. ... - * Tapones para oídos y Orejeras. ... - * Lentes de Seguridad. ... - * Respiradores. ... - * Chaleco de Seguridad. ... - * Guantes de seguridad. ... - * Botas de Seguridad. ... - * Fuentes. -""" - -def Custom_detect(img): - model='best' - parser = argparse.ArgumentParser() - parser.add_argument('--weights', nargs='+', type=str, default=model+".pt", help='model.pt path(s)') - parser.add_argument('--source', type=str, default='Temp_file/', help='source') - parser.add_argument('--img-size', type=int, default=100, help='inference size (pixels)') - parser.add_argument('--conf-thres', type=float, default=0.25, help='object confidence threshold') - parser.add_argument('--iou-thres', type=float, default=0.45, help='IOU threshold for NMS') - parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu') - parser.add_argument('--view-img', action='store_true', help='display results') - parser.add_argument('--save-txt', action='store_true', help='save results to *.txt') - parser.add_argument('--save-conf', action='store_true', help='save confidences in --save-txt labels') - parser.add_argument('--nosave', action='store_true', help='do not save images/videos') - parser.add_argument('--classes', nargs='+', type=int, help='filter by class: --class 0, or --class 0 2 3') - parser.add_argument('--agnostic-nms', action='store_true', help='class-agnostic NMS') - parser.add_argument('--augment', action='store_true', help='augmented inference') - parser.add_argument('--update', action='store_true', help='update all models') - parser.add_argument('--project', default='runs/detect', help='save results to project/name') - parser.add_argument('--name', default='exp', help='save results to project/name') - parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment') - parser.add_argument('--trace', action='store_true', help='trace model') - opt = parser.parse_args() - img.save("Temp_file/test.jpg") - source, weights, view_img, save_txt, imgsz, trace = opt.source, opt.weights, opt.view_img, opt.save_txt, opt.img_size, opt.trace - save_img = True - webcam = source.isnumeric() or source.endswith('.txt') or source.lower().startswith(('rtsp://', 'rtmp://', 'http://', 'https://')) - save_dir = Path(increment_path(Path(opt.project)/opt.name,exist_ok=opt.exist_ok)) - - (save_dir / 'labels' if save_txt else save_dir).mkdir(parents=True, exist_ok=True) - set_logging() - device = select_device(opt.device) - half = device.type != 'cpu' - model = attempt_load(weights, map_location=device) - stride = int(model.stride.max()) - imgsz = check_img_size(imgsz, s=stride) - if trace: - model = TracedModel(model, device, opt.img_size) - if half: - model.half() - - classify = False - if classify: - modelc = load_classifier(name='resnet101', n=2) # initialize - modelc.load_state_dict(torch.load('weights/resnet101.pt', map_location=device)['model']).to(device).eval() - vid_path, vid_writer = None, None - if webcam: - view_img = check_imshow() - cudnn.benchmark = True - dataset = LoadStreams(source, img_size=imgsz, stride=stride) - else: - dataset = LoadImages(source, img_size=imgsz, stride=stride) - names = model.module.names if hasattr(model, 'module') else model.names - colors = [[random.randint(0, 255) for _ in range(3)] for _ in names] - if device.type != 'cpu': - model(torch.zeros(1, 3, imgsz, 
imgsz).to(device).type_as(next(model.parameters()))) - t0 = time.time() - for path, img, im0s, vid_cap in dataset: - img = torch.from_numpy(img).to(device) - img = img.half() if half else img.float() - img /= 255.0 - if img.ndimension() == 3: - img = img.unsqueeze(0) - - # Inference - t1 = time_synchronized() - pred = model(img, augment=opt.augment)[0] - - pred = non_max_suppression(pred,opt.conf_thres,opt.iou_thres,classes=opt.classes, agnostic=opt.agnostic_nms) - t2 = time_synchronized() - - - # Apply Classifier - if classify: - pred = apply_classifier(pred, modelc, img, im0s) - - for i, det in enumerate(pred): - if webcam: - p, s, im0, frame = path[i], '%g: ' % i, im0s[i].copy(), dataset.count - else: - p, s, im0, frame = path, '', im0s, getattr(dataset, 'frame', 0) - - p = Path(p) - save_path = str(save_dir / p.name) - txt_path = str(save_dir / 'labels' / p.stem) + ('' if dataset.mode == 'image' else f'_{frame}') # img.txt - s += '%gx%g ' % img.shape[2:] - gn = torch.tensor(im0.shape)[[1, 0, 1, 0]] - if len(det): - det[:, :4] = scale_coords(img.shape[2:], det[:, :4], im0.shape).round() - - - for c in det[:, -1].unique(): - n = (det[:, -1] == c).sum() - s += f"{n} {names[int(c)]}{'s' * (n > 1)}, " - - - for *xyxy, conf, cls in reversed(det): - if save_txt: - xywh = (xyxy2xywh(torch.tensor(xyxy).view(1, 4)) / gn).view(-1).tolist() - line = (cls, *xywh, conf) if opt.save_conf else (cls, *xywh) - with open(txt_path + '.txt', 'a') as f: - f.write(('%g ' * len(line)).rstrip() % line + '\n') - - if save_img or view_img: - label = f'{names[int(cls)]} {conf:.2f}' - plot_one_box(xyxy, im0, label=label, color=colors[int(cls)], line_thickness=3) - if view_img: - cv2.imshow(str(p), im0) - cv2.waitKey(1) - - if save_img: - if dataset.mode == 'image': - cv2.imwrite(save_path, im0) - else: - if vid_path != save_path: - vid_path = save_path - if isinstance(vid_writer, cv2.VideoWriter): - vid_writer.release() - if vid_cap: - fps = vid_cap.get(cv2.CAP_PROP_FPS) - w = int(vid_cap.get(cv2.CAP_PROP_FRAME_WIDTH)) - h = int(vid_cap.get(cv2.CAP_PROP_FRAME_HEIGHT)) - else: - fps, w, h = 30, im0.shape[1], im0.shape[0] - save_path += '.mp4' - vid_writer = cv2.VideoWriter(save_path, cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h)) - vid_writer.write(im0) - - if save_txt or save_img: - s = f"\n{len(list(save_dir.glob('labels/*.txt')))} labels saved to {save_dir / 'labels'}" if save_txt else '' - - print(f'Done. 
({time.time() - t0:.3f}s)') - - return Image.fromarray(im0[:,:,::-1]) -#add description -description = """ - - - - - - - - - """ - -inp = gr.Image(type="pil") -output = gr.Image(type="pil") -banner = gr.Image("banner.jpg", width=400) -examples=[["Examples/Image1.jpg","Image1"],["Examples/Image2.jpg","Image2"]] - -io=gr.Interface(fn=Custom_detect, inputs=inp, outputs=output, title='Prueba de GuardIA',examples=examples,cache_examples=False,description=description) - -io.launch() \ No newline at end of file diff --git a/spaces/JeffJing/ZookChatBot/steamship/plugin/inputs/raw_data_plugin_input.py b/spaces/JeffJing/ZookChatBot/steamship/plugin/inputs/raw_data_plugin_input.py deleted file mode 100644 index f9af9b2834bc6b05396400772d6160f430df0a38..0000000000000000000000000000000000000000 --- a/spaces/JeffJing/ZookChatBot/steamship/plugin/inputs/raw_data_plugin_input.py +++ /dev/null @@ -1,67 +0,0 @@ -from __future__ import annotations - -import base64 -from typing import Any - -from steamship.base.mime_types import TEXT_MIME_TYPES, MimeTypes -from steamship.base.model import CamelModel -from steamship.utils.signed_urls import url_to_bytes - - -def is_base64(sb): - # noinspection PyBroadException - try: - if isinstance(sb, str): - # If there's Any unicode here, an exception will be thrown and the function will return false - sb_bytes = bytes(sb, "ascii") - elif isinstance(sb, bytes): - sb_bytes = sb - else: - raise ValueError("Argument must be string or bytes") - return base64.b64encode(base64.b64decode(sb_bytes)) == sb_bytes - except Exception: - return False - - -class RawDataPluginInput(CamelModel): - """Input for a plugin that accepts raw data, plus a mime type. - - A plugin author need only ever concern themselves with two fields: - - `data` - Raw bytes - ` `default_mime_type` - The best guess as to `data`'s MIME Type unless otherwise known to be different. - - In practice, however, the lifecycle of this object involves a bit more under the hood: - - - **Potentially Base64 Decoding Data**. When decoding from a dict, the `data` field is assumed to be Base64 encoded. - This is to support JSON as a transport encoding over the wire. The constructor automatically performs the - decoding, and the Steamship Engine automatically performs the encoding, so the Plugin Author can mostly ignore - this fact. - - - **Potentially late-fetching the `data` from a `url`**. Some files are too large to comfortably send as Base64 - within JSON. The Steamship Engine sometimes chooses to send an empty `data` field paired with a non-empty - `url` field. When this happens, the constructor proactively, synchronously fetches the contents of that `url` - and assigns it to the `data` field, throwing a SteamshipError if the fetch fails. Again, this is done - automatically so the Plugin Author can mostly ignore this fact. 
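A minimal usage sketch of the behaviour described above; the values are illustrative, and it assumes "text/plain" is among the entries in TEXT_MIME_TYPES:

import base64

payload = {
    "data": base64.b64encode(b"Hello, Steamship!").decode("ascii"),  # Base64, as it arrives over the wire
    "defaultMimeType": "text/plain",                                 # assumed to be a TEXT_MIME_TYPES member
}
raw_input = RawDataPluginInput(**payload)
# raw_input.data is now the decoded string "Hello, Steamship!"; for a binary MIME type
# it would remain raw bytes, and a "url" key would instead be resolved via url_to_bytes().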
- """ - - plugin_instance: str = None - data: Any = None - default_mime_type: MimeTypes = None - - def __init__(self, **kwargs): - data = kwargs.get("data") - url = kwargs.get("url") - - if data is not None and is_base64(data): - data_bytes = base64.b64decode(data) - if kwargs.get("defaultMimeType") in TEXT_MIME_TYPES: - kwargs["data"] = data_bytes.decode("utf-8") - else: - kwargs["data"] = data_bytes - elif url is not None: - kwargs["data"] = url_to_bytes(url) # Resolve the URL into the data field - kwargs.pop( - "url" - ) # Remove the URL field to preserve a simple interface for the consumer - - super().__init__(**kwargs) diff --git a/spaces/Jikiwi/sovits-models/cluster/train_cluster.py b/spaces/Jikiwi/sovits-models/cluster/train_cluster.py deleted file mode 100644 index 4ac025d400414226e66849407f477ae786c3d5d3..0000000000000000000000000000000000000000 --- a/spaces/Jikiwi/sovits-models/cluster/train_cluster.py +++ /dev/null @@ -1,89 +0,0 @@ -import os -from glob import glob -from pathlib import Path -import torch -import logging -import argparse -import torch -import numpy as np -from sklearn.cluster import KMeans, MiniBatchKMeans -import tqdm -logging.basicConfig(level=logging.INFO) -logger = logging.getLogger(__name__) -import time -import random - -def train_cluster(in_dir, n_clusters, use_minibatch=True, verbose=False): - - logger.info(f"Loading features from {in_dir}") - features = [] - nums = 0 - for path in tqdm.tqdm(in_dir.glob("*.soft.pt")): - features.append(torch.load(path).squeeze(0).numpy().T) - # print(features[-1].shape) - features = np.concatenate(features, axis=0) - print(nums, features.nbytes/ 1024**2, "MB , shape:",features.shape, features.dtype) - features = features.astype(np.float32) - logger.info(f"Clustering features of shape: {features.shape}") - t = time.time() - if use_minibatch: - kmeans = MiniBatchKMeans(n_clusters=n_clusters,verbose=verbose, batch_size=4096, max_iter=80).fit(features) - else: - kmeans = KMeans(n_clusters=n_clusters,verbose=verbose).fit(features) - print(time.time()-t, "s") - - x = { - "n_features_in_": kmeans.n_features_in_, - "_n_threads": kmeans._n_threads, - "cluster_centers_": kmeans.cluster_centers_, - } - print("end") - - return x - - -if __name__ == "__main__": - - parser = argparse.ArgumentParser() - parser.add_argument('--dataset', type=Path, default="./dataset/44k", - help='path of training data directory') - parser.add_argument('--output', type=Path, default="logs/44k", - help='path of model output directory') - - args = parser.parse_args() - - checkpoint_dir = args.output - dataset = args.dataset - n_clusters = 10000 - - ckpt = {} - for spk in os.listdir(dataset): - if os.path.isdir(dataset/spk): - print(f"train kmeans for {spk}...") - in_dir = dataset/spk - x = train_cluster(in_dir, n_clusters, verbose=False) - ckpt[spk] = x - - checkpoint_path = checkpoint_dir / f"kmeans_{n_clusters}.pt" - checkpoint_path.parent.mkdir(exist_ok=True, parents=True) - torch.save( - ckpt, - checkpoint_path, - ) - - - # import cluster - # for spk in tqdm.tqdm(os.listdir("dataset")): - # if os.path.isdir(f"dataset/{spk}"): - # print(f"start kmeans inference for {spk}...") - # for feature_path in tqdm.tqdm(glob(f"dataset/{spk}/*.discrete.npy", recursive=True)): - # mel_path = feature_path.replace(".discrete.npy",".mel.npy") - # mel_spectrogram = np.load(mel_path) - # feature_len = mel_spectrogram.shape[-1] - # c = np.load(feature_path) - # c = utils.tools.repeat_expand_2d(torch.FloatTensor(c), feature_len).numpy() - # feature = c.T - # feature_class = 
cluster.get_cluster_result(feature, spk) - # np.save(feature_path.replace(".discrete.npy", ".discrete_class.npy"), feature_class) - - diff --git a/spaces/Junity/TokaiTeio-SVC/utils.py b/spaces/Junity/TokaiTeio-SVC/utils.py deleted file mode 100644 index f837bfa331a53621131f957d1e90a51cebcdef23..0000000000000000000000000000000000000000 --- a/spaces/Junity/TokaiTeio-SVC/utils.py +++ /dev/null @@ -1,502 +0,0 @@ -import os -import glob -import re -import sys -import argparse -import logging -import json -import subprocess -import random - -import librosa -import numpy as np -from scipy.io.wavfile import read -import torch -from torch.nn import functional as F -from modules.commons import sequence_mask -from hubert import hubert_model -MATPLOTLIB_FLAG = False - -logging.basicConfig(stream=sys.stdout, level=logging.DEBUG) -logger = logging - -f0_bin = 256 -f0_max = 1100.0 -f0_min = 50.0 -f0_mel_min = 1127 * np.log(1 + f0_min / 700) -f0_mel_max = 1127 * np.log(1 + f0_max / 700) - - -# def normalize_f0(f0, random_scale=True): -# f0_norm = f0.clone() # create a copy of the input Tensor -# batch_size, _, frame_length = f0_norm.shape -# for i in range(batch_size): -# means = torch.mean(f0_norm[i, 0, :]) -# if random_scale: -# factor = random.uniform(0.8, 1.2) -# else: -# factor = 1 -# f0_norm[i, 0, :] = (f0_norm[i, 0, :] - means) * factor -# return f0_norm -# def normalize_f0(f0, random_scale=True): -# means = torch.mean(f0[:, 0, :], dim=1, keepdim=True) -# if random_scale: -# factor = torch.Tensor(f0.shape[0],1).uniform_(0.8, 1.2).to(f0.device) -# else: -# factor = torch.ones(f0.shape[0], 1, 1).to(f0.device) -# f0_norm = (f0 - means.unsqueeze(-1)) * factor.unsqueeze(-1) -# return f0_norm -def normalize_f0(f0, x_mask, uv, random_scale=True): - # calculate means based on x_mask - uv_sum = torch.sum(uv, dim=1, keepdim=True) - uv_sum[uv_sum == 0] = 9999 - means = torch.sum(f0[:, 0, :] * uv, dim=1, keepdim=True) / uv_sum - - if random_scale: - factor = torch.Tensor(f0.shape[0], 1).uniform_(0.8, 1.2).to(f0.device) - else: - factor = torch.ones(f0.shape[0], 1).to(f0.device) - # normalize f0 based on means and factor - f0_norm = (f0 - means.unsqueeze(-1)) * factor.unsqueeze(-1) - if torch.isnan(f0_norm).any(): - exit(0) - return f0_norm * x_mask - - -def plot_data_to_numpy(x, y): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(10, 2)) - plt.plot(x) - plt.plot(y) - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - - -def interpolate_f0(f0): - ''' - 对F0进行插值处理 - ''' - - data = np.reshape(f0, (f0.size, 1)) - - vuv_vector = np.zeros((data.size, 1), dtype=np.float32) - vuv_vector[data > 0.0] = 1.0 - vuv_vector[data <= 0.0] = 0.0 - - ip_data = data - - frame_number = data.size - last_value = 0.0 - for i in range(frame_number): - if data[i] <= 0.0: - j = i + 1 - for j in range(i + 1, frame_number): - if data[j] > 0.0: - break - if j < frame_number - 1: - if last_value > 0.0: - step = (data[j] - data[i - 1]) / float(j - i) - for k in range(i, j): - ip_data[k] = data[i - 1] + step * (k - i + 1) - else: - for k in range(i, j): - ip_data[k] = data[j] - else: - for k in range(i, frame_number): - ip_data[k] = 
last_value - else: - ip_data[i] = data[i] - last_value = data[i] - - return ip_data[:,0], vuv_vector[:,0] - - -def compute_f0_parselmouth(wav_numpy, p_len=None, sampling_rate=44100, hop_length=512): - import parselmouth - x = wav_numpy - if p_len is None: - p_len = x.shape[0]//hop_length - else: - assert abs(p_len-x.shape[0]//hop_length) < 4, "pad length error" - time_step = hop_length / sampling_rate * 1000 - f0_min = 50 - f0_max = 1100 - f0 = parselmouth.Sound(x, sampling_rate).to_pitch_ac( - time_step=time_step / 1000, voicing_threshold=0.6, - pitch_floor=f0_min, pitch_ceiling=f0_max).selected_array['frequency'] - - pad_size=(p_len - len(f0) + 1) // 2 - if(pad_size>0 or p_len - len(f0) - pad_size>0): - f0 = np.pad(f0,[[pad_size,p_len - len(f0) - pad_size]], mode='constant') - return f0 - -def resize_f0(x, target_len): - source = np.array(x) - source[source<0.001] = np.nan - target = np.interp(np.arange(0, len(source)*target_len, len(source))/ target_len, np.arange(0, len(source)), source) - res = np.nan_to_num(target) - return res - -def compute_f0_dio(wav_numpy, p_len=None, sampling_rate=44100, hop_length=512): - import pyworld - if p_len is None: - p_len = wav_numpy.shape[0]//hop_length - f0, t = pyworld.dio( - wav_numpy.astype(np.double), - fs=sampling_rate, - f0_ceil=800, - frame_period=1000 * hop_length / sampling_rate, - ) - f0 = pyworld.stonemask(wav_numpy.astype(np.double), f0, t, sampling_rate) - for index, pitch in enumerate(f0): - f0[index] = round(pitch, 1) - return resize_f0(f0, p_len) - -def f0_to_coarse(f0): - is_torch = isinstance(f0, torch.Tensor) - f0_mel = 1127 * (1 + f0 / 700).log() if is_torch else 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * (f0_bin - 2) / (f0_mel_max - f0_mel_min) + 1 - - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > f0_bin - 1] = f0_bin - 1 - f0_coarse = (f0_mel + 0.5).long() if is_torch else np.rint(f0_mel).astype(np.int) - assert f0_coarse.max() <= 255 and f0_coarse.min() >= 1, (f0_coarse.max(), f0_coarse.min()) - return f0_coarse - - -def get_hubert_model(): - vec_path = "hubert/checkpoint_best_legacy_500.pt" - print("load model(s) from {}".format(vec_path)) - from fairseq import checkpoint_utils - models, saved_cfg, task = checkpoint_utils.load_model_ensemble_and_task( - [vec_path], - suffix="", - ) - model = models[0] - model.eval() - return model - -def get_hubert_content(hmodel, wav_16k_tensor): - feats = wav_16k_tensor - if feats.dim() == 2: # double channels - feats = feats.mean(-1) - assert feats.dim() == 1, feats.dim() - feats = feats.view(1, -1) - padding_mask = torch.BoolTensor(feats.shape).fill_(False) - inputs = { - "source": feats.to(wav_16k_tensor.device), - "padding_mask": padding_mask.to(wav_16k_tensor.device), - "output_layer": 9, # layer 9 - } - with torch.no_grad(): - logits = hmodel.extract_features(**inputs) - feats = hmodel.final_proj(logits[0]) - return feats.transpose(1, 2) - - -def get_content(cmodel, y): - with torch.no_grad(): - c = cmodel.extract_features(y.squeeze(1))[0] - c = c.transpose(1, 2) - return c - - - -def load_checkpoint(checkpoint_path, model, optimizer=None, skip_optimizer=False): - assert os.path.isfile(checkpoint_path) - checkpoint_dict = torch.load(checkpoint_path, map_location='cpu') - iteration = checkpoint_dict['iteration'] - learning_rate = checkpoint_dict['learning_rate'] - if optimizer is not None and not skip_optimizer and checkpoint_dict['optimizer'] is not None: - optimizer.load_state_dict(checkpoint_dict['optimizer']) - saved_state_dict = 
checkpoint_dict['model'] - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - new_state_dict = {} - for k, v in state_dict.items(): - try: - # assert "dec" in k or "disc" in k - # print("load", k) - new_state_dict[k] = saved_state_dict[k] - assert saved_state_dict[k].shape == v.shape, (saved_state_dict[k].shape, v.shape) - except: - print("error, %s is not in the checkpoint" % k) - logger.info("%s is not in the checkpoint" % k) - new_state_dict[k] = v - if hasattr(model, 'module'): - model.module.load_state_dict(new_state_dict) - else: - model.load_state_dict(new_state_dict) - print("load ") - logger.info("Loaded checkpoint '{}' (iteration {})".format( - checkpoint_path, iteration)) - return model, optimizer, learning_rate, iteration - - -def save_checkpoint(model, optimizer, learning_rate, iteration, checkpoint_path): - logger.info("Saving model and optimizer state at iteration {} to {}".format( - iteration, checkpoint_path)) - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - torch.save({'model': state_dict, - 'iteration': iteration, - 'optimizer': optimizer.state_dict(), - 'learning_rate': learning_rate}, checkpoint_path) - -def clean_checkpoints(path_to_models='vdecoder/hifigan-a/', n_ckpts_to_keep=2, sort_by_time=True): - """Freeing up space by deleting saved ckpts - - Arguments: - path_to_models -- Path to the model directory - n_ckpts_to_keep -- Number of ckpts to keep, excluding G_0.pth and D_0.pth - sort_by_time -- True -> chronologically delete ckpts - False -> lexicographically delete ckpts - """ - ckpts_files = [f for f in os.listdir(path_to_models) if os.path.isfile(os.path.join(path_to_models, f))] - name_key = (lambda _f: int(re.compile('._(\d+)\.pth').match(_f).group(1))) - time_key = (lambda _f: os.path.getmtime(os.path.join(path_to_models, _f))) - sort_key = time_key if sort_by_time else name_key - x_sorted = lambda _x: sorted([f for f in ckpts_files if f.startswith(_x) and not f.endswith('_0.pth')], key=sort_key) - to_del = [os.path.join(path_to_models, fn) for fn in - (x_sorted('G')[:-n_ckpts_to_keep] + x_sorted('D')[:-n_ckpts_to_keep])] - del_info = lambda fn: logger.info(f".. 
Free up space by deleting ckpt {fn}") - del_routine = lambda x: [os.remove(x), del_info(x)] - rs = [del_routine(fn) for fn in to_del] - -def summarize(writer, global_step, scalars={}, histograms={}, images={}, audios={}, audio_sampling_rate=22050): - for k, v in scalars.items(): - writer.add_scalar(k, v, global_step) - for k, v in histograms.items(): - writer.add_histogram(k, v, global_step) - for k, v in images.items(): - writer.add_image(k, v, global_step, dataformats='HWC') - for k, v in audios.items(): - writer.add_audio(k, v, global_step, audio_sampling_rate) - - -def latest_checkpoint_path(dir_path, regex="G_*.pth"): - f_list = glob.glob(os.path.join(dir_path, regex)) - f_list.sort(key=lambda f: int("".join(filter(str.isdigit, f)))) - x = f_list[-1] - print(x) - return x - - -def plot_spectrogram_to_numpy(spectrogram): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(10,2)) - im = ax.imshow(spectrogram, aspect="auto", origin="lower", - interpolation='none') - plt.colorbar(im, ax=ax) - plt.xlabel("Frames") - plt.ylabel("Channels") - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def plot_alignment_to_numpy(alignment, info=None): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(6, 4)) - im = ax.imshow(alignment.transpose(), aspect='auto', origin='lower', - interpolation='none') - fig.colorbar(im, ax=ax) - xlabel = 'Decoder timestep' - if info is not None: - xlabel += '\n\n' + info - plt.xlabel(xlabel) - plt.ylabel('Encoder timestep') - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def load_wav_to_torch(full_path): - sampling_rate, data = read(full_path) - return torch.FloatTensor(data.astype(np.float32)), sampling_rate - - -def load_filepaths_and_text(filename, split="|"): - with open(filename, encoding='utf-8') as f: - filepaths_and_text = [line.strip().split(split) for line in f] - return filepaths_and_text - - -def get_hparams(init=True): - parser = argparse.ArgumentParser() - parser.add_argument('-c', '--config', type=str, default="./configs/base.json", - help='JSON file for configuration') - parser.add_argument('-m', '--model', type=str, required=True, - help='Model name') - - args = parser.parse_args() - model_dir = os.path.join("./logs", args.model) - - if not os.path.exists(model_dir): - os.makedirs(model_dir) - - config_path = args.config - config_save_path = os.path.join(model_dir, "config.json") - if init: - with open(config_path, "r") as f: - data = f.read() - with open(config_save_path, "w") as f: - f.write(data) - else: - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_dir(model_dir): - config_save_path = 
os.path.join(model_dir, "config.json") - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams =HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_file(config_path): - with open(config_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams =HParams(**config) - return hparams - - -def check_git_hash(model_dir): - source_dir = os.path.dirname(os.path.realpath(__file__)) - if not os.path.exists(os.path.join(source_dir, ".git")): - logger.warn("{} is not a git repository, therefore hash value comparison will be ignored.".format( - source_dir - )) - return - - cur_hash = subprocess.getoutput("git rev-parse HEAD") - - path = os.path.join(model_dir, "githash") - if os.path.exists(path): - saved_hash = open(path).read() - if saved_hash != cur_hash: - logger.warn("git hash values are different. {}(saved) != {}(current)".format( - saved_hash[:8], cur_hash[:8])) - else: - open(path, "w").write(cur_hash) - - -def get_logger(model_dir, filename="train.log"): - global logger - logger = logging.getLogger(os.path.basename(model_dir)) - logger.setLevel(logging.DEBUG) - - formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s") - if not os.path.exists(model_dir): - os.makedirs(model_dir) - h = logging.FileHandler(os.path.join(model_dir, filename)) - h.setLevel(logging.DEBUG) - h.setFormatter(formatter) - logger.addHandler(h) - return logger - - -def repeat_expand_2d(content, target_len): - # content : [h, t] - - src_len = content.shape[-1] - target = torch.zeros([content.shape[0], target_len], dtype=torch.float).to(content.device) - temp = torch.arange(src_len+1) * target_len / src_len - current_pos = 0 - for i in range(target_len): - if i < temp[current_pos+1]: - target[:, i] = content[:, current_pos] - else: - current_pos += 1 - target[:, i] = content[:, current_pos] - - return target - - -class HParams(): - def __init__(self, **kwargs): - for k, v in kwargs.items(): - if type(v) == dict: - v = HParams(**v) - self[k] = v - - def keys(self): - return self.__dict__.keys() - - def items(self): - return self.__dict__.items() - - def values(self): - return self.__dict__.values() - - def __len__(self): - return len(self.__dict__) - - def __getitem__(self, key): - return getattr(self, key) - - def __setitem__(self, key, value): - return setattr(self, key, value) - - def __contains__(self, key): - return key in self.__dict__ - - def __repr__(self): - return self.__dict__.__repr__() - diff --git a/spaces/Kangarroar/ApplioRVC-Inference/infer/lib/uvr5_pack/lib_v5/layers_123812KB .py b/spaces/Kangarroar/ApplioRVC-Inference/infer/lib/uvr5_pack/lib_v5/layers_123812KB .py deleted file mode 100644 index 4fc1b5cb85a3327f60cbb9f5deffbeeaaac516ad..0000000000000000000000000000000000000000 --- a/spaces/Kangarroar/ApplioRVC-Inference/infer/lib/uvr5_pack/lib_v5/layers_123812KB .py +++ /dev/null @@ -1,118 +0,0 @@ -import torch -import torch.nn.functional as F -from torch import nn - -from . 
import spec_utils - - -class Conv2DBNActiv(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU): - super(Conv2DBNActiv, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d( - nin, - nout, - kernel_size=ksize, - stride=stride, - padding=pad, - dilation=dilation, - bias=False, - ), - nn.BatchNorm2d(nout), - activ(), - ) - - def __call__(self, x): - return self.conv(x) - - -class SeperableConv2DBNActiv(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU): - super(SeperableConv2DBNActiv, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d( - nin, - nin, - kernel_size=ksize, - stride=stride, - padding=pad, - dilation=dilation, - groups=nin, - bias=False, - ), - nn.Conv2d(nin, nout, kernel_size=1, bias=False), - nn.BatchNorm2d(nout), - activ(), - ) - - def __call__(self, x): - return self.conv(x) - - -class Encoder(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.LeakyReLU): - super(Encoder, self).__init__() - self.conv1 = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ) - self.conv2 = Conv2DBNActiv(nout, nout, ksize, stride, pad, activ=activ) - - def __call__(self, x): - skip = self.conv1(x) - h = self.conv2(skip) - - return h, skip - - -class Decoder(nn.Module): - def __init__( - self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.ReLU, dropout=False - ): - super(Decoder, self).__init__() - self.conv = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ) - self.dropout = nn.Dropout2d(0.1) if dropout else None - - def __call__(self, x, skip=None): - x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=True) - if skip is not None: - skip = spec_utils.crop_center(skip, x) - x = torch.cat([x, skip], dim=1) - h = self.conv(x) - - if self.dropout is not None: - h = self.dropout(h) - - return h - - -class ASPPModule(nn.Module): - def __init__(self, nin, nout, dilations=(4, 8, 16), activ=nn.ReLU): - super(ASPPModule, self).__init__() - self.conv1 = nn.Sequential( - nn.AdaptiveAvgPool2d((1, None)), - Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ), - ) - self.conv2 = Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ) - self.conv3 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[0], dilations[0], activ=activ - ) - self.conv4 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[1], dilations[1], activ=activ - ) - self.conv5 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[2], dilations[2], activ=activ - ) - self.bottleneck = nn.Sequential( - Conv2DBNActiv(nin * 5, nout, 1, 1, 0, activ=activ), nn.Dropout2d(0.1) - ) - - def forward(self, x): - _, _, h, w = x.size() - feat1 = F.interpolate( - self.conv1(x), size=(h, w), mode="bilinear", align_corners=True - ) - feat2 = self.conv2(x) - feat3 = self.conv3(x) - feat4 = self.conv4(x) - feat5 = self.conv5(x) - out = torch.cat((feat1, feat2, feat3, feat4, feat5), dim=1) - bottle = self.bottleneck(out) - return bottle diff --git a/spaces/Kangarroar/ApplioRVC-Inference/tools/app.py b/spaces/Kangarroar/ApplioRVC-Inference/tools/app.py deleted file mode 100644 index 602fbb71a49f2537295337cdcecf501abdd74153..0000000000000000000000000000000000000000 --- a/spaces/Kangarroar/ApplioRVC-Inference/tools/app.py +++ /dev/null @@ -1,148 +0,0 @@ -import logging -import os - -# os.system("wget -P cvec/ https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/hubert_base.pt") -import gradio as gr -from dotenv import load_dotenv - -from configs.config import Config -from i18n import I18nAuto -from infer.modules.vc.pipeline 
import Pipeline -VC = Pipeline - -logging.getLogger("numba").setLevel(logging.WARNING) -logging.getLogger("markdown_it").setLevel(logging.WARNING) -logging.getLogger("urllib3").setLevel(logging.WARNING) -logging.getLogger("matplotlib").setLevel(logging.WARNING) -logger = logging.getLogger(__name__) - -i18n = I18nAuto() -#(i18n) - -load_dotenv() -config = Config() -vc = VC(config) - -weight_root = os.getenv("weight_root") -weight_uvr5_root = os.getenv("weight_uvr5_root") -index_root = os.getenv("index_root") -names = [] -hubert_model = None -for name in os.listdir(weight_root): - if name.endswith(".pth"): - names.append(name) -index_paths = [] -for root, dirs, files in os.walk(index_root, topdown=False): - for name in files: - if name.endswith(".index") and "trained" not in name: - index_paths.append("%s/%s" % (root, name)) - - -app = gr.Blocks() -with app: - with gr.Tabs(): - with gr.TabItem("在线demo"): - gr.Markdown( - value=""" - RVC 在线demo - """ - ) - sid = gr.Dropdown(label=i18n("推理音色"), choices=sorted(names)) - with gr.Column(): - spk_item = gr.Slider( - minimum=0, - maximum=2333, - step=1, - label=i18n("请选择说话人id"), - value=0, - visible=False, - interactive=True, - ) - sid.change(fn=vc.get_vc, inputs=[sid], outputs=[spk_item]) - gr.Markdown( - value=i18n("男转女推荐+12key, 女转男推荐-12key, 如果音域爆炸导致音色失真也可以自己调整到合适音域. ") - ) - vc_input3 = gr.Audio(label="上传音频(长度小于90秒)") - vc_transform0 = gr.Number(label=i18n("变调(整数, 半音数量, 升八度12降八度-12)"), value=0) - f0method0 = gr.Radio( - label=i18n("选择音高提取算法,输入歌声可用pm提速,harvest低音好但巨慢无比,crepe效果好但吃GPU"), - choices=["pm", "harvest", "crepe", "rmvpe"], - value="pm", - interactive=True, - ) - filter_radius0 = gr.Slider( - minimum=0, - maximum=7, - label=i18n(">=3则使用对harvest音高识别的结果使用中值滤波,数值为滤波半径,使用可以削弱哑音"), - value=3, - step=1, - interactive=True, - ) - with gr.Column(): - file_index1 = gr.Textbox( - label=i18n("特征检索库文件路径,为空则使用下拉的选择结果"), - value="", - interactive=False, - visible=False, - ) - file_index2 = gr.Dropdown( - label=i18n("自动检测index路径,下拉式选择(dropdown)"), - choices=sorted(index_paths), - interactive=True, - ) - index_rate1 = gr.Slider( - minimum=0, - maximum=1, - label=i18n("检索特征占比"), - value=0.88, - interactive=True, - ) - resample_sr0 = gr.Slider( - minimum=0, - maximum=48000, - label=i18n("后处理重采样至最终采样率,0为不进行重采样"), - value=0, - step=1, - interactive=True, - ) - rms_mix_rate0 = gr.Slider( - minimum=0, - maximum=1, - label=i18n("输入源音量包络替换输出音量包络融合比例,越靠近1越使用输出包络"), - value=1, - interactive=True, - ) - protect0 = gr.Slider( - minimum=0, - maximum=0.5, - label=i18n("保护清辅音和呼吸声,防止电音撕裂等artifact,拉满0.5不开启,调低加大保护力度但可能降低索引效果"), - value=0.33, - step=0.01, - interactive=True, - ) - f0_file = gr.File(label=i18n("F0曲线文件, 可选, 一行一个音高, 代替默认F0及升降调")) - but0 = gr.Button(i18n("转换"), variant="primary") - vc_output1 = gr.Textbox(label=i18n("输出信息")) - vc_output2 = gr.Audio(label=i18n("输出音频(右下角三个点,点了可以下载)")) - but0.click( - vc.vc_single, - [ - spk_item, - vc_input3, - vc_transform0, - f0_file, - f0method0, - file_index1, - file_index2, - # file_big_npy1, - index_rate1, - filter_radius0, - resample_sr0, - rms_mix_rate0, - protect0, - ], - [vc_output1, vc_output2], - ) - - -app.launch() diff --git a/spaces/KaygNas/cut-it/src/Ground.ts b/spaces/KaygNas/cut-it/src/Ground.ts deleted file mode 100644 index e47d2509f0cb20d3d878cc644a7a7d6cb74e1448..0000000000000000000000000000000000000000 --- a/spaces/KaygNas/cut-it/src/Ground.ts +++ /dev/null @@ -1,117 +0,0 @@ -import { Matrix, MeshBuilder } from '@babylonjs/core' -import type { GroundMesh, Mesh, Scene } from '@babylonjs/core' -import { 
AdvancedDynamicTexture, Container, Control, Rectangle } from '@babylonjs/gui' -import type { Image } from './Image' -import { warn } from './utils' - -export class Ground { - mesh: GroundMesh - image: Image - private _pictureTexture: AdvancedDynamicTexture - constructor(scene: Scene, image: Image) { - this.mesh = this._createMesh(scene) - this._pictureTexture = AdvancedDynamicTexture.CreateForMesh(this.mesh) - this.image = image - image.observable.add((event) => { - if (event.type === 'loaded') - this._displayImage() - else if (event.type === 'detected') - this._displayDetections() - else if (event.type === 'classified') - this._displayClassification() - }) - } - - private _createMesh(scene: Scene) { - const ground = MeshBuilder.CreateGround('Ground', { width: 25, height: 25 }, scene) - return ground - } - - private async _displayImage() { - const { image, mesh, _pictureTexture } = this - if (!image.isLoaded()) - return - this._clearImage() - this._clearDetections() - this._clearClassification() - const _image = image.image - const bbox = mesh.getBoundingInfo().boundingBox - const scaleX = (_image.imageWidth / _image.imageHeight) / (bbox.extendSize.x / bbox.extendSize.z) - const scaling = Matrix.Scaling(scaleX, 1, 1) - mesh.bakeTransformIntoVertices(scaling) - mesh.getChildMeshes().forEach(child => (child as Mesh).bakeTransformIntoVertices(scaling)) - const container = new Container('ImageContainer') - container.addControl(_image) - _pictureTexture.addControl(container) - } - - private _clearImage() { - const container = this._pictureTexture.getControlByName('ImageContainer') - if (container) - this._pictureTexture.removeControl(container) - } - - private async _displayDetections() { - if (!this.image.isDetected()) { - warn('Image must be loaded before setting detections.') - return - } - this._clearDetections() - const { _pictureTexture } = this - const { image: _image, detections } = this.image - const container = new Container('DetectionsContainer') - detections!.forEach((detection) => { - const rect = new Rectangle(detection.label) - rect.width = (detection.box.xmax - detection.box.xmin) / _image.imageWidth - rect.height = (detection.box.ymax - detection.box.ymin) / _image.imageHeight - rect.left = `${detection.box.xmin / _image.imageWidth * 100}%` - rect.top = `${detection.box.ymin / _image.imageHeight * 100}%` - rect.color = '#ffff00' - rect.thickness = 4 - rect.background = '#ffff0026' - rect.verticalAlignment = Control.VERTICAL_ALIGNMENT_TOP - rect.horizontalAlignment = Control.HORIZONTAL_ALIGNMENT_LEFT - container.addControl(rect) - }) - _pictureTexture.addControl(container) - } - - private _clearDetections() { - const container = this._pictureTexture.getControlByName('DetectionsContainer') - if (container) - this._pictureTexture.removeControl(container) - } - - private _displayClassification() { - if (!this.image.isClassified()) { - warn('Image must be classified before displayed.') - return - } - this._clearClassification() - const { _pictureTexture } = this - const { classification: { detection }, image: _image } = this.image - const rect = new Rectangle(detection.label) - rect.width = (detection.box.xmax - detection.box.xmin) / _image.imageWidth - rect.height = (detection.box.ymax - detection.box.ymin) / _image.imageHeight - rect.left = `${detection.box.xmin / _image.imageWidth * 100}%` - rect.top = `${detection.box.ymin / _image.imageHeight * 100}%` - rect.color = '#00ff00' - rect.thickness = 4 - rect.background = '#00ff0026' - rect.verticalAlignment = 
Control.VERTICAL_ALIGNMENT_TOP - rect.horizontalAlignment = Control.HORIZONTAL_ALIGNMENT_LEFT - const container = new Container('ClassificationContainer') - container.addControl(rect) - _pictureTexture.addControl(container) - } - - private _clearClassification() { - const container = this._pictureTexture.getControlByName('ClassificationContainer') - if (container) - this._pictureTexture.removeControl(container) - } - - static create(scene: Scene, image: Image) { - return new Ground(scene, image) - } -} diff --git a/spaces/Kevin676/AutoGPT/ui/api.py b/spaces/Kevin676/AutoGPT/ui/api.py deleted file mode 100644 index 3b46ad32148b23f06c6eb64c88708fc2bf92e4dc..0000000000000000000000000000000000000000 --- a/spaces/Kevin676/AutoGPT/ui/api.py +++ /dev/null @@ -1,146 +0,0 @@ -import os, sys -import utils -import uuid -import json -import subprocess, threading - -FILE_DIR = os.path.dirname(os.path.abspath(__file__)) -REPO_DIR = os.path.dirname(FILE_DIR) -STATE_DIR = os.path.join(FILE_DIR, "state") -sys.path.append(REPO_DIR) -if not os.path.exists(STATE_DIR): - os.mkdir(STATE_DIR) -import time - - -def get_openai_api_key(): - return os.getenv("OPENAI_API_KEY") - - -running_apis = [] - - -def get_state(state_file): - with open(state_file, "r") as f: - state = json.load(f) - return state - - -def set_state(state_file, state): - with open(state_file, "w") as f: - json.dump(state, f) - - -class AutoAPI: - def __init__(self, openai_key, ai_name, ai_role, top_5_goals): - self.openai_key = openai_key - hex = uuid.uuid4().hex - print(hex) - self.state_file = os.path.join(STATE_DIR, f"state_{hex}.json") - self.log_file = os.path.join(STATE_DIR, f"log_{hex}.json") - - newline = "\n" - with open(os.path.join(REPO_DIR, "ai_settings.yaml"), "w") as f: - f.write( - f"""ai_goals: -{newline.join([f'- {goal[0]}' for goal in top_5_goals if goal[0]])} -ai_name: {ai_name} -ai_role: {ai_role} -""" - ) - state = { - "pending_input": None, - "awaiting_input": False, - "messages": [], - "last_message_read_index": -1, - } - set_state(self.state_file, state) - - with open(self.log_file, "w") as f: - subprocess.Popen( - [ - "python", - os.path.join(REPO_DIR, "ui", "api.py"), - openai_key, - self.state_file, - ], - cwd=REPO_DIR, - stdout=f, - stderr=f, - ) - - def send_message(self, message="Y"): - state = get_state(self.state_file) - state["pending_input"] = message - state["awaiting_input"] = False - set_state(self.state_file, state) - - def get_chatbot_response(self): - while True: - state = get_state(self.state_file) - if ( - state["awaiting_input"] - and state["last_message_read_index"] >= len(state["messages"]) - 1 - ): - break - if state["last_message_read_index"] >= len(state["messages"]) - 1: - time.sleep(1) - else: - state["last_message_read_index"] += 1 - title, content = state["messages"][state["last_message_read_index"]] - yield (f"**{title.strip()}** " if title else "") + utils.remove_color( - content - ).replace("\n", "
") - set_state(self.state_file, state) - - -if __name__ == "__main__": - print(sys.argv) - _, openai_key, state_file = sys.argv - os.environ["OPENAI_API_KEY"] = openai_key - import autogpt.config.config - from autogpt.logs import logger - from autogpt.cli import main - import autogpt.utils - from autogpt.spinner import Spinner - - def add_message(title, content): - state = get_state(state_file) - state["messages"].append((title, content)) - set_state(state_file, state) - - def typewriter_log(title="", title_color="", content="", *args, **kwargs): - add_message(title, content) - - def warn(message, title="", *args, **kwargs): - add_message(title, message) - - def error(title, message="", *args, **kwargs): - add_message(title, message) - - def clean_input(prompt=""): - add_message(None, prompt) - state = get_state(state_file) - state["awaiting_input"] = True - set_state(state_file, state) - while state["pending_input"] is None: - state = get_state(state_file) - print("Waiting for input...") - time.sleep(1) - print("Got input") - pending_input = state["pending_input"] - state["pending_input"] = None - set_state(state_file, state) - return pending_input - - def spinner_start(): - add_message(None, "Thinking...") - - logger.typewriter_log = typewriter_log - logger.warn = warn - logger.error = error - autogpt.utils.clean_input = clean_input - Spinner.spin = spinner_start - - sys.argv = sys.argv[:1] - main() diff --git a/spaces/KevinGeng/Laronix_voice_quality_checking_system_FILEIO/local/convert_metrics.py b/spaces/KevinGeng/Laronix_voice_quality_checking_system_FILEIO/local/convert_metrics.py deleted file mode 100644 index 2c75bd7ac5f75a01abc4c9875bb69bbb202dfd29..0000000000000000000000000000000000000000 --- a/spaces/KevinGeng/Laronix_voice_quality_checking_system_FILEIO/local/convert_metrics.py +++ /dev/null @@ -1,73 +0,0 @@ -import numpy as np -import matplotlib.pyplot as plt - -# Natural MOS to AVA MOS - -def linear_function(x): - m = (4 - 1) / (1.5 - 1) - b = 1 - m * 1 - return m * x + b - -def quadratic_function(x): - return -0.0816 * (x - 5) ** 2 + 5 - -# Natural MOS to AVA MOS -def nat2avaMOS(x): - if x <= 1.5: - return linear_function(x) - elif x >1.5 and x <= 5: - return quadratic_function(x) - -# Word error rate to Intellibility Score (X is percentage) -def WER2INTELI(x): - if x <= 10: - return 100 - elif x <= 100: - slope = (30 - 100) / (100 - 10) - intercept = 100 - slope * 10 - return slope * x + intercept - else: - return 100 * np.exp(-0.01 * (x - 100)) - -# 生成 x 值 -# x = np.linspace(0, 200, 400) # 从0到200生成400个点 - -# 计算对应的 y 值 -# y = [WER2INTELI(xi) for xi in x] - - -# plt.plot(x, y) -# plt.xlabel('x') -# plt.ylabel('f(x)') -# plt.title('Custom Function') -# plt.grid(True) -# plt.show() - -# 生成 x 值的范围 -x1 = np.linspace(1, 1.5, 100) -x2 = np.linspace(1.5, 5, 100) - -# 计算对应的 y 值 -y1 = linear_function(x1) -y2 = quadratic_function(x2) - -# 绘制线性部分 -plt.plot(x1, y1, label='Linear Function (1 <= x <= 1.5)') - -# 绘制二次部分 -plt.plot(x2, y2, label='Quadratic Function (1.5 <= x <= 5)') - -# 添加标签和标题 -plt.xlabel('Natural Mean Opinion Score') -plt.ylabel('AVA Mean Opinion Score') -plt.title('nat2avaMOS') - -# 添加图例 -plt.legend() - -# 显示图形 -plt.grid(True) - -# 显示图像 -plt.savefig("./local/nat2avaMOS.png") -# plt.savefig("./local/WER2INT.png") \ No newline at end of file diff --git a/spaces/KushJaggi/YOLOv8/config.py b/spaces/KushJaggi/YOLOv8/config.py deleted file mode 100644 index 30a0a8149d5b1bb1a8f3f2868018e62ec45eefef..0000000000000000000000000000000000000000 --- 
a/spaces/KushJaggi/YOLOv8/config.py +++ /dev/null @@ -1,17 +0,0 @@ -yolo_config = { - # Basic - 'img_size': (416, 416, 3), - 'anchors': [12, 16, 19, 36, 40, 28, 36, 75, 76, 55, 72, 146, 142, 110, 192, 243, 459, 401], - 'strides': [8, 16, 32], - 'xyscale': [1.2, 1.1, 1.05], - - # Training - 'iou_loss_thresh': 0.5, - 'batch_size': 8, - 'num_gpu': 1, # 2, - - # Inference - 'max_boxes': 100, - 'iou_threshold': 0.413, - 'score_threshold': 0.3, -} diff --git a/spaces/KyanChen/RSPrompter/mmdet/datasets/samplers/multi_source_sampler.py b/spaces/KyanChen/RSPrompter/mmdet/datasets/samplers/multi_source_sampler.py deleted file mode 100644 index 6efcde35e1375547239825a8f78a9e74f7825290..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/datasets/samplers/multi_source_sampler.py +++ /dev/null @@ -1,214 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import itertools -from typing import Iterator, List, Optional, Sized, Union - -import numpy as np -import torch -from mmengine.dataset import BaseDataset -from mmengine.dist import get_dist_info, sync_random_seed -from torch.utils.data import Sampler - -from mmdet.registry import DATA_SAMPLERS - - -@DATA_SAMPLERS.register_module() -class MultiSourceSampler(Sampler): - r"""Multi-Source Infinite Sampler. - - According to the sampling ratio, sample data from different - datasets to form batches. - - Args: - dataset (Sized): The dataset. - batch_size (int): Size of mini-batch. - source_ratio (list[int | float]): The sampling ratio of different - source datasets in a mini-batch. - shuffle (bool): Whether shuffle the dataset or not. Defaults to True. - seed (int, optional): Random seed. If None, set a random seed. - Defaults to None. - - Examples: - >>> dataset_type = 'ConcatDataset' - >>> sub_dataset_type = 'CocoDataset' - >>> data_root = 'data/coco/' - >>> sup_ann = '../coco_semi_annos/instances_train2017.1@10.json' - >>> unsup_ann = '../coco_semi_annos/' \ - >>> 'instances_train2017.1@10-unlabeled.json' - >>> dataset = dict(type=dataset_type, - >>> datasets=[ - >>> dict( - >>> type=sub_dataset_type, - >>> data_root=data_root, - >>> ann_file=sup_ann, - >>> data_prefix=dict(img='train2017/'), - >>> filter_cfg=dict(filter_empty_gt=True, min_size=32), - >>> pipeline=sup_pipeline), - >>> dict( - >>> type=sub_dataset_type, - >>> data_root=data_root, - >>> ann_file=unsup_ann, - >>> data_prefix=dict(img='train2017/'), - >>> filter_cfg=dict(filter_empty_gt=True, min_size=32), - >>> pipeline=unsup_pipeline), - >>> ]) - >>> train_dataloader = dict( - >>> batch_size=5, - >>> num_workers=5, - >>> persistent_workers=True, - >>> sampler=dict(type='MultiSourceSampler', - >>> batch_size=5, source_ratio=[1, 4]), - >>> batch_sampler=None, - >>> dataset=dataset) - """ - - def __init__(self, - dataset: Sized, - batch_size: int, - source_ratio: List[Union[int, float]], - shuffle: bool = True, - seed: Optional[int] = None) -> None: - - assert hasattr(dataset, 'cumulative_sizes'),\ - f'The dataset must be ConcatDataset, but get {dataset}' - assert isinstance(batch_size, int) and batch_size > 0, \ - 'batch_size must be a positive integer value, ' \ - f'but got batch_size={batch_size}' - assert isinstance(source_ratio, list), \ - f'source_ratio must be a list, but got source_ratio={source_ratio}' - assert len(source_ratio) == len(dataset.cumulative_sizes), \ - 'The length of source_ratio must be equal to ' \ - f'the number of datasets, but got source_ratio={source_ratio}' - - rank, world_size = get_dist_info() - self.rank = rank - self.world_size 
= world_size - - self.dataset = dataset - self.cumulative_sizes = [0] + dataset.cumulative_sizes - self.batch_size = batch_size - self.source_ratio = source_ratio - - self.num_per_source = [ - int(batch_size * sr / sum(source_ratio)) for sr in source_ratio - ] - self.num_per_source[0] = batch_size - sum(self.num_per_source[1:]) - - assert sum(self.num_per_source) == batch_size, \ - 'The sum of num_per_source must be equal to ' \ - f'batch_size, but get {self.num_per_source}' - - self.seed = sync_random_seed() if seed is None else seed - self.shuffle = shuffle - self.source2inds = { - source: self._indices_of_rank(len(ds)) - for source, ds in enumerate(dataset.datasets) - } - - def _infinite_indices(self, sample_size: int) -> Iterator[int]: - """Infinitely yield a sequence of indices.""" - g = torch.Generator() - g.manual_seed(self.seed) - while True: - if self.shuffle: - yield from torch.randperm(sample_size, generator=g).tolist() - else: - yield from torch.arange(sample_size).tolist() - - def _indices_of_rank(self, sample_size: int) -> Iterator[int]: - """Slice the infinite indices by rank.""" - yield from itertools.islice( - self._infinite_indices(sample_size), self.rank, None, - self.world_size) - - def __iter__(self) -> Iterator[int]: - batch_buffer = [] - while True: - for source, num in enumerate(self.num_per_source): - batch_buffer_per_source = [] - for idx in self.source2inds[source]: - idx += self.cumulative_sizes[source] - batch_buffer_per_source.append(idx) - if len(batch_buffer_per_source) == num: - batch_buffer += batch_buffer_per_source - break - yield from batch_buffer - batch_buffer = [] - - def __len__(self) -> int: - return len(self.dataset) - - def set_epoch(self, epoch: int) -> None: - """Not supported in `epoch-based runner.""" - pass - - -@DATA_SAMPLERS.register_module() -class GroupMultiSourceSampler(MultiSourceSampler): - r"""Group Multi-Source Infinite Sampler. - - According to the sampling ratio, sample data from different - datasets but the same group to form batches. - - Args: - dataset (Sized): The dataset. - batch_size (int): Size of mini-batch. - source_ratio (list[int | float]): The sampling ratio of different - source datasets in a mini-batch. - shuffle (bool): Whether shuffle the dataset or not. Defaults to True. - seed (int, optional): Random seed. If None, set a random seed. - Defaults to None. 
- """ - - def __init__(self, - dataset: BaseDataset, - batch_size: int, - source_ratio: List[Union[int, float]], - shuffle: bool = True, - seed: Optional[int] = None) -> None: - super().__init__( - dataset=dataset, - batch_size=batch_size, - source_ratio=source_ratio, - shuffle=shuffle, - seed=seed) - - self._get_source_group_info() - self.group_source2inds = [{ - source: - self._indices_of_rank(self.group2size_per_source[source][group]) - for source in range(len(dataset.datasets)) - } for group in range(len(self.group_ratio))] - - def _get_source_group_info(self) -> None: - self.group2size_per_source = [{0: 0, 1: 0}, {0: 0, 1: 0}] - self.group2inds_per_source = [{0: [], 1: []}, {0: [], 1: []}] - for source, dataset in enumerate(self.dataset.datasets): - for idx in range(len(dataset)): - data_info = dataset.get_data_info(idx) - width, height = data_info['width'], data_info['height'] - group = 0 if width < height else 1 - self.group2size_per_source[source][group] += 1 - self.group2inds_per_source[source][group].append(idx) - - self.group_sizes = np.zeros(2, dtype=np.int64) - for group2size in self.group2size_per_source: - for group, size in group2size.items(): - self.group_sizes[group] += size - self.group_ratio = self.group_sizes / sum(self.group_sizes) - - def __iter__(self) -> Iterator[int]: - batch_buffer = [] - while True: - group = np.random.choice( - list(range(len(self.group_ratio))), p=self.group_ratio) - for source, num in enumerate(self.num_per_source): - batch_buffer_per_source = [] - for idx in self.group_source2inds[group][source]: - idx = self.group2inds_per_source[source][group][ - idx] + self.cumulative_sizes[source] - batch_buffer_per_source.append(idx) - if len(batch_buffer_per_source) == num: - batch_buffer += batch_buffer_per_source - break - yield from batch_buffer - batch_buffer = [] diff --git a/spaces/Lamai/LAMAIGPT/tests/smoke_test.py b/spaces/Lamai/LAMAIGPT/tests/smoke_test.py deleted file mode 100644 index 1b9d643fc21f3703384a2bb4f2bd1d725f4dd418..0000000000000000000000000000000000000000 --- a/spaces/Lamai/LAMAIGPT/tests/smoke_test.py +++ /dev/null @@ -1,59 +0,0 @@ -"""Smoke test for the autogpt package.""" -import os -import subprocess -import sys - -import pytest - -from autogpt.commands.file_operations import delete_file, read_file - - -@pytest.mark.integration_test -def test_write_file() -> None: - """ - Test case to check if the write_file command can successfully write 'Hello World' to a file - named 'hello_world.txt'. - - Read the current ai_settings.yaml file and store its content. - """ - env_vars = {"MEMORY_BACKEND": "no_memory", "TEMPERATURE": "0"} - ai_settings = None - if os.path.exists("ai_settings.yaml"): - with open("ai_settings.yaml", "r") as f: - ai_settings = f.read() - os.remove("ai_settings.yaml") - - try: - if os.path.exists("hello_world.txt"): - # Clean up any existing 'hello_world.txt' file before testing. - delete_file("hello_world.txt") - # Prepare input data for the test. - input_data = """write_file-GPT -an AI designed to use the write_file command to write 'Hello World' into a file named "hello_world.txt" and then use the task_complete command to complete the task. -Use the write_file command to write 'Hello World' into a file named "hello_world.txt". -Use the task_complete command to complete the task. -Do not use any other commands. - -y -5 -EOF""" - command = f"{sys.executable} -m autogpt" - - # Execute the script with the input data. 
- process = subprocess.Popen( - command, - stdin=subprocess.PIPE, - shell=True, - env={**os.environ, **env_vars}, - ) - process.communicate(input_data.encode()) - - # Read the content of the 'hello_world.txt' file created during the test. - content = read_file("hello_world.txt") - finally: - if ai_settings: - # Restore the original ai_settings.yaml file. - with open("ai_settings.yaml", "w") as f: - f.write(ai_settings) - - # Check if the content of the 'hello_world.txt' file is equal to 'Hello World'. - assert content == "Hello World", f"Expected 'Hello World', got {content}" diff --git a/spaces/LanguageBind/LanguageBind/open_clip/hf_model.py b/spaces/LanguageBind/LanguageBind/open_clip/hf_model.py deleted file mode 100644 index 08dbdbcde02b550ca765ca9bcb0b667be2c0443d..0000000000000000000000000000000000000000 --- a/spaces/LanguageBind/LanguageBind/open_clip/hf_model.py +++ /dev/null @@ -1,193 +0,0 @@ -""" huggingface model adapter - -Wraps HuggingFace transformers (https://github.com/huggingface/transformers) models for use as a text tower in CLIP model. -""" -import re - -import torch -import torch.nn as nn -from torch import TensorType - -try: - import transformers - from transformers import AutoModel, AutoTokenizer, AutoConfig, PretrainedConfig - from transformers.modeling_outputs import BaseModelOutput, BaseModelOutputWithPooling, \ - BaseModelOutputWithPoolingAndCrossAttentions -except ImportError as e: - transformers = None - - - class BaseModelOutput: - pass - - - class PretrainedConfig: - pass - -from .hf_configs import arch_dict - - -# utils -def _camel2snake(s): - return re.sub(r'(? self.voiced_threshold) - return uv - - def forward(self, f0, upp): - """sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - with torch.no_grad(): - f0 = f0[:, None].transpose(1, 2) - f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device) - # fundamental component - f0_buf[:, :, 0] = f0[:, :, 0] - for idx in np.arange(self.harmonic_num): - f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * ( - idx + 2 - ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic - rad_values = (f0_buf / self.sampling_rate) % 1 ###%1意味着n_har的乘积无法后处理优化 - rand_ini = torch.rand( - f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device - ) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - tmp_over_one = torch.cumsum(rad_values, 1) # % 1 #####%1意味着后面的cumsum无法再优化 - tmp_over_one *= upp - tmp_over_one = F.interpolate( - tmp_over_one.transpose(2, 1), - scale_factor=upp, - mode="linear", - align_corners=True, - ).transpose(2, 1) - rad_values = F.interpolate( - rad_values.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose( - 2, 1 - ) ####### - tmp_over_one %= 1 - tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - sine_waves = torch.sin( - torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi - ) - sine_waves = sine_waves * self.sine_amp - uv = self._f02uv(f0) - uv = F.interpolate( - uv.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose(2, 1) - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - sine_waves = sine_waves * uv + noise - return sine_waves, uv, noise - - -class SourceModuleHnNSF(torch.nn.Module): - 
"""SourceModule for hn-nsf - SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0) - sampling_rate: sampling_rate in Hz - harmonic_num: number of harmonic above F0 (default: 0) - sine_amp: amplitude of sine source signal (default: 0.1) - add_noise_std: std of additive Gaussian noise (default: 0.003) - note that amplitude of noise in unvoiced is decided - by sine_amp - voiced_threshold: threhold to set U/V given F0 (default: 0) - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - uv (batchsize, length, 1) - """ - - def __init__( - self, - sampling_rate, - harmonic_num=0, - sine_amp=0.1, - add_noise_std=0.003, - voiced_threshod=0, - is_half=True, - ): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - self.is_half = is_half - # to produce sine waveforms - self.l_sin_gen = SineGen( - sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod - ) - - # to merge source harmonics into a single excitation - self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def forward(self, x, upp=None): - sine_wavs, uv, _ = self.l_sin_gen(x, upp) - if self.is_half: - sine_wavs = sine_wavs.half() - sine_merge = self.l_tanh(self.l_linear(sine_wavs)) - return sine_merge, None, None # noise, uv - - -class GeneratorNSF(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - sr, - is_half=False, - ): - super(GeneratorNSF, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - - self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates)) - self.m_source = SourceModuleHnNSF( - sampling_rate=sr, harmonic_num=0, is_half=is_half - ) - self.noise_convs = nn.ModuleList() - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - c_cur = upsample_initial_channel // (2 ** (i + 1)) - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - if i + 1 < len(upsample_rates): - stride_f0 = np.prod(upsample_rates[i + 1 :]) - self.noise_convs.append( - Conv1d( - 1, - c_cur, - kernel_size=stride_f0 * 2, - stride=stride_f0, - padding=stride_f0 // 2, - ) - ) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - self.upp = np.prod(upsample_rates) - - def forward(self, x, f0, g=None): - har_source, noi_source, uv = self.m_source(f0, self.upp) - har_source = har_source.transpose(1, 2) - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in 
range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -sr2sr = { - "32k": 32000, - "40k": 40000, - "48k": 48000, -} - - -class SynthesizerTrnMs256NSFsid(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds - ): # 这里ds是id,[bs,1] - # print(1,pitch.shape)#[bs,t] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length) - pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size) - # print(-2,pitchf.shape,z_slice.shape) - o = self.dec(z_slice, pitchf, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, pitch, nsff0, sid, 
max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs768NSFsid(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder768( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds - ): # 这里ds是id,[bs,1] - # print(1,pitch.shape)#[bs,t] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length) - pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size) - # print(-2,pitchf.shape,z_slice.shape) - o = self.dec(z_slice, pitchf, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, pitch, nsff0, sid, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g) 
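
The `SineGen`/`SourceModuleHnNSF` pair above converts a frame-level F0 contour into a sample-rate harmonic excitation that `GeneratorNSF` injects into its upsampling stack. Below is a minimal, hedged sketch (not part of the original file) of exercising the excitation module on a dummy contour; the import path is an assumption, and the `(batch, frames)` input layout and output shape follow the `forward` implementations above rather than the docstring.

```python
# Hedged sketch: drive SourceModuleHnNSF with a flat 220 Hz contour and check
# the upsampled excitation shape. `models` is a hypothetical import path for
# this file's module.
import torch

from models import SourceModuleHnNSF  # hypothetical import path

sampling_rate = 40000
upp = 400                                   # product of the generator's upsample_rates

source = SourceModuleHnNSF(sampling_rate, harmonic_num=0, is_half=False)

f0 = torch.full((1, 100), 220.0)            # (batch, frames): frame-level F0 in Hz
sine_merge, _, _ = source(f0, upp=upp)      # harmonic excitation at sample rate
print(sine_merge.shape)                     # torch.Size([1, 100 * upp, 1]) -> [1, 40000, 1]
```
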
- return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs256NSFsid_nono(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr=None, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=False, - ) - self.dec = Generator( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, y, y_lengths, ds): # 这里ds是id,[bs,1] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - o = self.dec(z_slice, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, sid, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs768NSFsid_nono(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr=None, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - 
self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder768( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=False, - ) - self.dec = Generator( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, y, y_lengths, ds): # 这里ds是id,[bs,1] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - o = self.dec(z_slice, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, sid, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11, 17] - # periods = [3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class MultiPeriodDiscriminatorV2(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminatorV2, self).__init__() - # periods = [2, 3, 5, 7, 11, 17] - periods = [2, 3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = 
nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f( - Conv2d( - 1, - 32, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 32, - 128, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 128, - 512, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 512, - 1024, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 1024, - 1024, - (kernel_size, 1), - 1, - padding=(get_padding(kernel_size, 1), 0), - ) - ), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap diff --git a/spaces/Lewislou/Lewislou-cell-seg-sribd/classifiers.py b/spaces/Lewislou/Lewislou-cell-seg-sribd/classifiers.py deleted file mode 100644 index 0cbb7c5ef2fbe0b5345ebfc50318ebb42d5aac35..0000000000000000000000000000000000000000 --- a/spaces/Lewislou/Lewislou-cell-seg-sribd/classifiers.py +++ /dev/null @@ -1,261 +0,0 @@ -from functools import partial -from typing import Any, Callable, List, Optional, Type, Union - -import torch -import torch.nn as nn -from torch import Tensor - -def conv3x3(in_planes: int, out_planes: int, stride: int = 1, groups: int = 1, dilation: int = 1) -> nn.Conv2d: - """3x3 convolution with padding""" - return nn.Conv2d( - in_planes, - out_planes, - kernel_size=3, - stride=stride, - padding=dilation, - groups=groups, - bias=False, - 
dilation=dilation, - ) - - -def conv1x1(in_planes: int, out_planes: int, stride: int = 1) -> nn.Conv2d: - """1x1 convolution""" - return nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, bias=False) - - -class BasicBlock(nn.Module): - expansion: int = 1 - - def __init__( - self, - inplanes: int, - planes: int, - stride: int = 1, - downsample: Optional[nn.Module] = None, - groups: int = 1, - base_width: int = 64, - dilation: int = 1, - norm_layer: Optional[Callable[..., nn.Module]] = None, - ) -> None: - super().__init__() - if norm_layer is None: - norm_layer = nn.BatchNorm2d - if groups != 1 or base_width != 64: - raise ValueError("BasicBlock only supports groups=1 and base_width=64") - if dilation > 1: - raise NotImplementedError("Dilation > 1 not supported in BasicBlock") - # Both self.conv1 and self.downsample layers downsample the input when stride != 1 - self.conv1 = conv3x3(inplanes, planes, stride) - self.bn1 = norm_layer(planes) - self.relu = nn.ReLU(inplace=True) - self.conv2 = conv3x3(planes, planes) - self.bn2 = norm_layer(planes) - self.downsample = downsample - self.stride = stride - - def forward(self, x: Tensor) -> Tensor: - identity = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - out = self.relu(out) - - return out - -class Bottleneck(nn.Module): - # Bottleneck in torchvision places the stride for downsampling at 3x3 convolution(self.conv2) - # while original implementation places the stride at the first 1x1 convolution(self.conv1) - # according to "Deep residual learning for image recognition"https://arxiv.org/abs/1512.03385. - # This variant is also known as ResNet V1.5 and improves accuracy according to - # https://ngc.nvidia.com/catalog/model-scripts/nvidia:resnet_50_v1_5_for_pytorch. 
- - expansion: int = 4 - - def __init__( - self, - inplanes: int, - planes: int, - stride: int = 1, - downsample: Optional[nn.Module] = None, - groups: int = 1, - base_width: int = 64, - dilation: int = 1, - norm_layer: Optional[Callable[..., nn.Module]] = None, - ) -> None: - super().__init__() - if norm_layer is None: - norm_layer = nn.BatchNorm2d - width = int(planes * (base_width / 64.0)) * groups - # Both self.conv2 and self.downsample layers downsample the input when stride != 1 - self.conv1 = conv1x1(inplanes, width) - self.bn1 = norm_layer(width) - self.conv2 = conv3x3(width, width, stride, groups, dilation) - self.bn2 = norm_layer(width) - self.conv3 = conv1x1(width, planes * self.expansion) - self.bn3 = norm_layer(planes * self.expansion) - self.relu = nn.ReLU(inplace=True) - self.downsample = downsample - self.stride = stride - - def forward(self, x: Tensor) -> Tensor: - identity = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - out = self.relu(out) - - out = self.conv3(out) - out = self.bn3(out) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - out = self.relu(out) - - return out - -class ResNet(nn.Module): - def __init__( - self, - block: Type[Union[BasicBlock, Bottleneck]], - layers: List[int], - num_classes: int = 1000, - zero_init_residual: bool = False, - groups: int = 1, - width_per_group: int = 64, - replace_stride_with_dilation: Optional[List[bool]] = None, - norm_layer: Optional[Callable[..., nn.Module]] = None, - ) -> None: - super().__init__() - # _log_api_usage_once(self) - if norm_layer is None: - norm_layer = nn.BatchNorm2d - self._norm_layer = norm_layer - - self.inplanes = 64 - self.dilation = 1 - if replace_stride_with_dilation is None: - # each element in the tuple indicates if we should replace - # the 2x2 stride with a dilated convolution instead - replace_stride_with_dilation = [False, False, False] - if len(replace_stride_with_dilation) != 3: - raise ValueError( - "replace_stride_with_dilation should be None " - f"or a 3-element tuple, got {replace_stride_with_dilation}" - ) - self.groups = groups - self.base_width = width_per_group - self.conv1 = nn.Conv2d(3, self.inplanes, kernel_size=7, stride=2, padding=3, bias=False) - self.bn1 = norm_layer(self.inplanes) - self.relu = nn.ReLU(inplace=True) - self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) - self.layer1 = self._make_layer(block, 64, layers[0]) - self.layer2 = self._make_layer(block, 128, layers[1], stride=2, dilate=replace_stride_with_dilation[0]) - self.layer3 = self._make_layer(block, 256, layers[2], stride=2, dilate=replace_stride_with_dilation[1]) - self.layer4 = self._make_layer(block, 512, layers[3], stride=2, dilate=replace_stride_with_dilation[2]) - self.avgpool = nn.AdaptiveAvgPool2d((1, 1)) - self.fc = nn.Linear(512 * block.expansion, num_classes) - - for m in self.modules(): - if isinstance(m, nn.Conv2d): - nn.init.kaiming_normal_(m.weight, mode="fan_out", nonlinearity="relu") - elif isinstance(m, (nn.BatchNorm2d, nn.GroupNorm)): - nn.init.constant_(m.weight, 1) - nn.init.constant_(m.bias, 0) - - # Zero-initialize the last BN in each residual branch, - # so that the residual branch starts with zeros, and each residual block behaves like an identity. 
- # This improves the model by 0.2~0.3% according to https://arxiv.org/abs/1706.02677 - if zero_init_residual: - for m in self.modules(): - if isinstance(m, Bottleneck) and m.bn3.weight is not None: - nn.init.constant_(m.bn3.weight, 0) # type: ignore[arg-type] - elif isinstance(m, BasicBlock) and m.bn2.weight is not None: - nn.init.constant_(m.bn2.weight, 0) # type: ignore[arg-type] - - def _make_layer( - self, - block: Type[Union[BasicBlock, Bottleneck]], - planes: int, - blocks: int, - stride: int = 1, - dilate: bool = False, - ) -> nn.Sequential: - norm_layer = self._norm_layer - downsample = None - previous_dilation = self.dilation - if dilate: - self.dilation *= stride - stride = 1 - if stride != 1 or self.inplanes != planes * block.expansion: - downsample = nn.Sequential( - conv1x1(self.inplanes, planes * block.expansion, stride), - norm_layer(planes * block.expansion), - ) - - layers = [] - layers.append( - block( - self.inplanes, planes, stride, downsample, self.groups, self.base_width, previous_dilation, norm_layer - ) - ) - self.inplanes = planes * block.expansion - for _ in range(1, blocks): - layers.append( - block( - self.inplanes, - planes, - groups=self.groups, - base_width=self.base_width, - dilation=self.dilation, - norm_layer=norm_layer, - ) - ) - - return nn.Sequential(*layers) - - def _forward_impl(self, x: Tensor) -> Tensor: - # See note [TorchScript super()] - x = self.conv1(x) - x = self.bn1(x) - x = self.relu(x) - x = self.maxpool(x) - - x = self.layer1(x) - x = self.layer2(x) - x = self.layer3(x) - x = self.layer4(x) - - x = self.avgpool(x) - x = torch.flatten(x, 1) - x = self.fc(x) - - return x - - def forward(self, x: Tensor) -> Tensor: - return self._forward_impl(x) - -def resnet18(weights=None): - # weights: path - model = ResNet(BasicBlock, [2, 2, 2, 2], num_classes=4) - if weights is not None: - model.load_state_dict(torch.load(weights)) - return model - -def resnet10(): - return ResNet(BasicBlock, [1, 1, 1, 1], num_classes=4) diff --git a/spaces/LightSY/W2L-TD/facelib/utils/face_utils.py b/spaces/LightSY/W2L-TD/facelib/utils/face_utils.py deleted file mode 100644 index 5b520e3dcf6e407fd4965218d97439a375e729bf..0000000000000000000000000000000000000000 --- a/spaces/LightSY/W2L-TD/facelib/utils/face_utils.py +++ /dev/null @@ -1,249 +0,0 @@ -import cv2 -import numpy as np -# import torch - - -def compute_increased_bbox(bbox, increase_area, preserve_aspect=True): - left, top, right, bot = bbox - width = right - left - height = bot - top - - if preserve_aspect: - width_increase = max(increase_area, ((1 + 2 * increase_area) * height - width) / (2 * width)) - height_increase = max(increase_area, ((1 + 2 * increase_area) * width - height) / (2 * height)) - else: - width_increase = height_increase = increase_area - left = int(left - width_increase * width) - top = int(top - height_increase * height) - right = int(right + width_increase * width) - bot = int(bot + height_increase * height) - return (left, top, right, bot) - - -def get_valid_bboxes(bboxes, h, w): - left = max(bboxes[0], 0) - top = max(bboxes[1], 0) - right = min(bboxes[2], w) - bottom = min(bboxes[3], h) - return (left, top, right, bottom) - - -def align_crop_face_landmarks(img, - landmarks, - output_size, - transform_size=None, - enable_padding=True, - return_inverse_affine=False, - shrink_ratio=(1, 1)): - """Align and crop face with landmarks. - - The output_size and transform_size are based on width. The height is - adjusted based on shrink_ratio_h/shring_ration_w. 
- - Modified from: - https://github.com/NVlabs/ffhq-dataset/blob/master/download_ffhq.py - - Args: - img (Numpy array): Input image. - landmarks (Numpy array): 5 or 68 or 98 landmarks. - output_size (int): Output face size. - transform_size (ing): Transform size. Usually the four time of - output_size. - enable_padding (float): Default: True. - shrink_ratio (float | tuple[float] | list[float]): Shring the whole - face for height and width (crop larger area). Default: (1, 1). - - Returns: - (Numpy array): Cropped face. - """ - lm_type = 'retinaface_5' # Options: dlib_5, retinaface_5 - - if isinstance(shrink_ratio, (float, int)): - shrink_ratio = (shrink_ratio, shrink_ratio) - if transform_size is None: - transform_size = output_size * 4 - - # Parse landmarks - lm = np.array(landmarks) - if lm.shape[0] == 5 and lm_type == 'retinaface_5': - eye_left = lm[0] - eye_right = lm[1] - mouth_avg = (lm[3] + lm[4]) * 0.5 - elif lm.shape[0] == 5 and lm_type == 'dlib_5': - lm_eye_left = lm[2:4] - lm_eye_right = lm[0:2] - eye_left = np.mean(lm_eye_left, axis=0) - eye_right = np.mean(lm_eye_right, axis=0) - mouth_avg = lm[4] - elif lm.shape[0] == 68: - lm_eye_left = lm[36:42] - lm_eye_right = lm[42:48] - eye_left = np.mean(lm_eye_left, axis=0) - eye_right = np.mean(lm_eye_right, axis=0) - mouth_avg = (lm[48] + lm[54]) * 0.5 - elif lm.shape[0] == 98: - lm_eye_left = lm[60:68] - lm_eye_right = lm[68:76] - eye_left = np.mean(lm_eye_left, axis=0) - eye_right = np.mean(lm_eye_right, axis=0) - mouth_avg = (lm[76] + lm[82]) * 0.5 - - eye_avg = (eye_left + eye_right) * 0.5 - eye_to_eye = eye_right - eye_left - eye_to_mouth = mouth_avg - eye_avg - - # Get the oriented crop rectangle - # x: half width of the oriented crop rectangle - x = eye_to_eye - np.flipud(eye_to_mouth) * [-1, 1] - # - np.flipud(eye_to_mouth) * [-1, 1]: rotate 90 clockwise - # norm with the hypotenuse: get the direction - x /= np.hypot(*x) # get the hypotenuse of a right triangle - rect_scale = 1 # TODO: you can edit it to get larger rect - x *= max(np.hypot(*eye_to_eye) * 2.0 * rect_scale, np.hypot(*eye_to_mouth) * 1.8 * rect_scale) - # y: half height of the oriented crop rectangle - y = np.flipud(x) * [-1, 1] - - x *= shrink_ratio[1] # width - y *= shrink_ratio[0] # height - - # c: center - c = eye_avg + eye_to_mouth * 0.1 - # quad: (left_top, left_bottom, right_bottom, right_top) - quad = np.stack([c - x - y, c - x + y, c + x + y, c + x - y]) - # qsize: side length of the square - qsize = np.hypot(*x) * 2 - - quad_ori = np.copy(quad) - # Shrink, for large face - # TODO: do we really need shrink - shrink = int(np.floor(qsize / output_size * 0.5)) - if shrink > 1: - h, w = img.shape[0:2] - rsize = (int(np.rint(float(w) / shrink)), int(np.rint(float(h) / shrink))) - img = cv2.resize(img, rsize, interpolation=cv2.INTER_AREA) - quad /= shrink - qsize /= shrink - - # Crop - h, w = img.shape[0:2] - border = max(int(np.rint(qsize * 0.1)), 3) - crop = (int(np.floor(min(quad[:, 0]))), int(np.floor(min(quad[:, 1]))), int(np.ceil(max(quad[:, 0]))), - int(np.ceil(max(quad[:, 1])))) - crop = (max(crop[0] - border, 0), max(crop[1] - border, 0), min(crop[2] + border, w), min(crop[3] + border, h)) - if crop[2] - crop[0] < w or crop[3] - crop[1] < h: - img = img[crop[1]:crop[3], crop[0]:crop[2], :] - quad -= crop[0:2] - - # Pad - # pad: (width_left, height_top, width_right, height_bottom) - h, w = img.shape[0:2] - pad = (int(np.floor(min(quad[:, 0]))), int(np.floor(min(quad[:, 1]))), int(np.ceil(max(quad[:, 0]))), - int(np.ceil(max(quad[:, 1])))) - pad = 
(max(-pad[0] + border, 0), max(-pad[1] + border, 0), max(pad[2] - w + border, 0), max(pad[3] - h + border, 0)) - if enable_padding and max(pad) > border - 4: - pad = np.maximum(pad, int(np.rint(qsize * 0.3))) - img = np.pad(img, ((pad[1], pad[3]), (pad[0], pad[2]), (0, 0)), 'reflect') - h, w = img.shape[0:2] - y, x, _ = np.ogrid[:h, :w, :1] - mask = np.maximum(1.0 - np.minimum(np.float32(x) / pad[0], - np.float32(w - 1 - x) / pad[2]), - 1.0 - np.minimum(np.float32(y) / pad[1], - np.float32(h - 1 - y) / pad[3])) - blur = int(qsize * 0.02) - if blur % 2 == 0: - blur += 1 - blur_img = cv2.boxFilter(img, 0, ksize=(blur, blur)) - - img = img.astype('float32') - img += (blur_img - img) * np.clip(mask * 3.0 + 1.0, 0.0, 1.0) - img += (np.median(img, axis=(0, 1)) - img) * np.clip(mask, 0.0, 1.0) - img = np.clip(img, 0, 255) # float32, [0, 255] - quad += pad[:2] - - # Transform use cv2 - h_ratio = shrink_ratio[0] / shrink_ratio[1] - dst_h, dst_w = int(transform_size * h_ratio), transform_size - template = np.array([[0, 0], [0, dst_h], [dst_w, dst_h], [dst_w, 0]]) - # use cv2.LMEDS method for the equivalence to skimage transform - # ref: https://blog.csdn.net/yichxi/article/details/115827338 - affine_matrix = cv2.estimateAffinePartial2D(quad, template, method=cv2.LMEDS)[0] - cropped_face = cv2.warpAffine( - img, affine_matrix, (dst_w, dst_h), borderMode=cv2.BORDER_CONSTANT, borderValue=(135, 133, 132)) # gray - - if output_size < transform_size: - cropped_face = cv2.resize( - cropped_face, (output_size, int(output_size * h_ratio)), interpolation=cv2.INTER_LINEAR) - - if return_inverse_affine: - dst_h, dst_w = int(output_size * h_ratio), output_size - template = np.array([[0, 0], [0, dst_h], [dst_w, dst_h], [dst_w, 0]]) - # use cv2.LMEDS method for the equivalence to skimage transform - # ref: https://blog.csdn.net/yichxi/article/details/115827338 - affine_matrix = cv2.estimateAffinePartial2D( - quad_ori, np.array([[0, 0], [0, output_size], [dst_w, dst_h], [dst_w, 0]]), method=cv2.LMEDS)[0] - inverse_affine = cv2.invertAffineTransform(affine_matrix) - else: - inverse_affine = None - return cropped_face, inverse_affine - - -def paste_face_back(img, face, inverse_affine): - h, w = img.shape[0:2] - face_h, face_w = face.shape[0:2] - inv_restored = cv2.warpAffine(face, inverse_affine, (w, h)) - mask = np.ones((face_h, face_w, 3), dtype=np.float32) - inv_mask = cv2.warpAffine(mask, inverse_affine, (w, h)) - # remove the black borders - inv_mask_erosion = cv2.erode(inv_mask, np.ones((2, 2), np.uint8)) - inv_restored_remove_border = inv_mask_erosion * inv_restored - total_face_area = np.sum(inv_mask_erosion) // 3 - # compute the fusion edge based on the area of face - w_edge = int(total_face_area**0.5) // 20 - erosion_radius = w_edge * 2 - inv_mask_center = cv2.erode(inv_mask_erosion, np.ones((erosion_radius, erosion_radius), np.uint8)) - blur_size = w_edge * 2 - inv_soft_mask = cv2.GaussianBlur(inv_mask_center, (blur_size + 1, blur_size + 1), 0) - img = inv_soft_mask * inv_restored_remove_border + (1 - inv_soft_mask) * img - # float32, [0, 255] - return img - - -if __name__ == '__main__': - import os - - from facelib.detection import init_detection_model - from facelib.utils.face_restoration_helper import get_largest_face - - img_path = '/home/wxt/datasets/ffhq/ffhq_wild/00009.png' - img_name = os.splitext(os.path.basename(img_path))[0] - - # initialize model - det_net = init_detection_model('retinaface_resnet50', half=False) - img_ori = cv2.imread(img_path) - h, w = img_ori.shape[0:2] - # if larger than 
800, scale it - scale = max(h / 800, w / 800) - if scale > 1: - img = cv2.resize(img_ori, (int(w / scale), int(h / scale)), interpolation=cv2.INTER_LINEAR) - - # with torch.no_grad(): - # bboxes = det_net.detect_faces(img, 0.97) - bboxes = det_net.detect_faces(img, 0.97) - if scale > 1: - bboxes *= scale # the score is incorrect - bboxes = get_largest_face(bboxes, h, w)[0] - - landmarks = np.array([[bboxes[i], bboxes[i + 1]] for i in range(5, 15, 2)]) - - cropped_face, inverse_affine = align_crop_face_landmarks( - img_ori, - landmarks, - output_size=512, - transform_size=None, - enable_padding=True, - return_inverse_affine=True, - shrink_ratio=(1, 1)) - - cv2.imwrite(f'tmp/{img_name}_cropeed_face.png', cropped_face) - img = paste_face_back(img_ori, cropped_face, inverse_affine) - cv2.imwrite(f'tmp/{img_name}_back.png', img) diff --git "a/spaces/Liu-LAB/GPT-academic/crazy_functions/\346\211\271\351\207\217\347\277\273\350\257\221PDF\346\226\207\346\241\243_NOUGAT.py" "b/spaces/Liu-LAB/GPT-academic/crazy_functions/\346\211\271\351\207\217\347\277\273\350\257\221PDF\346\226\207\346\241\243_NOUGAT.py" deleted file mode 100644 index ed15121159837d0f543063708a2cbff50b9e5491..0000000000000000000000000000000000000000 --- "a/spaces/Liu-LAB/GPT-academic/crazy_functions/\346\211\271\351\207\217\347\277\273\350\257\221PDF\346\226\207\346\241\243_NOUGAT.py" +++ /dev/null @@ -1,271 +0,0 @@ -from toolbox import CatchException, report_execption, gen_time_str -from toolbox import update_ui, promote_file_to_downloadzone, update_ui_lastest_msg, disable_auto_promotion -from toolbox import write_history_to_file, get_log_folder -from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive -from .crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency -from .crazy_utils import read_and_clean_pdf_text -from .pdf_fns.parse_pdf import parse_pdf, get_avail_grobid_url -from colorful import * -import os -import math -import logging - -def markdown_to_dict(article_content): - import markdown - from bs4 import BeautifulSoup - cur_t = "" - cur_c = "" - results = {} - for line in article_content: - if line.startswith('#'): - if cur_t!="": - if cur_t not in results: - results.update({cur_t:cur_c.lstrip('\n')}) - else: - # 处理重名的章节 - results.update({cur_t + " " + gen_time_str():cur_c.lstrip('\n')}) - cur_t = line.rstrip('\n') - cur_c = "" - else: - cur_c += line - results_final = {} - for k in list(results.keys()): - if k.startswith('# '): - results_final['title'] = k.split('# ')[-1] - results_final['authors'] = results.pop(k).lstrip('\n') - if k.startswith('###### Abstract'): - results_final['abstract'] = results.pop(k).lstrip('\n') - - results_final_sections = [] - for k,v in results.items(): - results_final_sections.append({ - 'heading':k.lstrip("# "), - 'text':v if len(v) > 0 else f"The beginning of {k.lstrip('# ')} section." 
- }) - results_final['sections'] = results_final_sections - return results_final - - -@CatchException -def 批量翻译PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - - disable_auto_promotion(chatbot) - # 基本信息:功能、贡献者 - chatbot.append([ - "函数插件功能?", - "批量翻译PDF文档。函数插件贡献者: Binary-Husky"]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - # 尝试导入依赖,如果缺少依赖,则给出安装建议 - try: - import nougat - import tiktoken - except: - report_execption(chatbot, history, - a=f"解析项目: {txt}", - b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade nougat-ocr tiktoken```。") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - - # 清空历史,以免输入溢出 - history = [] - - from .crazy_utils import get_files_from_everything - success, file_manifest, project_folder = get_files_from_everything(txt, type='.pdf') - # 检测输入参数,如没有给定输入参数,直接退出 - if not success: - if txt == "": txt = '空空如也的输入栏' - - # 如果没找到任何文件 - if len(file_manifest) == 0: - report_execption(chatbot, history, - a=f"解析项目: {txt}", b=f"找不到任何.tex或.pdf文件: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - - # 开始正式执行任务 - yield from 解析PDF_基于NOUGAT(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt) - - -def nougat_with_timeout(command, cwd, timeout=3600): - import subprocess - process = subprocess.Popen(command, shell=True, cwd=cwd) - try: - stdout, stderr = process.communicate(timeout=timeout) - except subprocess.TimeoutExpired: - process.kill() - stdout, stderr = process.communicate() - print("Process timed out!") - return False - return True - - -def NOUGAT_parse_pdf(fp): - import glob - from toolbox import get_log_folder, gen_time_str - dst = os.path.join(get_log_folder(plugin_name='nougat'), gen_time_str()) - os.makedirs(dst) - nougat_with_timeout(f'nougat --out "{os.path.abspath(dst)}" "{os.path.abspath(fp)}"', os.getcwd()) - res = glob.glob(os.path.join(dst,'*.mmd')) - if len(res) == 0: - raise RuntimeError("Nougat解析论文失败。") - return res[0] - - -def 解析PDF_基于NOUGAT(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt): - import copy - import tiktoken - TOKEN_LIMIT_PER_FRAGMENT = 1280 - generated_conclusion_files = [] - generated_html_files = [] - DST_LANG = "中文" - for index, fp in enumerate(file_manifest): - chatbot.append(["当前进度:", f"正在解析论文,请稍候。(第一次运行时,需要花费较长时间下载NOUGAT参数)"]); yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - fpp = NOUGAT_parse_pdf(fp) - - with open(fpp, 'r', encoding='utf8') as f: - article_content = f.readlines() - article_dict = markdown_to_dict(article_content) - logging.info(article_dict) - - prompt = "以下是一篇学术论文的基本信息:\n" - # title - title = article_dict.get('title', '无法获取 title'); prompt += f'title:{title}\n\n' - # authors - authors = article_dict.get('authors', '无法获取 authors'); prompt += f'authors:{authors}\n\n' - # abstract - abstract = article_dict.get('abstract', '无法获取 abstract'); prompt += f'abstract:{abstract}\n\n' - # command - prompt += f"请将题目和摘要翻译为{DST_LANG}。" - meta = [f'# Title:\n\n', title, f'# Abstract:\n\n', abstract ] - - # 单线,获取文章meta信息 - paper_meta_info = yield from request_gpt_model_in_new_thread_with_ui_alive( - inputs=prompt, - inputs_show_user=prompt, - llm_kwargs=llm_kwargs, - chatbot=chatbot, history=[], - sys_prompt="You are an academic paper reader。", - ) - - # 多线,翻译 - inputs_array = [] - inputs_show_user_array = [] - - # get_token_num - from request_llm.bridge_all import model_info - enc = model_info[llm_kwargs['llm_model']]['tokenizer'] - def 
get_token_num(txt): return len(enc.encode(txt, disallowed_special=())) - from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf - - def break_down(txt): - raw_token_num = get_token_num(txt) - if raw_token_num <= TOKEN_LIMIT_PER_FRAGMENT: - return [txt] - else: - # raw_token_num > TOKEN_LIMIT_PER_FRAGMENT - # find a smooth token limit to achieve even seperation - count = int(math.ceil(raw_token_num / TOKEN_LIMIT_PER_FRAGMENT)) - token_limit_smooth = raw_token_num // count + count - return breakdown_txt_to_satisfy_token_limit_for_pdf(txt, get_token_fn=get_token_num, limit=token_limit_smooth) - - for section in article_dict.get('sections'): - if len(section['text']) == 0: continue - section_frags = break_down(section['text']) - for i, fragment in enumerate(section_frags): - heading = section['heading'] - if len(section_frags) > 1: heading += f' Part-{i+1}' - inputs_array.append( - f"你需要翻译{heading}章节,内容如下: \n\n{fragment}" - ) - inputs_show_user_array.append( - f"# {heading}\n\n{fragment}" - ) - - gpt_response_collection = yield from request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency( - inputs_array=inputs_array, - inputs_show_user_array=inputs_show_user_array, - llm_kwargs=llm_kwargs, - chatbot=chatbot, - history_array=[meta for _ in inputs_array], - sys_prompt_array=[ - "请你作为一个学术翻译,负责把学术论文准确翻译成中文。注意文章中的每一句话都要翻译。" for _ in inputs_array], - ) - res_path = write_history_to_file(meta + ["# Meta Translation" , paper_meta_info] + gpt_response_collection, file_basename=None, file_fullname=None) - promote_file_to_downloadzone(res_path, rename_file=os.path.basename(fp)+'.md', chatbot=chatbot) - generated_conclusion_files.append(res_path) - - ch = construct_html() - orig = "" - trans = "" - gpt_response_collection_html = copy.deepcopy(gpt_response_collection) - for i,k in enumerate(gpt_response_collection_html): - if i%2==0: - gpt_response_collection_html[i] = inputs_show_user_array[i//2] - else: - gpt_response_collection_html[i] = gpt_response_collection_html[i] - - final = ["", "", "一、论文概况", "", "Abstract", paper_meta_info, "二、论文翻译", ""] - final.extend(gpt_response_collection_html) - for i, k in enumerate(final): - if i%2==0: - orig = k - if i%2==1: - trans = k - ch.add_row(a=orig, b=trans) - create_report_file_name = f"{os.path.basename(fp)}.trans.html" - html_file = ch.save_file(create_report_file_name) - generated_html_files.append(html_file) - promote_file_to_downloadzone(html_file, rename_file=os.path.basename(html_file), chatbot=chatbot) - - chatbot.append(("给出输出文件清单", str(generated_conclusion_files + generated_html_files))) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - - -class construct_html(): - def __init__(self) -> None: - self.css = """ -.row { - display: flex; - flex-wrap: wrap; -} - -.column { - flex: 1; - padding: 10px; -} - -.table-header { - font-weight: bold; - border-bottom: 1px solid black; -} - -.table-row { - border-bottom: 1px solid lightgray; -} - -.table-cell { - padding: 5px; -} - """ - self.html_string = f'翻译结果' - - - def add_row(self, a, b): - tmp = """ -
-            <div class="row table-row">
-                <div class="column table-cell">REPLACE_A</div>
-                <div class="column table-cell">REPLACE_B</div>
-            </div>
- """ - from toolbox import markdown_convertion - tmp = tmp.replace('REPLACE_A', markdown_convertion(a)) - tmp = tmp.replace('REPLACE_B', markdown_convertion(b)) - self.html_string += tmp - - - def save_file(self, file_name): - with open(os.path.join(get_log_folder(), file_name), 'w', encoding='utf8') as f: - f.write(self.html_string.encode('utf-8', 'ignore').decode()) - return os.path.join(get_log_folder(), file_name) diff --git a/spaces/Liu-LAB/GPT-academic/request_llm/bridge_chatglm.py b/spaces/Liu-LAB/GPT-academic/request_llm/bridge_chatglm.py deleted file mode 100644 index 6dac86395da134aa896da9d9a7c84ccd94e795d0..0000000000000000000000000000000000000000 --- a/spaces/Liu-LAB/GPT-academic/request_llm/bridge_chatglm.py +++ /dev/null @@ -1,166 +0,0 @@ - -from transformers import AutoModel, AutoTokenizer -import time -import threading -import importlib -from toolbox import update_ui, get_conf -from multiprocessing import Process, Pipe - -load_message = "ChatGLM尚未加载,加载需要一段时间。注意,取决于`config.py`的配置,ChatGLM消耗大量的内存(CPU)或显存(GPU),也许会导致低配计算机卡死 ……" - -################################################################################# -class GetGLMHandle(Process): - def __init__(self): - super().__init__(daemon=True) - self.parent, self.child = Pipe() - self.chatglm_model = None - self.chatglm_tokenizer = None - self.info = "" - self.success = True - self.check_dependency() - self.start() - self.threadLock = threading.Lock() - - def check_dependency(self): - try: - import sentencepiece - self.info = "依赖检测通过" - self.success = True - except: - self.info = "缺少ChatGLM的依赖,如果要使用ChatGLM,除了基础的pip依赖以外,您还需要运行`pip install -r request_llm/requirements_chatglm.txt`安装ChatGLM的依赖。" - self.success = False - - def ready(self): - return self.chatglm_model is not None - - def run(self): - # 子进程执行 - # 第一次运行,加载参数 - retry = 0 - LOCAL_MODEL_QUANT, device = get_conf('LOCAL_MODEL_QUANT', 'LOCAL_MODEL_DEVICE') - - if LOCAL_MODEL_QUANT == "INT4": # INT4 - _model_name_ = "THUDM/chatglm2-6b-int4" - elif LOCAL_MODEL_QUANT == "INT8": # INT8 - _model_name_ = "THUDM/chatglm2-6b-int8" - else: - _model_name_ = "THUDM/chatglm2-6b" # FP16 - - while True: - try: - if self.chatglm_model is None: - self.chatglm_tokenizer = AutoTokenizer.from_pretrained(_model_name_, trust_remote_code=True) - if device=='cpu': - self.chatglm_model = AutoModel.from_pretrained(_model_name_, trust_remote_code=True).float() - else: - self.chatglm_model = AutoModel.from_pretrained(_model_name_, trust_remote_code=True).half().cuda() - self.chatglm_model = self.chatglm_model.eval() - break - else: - break - except: - retry += 1 - if retry > 3: - self.child.send('[Local Message] Call ChatGLM fail 不能正常加载ChatGLM的参数。') - raise RuntimeError("不能正常加载ChatGLM的参数!") - - while True: - # 进入任务等待状态 - kwargs = self.child.recv() - # 收到消息,开始请求 - try: - for response, history in self.chatglm_model.stream_chat(self.chatglm_tokenizer, **kwargs): - self.child.send(response) - # # 中途接收可能的终止指令(如果有的话) - # if self.child.poll(): - # command = self.child.recv() - # if command == '[Terminate]': break - except: - from toolbox import trimmed_format_exc - self.child.send('[Local Message] Call ChatGLM fail.' 
+ '\n```\n' + trimmed_format_exc() + '\n```\n') - # 请求处理结束,开始下一个循环 - self.child.send('[Finish]') - - def stream_chat(self, **kwargs): - # 主进程执行 - self.threadLock.acquire() - self.parent.send(kwargs) - while True: - res = self.parent.recv() - if res != '[Finish]': - yield res - else: - break - self.threadLock.release() - -global glm_handle -glm_handle = None -################################################################################# -def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=[], console_slience=False): - """ - 多线程方法 - 函数的说明请见 request_llm/bridge_all.py - """ - global glm_handle - if glm_handle is None: - glm_handle = GetGLMHandle() - if len(observe_window) >= 1: observe_window[0] = load_message + "\n\n" + glm_handle.info - if not glm_handle.success: - error = glm_handle.info - glm_handle = None - raise RuntimeError(error) - - # chatglm 没有 sys_prompt 接口,因此把prompt加入 history - history_feedin = [] - history_feedin.append(["What can I do?", sys_prompt]) - for i in range(len(history)//2): - history_feedin.append([history[2*i], history[2*i+1]] ) - - watch_dog_patience = 5 # 看门狗 (watchdog) 的耐心, 设置5秒即可 - response = "" - for response in glm_handle.stream_chat(query=inputs, history=history_feedin, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']): - if len(observe_window) >= 1: observe_window[0] = response - if len(observe_window) >= 2: - if (time.time()-observe_window[1]) > watch_dog_patience: - raise RuntimeError("程序终止。") - return response - - - -def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None): - """ - 单线程方法 - 函数的说明请见 request_llm/bridge_all.py - """ - chatbot.append((inputs, "")) - - global glm_handle - if glm_handle is None: - glm_handle = GetGLMHandle() - chatbot[-1] = (inputs, load_message + "\n\n" + glm_handle.info) - yield from update_ui(chatbot=chatbot, history=[]) - if not glm_handle.success: - glm_handle = None - return - - if additional_fn is not None: - from core_functional import handle_core_functionality - inputs, history = handle_core_functionality(additional_fn, inputs, history, chatbot) - - # 处理历史信息 - history_feedin = [] - history_feedin.append(["What can I do?", system_prompt] ) - for i in range(len(history)//2): - history_feedin.append([history[2*i], history[2*i+1]] ) - - # 开始接收chatglm的回复 - response = "[Local Message]: 等待ChatGLM响应中 ..." - for response in glm_handle.stream_chat(query=inputs, history=history_feedin, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']): - chatbot[-1] = (inputs, response) - yield from update_ui(chatbot=chatbot, history=history) - - # 总结输出 - if response == "[Local Message]: 等待ChatGLM响应中 ...": - response = "[Local Message]: ChatGLM响应异常 ..." 
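-    # append the final exchange to the conversation history and push one last UI refresh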
- history.extend([inputs, response]) - yield from update_ui(chatbot=chatbot, history=history) diff --git a/spaces/LoveWaves/123/README.md b/spaces/LoveWaves/123/README.md deleted file mode 100644 index c951d14de2cbff3e969800c9cc6a32fe07c868ea..0000000000000000000000000000000000000000 --- a/spaces/LoveWaves/123/README.md +++ /dev/null @@ -1,148 +0,0 @@ ---- -title: LabelStudio -emoji: 🟧 -colorFrom: yellow -colorTo: purple -sdk: docker -tags: -- label-studio -fullwidth: true -license: openrail -app_port: 8080 -duplicated_from: LabelStudio/LabelStudio ---- - - -[Website](https://hubs.ly/Q01CNgsd0) • [Docs](https://hubs.ly/Q01CN9Yq0) • [12K+ GitHub ⭐️!](https://hubs.ly/Q01CNbPQ0) • [Slack Community](https://hubs.ly/Q01CNb9H0) - -## What is Label Studio? - -Label Studio is an open source data labeling platform. It lets you label audio, -text, images, videos, and time series data with a simple, straightforward, and -highly-configurable user interface. Label Studio can prepare new data or -improve existing training data to get more accurate ML models. - - -## Label Studio in Hugging Face Spaces - -The Label Studio community is thrilled to offer Label Studio as a Hugging Face -Spaces application. You can try the data-annotation interface, connect popular -machine learning models, and share the application with collaborators. You can -start immediately by creating an account or replicate the space and work in -your own environment. - -## Creating a Use Account and Logging In - -Begin by creating a new account in the Label Studio space, then log in with your -credentials. - -**By default, these spaces permit anyone to create a new login -account, allowing them to view and modify project configuration, data sets, and -annotations. Without any modifications, treat this space like a demo environment.** - -## Creating a Labeling Project - -After logging in, Label Studio will present you with a project view. Here you -can create a new project with prompts to upload data and set up a custom -configuration interface. - -**Note that in the default configuration, storage is local and temporary. Any -projects, annotations, and configurations will be lost if the space is restarted.** - -## Next Steps and Additional Resources - -To help with getting started, the Label Studio community curated a list of -resources including tutorials and documentation. - -- 🚀 [Zero to One with Label Studio Tutorial](https://labelstud.io/blog/introduction-to-label-studio-in-hugging-face-spaces/) -- 📈 [Try Label Studio Enterprise](https://hubs.ly/Q01CMLll0) -- 🤗 [Tutorial: Using Label Studio with Hugging Face Datasets Hub](https://danielvanstrien.xyz/huggingface/huggingface-datasets/annotation/full%20stack%20deep%20learning%20notes/2022/09/07/label-studio-annotations-hub.html) -- 💡 [Label Studio Docs](https://hubs.ly/Q01CN9Yq0) - - -![Gif of Label Studio annotating different types of data](https://raw.githubusercontent.com/heartexlabs/label-studio/master/images/annotation_examples.gif) - -### Making your Label Studio Hugging Face Space production-ready - -By default this space allows for the unrestricted creation of new accounts -will full access to all projects and data. This is great for trying out -Label Studio and collaborating on projects, but you may want to restrict -access to your space to only authorized users. Add the following environment -variable to your spaces Dockerfile to disable public account creation for -this space. 
- - ENV LABEL_STUDIO_DISABLE_SIGNUP_WITHOUT_LINK=true - -Set secrets in your space to create an inital user, and log in with your -provided username and password. Do not set these in your Dockerfile, as they -globally visible on a public space. - - LABEL_STUDIO_USERNAME - LABEL_STUDIO_PASSWORD - -You will need to provide new users with an invitation link to join the space, -which can be found in the Organizations interface of Label Studio - -By default this space stores all project configuration and data annotations -in local storage with Sqlite. If the space is reset, all configuration and -annotation data in the space will be lost. You can enable configuration -persistence by connecting an external Postgres database to your space, -guaranteeing that all project and annotation settings are preserved. - -Set the following secret variables to match your own hosted instance of -Postgres. We strongly recommend setting these as secrets to prevent leaking -information about your database service to the public in your spaces -definition. - - DJANGO_DB=default - POSTGRE_NAME= - POSTGRE_PORT= - POSTGRE_USER= - POSTGRE_PASSWORD= - POSTGRE_PORT= - POSTGRE_HOST= - -Add the following environment variable to remove the warning about ephemeral -storage. - - ENV STORAGE_PERSISTENCE=1 - -Note that you will need to connect cloud storage to host data items that you -want to annotate, as local storage will not be preserved across a space reset. - -By default the only data storage enabled for this space is local. In the case -of a space reset, all data will be lost. To enable permanent storage, you -must enable a cloud storage connector. We also strongly recommend enabling -configuration persistence to preserve project data, annotations, and user -settings. Choose the appropriate cloud connector and configure the secrets -for it. - -#### Amazon S3 - STORAGE_TYPE=s3 - STORAGE_AWS_ACCESS_KEY_ID="" - STORAGE_AWS_SECRET_ACCESS_KEY="" - STORAGE_AWS_BUCKET_NAME="" - STORAGE_AWS_REGION_NAME="" - STORAGE_AWS_FOLDER="" - -#### Google Cloud Storage - - STORAGE_TYPE=gcs - STORAGE_GCS_BUCKET_NAME="" - STORAGE_GCS_PROJECT_ID="" - STORAGE_GCS_FOLDER="" - GOOGLE_APPLICATION_CREDENTIALS="/opt/heartex/secrets/key.json" - -Azure Blob Storage -================== - - STORAGE_TYPE=azure - STORAGE_AZURE_ACCOUNT_NAME="" - STORAGE_AZURE_ACCOUNT_KEY="" - STORAGE_AZURE_CONTAINER_NAME="" - STORAGE_AZURE_FOLDER="" - - -## Questions? Concerns? Want to get involved? 
- -Email the community team at [community@labelstud.io](mailto:community@labelstud.io) diff --git a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/interact/fbrs/inference/predictors/base.py b/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/interact/fbrs/inference/predictors/base.py deleted file mode 100644 index 3776506328ef9457afdad047fb4219c5e25c3ab6..0000000000000000000000000000000000000000 --- a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/interact/fbrs/inference/predictors/base.py +++ /dev/null @@ -1,100 +0,0 @@ -import torch -import torch.nn.functional as F - -from ..transforms import AddHorizontalFlip, SigmoidForPred, LimitLongestSide - - -class BasePredictor(object): - def __init__(self, net, device, - net_clicks_limit=None, - with_flip=False, - zoom_in=None, - max_size=None, - **kwargs): - self.net = net - self.with_flip = with_flip - self.net_clicks_limit = net_clicks_limit - self.original_image = None - self.device = device - self.zoom_in = zoom_in - - self.transforms = [zoom_in] if zoom_in is not None else [] - if max_size is not None: - self.transforms.append(LimitLongestSide(max_size=max_size)) - self.transforms.append(SigmoidForPred()) - if with_flip: - self.transforms.append(AddHorizontalFlip()) - - def set_input_image(self, image_nd): - for transform in self.transforms: - transform.reset() - self.original_image = image_nd.to(self.device) - if len(self.original_image.shape) == 3: - self.original_image = self.original_image.unsqueeze(0) - - def get_prediction(self, clicker): - clicks_list = clicker.get_clicks() - - image_nd, clicks_lists, is_image_changed = self.apply_transforms( - self.original_image, [clicks_list] - ) - - pred_logits = self._get_prediction(image_nd, clicks_lists, is_image_changed) - prediction = F.interpolate(pred_logits, mode='bilinear', align_corners=True, - size=image_nd.size()[2:]) - - for t in reversed(self.transforms): - prediction = t.inv_transform(prediction) - - if self.zoom_in is not None and self.zoom_in.check_possible_recalculation(): - print('zooming') - return self.get_prediction(clicker) - - # return prediction.cpu().numpy()[0, 0] - return prediction - - def _get_prediction(self, image_nd, clicks_lists, is_image_changed): - points_nd = self.get_points_nd(clicks_lists) - return self.net(image_nd, points_nd)['instances'] - - def _get_transform_states(self): - return [x.get_state() for x in self.transforms] - - def _set_transform_states(self, states): - assert len(states) == len(self.transforms) - for state, transform in zip(states, self.transforms): - transform.set_state(state) - - def apply_transforms(self, image_nd, clicks_lists): - is_image_changed = False - for t in self.transforms: - image_nd, clicks_lists = t.transform(image_nd, clicks_lists) - is_image_changed |= t.image_changed - - return image_nd, clicks_lists, is_image_changed - - def get_points_nd(self, clicks_lists): - total_clicks = [] - num_pos_clicks = [sum(x.is_positive for x in clicks_list) for clicks_list in clicks_lists] - num_neg_clicks = [len(clicks_list) - num_pos for clicks_list, num_pos in zip(clicks_lists, num_pos_clicks)] - num_max_points = max(num_pos_clicks + num_neg_clicks) - if self.net_clicks_limit is not None: - num_max_points = min(self.net_clicks_limit, num_max_points) - num_max_points = max(1, num_max_points) - - for clicks_list in clicks_lists: - clicks_list = clicks_list[:self.net_clicks_limit] - pos_clicks = 
[click.coords for click in clicks_list if click.is_positive] - pos_clicks = pos_clicks + (num_max_points - len(pos_clicks)) * [(-1, -1)] - - neg_clicks = [click.coords for click in clicks_list if not click.is_positive] - neg_clicks = neg_clicks + (num_max_points - len(neg_clicks)) * [(-1, -1)] - total_clicks.append(pos_clicks + neg_clicks) - - return torch.tensor(total_clicks, device=self.device) - - def get_states(self): - return {'transform_states': self._get_transform_states()} - - def set_states(self, states): - self._set_transform_states(states['transform_states']) diff --git a/spaces/MarkusDressel/cord/app.py b/spaces/MarkusDressel/cord/app.py deleted file mode 100644 index fe0192d68dd787e8d59f7f6bbc294c2e19184d56..0000000000000000000000000000000000000000 --- a/spaces/MarkusDressel/cord/app.py +++ /dev/null @@ -1,77 +0,0 @@ -import os -import gradio as gr -import numpy as np -from transformers import LayoutLMv2Processor, LayoutLMv2ForTokenClassification -from datasets import load_dataset -from PIL import Image, ImageDraw, ImageFont -import PIL - - -processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased") -model = LayoutLMv2ForTokenClassification.from_pretrained("MarkusDressel/cord") -id2label = model.config.id2label - -label_ints = np.random.randint(0,len(PIL.ImageColor.colormap.items()),30) - -label_color_pil = [k for k,_ in PIL.ImageColor.colormap.items()] - -label_color = [label_color_pil[i] for i in label_ints] -label2color = {} -for k,v in id2label.items(): - label2color[v[2:]]=label_color[k] - - -def unnormalize_box(bbox, width, height): - return [ - width * (bbox[0] / 1000), - height * (bbox[1] / 1000), - width * (bbox[2] / 1000), - height * (bbox[3] / 1000), - ] -def iob_to_label(label): - label = label[2:] - if not label: - return 'other' - return label - - -def process_image(image): - width, height = image.size - # encode - encoding = processor(image, truncation=True, return_offsets_mapping=True, return_tensors="pt") - offset_mapping = encoding.pop('offset_mapping') - # forward pass - outputs = model(**encoding) - # get predictions - predictions = outputs.logits.argmax(-1).squeeze().tolist() - token_boxes = encoding.bbox.squeeze().tolist() - # only keep non-subword predictions - is_subword = np.array(offset_mapping.squeeze().tolist())[:,0] != 0 - true_predictions = [id2label[pred] for idx, pred in enumerate(predictions) if not is_subword[idx]] - true_boxes = [unnormalize_box(box, width, height) for idx, box in enumerate(token_boxes) if not is_subword[idx]] - # draw predictions over the image - draw = ImageDraw.Draw(image) - font = ImageFont.load_default() - for prediction, box in zip(true_predictions, true_boxes): - predicted_label = iob_to_label(prediction).lower() - draw.rectangle(box, outline=label2color[predicted_label], width=5) - draw.text((box[0]+10, box[1]-10), text=predicted_label, fill=label2color[predicted_label], font=font) - - return image -title = "Cord demo: LayoutLMv2" -description = "Demo for Microsoft's LayoutLMv2.This particular model is fine-tuned on CORD, a dataset of manually annotated receipts. It annotates the words appearing in the image in up to 30 classes. To use it, simply upload an image or use the example image below and click 'Submit'. Results will show up in a few seconds. If you want to make the output bigger, right-click on it and select 'Open image in new tab'." -article = "

<p style='text-align: center'><a href='https://arxiv.org/abs/2012.14740'>LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding</a> | <a href='https://github.com/microsoft/unilm'>Github Repo</a></p>
" -examples =[['receipt_00189.png']] -css = ".output_image, .input_image {height: 40rem !important; width: 100% !important;}" -#css = "@media screen and (max-width: 600px) { .output_image, .input_image {height:20rem !important; width: 100% !important;} }" -# css = ".output_image, .input_image {height: 600px !important}" -iface = gr.Interface(fn=process_image, - inputs=gr.inputs.Image(type="pil"), - outputs=gr.outputs.Image(type="pil", label="annotated image"), - title=title, - description=description, - article=article, - examples=examples, - css=css, - enable_queue=True) -iface.launch(debug=True) \ No newline at end of file diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/configs/_base_/models/fcn_unet_s5-d16.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/configs/_base_/models/fcn_unet_s5-d16.py deleted file mode 100644 index a33e7972877f902d0e7d18401ca675e3e4e60a18..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/configs/_base_/models/fcn_unet_s5-d16.py +++ /dev/null @@ -1,51 +0,0 @@ -# model settings -norm_cfg = dict(type='SyncBN', requires_grad=True) -model = dict( - type='EncoderDecoder', - pretrained=None, - backbone=dict( - type='UNet', - in_channels=3, - base_channels=64, - num_stages=5, - strides=(1, 1, 1, 1, 1), - enc_num_convs=(2, 2, 2, 2, 2), - dec_num_convs=(2, 2, 2, 2), - downsamples=(True, True, True, True), - enc_dilations=(1, 1, 1, 1, 1), - dec_dilations=(1, 1, 1, 1), - with_cp=False, - conv_cfg=None, - norm_cfg=norm_cfg, - act_cfg=dict(type='ReLU'), - upsample_cfg=dict(type='InterpConv'), - norm_eval=False), - decode_head=dict( - type='FCNHead', - in_channels=64, - in_index=4, - channels=64, - num_convs=1, - concat_input=False, - dropout_ratio=0.1, - num_classes=2, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)), - auxiliary_head=dict( - type='FCNHead', - in_channels=128, - in_index=3, - channels=64, - num_convs=1, - concat_input=False, - dropout_ratio=0.1, - num_classes=2, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)), - # model training and testing settings - train_cfg=dict(), - test_cfg=dict(mode='slide', crop_size=256, stride=170)) diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/configs/_base_/models/pointrend_r50.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/configs/_base_/models/pointrend_r50.py deleted file mode 100644 index 9d323dbf9466d41e0800aa57ef84045f3d874bdf..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/configs/_base_/models/pointrend_r50.py +++ /dev/null @@ -1,56 +0,0 @@ -# model settings -norm_cfg = dict(type='SyncBN', requires_grad=True) -model = dict( - type='CascadeEncoderDecoder', - num_stages=2, - pretrained='open-mmlab://resnet50_v1c', - backbone=dict( - type='ResNetV1c', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - dilations=(1, 1, 1, 1), - strides=(1, 2, 2, 2), - norm_cfg=norm_cfg, - norm_eval=False, - style='pytorch', - contract_dilation=True), - neck=dict( - type='FPN', - in_channels=[256, 512, 1024, 2048], - out_channels=256, - num_outs=4), - decode_head=[ - dict( - type='FPNHead', - in_channels=[256, 256, 256, 256], - in_index=[0, 1, 2, 3], - feature_strides=[4, 8, 16, 32], - channels=128, - dropout_ratio=-1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', 
use_sigmoid=False, loss_weight=1.0)), - dict( - type='PointHead', - in_channels=[256], - in_index=[0], - channels=256, - num_fcs=3, - coarse_pred_each_layer=True, - dropout_ratio=-1, - num_classes=19, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)) - ], - # model training and testing settings - train_cfg=dict( - num_points=2048, oversample_ratio=3, importance_sample_ratio=0.75), - test_cfg=dict( - mode='whole', - subdivision_steps=2, - subdivision_num_points=8196, - scale_factor=2)) diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/ops/scatter_points.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/ops/scatter_points.py deleted file mode 100644 index 2b8aa4169e9f6ca4a6f845ce17d6d1e4db416bb8..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/ops/scatter_points.py +++ /dev/null @@ -1,135 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from torch import nn -from torch.autograd import Function - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext( - '_ext', - ['dynamic_point_to_voxel_forward', 'dynamic_point_to_voxel_backward']) - - -class _DynamicScatter(Function): - - @staticmethod - def forward(ctx, feats, coors, reduce_type='max'): - """convert kitti points(N, >=3) to voxels. - - Args: - feats (torch.Tensor): [N, C]. Points features to be reduced - into voxels. - coors (torch.Tensor): [N, ndim]. Corresponding voxel coordinates - (specifically multi-dim voxel index) of each points. - reduce_type (str, optional): Reduce op. support 'max', 'sum' and - 'mean'. Default: 'max'. - - Returns: - voxel_feats (torch.Tensor): [M, C]. Reduced features, input - features that shares the same voxel coordinates are reduced to - one row. - voxel_coors (torch.Tensor): [M, ndim]. Voxel coordinates. - """ - results = ext_module.dynamic_point_to_voxel_forward( - feats, coors, reduce_type) - (voxel_feats, voxel_coors, point2voxel_map, - voxel_points_count) = results - ctx.reduce_type = reduce_type - ctx.save_for_backward(feats, voxel_feats, point2voxel_map, - voxel_points_count) - ctx.mark_non_differentiable(voxel_coors) - return voxel_feats, voxel_coors - - @staticmethod - def backward(ctx, grad_voxel_feats, grad_voxel_coors=None): - (feats, voxel_feats, point2voxel_map, - voxel_points_count) = ctx.saved_tensors - grad_feats = torch.zeros_like(feats) - # TODO: whether to use index put or use cuda_backward - # To use index put, need point to voxel index - ext_module.dynamic_point_to_voxel_backward( - grad_feats, grad_voxel_feats.contiguous(), feats, voxel_feats, - point2voxel_map, voxel_points_count, ctx.reduce_type) - return grad_feats, None, None - - -dynamic_scatter = _DynamicScatter.apply - - -class DynamicScatter(nn.Module): - """Scatters points into voxels, used in the voxel encoder with dynamic - voxelization. - - Note: - The CPU and GPU implementation get the same output, but have numerical - difference after summation and division (e.g., 5e-7). - - Args: - voxel_size (list): list [x, y, z] size of three dimension. - point_cloud_range (list): The coordinate range of points, [x_min, - y_min, z_min, x_max, y_max, z_max]. - average_points (bool): whether to use avg pooling to scatter points - into voxel. 
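-
-        Example (a minimal sketch; the voxel_size and point_cloud_range values
-        below are illustrative, not defaults):
-
-            scatter = DynamicScatter(voxel_size=[0.1, 0.1, 0.1],
-                                     point_cloud_range=[0, -40, -3, 70.4, 40, 1],
-                                     average_points=True)
-            voxel_feats, voxel_coors = scatter(points, coors)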
- """ - - def __init__(self, voxel_size, point_cloud_range, average_points: bool): - super().__init__() - - self.voxel_size = voxel_size - self.point_cloud_range = point_cloud_range - self.average_points = average_points - - def forward_single(self, points, coors): - """Scatters points into voxels. - - Args: - points (torch.Tensor): Points to be reduced into voxels. - coors (torch.Tensor): Corresponding voxel coordinates (specifically - multi-dim voxel index) of each points. - - Returns: - voxel_feats (torch.Tensor): Reduced features, input features that - shares the same voxel coordinates are reduced to one row. - voxel_coors (torch.Tensor): Voxel coordinates. - """ - reduce = 'mean' if self.average_points else 'max' - return dynamic_scatter(points.contiguous(), coors.contiguous(), reduce) - - def forward(self, points, coors): - """Scatters points/features into voxels. - - Args: - points (torch.Tensor): Points to be reduced into voxels. - coors (torch.Tensor): Corresponding voxel coordinates (specifically - multi-dim voxel index) of each points. - - Returns: - voxel_feats (torch.Tensor): Reduced features, input features that - shares the same voxel coordinates are reduced to one row. - voxel_coors (torch.Tensor): Voxel coordinates. - """ - if coors.size(-1) == 3: - return self.forward_single(points, coors) - else: - batch_size = coors[-1, 0] + 1 - voxels, voxel_coors = [], [] - for i in range(batch_size): - inds = torch.where(coors[:, 0] == i) - voxel, voxel_coor = self.forward_single( - points[inds], coors[inds][:, 1:]) - coor_pad = nn.functional.pad( - voxel_coor, (1, 0), mode='constant', value=i) - voxel_coors.append(coor_pad) - voxels.append(voxel) - features = torch.cat(voxels, dim=0) - feature_coors = torch.cat(voxel_coors, dim=0) - - return features, feature_coors - - def __repr__(self): - s = self.__class__.__name__ + '(' - s += 'voxel_size=' + str(self.voxel_size) - s += ', point_cloud_range=' + str(self.point_cloud_range) - s += ', average_points=' + str(self.average_points) - s += ')' - return s diff --git a/spaces/Menna2211/ImCaptioning/Home.py b/spaces/Menna2211/ImCaptioning/Home.py deleted file mode 100644 index c85e22d45c8f51b53623dbdf7f2ee681d65860c8..0000000000000000000000000000000000000000 --- a/spaces/Menna2211/ImCaptioning/Home.py +++ /dev/null @@ -1,18 +0,0 @@ -import streamlit as st - -st.set_page_config( - page_title="Home", - page_icon="👋", -) - -st.title("Welcome to My ImgCaptioning! 👋") - -st.markdown( - """ - ImgCaptioning is an open-source app built specifically for Image Captioning projects. 
- ### Image Caption: - The application allows users to upload an image and generate a descriptive caption for the image Using: - - Hugging Face Model: [blip-image-captioning-base](https://huggingface.co/Salesforce/blip-image-captioning-base) - - Github model: [CATR](https://github.com/saahiluppal/catr) -""" -) diff --git a/spaces/MetaWabbit/Basic_Prompt_Generation_Tool/README.md b/spaces/MetaWabbit/Basic_Prompt_Generation_Tool/README.md deleted file mode 100644 index 9765db2c80dd4c4b938060743922163b1718e003..0000000000000000000000000000000000000000 --- a/spaces/MetaWabbit/Basic_Prompt_Generation_Tool/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: ChatGPT Prompt Generator -emoji: 👨🏻‍🎤 -colorFrom: purple -colorTo: pink -sdk: gradio -sdk_version: 3.16.2 -app_file: app.py -pinned: false -license: apache-2.0 -duplicated_from: merve/ChatGPT-prompt-generator ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/MirageML/sjc/sd1/ldm/modules/x_transformer.py b/spaces/MirageML/sjc/sd1/ldm/modules/x_transformer.py deleted file mode 100644 index 5fc15bf9cfe0111a910e7de33d04ffdec3877576..0000000000000000000000000000000000000000 --- a/spaces/MirageML/sjc/sd1/ldm/modules/x_transformer.py +++ /dev/null @@ -1,641 +0,0 @@ -"""shout-out to https://github.com/lucidrains/x-transformers/tree/main/x_transformers""" -import torch -from torch import nn, einsum -import torch.nn.functional as F -from functools import partial -from inspect import isfunction -from collections import namedtuple -from einops import rearrange, repeat, reduce - -# constants - -DEFAULT_DIM_HEAD = 64 - -Intermediates = namedtuple('Intermediates', [ - 'pre_softmax_attn', - 'post_softmax_attn' -]) - -LayerIntermediates = namedtuple('Intermediates', [ - 'hiddens', - 'attn_intermediates' -]) - - -class AbsolutePositionalEmbedding(nn.Module): - def __init__(self, dim, max_seq_len): - super().__init__() - self.emb = nn.Embedding(max_seq_len, dim) - self.init_() - - def init_(self): - nn.init.normal_(self.emb.weight, std=0.02) - - def forward(self, x): - n = torch.arange(x.shape[1], device=x.device) - return self.emb(n)[None, :, :] - - -class FixedPositionalEmbedding(nn.Module): - def __init__(self, dim): - super().__init__() - inv_freq = 1. 
/ (10000 ** (torch.arange(0, dim, 2).float() / dim)) - self.register_buffer('inv_freq', inv_freq) - - def forward(self, x, seq_dim=1, offset=0): - t = torch.arange(x.shape[seq_dim], device=x.device).type_as(self.inv_freq) + offset - sinusoid_inp = torch.einsum('i , j -> i j', t, self.inv_freq) - emb = torch.cat((sinusoid_inp.sin(), sinusoid_inp.cos()), dim=-1) - return emb[None, :, :] - - -# helpers - -def exists(val): - return val is not None - - -def default(val, d): - if exists(val): - return val - return d() if isfunction(d) else d - - -def always(val): - def inner(*args, **kwargs): - return val - return inner - - -def not_equals(val): - def inner(x): - return x != val - return inner - - -def equals(val): - def inner(x): - return x == val - return inner - - -def max_neg_value(tensor): - return -torch.finfo(tensor.dtype).max - - -# keyword argument helpers - -def pick_and_pop(keys, d): - values = list(map(lambda key: d.pop(key), keys)) - return dict(zip(keys, values)) - - -def group_dict_by_key(cond, d): - return_val = [dict(), dict()] - for key in d.keys(): - match = bool(cond(key)) - ind = int(not match) - return_val[ind][key] = d[key] - return (*return_val,) - - -def string_begins_with(prefix, str): - return str.startswith(prefix) - - -def group_by_key_prefix(prefix, d): - return group_dict_by_key(partial(string_begins_with, prefix), d) - - -def groupby_prefix_and_trim(prefix, d): - kwargs_with_prefix, kwargs = group_dict_by_key(partial(string_begins_with, prefix), d) - kwargs_without_prefix = dict(map(lambda x: (x[0][len(prefix):], x[1]), tuple(kwargs_with_prefix.items()))) - return kwargs_without_prefix, kwargs - - -# classes -class Scale(nn.Module): - def __init__(self, value, fn): - super().__init__() - self.value = value - self.fn = fn - - def forward(self, x, **kwargs): - x, *rest = self.fn(x, **kwargs) - return (x * self.value, *rest) - - -class Rezero(nn.Module): - def __init__(self, fn): - super().__init__() - self.fn = fn - self.g = nn.Parameter(torch.zeros(1)) - - def forward(self, x, **kwargs): - x, *rest = self.fn(x, **kwargs) - return (x * self.g, *rest) - - -class ScaleNorm(nn.Module): - def __init__(self, dim, eps=1e-5): - super().__init__() - self.scale = dim ** -0.5 - self.eps = eps - self.g = nn.Parameter(torch.ones(1)) - - def forward(self, x): - norm = torch.norm(x, dim=-1, keepdim=True) * self.scale - return x / norm.clamp(min=self.eps) * self.g - - -class RMSNorm(nn.Module): - def __init__(self, dim, eps=1e-8): - super().__init__() - self.scale = dim ** -0.5 - self.eps = eps - self.g = nn.Parameter(torch.ones(dim)) - - def forward(self, x): - norm = torch.norm(x, dim=-1, keepdim=True) * self.scale - return x / norm.clamp(min=self.eps) * self.g - - -class Residual(nn.Module): - def forward(self, x, residual): - return x + residual - - -class GRUGating(nn.Module): - def __init__(self, dim): - super().__init__() - self.gru = nn.GRUCell(dim, dim) - - def forward(self, x, residual): - gated_output = self.gru( - rearrange(x, 'b n d -> (b n) d'), - rearrange(residual, 'b n d -> (b n) d') - ) - - return gated_output.reshape_as(x) - - -# feedforward - -class GEGLU(nn.Module): - def __init__(self, dim_in, dim_out): - super().__init__() - self.proj = nn.Linear(dim_in, dim_out * 2) - - def forward(self, x): - x, gate = self.proj(x).chunk(2, dim=-1) - return x * F.gelu(gate) - - -class FeedForward(nn.Module): - def __init__(self, dim, dim_out=None, mult=4, glu=False, dropout=0.): - super().__init__() - inner_dim = int(dim * mult) - dim_out = default(dim_out, dim) - 
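-        # input projection: plain Linear + GELU by default, or a gated GEGLU unit when glu=True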
project_in = nn.Sequential( - nn.Linear(dim, inner_dim), - nn.GELU() - ) if not glu else GEGLU(dim, inner_dim) - - self.net = nn.Sequential( - project_in, - nn.Dropout(dropout), - nn.Linear(inner_dim, dim_out) - ) - - def forward(self, x): - return self.net(x) - - -# attention. -class Attention(nn.Module): - def __init__( - self, - dim, - dim_head=DEFAULT_DIM_HEAD, - heads=8, - causal=False, - mask=None, - talking_heads=False, - sparse_topk=None, - use_entmax15=False, - num_mem_kv=0, - dropout=0., - on_attn=False - ): - super().__init__() - if use_entmax15: - raise NotImplementedError("Check out entmax activation instead of softmax activation!") - self.scale = dim_head ** -0.5 - self.heads = heads - self.causal = causal - self.mask = mask - - inner_dim = dim_head * heads - - self.to_q = nn.Linear(dim, inner_dim, bias=False) - self.to_k = nn.Linear(dim, inner_dim, bias=False) - self.to_v = nn.Linear(dim, inner_dim, bias=False) - self.dropout = nn.Dropout(dropout) - - # talking heads - self.talking_heads = talking_heads - if talking_heads: - self.pre_softmax_proj = nn.Parameter(torch.randn(heads, heads)) - self.post_softmax_proj = nn.Parameter(torch.randn(heads, heads)) - - # explicit topk sparse attention - self.sparse_topk = sparse_topk - - # entmax - #self.attn_fn = entmax15 if use_entmax15 else F.softmax - self.attn_fn = F.softmax - - # add memory key / values - self.num_mem_kv = num_mem_kv - if num_mem_kv > 0: - self.mem_k = nn.Parameter(torch.randn(heads, num_mem_kv, dim_head)) - self.mem_v = nn.Parameter(torch.randn(heads, num_mem_kv, dim_head)) - - # attention on attention - self.attn_on_attn = on_attn - self.to_out = nn.Sequential(nn.Linear(inner_dim, dim * 2), nn.GLU()) if on_attn else nn.Linear(inner_dim, dim) - - def forward( - self, - x, - context=None, - mask=None, - context_mask=None, - rel_pos=None, - sinusoidal_emb=None, - prev_attn=None, - mem=None - ): - b, n, _, h, talking_heads, device = *x.shape, self.heads, self.talking_heads, x.device - kv_input = default(context, x) - - q_input = x - k_input = kv_input - v_input = kv_input - - if exists(mem): - k_input = torch.cat((mem, k_input), dim=-2) - v_input = torch.cat((mem, v_input), dim=-2) - - if exists(sinusoidal_emb): - # in shortformer, the query would start at a position offset depending on the past cached memory - offset = k_input.shape[-2] - q_input.shape[-2] - q_input = q_input + sinusoidal_emb(q_input, offset=offset) - k_input = k_input + sinusoidal_emb(k_input) - - q = self.to_q(q_input) - k = self.to_k(k_input) - v = self.to_v(v_input) - - q, k, v = map(lambda t: rearrange(t, 'b n (h d) -> b h n d', h=h), (q, k, v)) - - input_mask = None - if any(map(exists, (mask, context_mask))): - q_mask = default(mask, lambda: torch.ones((b, n), device=device).bool()) - k_mask = q_mask if not exists(context) else context_mask - k_mask = default(k_mask, lambda: torch.ones((b, k.shape[-2]), device=device).bool()) - q_mask = rearrange(q_mask, 'b i -> b () i ()') - k_mask = rearrange(k_mask, 'b j -> b () () j') - input_mask = q_mask * k_mask - - if self.num_mem_kv > 0: - mem_k, mem_v = map(lambda t: repeat(t, 'h n d -> b h n d', b=b), (self.mem_k, self.mem_v)) - k = torch.cat((mem_k, k), dim=-2) - v = torch.cat((mem_v, v), dim=-2) - if exists(input_mask): - input_mask = F.pad(input_mask, (self.num_mem_kv, 0), value=True) - - dots = einsum('b h i d, b h j d -> b h i j', q, k) * self.scale - mask_value = max_neg_value(dots) - - if exists(prev_attn): - dots = dots + prev_attn - - pre_softmax_attn = dots - - if talking_heads: - dots = 
einsum('b h i j, h k -> b k i j', dots, self.pre_softmax_proj).contiguous() - - if exists(rel_pos): - dots = rel_pos(dots) - - if exists(input_mask): - dots.masked_fill_(~input_mask, mask_value) - del input_mask - - if self.causal: - i, j = dots.shape[-2:] - r = torch.arange(i, device=device) - mask = rearrange(r, 'i -> () () i ()') < rearrange(r, 'j -> () () () j') - mask = F.pad(mask, (j - i, 0), value=False) - dots.masked_fill_(mask, mask_value) - del mask - - if exists(self.sparse_topk) and self.sparse_topk < dots.shape[-1]: - top, _ = dots.topk(self.sparse_topk, dim=-1) - vk = top[..., -1].unsqueeze(-1).expand_as(dots) - mask = dots < vk - dots.masked_fill_(mask, mask_value) - del mask - - attn = self.attn_fn(dots, dim=-1) - post_softmax_attn = attn - - attn = self.dropout(attn) - - if talking_heads: - attn = einsum('b h i j, h k -> b k i j', attn, self.post_softmax_proj).contiguous() - - out = einsum('b h i j, b h j d -> b h i d', attn, v) - out = rearrange(out, 'b h n d -> b n (h d)') - - intermediates = Intermediates( - pre_softmax_attn=pre_softmax_attn, - post_softmax_attn=post_softmax_attn - ) - - return self.to_out(out), intermediates - - -class AttentionLayers(nn.Module): - def __init__( - self, - dim, - depth, - heads=8, - causal=False, - cross_attend=False, - only_cross=False, - use_scalenorm=False, - use_rmsnorm=False, - use_rezero=False, - rel_pos_num_buckets=32, - rel_pos_max_distance=128, - position_infused_attn=False, - custom_layers=None, - sandwich_coef=None, - par_ratio=None, - residual_attn=False, - cross_residual_attn=False, - macaron=False, - pre_norm=True, - gate_residual=False, - **kwargs - ): - super().__init__() - ff_kwargs, kwargs = groupby_prefix_and_trim('ff_', kwargs) - attn_kwargs, _ = groupby_prefix_and_trim('attn_', kwargs) - - dim_head = attn_kwargs.get('dim_head', DEFAULT_DIM_HEAD) - - self.dim = dim - self.depth = depth - self.layers = nn.ModuleList([]) - - self.has_pos_emb = position_infused_attn - self.pia_pos_emb = FixedPositionalEmbedding(dim) if position_infused_attn else None - self.rotary_pos_emb = always(None) - - assert rel_pos_num_buckets <= rel_pos_max_distance, 'number of relative position buckets must be less than the relative position max distance' - self.rel_pos = None - - self.pre_norm = pre_norm - - self.residual_attn = residual_attn - self.cross_residual_attn = cross_residual_attn - - norm_class = ScaleNorm if use_scalenorm else nn.LayerNorm - norm_class = RMSNorm if use_rmsnorm else norm_class - norm_fn = partial(norm_class, dim) - - norm_fn = nn.Identity if use_rezero else norm_fn - branch_fn = Rezero if use_rezero else None - - if cross_attend and not only_cross: - default_block = ('a', 'c', 'f') - elif cross_attend and only_cross: - default_block = ('c', 'f') - else: - default_block = ('a', 'f') - - if macaron: - default_block = ('f',) + default_block - - if exists(custom_layers): - layer_types = custom_layers - elif exists(par_ratio): - par_depth = depth * len(default_block) - assert 1 < par_ratio <= par_depth, 'par ratio out of range' - default_block = tuple(filter(not_equals('f'), default_block)) - par_attn = par_depth // par_ratio - depth_cut = par_depth * 2 // 3 # 2 / 3 attention layer cutoff suggested by PAR paper - par_width = (depth_cut + depth_cut // par_attn) // par_attn - assert len(default_block) <= par_width, 'default block is too large for par_ratio' - par_block = default_block + ('f',) * (par_width - len(default_block)) - par_head = par_block * par_attn - layer_types = par_head + ('f',) * (par_depth - 
len(par_head)) - elif exists(sandwich_coef): - assert sandwich_coef > 0 and sandwich_coef <= depth, 'sandwich coefficient should be less than the depth' - layer_types = ('a',) * sandwich_coef + default_block * (depth - sandwich_coef) + ('f',) * sandwich_coef - else: - layer_types = default_block * depth - - self.layer_types = layer_types - self.num_attn_layers = len(list(filter(equals('a'), layer_types))) - - for layer_type in self.layer_types: - if layer_type == 'a': - layer = Attention(dim, heads=heads, causal=causal, **attn_kwargs) - elif layer_type == 'c': - layer = Attention(dim, heads=heads, **attn_kwargs) - elif layer_type == 'f': - layer = FeedForward(dim, **ff_kwargs) - layer = layer if not macaron else Scale(0.5, layer) - else: - raise Exception(f'invalid layer type {layer_type}') - - if isinstance(layer, Attention) and exists(branch_fn): - layer = branch_fn(layer) - - if gate_residual: - residual_fn = GRUGating(dim) - else: - residual_fn = Residual() - - self.layers.append(nn.ModuleList([ - norm_fn(), - layer, - residual_fn - ])) - - def forward( - self, - x, - context=None, - mask=None, - context_mask=None, - mems=None, - return_hiddens=False - ): - hiddens = [] - intermediates = [] - prev_attn = None - prev_cross_attn = None - - mems = mems.copy() if exists(mems) else [None] * self.num_attn_layers - - for ind, (layer_type, (norm, block, residual_fn)) in enumerate(zip(self.layer_types, self.layers)): - is_last = ind == (len(self.layers) - 1) - - if layer_type == 'a': - hiddens.append(x) - layer_mem = mems.pop(0) - - residual = x - - if self.pre_norm: - x = norm(x) - - if layer_type == 'a': - out, inter = block(x, mask=mask, sinusoidal_emb=self.pia_pos_emb, rel_pos=self.rel_pos, - prev_attn=prev_attn, mem=layer_mem) - elif layer_type == 'c': - out, inter = block(x, context=context, mask=mask, context_mask=context_mask, prev_attn=prev_cross_attn) - elif layer_type == 'f': - out = block(x) - - x = residual_fn(out, residual) - - if layer_type in ('a', 'c'): - intermediates.append(inter) - - if layer_type == 'a' and self.residual_attn: - prev_attn = inter.pre_softmax_attn - elif layer_type == 'c' and self.cross_residual_attn: - prev_cross_attn = inter.pre_softmax_attn - - if not self.pre_norm and not is_last: - x = norm(x) - - if return_hiddens: - intermediates = LayerIntermediates( - hiddens=hiddens, - attn_intermediates=intermediates - ) - - return x, intermediates - - return x - - -class Encoder(AttentionLayers): - def __init__(self, **kwargs): - assert 'causal' not in kwargs, 'cannot set causality on encoder' - super().__init__(causal=False, **kwargs) - - - -class TransformerWrapper(nn.Module): - def __init__( - self, - *, - num_tokens, - max_seq_len, - attn_layers, - emb_dim=None, - max_mem_len=0., - emb_dropout=0., - num_memory_tokens=None, - tie_embedding=False, - use_pos_emb=True - ): - super().__init__() - assert isinstance(attn_layers, AttentionLayers), 'attention layers must be one of Encoder or Decoder' - - dim = attn_layers.dim - emb_dim = default(emb_dim, dim) - - self.max_seq_len = max_seq_len - self.max_mem_len = max_mem_len - self.num_tokens = num_tokens - - self.token_emb = nn.Embedding(num_tokens, emb_dim) - self.pos_emb = AbsolutePositionalEmbedding(emb_dim, max_seq_len) if ( - use_pos_emb and not attn_layers.has_pos_emb) else always(0) - self.emb_dropout = nn.Dropout(emb_dropout) - - self.project_emb = nn.Linear(emb_dim, dim) if emb_dim != dim else nn.Identity() - self.attn_layers = attn_layers - self.norm = nn.LayerNorm(dim) - - self.init_() - - self.to_logits 
= nn.Linear(dim, num_tokens) if not tie_embedding else lambda t: t @ self.token_emb.weight.t() - - # memory tokens (like [cls]) from Memory Transformers paper - num_memory_tokens = default(num_memory_tokens, 0) - self.num_memory_tokens = num_memory_tokens - if num_memory_tokens > 0: - self.memory_tokens = nn.Parameter(torch.randn(num_memory_tokens, dim)) - - # let funnel encoder know number of memory tokens, if specified - if hasattr(attn_layers, 'num_memory_tokens'): - attn_layers.num_memory_tokens = num_memory_tokens - - def init_(self): - nn.init.normal_(self.token_emb.weight, std=0.02) - - def forward( - self, - x, - return_embeddings=False, - mask=None, - return_mems=False, - return_attn=False, - mems=None, - **kwargs - ): - b, n, device, num_mem = *x.shape, x.device, self.num_memory_tokens - x = self.token_emb(x) - x += self.pos_emb(x) - x = self.emb_dropout(x) - - x = self.project_emb(x) - - if num_mem > 0: - mem = repeat(self.memory_tokens, 'n d -> b n d', b=b) - x = torch.cat((mem, x), dim=1) - - # auto-handle masking after appending memory tokens - if exists(mask): - mask = F.pad(mask, (num_mem, 0), value=True) - - x, intermediates = self.attn_layers(x, mask=mask, mems=mems, return_hiddens=True, **kwargs) - x = self.norm(x) - - mem, x = x[:, :num_mem], x[:, num_mem:] - - out = self.to_logits(x) if not return_embeddings else x - - if return_mems: - hiddens = intermediates.hiddens - new_mems = list(map(lambda pair: torch.cat(pair, dim=-2), zip(mems, hiddens))) if exists(mems) else hiddens - new_mems = list(map(lambda t: t[..., -self.max_mem_len:, :].detach(), new_mems)) - return out, new_mems - - if return_attn: - attn_maps = list(map(lambda t: t.post_softmax_attn, intermediates.attn_intermediates)) - return out, attn_maps - - return out - diff --git a/spaces/NCTCMumbai/NCTC/models/official/vision/detection/utils/object_detection/box_list_ops.py b/spaces/NCTCMumbai/NCTC/models/official/vision/detection/utils/object_detection/box_list_ops.py deleted file mode 100644 index 9f1b06e28d588eb05c9ea8596b44d08690481eae..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/official/vision/detection/utils/object_detection/box_list_ops.py +++ /dev/null @@ -1,1079 +0,0 @@ -# Copyright 2019 The TensorFlow Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# ============================================================================== - -"""Bounding Box List operations. - -Example box operations that are supported: - * areas: compute bounding box areas - * iou: pairwise intersection-over-union scores - * sq_dist: pairwise distances between bounding boxes - -Whenever box_list_ops functions output a BoxList, the fields of the incoming -BoxList are retained unless documented otherwise. 
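-
-Example usage (a minimal sketch, assuming boxes are given as [ymin, xmin, ymax, xmax]
-corner tensors of shape [N, 4]):
-
-  boxes = box_list.BoxList(tf.constant([[0.0, 0.0, 0.5, 0.5], [0.25, 0.25, 1.0, 1.0]]))
-  box_areas = area(boxes)           # shape [2]
-  pairwise_iou = iou(boxes, boxes)  # shape [2, 2]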
-""" -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -from six.moves import range -import tensorflow as tf - -from official.vision.detection.utils.object_detection import box_list -from official.vision.detection.utils.object_detection import ops - - -class SortOrder(object): - """Enum class for sort order. - - Attributes: - ascend: ascend order. - descend: descend order. - """ - ascend = 1 - descend = 2 - - -def area(boxlist, scope=None): - """Computes area of boxes. - - Args: - boxlist: BoxList holding N boxes - scope: name scope. - - Returns: - a tensor with shape [N] representing box areas. - """ - with tf.name_scope(scope, 'Area'): - y_min, x_min, y_max, x_max = tf.split( - value=boxlist.get(), num_or_size_splits=4, axis=1) - return tf.squeeze((y_max - y_min) * (x_max - x_min), [1]) - - -def height_width(boxlist, scope=None): - """Computes height and width of boxes in boxlist. - - Args: - boxlist: BoxList holding N boxes - scope: name scope. - - Returns: - Height: A tensor with shape [N] representing box heights. - Width: A tensor with shape [N] representing box widths. - """ - with tf.name_scope(scope, 'HeightWidth'): - y_min, x_min, y_max, x_max = tf.split( - value=boxlist.get(), num_or_size_splits=4, axis=1) - return tf.squeeze(y_max - y_min, [1]), tf.squeeze(x_max - x_min, [1]) - - -def scale(boxlist, y_scale, x_scale, scope=None): - """scale box coordinates in x and y dimensions. - - Args: - boxlist: BoxList holding N boxes - y_scale: (float) scalar tensor - x_scale: (float) scalar tensor - scope: name scope. - - Returns: - boxlist: BoxList holding N boxes - """ - with tf.name_scope(scope, 'Scale'): - y_scale = tf.cast(y_scale, tf.float32) - x_scale = tf.cast(x_scale, tf.float32) - y_min, x_min, y_max, x_max = tf.split( - value=boxlist.get(), num_or_size_splits=4, axis=1) - y_min = y_scale * y_min - y_max = y_scale * y_max - x_min = x_scale * x_min - x_max = x_scale * x_max - scaled_boxlist = box_list.BoxList( - tf.concat([y_min, x_min, y_max, x_max], 1)) - return _copy_extra_fields(scaled_boxlist, boxlist) - - -def clip_to_window(boxlist, window, filter_nonoverlapping=True, scope=None): - """Clip bounding boxes to a window. - - This op clips any input bounding boxes (represented by bounding box - corners) to a window, optionally filtering out boxes that do not - overlap at all with the window. - - Args: - boxlist: BoxList holding M_in boxes - window: a tensor of shape [4] representing the [y_min, x_min, y_max, x_max] - window to which the op should clip boxes. - filter_nonoverlapping: whether to filter out boxes that do not overlap at - all with the window. - scope: name scope. 
- - Returns: - a BoxList holding M_out boxes where M_out <= M_in - """ - with tf.name_scope(scope, 'ClipToWindow'): - y_min, x_min, y_max, x_max = tf.split( - value=boxlist.get(), num_or_size_splits=4, axis=1) - win_y_min, win_x_min, win_y_max, win_x_max = tf.unstack(window) - y_min_clipped = tf.maximum(tf.minimum(y_min, win_y_max), win_y_min) - y_max_clipped = tf.maximum(tf.minimum(y_max, win_y_max), win_y_min) - x_min_clipped = tf.maximum(tf.minimum(x_min, win_x_max), win_x_min) - x_max_clipped = tf.maximum(tf.minimum(x_max, win_x_max), win_x_min) - clipped = box_list.BoxList( - tf.concat([y_min_clipped, x_min_clipped, y_max_clipped, x_max_clipped], - 1)) - clipped = _copy_extra_fields(clipped, boxlist) - if filter_nonoverlapping: - areas = area(clipped) - nonzero_area_indices = tf.cast( - tf.reshape(tf.where(tf.greater(areas, 0.0)), [-1]), tf.int32) - clipped = gather(clipped, nonzero_area_indices) - return clipped - - -def prune_outside_window(boxlist, window, scope=None): - """Prunes bounding boxes that fall outside a given window. - - This function prunes bounding boxes that even partially fall outside the given - window. See also clip_to_window which only prunes bounding boxes that fall - completely outside the window, and clips any bounding boxes that partially - overflow. - - Args: - boxlist: a BoxList holding M_in boxes. - window: a float tensor of shape [4] representing [ymin, xmin, ymax, xmax] - of the window - scope: name scope. - - Returns: - pruned_corners: a tensor with shape [M_out, 4] where M_out <= M_in - valid_indices: a tensor with shape [M_out] indexing the valid bounding boxes - in the input tensor. - """ - with tf.name_scope(scope, 'PruneOutsideWindow'): - y_min, x_min, y_max, x_max = tf.split( - value=boxlist.get(), num_or_size_splits=4, axis=1) - win_y_min, win_x_min, win_y_max, win_x_max = tf.unstack(window) - coordinate_violations = tf.concat([ - tf.less(y_min, win_y_min), tf.less(x_min, win_x_min), - tf.greater(y_max, win_y_max), tf.greater(x_max, win_x_max) - ], 1) - valid_indices = tf.reshape( - tf.where(tf.logical_not(tf.reduce_any(coordinate_violations, 1))), [-1]) - return gather(boxlist, valid_indices), valid_indices - - -def prune_completely_outside_window(boxlist, window, scope=None): - """Prunes bounding boxes that fall completely outside of the given window. - - The function clip_to_window prunes bounding boxes that fall - completely outside the window, but also clips any bounding boxes that - partially overflow. This function does not clip partially overflowing boxes. - - Args: - boxlist: a BoxList holding M_in boxes. - window: a float tensor of shape [4] representing [ymin, xmin, ymax, xmax] - of the window - scope: name scope. - - Returns: - pruned_boxlist: a new BoxList with all bounding boxes partially or fully in - the window. - valid_indices: a tensor with shape [M_out] indexing the valid bounding boxes - in the input tensor. 
- """ - with tf.name_scope(scope, 'PruneCompleteleyOutsideWindow'): - y_min, x_min, y_max, x_max = tf.split( - value=boxlist.get(), num_or_size_splits=4, axis=1) - win_y_min, win_x_min, win_y_max, win_x_max = tf.unstack(window) - coordinate_violations = tf.concat([ - tf.greater_equal(y_min, win_y_max), tf.greater_equal(x_min, win_x_max), - tf.less_equal(y_max, win_y_min), tf.less_equal(x_max, win_x_min) - ], 1) - valid_indices = tf.reshape( - tf.where(tf.logical_not(tf.reduce_any(coordinate_violations, 1))), [-1]) - return gather(boxlist, valid_indices), valid_indices - - -def intersection(boxlist1, boxlist2, scope=None): - """Compute pairwise intersection areas between boxes. - - Args: - boxlist1: BoxList holding N boxes - boxlist2: BoxList holding M boxes - scope: name scope. - - Returns: - a tensor with shape [N, M] representing pairwise intersections - """ - with tf.name_scope(scope, 'Intersection'): - y_min1, x_min1, y_max1, x_max1 = tf.split( - value=boxlist1.get(), num_or_size_splits=4, axis=1) - y_min2, x_min2, y_max2, x_max2 = tf.split( - value=boxlist2.get(), num_or_size_splits=4, axis=1) - all_pairs_min_ymax = tf.minimum(y_max1, tf.transpose(y_max2)) - all_pairs_max_ymin = tf.maximum(y_min1, tf.transpose(y_min2)) - intersect_heights = tf.maximum(0.0, all_pairs_min_ymax - all_pairs_max_ymin) - all_pairs_min_xmax = tf.minimum(x_max1, tf.transpose(x_max2)) - all_pairs_max_xmin = tf.maximum(x_min1, tf.transpose(x_min2)) - intersect_widths = tf.maximum(0.0, all_pairs_min_xmax - all_pairs_max_xmin) - return intersect_heights * intersect_widths - - -def matched_intersection(boxlist1, boxlist2, scope=None): - """Compute intersection areas between corresponding boxes in two boxlists. - - Args: - boxlist1: BoxList holding N boxes - boxlist2: BoxList holding N boxes - scope: name scope. - - Returns: - a tensor with shape [N] representing pairwise intersections - """ - with tf.name_scope(scope, 'MatchedIntersection'): - y_min1, x_min1, y_max1, x_max1 = tf.split( - value=boxlist1.get(), num_or_size_splits=4, axis=1) - y_min2, x_min2, y_max2, x_max2 = tf.split( - value=boxlist2.get(), num_or_size_splits=4, axis=1) - min_ymax = tf.minimum(y_max1, y_max2) - max_ymin = tf.maximum(y_min1, y_min2) - intersect_heights = tf.maximum(0.0, min_ymax - max_ymin) - min_xmax = tf.minimum(x_max1, x_max2) - max_xmin = tf.maximum(x_min1, x_min2) - intersect_widths = tf.maximum(0.0, min_xmax - max_xmin) - return tf.reshape(intersect_heights * intersect_widths, [-1]) - - -def iou(boxlist1, boxlist2, scope=None): - """Computes pairwise intersection-over-union between box collections. - - Args: - boxlist1: BoxList holding N boxes - boxlist2: BoxList holding M boxes - scope: name scope. - - Returns: - a tensor with shape [N, M] representing pairwise iou scores. - """ - with tf.name_scope(scope, 'IOU'): - intersections = intersection(boxlist1, boxlist2) - areas1 = area(boxlist1) - areas2 = area(boxlist2) - unions = ( - tf.expand_dims(areas1, 1) + tf.expand_dims(areas2, 0) - intersections) - return tf.where( - tf.equal(intersections, 0.0), - tf.zeros_like(intersections), tf.truediv(intersections, unions)) - - -def matched_iou(boxlist1, boxlist2, scope=None): - """Compute intersection-over-union between corresponding boxes in boxlists. - - Args: - boxlist1: BoxList holding N boxes - boxlist2: BoxList holding N boxes - scope: name scope. - - Returns: - a tensor with shape [N] representing pairwise iou scores. 
- """ - with tf.name_scope(scope, 'MatchedIOU'): - intersections = matched_intersection(boxlist1, boxlist2) - areas1 = area(boxlist1) - areas2 = area(boxlist2) - unions = areas1 + areas2 - intersections - return tf.where( - tf.equal(intersections, 0.0), - tf.zeros_like(intersections), tf.truediv(intersections, unions)) - - -def ioa(boxlist1, boxlist2, scope=None): - """Computes pairwise intersection-over-area between box collections. - - intersection-over-area (IOA) between two boxes box1 and box2 is defined as - their intersection area over box2's area. Note that ioa is not symmetric, - that is, ioa(box1, box2) != ioa(box2, box1). - - Args: - boxlist1: BoxList holding N boxes - boxlist2: BoxList holding M boxes - scope: name scope. - - Returns: - a tensor with shape [N, M] representing pairwise ioa scores. - """ - with tf.name_scope(scope, 'IOA'): - intersections = intersection(boxlist1, boxlist2) - areas = tf.expand_dims(area(boxlist2), 0) - return tf.truediv(intersections, areas) - - -def prune_non_overlapping_boxes( - boxlist1, boxlist2, min_overlap=0.0, scope=None): - """Prunes the boxes in boxlist1 that overlap less than thresh with boxlist2. - - For each box in boxlist1, we want its IOA to be more than minoverlap with - at least one of the boxes in boxlist2. If it does not, we remove it. - - Args: - boxlist1: BoxList holding N boxes. - boxlist2: BoxList holding M boxes. - min_overlap: Minimum required overlap between boxes, to count them as - overlapping. - scope: name scope. - - Returns: - new_boxlist1: A pruned boxlist with size [N', 4]. - keep_inds: A tensor with shape [N'] indexing kept bounding boxes in the - first input BoxList `boxlist1`. - """ - with tf.name_scope(scope, 'PruneNonOverlappingBoxes'): - ioa_ = ioa(boxlist2, boxlist1) # [M, N] tensor - ioa_ = tf.reduce_max(ioa_, reduction_indices=[0]) # [N] tensor - keep_bool = tf.greater_equal(ioa_, tf.constant(min_overlap)) - keep_inds = tf.squeeze(tf.where(keep_bool), axis=[1]) - new_boxlist1 = gather(boxlist1, keep_inds) - return new_boxlist1, keep_inds - - -def prune_small_boxes(boxlist, min_side, scope=None): - """Prunes small boxes in the boxlist which have a side smaller than min_side. - - Args: - boxlist: BoxList holding N boxes. - min_side: Minimum width AND height of box to survive pruning. - scope: name scope. - - Returns: - A pruned boxlist. - """ - with tf.name_scope(scope, 'PruneSmallBoxes'): - height, width = height_width(boxlist) - is_valid = tf.logical_and(tf.greater_equal(width, min_side), - tf.greater_equal(height, min_side)) - return gather(boxlist, tf.reshape(tf.where(is_valid), [-1])) - - -def change_coordinate_frame(boxlist, window, scope=None): - """Change coordinate frame of the boxlist to be relative to window's frame. - - Given a window of the form [ymin, xmin, ymax, xmax], - changes bounding box coordinates from boxlist to be relative to this window - (e.g., the min corner maps to (0,0) and the max corner maps to (1,1)). - - An example use case is data augmentation: where we are given groundtruth - boxes (boxlist) and would like to randomly crop the image to some - window (window). In this case we need to change the coordinate frame of - each groundtruth box to be relative to this new window. - - Args: - boxlist: A BoxList object holding N boxes. - window: A rank 1 tensor [4]. - scope: name scope. - - Returns: - Returns a BoxList object with N boxes. 
- """ - with tf.name_scope(scope, 'ChangeCoordinateFrame'): - win_height = window[2] - window[0] - win_width = window[3] - window[1] - boxlist_new = scale(box_list.BoxList( - boxlist.get() - [window[0], window[1], window[0], window[1]]), - 1.0 / win_height, 1.0 / win_width) - boxlist_new = _copy_extra_fields(boxlist_new, boxlist) - return boxlist_new - - -def sq_dist(boxlist1, boxlist2, scope=None): - """Computes the pairwise squared distances between box corners. - - This op treats each box as if it were a point in a 4d Euclidean space and - computes pairwise squared distances. - - Mathematically, we are given two matrices of box coordinates X and Y, - where X(i,:) is the i'th row of X, containing the 4 numbers defining the - corners of the i'th box in boxlist1. Similarly Y(j,:) corresponds to - boxlist2. We compute - Z(i,j) = ||X(i,:) - Y(j,:)||^2 - = ||X(i,:)||^2 + ||Y(j,:)||^2 - 2 X(i,:)' * Y(j,:), - - Args: - boxlist1: BoxList holding N boxes - boxlist2: BoxList holding M boxes - scope: name scope. - - Returns: - a tensor with shape [N, M] representing pairwise distances - """ - with tf.name_scope(scope, 'SqDist'): - sqnorm1 = tf.reduce_sum(tf.square(boxlist1.get()), 1, keep_dims=True) - sqnorm2 = tf.reduce_sum(tf.square(boxlist2.get()), 1, keep_dims=True) - innerprod = tf.matmul(boxlist1.get(), boxlist2.get(), - transpose_a=False, transpose_b=True) - return sqnorm1 + tf.transpose(sqnorm2) - 2.0 * innerprod - - -def boolean_mask(boxlist, indicator, fields=None, scope=None, - use_static_shapes=False, indicator_sum=None): - """Select boxes from BoxList according to indicator and return new BoxList. - - `boolean_mask` returns the subset of boxes that are marked as "True" by the - indicator tensor. By default, `boolean_mask` returns boxes corresponding to - the input index list, as well as all additional fields stored in the boxlist - (indexing into the first dimension). However one can optionally only draw - from a subset of fields. - - Args: - boxlist: BoxList holding N boxes - indicator: a rank-1 boolean tensor - fields: (optional) list of fields to also gather from. If None (default), - all fields are gathered from. Pass an empty fields list to only gather - the box coordinates. - scope: name scope. - use_static_shapes: Whether to use an implementation with static shape - gurantees. - indicator_sum: An integer containing the sum of `indicator` vector. Only - required if `use_static_shape` is True. - - Returns: - subboxlist: a BoxList corresponding to the subset of the input BoxList - specified by indicator - Raises: - ValueError: if `indicator` is not a rank-1 boolean tensor. 
- """ - with tf.name_scope(scope, 'BooleanMask'): - if indicator.shape.ndims != 1: - raise ValueError('indicator should have rank 1') - if indicator.dtype != tf.bool: - raise ValueError('indicator should be a boolean tensor') - if use_static_shapes: - if not (indicator_sum and isinstance(indicator_sum, int)): - raise ValueError('`indicator_sum` must be a of type int') - selected_positions = tf.cast(indicator, dtype=tf.float32) - indexed_positions = tf.cast( - tf.multiply( - tf.cumsum(selected_positions), selected_positions), - dtype=tf.int32) - one_hot_selector = tf.one_hot( - indexed_positions - 1, indicator_sum, dtype=tf.float32) - sampled_indices = tf.cast( - tf.tensordot( - tf.cast(tf.range(tf.shape(indicator)[0]), dtype=tf.float32), - one_hot_selector, - axes=[0, 0]), - dtype=tf.int32) - return gather(boxlist, sampled_indices, use_static_shapes=True) - else: - subboxlist = box_list.BoxList(tf.boolean_mask(boxlist.get(), indicator)) - if fields is None: - fields = boxlist.get_extra_fields() - for field in fields: - if not boxlist.has_field(field): - raise ValueError('boxlist must contain all specified fields') - subfieldlist = tf.boolean_mask(boxlist.get_field(field), indicator) - subboxlist.add_field(field, subfieldlist) - return subboxlist - - -def gather(boxlist, indices, fields=None, scope=None, use_static_shapes=False): - """Gather boxes from BoxList according to indices and return new BoxList. - - By default, `gather` returns boxes corresponding to the input index list, as - well as all additional fields stored in the boxlist (indexing into the - first dimension). However one can optionally only gather from a - subset of fields. - - Args: - boxlist: BoxList holding N boxes - indices: a rank-1 tensor of type int32 / int64 - fields: (optional) list of fields to also gather from. If None (default), - all fields are gathered from. Pass an empty fields list to only gather - the box coordinates. - scope: name scope. - use_static_shapes: Whether to use an implementation with static shape - gurantees. - - Returns: - subboxlist: a BoxList corresponding to the subset of the input BoxList - specified by indices - Raises: - ValueError: if specified field is not contained in boxlist or if the - indices are not of type int32 - """ - with tf.name_scope(scope, 'Gather'): - if len(indices.shape.as_list()) != 1: - raise ValueError('indices should have rank 1') - if indices.dtype != tf.int32 and indices.dtype != tf.int64: - raise ValueError('indices should be an int32 / int64 tensor') - gather_op = tf.gather - if use_static_shapes: - gather_op = ops.matmul_gather_on_zeroth_axis - subboxlist = box_list.BoxList(gather_op(boxlist.get(), indices)) - if fields is None: - fields = boxlist.get_extra_fields() - fields += ['boxes'] - for field in fields: - if not boxlist.has_field(field): - raise ValueError('boxlist must contain all specified fields') - subfieldlist = gather_op(boxlist.get_field(field), indices) - subboxlist.add_field(field, subfieldlist) - return subboxlist - - -def concatenate(boxlists, fields=None, scope=None): - """Concatenate list of BoxLists. - - This op concatenates a list of input BoxLists into a larger BoxList. It also - handles concatenation of BoxList fields as long as the field tensor shapes - are equal except for the first dimension. - - Args: - boxlists: list of BoxList objects - fields: optional list of fields to also concatenate. By default, all - fields from the first BoxList in the list are included in the - concatenation. - scope: name scope. 
- - Returns: - a BoxList with number of boxes equal to - sum([boxlist.num_boxes() for boxlist in BoxList]) - Raises: - ValueError: if boxlists is invalid (i.e., is not a list, is empty, or - contains non BoxList objects), or if requested fields are not contained in - all boxlists - """ - with tf.name_scope(scope, 'Concatenate'): - if not isinstance(boxlists, list): - raise ValueError('boxlists should be a list') - if not boxlists: - raise ValueError('boxlists should have nonzero length') - for boxlist in boxlists: - if not isinstance(boxlist, box_list.BoxList): - raise ValueError('all elements of boxlists should be BoxList objects') - concatenated = box_list.BoxList( - tf.concat([boxlist.get() for boxlist in boxlists], 0)) - if fields is None: - fields = boxlists[0].get_extra_fields() - for field in fields: - first_field_shape = boxlists[0].get_field(field).get_shape().as_list() - first_field_shape[0] = -1 - if None in first_field_shape: - raise ValueError('field %s must have fully defined shape except for the' - ' 0th dimension.' % field) - for boxlist in boxlists: - if not boxlist.has_field(field): - raise ValueError('boxlist must contain all requested fields') - field_shape = boxlist.get_field(field).get_shape().as_list() - field_shape[0] = -1 - if field_shape != first_field_shape: - raise ValueError('field %s must have same shape for all boxlists ' - 'except for the 0th dimension.' % field) - concatenated_field = tf.concat( - [boxlist.get_field(field) for boxlist in boxlists], 0) - concatenated.add_field(field, concatenated_field) - return concatenated - - -def sort_by_field(boxlist, field, order=SortOrder.descend, scope=None): - """Sort boxes and associated fields according to a scalar field. - - A common use case is reordering the boxes according to descending scores. - - Args: - boxlist: BoxList holding N boxes. - field: A BoxList field for sorting and reordering the BoxList. - order: (Optional) descend or ascend. Default is descend. - scope: name scope. - - Returns: - sorted_boxlist: A sorted BoxList with the field in the specified order. - - Raises: - ValueError: if specified field does not exist - ValueError: if the order is not either descend or ascend - """ - with tf.name_scope(scope, 'SortByField'): - if order != SortOrder.descend and order != SortOrder.ascend: - raise ValueError('Invalid sort order') - - field_to_sort = boxlist.get_field(field) - if len(field_to_sort.shape.as_list()) != 1: - raise ValueError('Field should have rank 1') - - num_boxes = boxlist.num_boxes() - num_entries = tf.size(field_to_sort) - length_assert = tf.Assert( - tf.equal(num_boxes, num_entries), - ['Incorrect field size: actual vs expected.', num_entries, num_boxes]) - - with tf.control_dependencies([length_assert]): - _, sorted_indices = tf.nn.top_k(field_to_sort, num_boxes, sorted=True) - - if order == SortOrder.ascend: - sorted_indices = tf.reverse_v2(sorted_indices, [0]) - - return gather(boxlist, sorted_indices) - - -def visualize_boxes_in_image(image, boxlist, normalized=False, scope=None): - """Overlay bounding box list on image. - - Currently this visualization plots a 1 pixel thick red bounding box on top - of the image. Note that tf.image.draw_bounding_boxes essentially is - 1 indexed. - - Args: - image: an image tensor with shape [height, width, 3] - boxlist: a BoxList - normalized: (boolean) specify whether corners are to be interpreted - as absolute coordinates in image space or normalized with respect to the - image size. - scope: name scope. 
- - Returns: - image_and_boxes: an image tensor with shape [height, width, 3] - """ - with tf.name_scope(scope, 'VisualizeBoxesInImage'): - if not normalized: - height, width, _ = tf.unstack(tf.shape(image)) - boxlist = scale(boxlist, - 1.0 / tf.cast(height, tf.float32), - 1.0 / tf.cast(width, tf.float32)) - corners = tf.expand_dims(boxlist.get(), 0) - image = tf.expand_dims(image, 0) - return tf.squeeze(tf.image.draw_bounding_boxes(image, corners), [0]) - - -def filter_field_value_equals(boxlist, field, value, scope=None): - """Filter to keep only boxes with field entries equal to the given value. - - Args: - boxlist: BoxList holding N boxes. - field: field name for filtering. - value: scalar value. - scope: name scope. - - Returns: - a BoxList holding M boxes where M <= N - - Raises: - ValueError: if boxlist not a BoxList object or if it does not have - the specified field. - """ - with tf.name_scope(scope, 'FilterFieldValueEquals'): - if not isinstance(boxlist, box_list.BoxList): - raise ValueError('boxlist must be a BoxList') - if not boxlist.has_field(field): - raise ValueError('boxlist must contain the specified field') - filter_field = boxlist.get_field(field) - gather_index = tf.reshape(tf.where(tf.equal(filter_field, value)), [-1]) - return gather(boxlist, gather_index) - - -def filter_greater_than(boxlist, thresh, scope=None): - """Filter to keep only boxes with score exceeding a given threshold. - - This op keeps the collection of boxes whose corresponding scores are - greater than the input threshold. - - TODO(jonathanhuang): Change function name to filter_scores_greater_than - - Args: - boxlist: BoxList holding N boxes. Must contain a 'scores' field - representing detection scores. - thresh: scalar threshold - scope: name scope. - - Returns: - a BoxList holding M boxes where M <= N - - Raises: - ValueError: if boxlist not a BoxList object or if it does not - have a scores field - """ - with tf.name_scope(scope, 'FilterGreaterThan'): - if not isinstance(boxlist, box_list.BoxList): - raise ValueError('boxlist must be a BoxList') - if not boxlist.has_field('scores'): - raise ValueError('input boxlist must have \'scores\' field') - scores = boxlist.get_field('scores') - if len(scores.shape.as_list()) > 2: - raise ValueError('Scores should have rank 1 or 2') - if len(scores.shape.as_list()) == 2 and scores.shape.as_list()[1] != 1: - raise ValueError('Scores should have rank 1 or have shape ' - 'consistent with [None, 1]') - high_score_indices = tf.cast(tf.reshape( - tf.where(tf.greater(scores, thresh)), - [-1]), tf.int32) - return gather(boxlist, high_score_indices) - - -def non_max_suppression(boxlist, thresh, max_output_size, scope=None): - """Non maximum suppression. - - This op greedily selects a subset of detection bounding boxes, pruning - away boxes that have high IOU (intersection over union) overlap (> thresh) - with already selected boxes. Note that this only works for a single class --- - to apply NMS to multi-class predictions, use MultiClassNonMaxSuppression. - - Args: - boxlist: BoxList holding N boxes. Must contain a 'scores' field - representing detection scores. - thresh: scalar threshold - max_output_size: maximum number of retained boxes - scope: name scope. 
- - Returns: - a BoxList holding M boxes where M <= max_output_size - Raises: - ValueError: if thresh is not in [0, 1] - """ - with tf.name_scope(scope, 'NonMaxSuppression'): - if not 0 <= thresh <= 1.0: - raise ValueError('thresh must be between 0 and 1') - if not isinstance(boxlist, box_list.BoxList): - raise ValueError('boxlist must be a BoxList') - if not boxlist.has_field('scores'): - raise ValueError('input boxlist must have \'scores\' field') - selected_indices = tf.image.non_max_suppression( - boxlist.get(), boxlist.get_field('scores'), - max_output_size, iou_threshold=thresh) - return gather(boxlist, selected_indices) - - -def _copy_extra_fields(boxlist_to_copy_to, boxlist_to_copy_from): - """Copies the extra fields of boxlist_to_copy_from to boxlist_to_copy_to. - - Args: - boxlist_to_copy_to: BoxList to which extra fields are copied. - boxlist_to_copy_from: BoxList from which fields are copied. - - Returns: - boxlist_to_copy_to with extra fields. - """ - for field in boxlist_to_copy_from.get_extra_fields(): - boxlist_to_copy_to.add_field(field, boxlist_to_copy_from.get_field(field)) - return boxlist_to_copy_to - - -def to_normalized_coordinates(boxlist, height, width, - check_range=True, scope=None): - """Converts absolute box coordinates to normalized coordinates in [0, 1]. - - Usually one uses the dynamic shape of the image or conv-layer tensor: - boxlist = box_list_ops.to_normalized_coordinates(boxlist, - tf.shape(images)[1], - tf.shape(images)[2]), - - This function raises an assertion failed error at graph execution time when - the maximum coordinate is smaller than 1.01 (which means that coordinates are - already normalized). The value 1.01 is to deal with small rounding errors. - - Args: - boxlist: BoxList with coordinates in terms of pixel-locations. - height: Maximum value for height of absolute box coordinates. - width: Maximum value for width of absolute box coordinates. - check_range: If True, checks if the coordinates are normalized or not. - scope: name scope. - - Returns: - boxlist with normalized coordinates in [0, 1]. - """ - with tf.name_scope(scope, 'ToNormalizedCoordinates'): - height = tf.cast(height, tf.float32) - width = tf.cast(width, tf.float32) - - if check_range: - max_val = tf.reduce_max(boxlist.get()) - max_assert = tf.Assert(tf.greater(max_val, 1.01), - ['max value is lower than 1.01: ', max_val]) - with tf.control_dependencies([max_assert]): - width = tf.identity(width) - - return scale(boxlist, 1 / height, 1 / width) - - -def to_absolute_coordinates(boxlist, - height, - width, - check_range=True, - maximum_normalized_coordinate=1.1, - scope=None): - """Converts normalized box coordinates to absolute pixel coordinates. - - This function raises an assertion failed error when the maximum box coordinate - value is larger than maximum_normalized_coordinate (in which case coordinates - are already absolute). - - Args: - boxlist: BoxList with coordinates in range [0, 1]. - height: Maximum value for height of absolute box coordinates. - width: Maximum value for width of absolute box coordinates. - check_range: If True, checks if the coordinates are normalized or not. - maximum_normalized_coordinate: Maximum coordinate value to be considered - as normalized, default to 1.1. - scope: name scope. - - Returns: - boxlist with absolute coordinates in terms of the image size. - - """ - with tf.name_scope(scope, 'ToAbsoluteCoordinates'): - height = tf.cast(height, tf.float32) - width = tf.cast(width, tf.float32) - - # Ensure range of input boxes is correct. 
- if check_range: - box_maximum = tf.reduce_max(boxlist.get()) - max_assert = tf.Assert( - tf.greater_equal(maximum_normalized_coordinate, box_maximum), - ['maximum box coordinate value is larger ' - 'than %f: ' % maximum_normalized_coordinate, box_maximum]) - with tf.control_dependencies([max_assert]): - width = tf.identity(width) - - return scale(boxlist, height, width) - - -def refine_boxes_multi_class(pool_boxes, - num_classes, - nms_iou_thresh, - nms_max_detections, - voting_iou_thresh=0.5): - """Refines a pool of boxes using non max suppression and box voting. - - Box refinement is done independently for each class. - - Args: - pool_boxes: (BoxList) A collection of boxes to be refined. pool_boxes must - have a rank 1 'scores' field and a rank 1 'classes' field. - num_classes: (int scalar) Number of classes. - nms_iou_thresh: (float scalar) iou threshold for non max suppression (NMS). - nms_max_detections: (int scalar) maximum output size for NMS. - voting_iou_thresh: (float scalar) iou threshold for box voting. - - Returns: - BoxList of refined boxes. - - Raises: - ValueError: if - a) nms_iou_thresh or voting_iou_thresh is not in [0, 1]. - b) pool_boxes is not a BoxList. - c) pool_boxes does not have a scores and classes field. - """ - if not 0.0 <= nms_iou_thresh <= 1.0: - raise ValueError('nms_iou_thresh must be between 0 and 1') - if not 0.0 <= voting_iou_thresh <= 1.0: - raise ValueError('voting_iou_thresh must be between 0 and 1') - if not isinstance(pool_boxes, box_list.BoxList): - raise ValueError('pool_boxes must be a BoxList') - if not pool_boxes.has_field('scores'): - raise ValueError('pool_boxes must have a \'scores\' field') - if not pool_boxes.has_field('classes'): - raise ValueError('pool_boxes must have a \'classes\' field') - - refined_boxes = [] - for i in range(num_classes): - boxes_class = filter_field_value_equals(pool_boxes, 'classes', i) - refined_boxes_class = refine_boxes(boxes_class, nms_iou_thresh, - nms_max_detections, voting_iou_thresh) - refined_boxes.append(refined_boxes_class) - return sort_by_field(concatenate(refined_boxes), 'scores') - - -def refine_boxes(pool_boxes, - nms_iou_thresh, - nms_max_detections, - voting_iou_thresh=0.5): - """Refines a pool of boxes using non max suppression and box voting. - - Args: - pool_boxes: (BoxList) A collection of boxes to be refined. pool_boxes must - have a rank 1 'scores' field. - nms_iou_thresh: (float scalar) iou threshold for non max suppression (NMS). - nms_max_detections: (int scalar) maximum output size for NMS. - voting_iou_thresh: (float scalar) iou threshold for box voting. - - Returns: - BoxList of refined boxes. - - Raises: - ValueError: if - a) nms_iou_thresh or voting_iou_thresh is not in [0, 1]. - b) pool_boxes is not a BoxList. - c) pool_boxes does not have a scores field. - """ - if not 0.0 <= nms_iou_thresh <= 1.0: - raise ValueError('nms_iou_thresh must be between 0 and 1') - if not 0.0 <= voting_iou_thresh <= 1.0: - raise ValueError('voting_iou_thresh must be between 0 and 1') - if not isinstance(pool_boxes, box_list.BoxList): - raise ValueError('pool_boxes must be a BoxList') - if not pool_boxes.has_field('scores'): - raise ValueError('pool_boxes must have a \'scores\' field') - - nms_boxes = non_max_suppression( - pool_boxes, nms_iou_thresh, nms_max_detections) - return box_voting(nms_boxes, pool_boxes, voting_iou_thresh) - - -def box_voting(selected_boxes, pool_boxes, iou_thresh=0.5): - """Performs box voting as described in S. Gidaris and N. Komodakis, ICCV 2015. 
- - Performs box voting as described in 'Object detection via a multi-region & - semantic segmentation-aware CNN model', Gidaris and Komodakis, ICCV 2015. For - each box 'B' in selected_boxes, we find the set 'S' of boxes in pool_boxes - with iou overlap >= iou_thresh. The location of B is set to the weighted - average location of boxes in S (scores are used for weighting). And the score - of B is set to the average score of boxes in S. - - Args: - selected_boxes: BoxList containing a subset of boxes in pool_boxes. These - boxes are usually selected from pool_boxes using non max suppression. - pool_boxes: BoxList containing a set of (possibly redundant) boxes. - iou_thresh: (float scalar) iou threshold for matching boxes in - selected_boxes and pool_boxes. - - Returns: - BoxList containing averaged locations and scores for each box in - selected_boxes. - - Raises: - ValueError: if - a) selected_boxes or pool_boxes is not a BoxList. - b) if iou_thresh is not in [0, 1]. - c) pool_boxes does not have a scores field. - """ - if not 0.0 <= iou_thresh <= 1.0: - raise ValueError('iou_thresh must be between 0 and 1') - if not isinstance(selected_boxes, box_list.BoxList): - raise ValueError('selected_boxes must be a BoxList') - if not isinstance(pool_boxes, box_list.BoxList): - raise ValueError('pool_boxes must be a BoxList') - if not pool_boxes.has_field('scores'): - raise ValueError('pool_boxes must have a \'scores\' field') - - iou_ = iou(selected_boxes, pool_boxes) - match_indicator = tf.cast(tf.greater(iou_, iou_thresh), dtype=tf.float32) - num_matches = tf.reduce_sum(match_indicator, 1) - # TODO(kbanoop): Handle the case where some boxes in selected_boxes do not - # match to any boxes in pool_boxes. For such boxes without any matches, we - # should return the original boxes without voting. - match_assert = tf.Assert( - tf.reduce_all(tf.greater(num_matches, 0)), - ['Each box in selected_boxes must match with at least one box ' - 'in pool_boxes.']) - - scores = tf.expand_dims(pool_boxes.get_field('scores'), 1) - scores_assert = tf.Assert( - tf.reduce_all(tf.greater_equal(scores, 0)), - ['Scores must be non negative.']) - - with tf.control_dependencies([scores_assert, match_assert]): - sum_scores = tf.matmul(match_indicator, scores) - averaged_scores = tf.reshape(sum_scores, [-1]) / num_matches - - box_locations = tf.matmul(match_indicator, - pool_boxes.get() * scores) / sum_scores - averaged_boxes = box_list.BoxList(box_locations) - _copy_extra_fields(averaged_boxes, selected_boxes) - averaged_boxes.add_field('scores', averaged_scores) - return averaged_boxes - - -def get_minimal_coverage_box(boxlist, - default_box=None, - scope=None): - """Creates a single bounding box which covers all boxes in the boxlist. - - Args: - boxlist: A Boxlist. - default_box: A [1, 4] float32 tensor. If no boxes are present in `boxlist`, - this default box will be returned. If None, will use a default box of - [[0., 0., 1., 1.]]. - scope: Name scope. - - Returns: - A [1, 4] float32 tensor with a bounding box that tightly covers all the - boxes in the box list. If the boxlist does not contain any boxes, the - default box is returned. 
- """ - with tf.name_scope(scope, 'CreateCoverageBox'): - num_boxes = boxlist.num_boxes() - - def coverage_box(bboxes): - y_min, x_min, y_max, x_max = tf.split( - value=bboxes, num_or_size_splits=4, axis=1) - y_min_coverage = tf.reduce_min(y_min, axis=0) - x_min_coverage = tf.reduce_min(x_min, axis=0) - y_max_coverage = tf.reduce_max(y_max, axis=0) - x_max_coverage = tf.reduce_max(x_max, axis=0) - return tf.stack( - [y_min_coverage, x_min_coverage, y_max_coverage, x_max_coverage], - axis=1) - - default_box = default_box or tf.constant([[0., 0., 1., 1.]]) - return tf.cond( - tf.greater_equal(num_boxes, 1), - true_fn=lambda: coverage_box(boxlist.get()), - false_fn=lambda: default_box) - - -def sample_boxes_by_jittering(boxlist, - num_boxes_to_sample, - stddev=0.1, - scope=None): - """Samples num_boxes_to_sample boxes by jittering around boxlist boxes. - - It is possible that this function might generate boxes with size 0. The larger - the stddev, this is more probable. For a small stddev of 0.1 this probability - is very small. - - Args: - boxlist: A boxlist containing N boxes in normalized coordinates. - num_boxes_to_sample: A positive integer containing the number of boxes to - sample. - stddev: Standard deviation. This is used to draw random offsets for the - box corners from a normal distribution. The offset is multiplied by the - box size so will be larger in terms of pixels for larger boxes. - scope: Name scope. - - Returns: - sampled_boxlist: A boxlist containing num_boxes_to_sample boxes in - normalized coordinates. - """ - with tf.name_scope(scope, 'SampleBoxesByJittering'): - num_boxes = boxlist.num_boxes() - box_indices = tf.random_uniform( - [num_boxes_to_sample], - minval=0, - maxval=num_boxes, - dtype=tf.int32) - sampled_boxes = tf.gather(boxlist.get(), box_indices) - sampled_boxes_height = sampled_boxes[:, 2] - sampled_boxes[:, 0] - sampled_boxes_width = sampled_boxes[:, 3] - sampled_boxes[:, 1] - rand_miny_gaussian = tf.random_normal([num_boxes_to_sample], stddev=stddev) - rand_minx_gaussian = tf.random_normal([num_boxes_to_sample], stddev=stddev) - rand_maxy_gaussian = tf.random_normal([num_boxes_to_sample], stddev=stddev) - rand_maxx_gaussian = tf.random_normal([num_boxes_to_sample], stddev=stddev) - miny = rand_miny_gaussian * sampled_boxes_height + sampled_boxes[:, 0] - minx = rand_minx_gaussian * sampled_boxes_width + sampled_boxes[:, 1] - maxy = rand_maxy_gaussian * sampled_boxes_height + sampled_boxes[:, 2] - maxx = rand_maxx_gaussian * sampled_boxes_width + sampled_boxes[:, 3] - maxy = tf.maximum(miny, maxy) - maxx = tf.maximum(minx, maxx) - sampled_boxes = tf.stack([miny, minx, maxy, maxx], axis=1) - sampled_boxes = tf.maximum(tf.minimum(sampled_boxes, 1.0), 0.0) - return box_list.BoxList(sampled_boxes) diff --git a/spaces/NCTCMumbai/NCTC/models/research/cognitive_mapping_and_planning/scripts/script_env_vis.py b/spaces/NCTCMumbai/NCTC/models/research/cognitive_mapping_and_planning/scripts/script_env_vis.py deleted file mode 100644 index 3690ff484fea9344db6fbe20ac54731200f0c84e..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/research/cognitive_mapping_and_planning/scripts/script_env_vis.py +++ /dev/null @@ -1,186 +0,0 @@ -# Copyright 2016 The TensorFlow Authors All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# ============================================================================== - -"""A simple python function to walk in the enviornments that we have created. -PYTHONPATH='.' PYOPENGL_PLATFORM=egl python scripts/script_env_vis.py \ - --dataset_name sbpd --building_name area3 -""" -import sys -import numpy as np -import matplotlib -matplotlib.use('TkAgg') -from PIL import ImageTk, Image -import Tkinter as tk -import logging -from tensorflow.python.platform import app -from tensorflow.python.platform import flags - -import datasets.nav_env_config as nec -import datasets.nav_env as nav_env -import cv2 -from datasets import factory -import render.swiftshader_renderer as renderer - -SwiftshaderRenderer = renderer.SwiftshaderRenderer -VisualNavigationEnv = nav_env.VisualNavigationEnv - -FLAGS = flags.FLAGS -flags.DEFINE_string('dataset_name', 'sbpd', 'Name of the dataset.') -flags.DEFINE_float('fov', 60., 'Field of view') -flags.DEFINE_integer('image_size', 512, 'Size of the image.') -flags.DEFINE_string('building_name', '', 'Name of the building.') - -def get_args(): - navtask = nec.nav_env_base_config() - navtask.task_params.type = 'rng_rejection_sampling_many' - navtask.task_params.rejection_sampling_M = 2000 - navtask.task_params.min_dist = 10 - sz = FLAGS.image_size - navtask.camera_param.fov = FLAGS.fov - navtask.camera_param.height = sz - navtask.camera_param.width = sz - navtask.task_params.img_height = sz - navtask.task_params.img_width = sz - - # navtask.task_params.semantic_task.class_map_names = ['chair', 'door', 'table'] - # navtask.task_params.type = 'to_nearest_obj_acc' - - logging.info('navtask: %s', navtask) - return navtask - -def load_building(dataset_name, building_name): - dataset = factory.get_dataset(dataset_name) - - navtask = get_args() - cp = navtask.camera_param - rgb_shader, d_shader = renderer.get_shaders(cp.modalities) - r_obj = SwiftshaderRenderer() - r_obj.init_display(width=cp.width, height=cp.height, - fov=cp.fov, z_near=cp.z_near, z_far=cp.z_far, - rgb_shader=rgb_shader, d_shader=d_shader) - r_obj.clear_scene() - b = VisualNavigationEnv(robot=navtask.robot, env=navtask.env, - task_params=navtask.task_params, - building_name=building_name, flip=False, - logdir=None, building_loader=dataset, - r_obj=r_obj) - b.load_building_into_scene() - b.set_building_visibility(False) - return b - -def walk_through(b): - # init agent at a random location in the environment. - init_env_state = b.reset([np.random.RandomState(0), np.random.RandomState(0)]) - - global current_node - rng = np.random.RandomState(0) - current_node = rng.choice(b.task.nodes.shape[0]) - - root = tk.Tk() - image = b.render_nodes(b.task.nodes[[current_node],:])[0] - print(image.shape) - image = image.astype(np.uint8) - im = Image.fromarray(image) - im = ImageTk.PhotoImage(im) - panel = tk.Label(root, image=im) - - map_size = b.traversible.shape - sc = np.max(map_size)/256. 
- loc = np.array([[map_size[1]/2., map_size[0]/2.]]) - x_axis = np.zeros_like(loc); x_axis[:,1] = sc - y_axis = np.zeros_like(loc); y_axis[:,0] = -sc - cum_fs, cum_valid = nav_env.get_map_to_predict(loc, x_axis, y_axis, - map=b.traversible*1., - map_size=256) - cum_fs = cum_fs[0] - cum_fs = cv2.applyColorMap((cum_fs*255).astype(np.uint8), cv2.COLORMAP_JET) - im = Image.fromarray(cum_fs) - im = ImageTk.PhotoImage(im) - panel_overhead = tk.Label(root, image=im) - - def refresh(): - global current_node - image = b.render_nodes(b.task.nodes[[current_node],:])[0] - image = image.astype(np.uint8) - im = Image.fromarray(image) - im = ImageTk.PhotoImage(im) - panel.configure(image=im) - panel.image = im - - def left_key(event): - global current_node - current_node = b.take_action([current_node], [2], 1)[0][0] - refresh() - - def up_key(event): - global current_node - current_node = b.take_action([current_node], [3], 1)[0][0] - refresh() - - def right_key(event): - global current_node - current_node = b.take_action([current_node], [1], 1)[0][0] - refresh() - - def quit(event): - root.destroy() - - panel_overhead.grid(row=4, column=5, rowspan=1, columnspan=1, - sticky=tk.W+tk.E+tk.N+tk.S) - panel.bind('', left_key) - panel.bind('', up_key) - panel.bind('', right_key) - panel.bind('q', quit) - panel.focus_set() - panel.grid(row=0, column=0, rowspan=5, columnspan=5, - sticky=tk.W+tk.E+tk.N+tk.S) - root.mainloop() - -def simple_window(): - root = tk.Tk() - - image = np.zeros((128, 128, 3), dtype=np.uint8) - image[32:96, 32:96, 0] = 255 - im = Image.fromarray(image) - im = ImageTk.PhotoImage(im) - - image = np.zeros((128, 128, 3), dtype=np.uint8) - image[32:96, 32:96, 1] = 255 - im2 = Image.fromarray(image) - im2 = ImageTk.PhotoImage(im2) - - panel = tk.Label(root, image=im) - - def left_key(event): - panel.configure(image=im2) - panel.image = im2 - - def quit(event): - sys.exit() - - panel.bind('', left_key) - panel.bind('', left_key) - panel.bind('', left_key) - panel.bind('q', quit) - panel.focus_set() - panel.pack(side = "bottom", fill = "both", expand = "yes") - root.mainloop() - -def main(_): - b = load_building(FLAGS.dataset_name, FLAGS.building_name) - walk_through(b) - -if __name__ == '__main__': - app.run() diff --git a/spaces/NSect/RealisticPhotoModels/README.md b/spaces/NSect/RealisticPhotoModels/README.md deleted file mode 100644 index 638a01187a737a6f640ea5654ae2acfb356f572c..0000000000000000000000000000000000000000 --- a/spaces/NSect/RealisticPhotoModels/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: ImagineAI Imagine Generator -emoji: 💩 -colorFrom: yellow -colorTo: pink -sdk: gradio -sdk_version: 3.24.1 -app_file: app.py -pinned: false -duplicated_from: DiffusionArtco/RealisticPhotoModels ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/constrained_decoding/README.md b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/constrained_decoding/README.md deleted file mode 100644 index e04b8b6a018214c8233fa87fd91d46a6dd1519d4..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/constrained_decoding/README.md +++ /dev/null @@ -1,123 +0,0 @@ -# (Vectorized) Lexically constrained decoding with dynamic beam allocation - -This page provides instructions for how to use lexically constrained decoding in Fairseq. 
-Fairseq implements the code described in the following papers: - -* [Fast Lexically Constrained Decoding With Dynamic Beam Allocation](https://www.aclweb.org/anthology/N18-1119/) (Post & Vilar, 2018) -* [Improved Lexically Constrained Decoding for Translation and Monolingual Rewriting](https://www.aclweb.org/anthology/N19-1090/) (Hu et al., 2019) - -## Quick start - -Constrained search is enabled by adding the command-line argument `--constraints` to `fairseq-interactive`. -Constraints are appended to each line of input, separated by tabs. Each constraint (one or more tokens) -is a separate field. - -The following command, using [Fairseq's WMT19 German--English model](https://github.com/pytorch/fairseq/blob/main/examples/wmt19/README.md), -translates the sentence *Die maschinelle Übersetzung ist schwer zu kontrollieren.* with the constraints -"hard" and "to influence". - - echo -e "Die maschinelle Übersetzung ist schwer zu kontrollieren.\thard\ttoinfluence" \ - | normalize.py | tok.py \ - | fairseq-interactive /path/to/model \ - --path /path/to/model/model1.pt \ - --bpe fastbpe \ - --bpe-codes /path/to/model/bpecodes \ - --constraints \ - -s de -t en \ - --beam 10 - -(tok.py and normalize.py can be found in the same directory as this README; they are just shortcuts around Fairseq's WMT19 preprocessing). -This will generate the following output: - - [snip] - S-0 Die masch@@ in@@ elle Über@@ setzung ist schwer zu kontrollieren . - W-0 1.844 seconds - C-0 hard - C-0 influence - H-0 -1.5333266258239746 Mach@@ ine trans@@ lation is hard to influence . - D-0 -1.5333266258239746 Machine translation is hard to influence . - P-0 -0.5434 -0.1423 -0.1930 -0.1415 -0.2346 -1.8031 -0.1701 -11.7727 -0.1815 -0.1511 - -By default, constraints are generated in the order supplied, with any number (zero or more) of tokens generated -between constraints. If you wish for the decoder to order the constraints, then use `--constraints unordered`. -Note that you may want to use a larger beam. - -## Implementation details - -The heart of the implementation is in `fairseq/search.py`, which adds a `LexicallyConstrainedBeamSearch` instance. -This instance of beam search tracks the progress of each hypothesis in the beam through the set of constraints -provided for each input sentence. It does this using one of two classes, both found in `fairseq/token_generation_contstraints.py`: - -* OrderedConstraintState: assumes the `C` input constraints will be generated in the provided order -* UnorderedConstraintState: tries to apply `C` (phrasal) constraints in all `C!` orders - -## Differences from Sockeye - -There are a number of [differences from Sockeye's implementation](https://awslabs.github.io/sockeye/inference.html#lexical-constraints). - -* Generating constraints in the order supplied (the default option here) is not available in Sockeye. -* Due to an improved beam allocation method, there is no need to prune the beam. -* Again due to better allocation, beam sizes as low as 10 or even 5 are often sufficient. -* [The vector extensions described in Hu et al.](https://github.com/edwardjhu/sockeye/tree/trie_constraints) (NAACL 2019) were never merged - into the main Sockeye branch. 
- -## Citation - -The paper first describing lexical constraints for seq2seq decoding is: - -```bibtex -@inproceedings{hokamp-liu-2017-lexically, - title = "Lexically Constrained Decoding for Sequence Generation Using Grid Beam Search", - author = "Hokamp, Chris and - Liu, Qun", - booktitle = "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", - month = jul, - year = "2017", - address = "Vancouver, Canada", - publisher = "Association for Computational Linguistics", - url = "https://www.aclweb.org/anthology/P17-1141", - doi = "10.18653/v1/P17-1141", - pages = "1535--1546", -} -``` - -The fairseq implementation uses the extensions described in - -```bibtex -@inproceedings{post-vilar-2018-fast, - title = "Fast Lexically Constrained Decoding with Dynamic Beam Allocation for Neural Machine Translation", - author = "Post, Matt and - Vilar, David", - booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)", - month = jun, - year = "2018", - address = "New Orleans, Louisiana", - publisher = "Association for Computational Linguistics", - url = "https://www.aclweb.org/anthology/N18-1119", - doi = "10.18653/v1/N18-1119", - pages = "1314--1324", -} -``` - -and - -```bibtex -@inproceedings{hu-etal-2019-improved, - title = "Improved Lexically Constrained Decoding for Translation and Monolingual Rewriting", - author = "Hu, J. Edward and - Khayrallah, Huda and - Culkin, Ryan and - Xia, Patrick and - Chen, Tongfei and - Post, Matt and - Van Durme, Benjamin", - booktitle = "Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)", - month = jun, - year = "2019", - address = "Minneapolis, Minnesota", - publisher = "Association for Computational Linguistics", - url = "https://www.aclweb.org/anthology/N19-1090", - doi = "10.18653/v1/N19-1090", - pages = "839--850", -} -``` diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/data/prepend_dataset.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/data/prepend_dataset.py deleted file mode 100644 index ad74784d2d7920e4a6225282d95543ce16ea50d9..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/data/prepend_dataset.py +++ /dev/null @@ -1,28 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import numpy as np -import torch - -from . 
import BaseWrapperDataset - - -class PrependDataset(BaseWrapperDataset): - def __init__(self, dataset, prepend_getter, ensure_first_token_is=None): - super().__init__(dataset) - self.prepend_getter = prepend_getter - self.ensure_first_token = ensure_first_token_is - - def __getitem__(self, idx): - item = self.dataset[idx] - is_tuple = isinstance(item, tuple) - src = item[0] if is_tuple else item - - assert self.ensure_first_token is None or src[0] == self.ensure_first_token - prepend_idx = self.prepend_getter(self.dataset, idx) - assert isinstance(prepend_idx, int) - src[0] = prepend_idx - item = tuple((src,) + item[1:]) if is_tuple else src - return item diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/modules/adaptive_input.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/modules/adaptive_input.py deleted file mode 100644 index 446534a9f8b87337a4dd752944ea386ff7cf7965..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/modules/adaptive_input.py +++ /dev/null @@ -1,80 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -from typing import List - -import torch -from fairseq.modules.quant_noise import quant_noise -from torch import nn - - -class AdaptiveInput(nn.Module): - def __init__( - self, - vocab_size: int, - padding_idx: int, - initial_dim: int, - factor: float, - output_dim: int, - cutoff: List[int], - q_noise: float = 0, - qn_block_size: int = 8, - ): - super().__init__() - - if vocab_size > cutoff[-1]: - cutoff = cutoff + [vocab_size] - else: - assert ( - vocab_size == cutoff[-1] - ), "cannot specify cutoff larger than vocab size" - - self.cutoff = cutoff - self.embedding_dim = output_dim - self.padding_idx = padding_idx - - self.embeddings = nn.ModuleList() - for i in range(len(self.cutoff)): - prev = self.cutoff[i - 1] if i > 0 else 0 - size = self.cutoff[i] - prev - dim = int(initial_dim // (factor ** i)) - seq = nn.Sequential( - nn.Embedding(size, dim, self.padding_idx), - quant_noise( - nn.Linear(dim, output_dim, bias=False), q_noise, qn_block_size - ), - ) - - self.embeddings.append(seq) - self.padding_idx = None - self.padding_idx = padding_idx - - def init_weights(m): - if isinstance(m, nn.Embedding): - nn.init.normal_(m.weight, mean=0, std=m.weight.shape[1] ** -0.5) - nn.init.constant_(m.weight[padding_idx], 0) - elif hasattr(m, "weight"): - nn.init.xavier_uniform_(m.weight) - - self.apply(init_weights) - - self.register_buffer("_float_tensor", torch.FloatTensor(1)) - - def weights_for_band(self, band: int): - return self.embeddings[band][0].weight, self.embeddings[band][1].weight - - def forward(self, input: torch.Tensor): - result = self._float_tensor.new(input.shape + (self.embedding_dim,)) - for i in range(len(self.cutoff)): - mask = input.lt(self.cutoff[i]) - if i > 0: - mask.mul_(input.ge(self.cutoff[i - 1])) - chunk_input = input[mask] - self.cutoff[i - 1] - else: - chunk_input = input[mask] - if mask.any(): - result[mask] = self.embeddings[i](chunk_input) - return result diff --git a/spaces/OFA-Sys/OFA-Visual_Grounding/fairseq/examples/latent_depth/README.md b/spaces/OFA-Sys/OFA-Visual_Grounding/fairseq/examples/latent_depth/README.md deleted file mode 100644 index 7774c333053b95d15b180fdfc3ee3cd817790520..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Visual_Grounding/fairseq/examples/latent_depth/README.md +++ /dev/null @@ -1,77 +0,0 @@ -# 
Deep Transformers with Latent Depth (Li et al., 2020) - -[https://arxiv.org/abs/2009.13102](https://arxiv.org/abs/2009.13102). - -## Introduction - -We present a probabilistic framework to automatically learn which layer(s) to use by learning the posterior distributions of layer selection. As an extension of this framework, we propose a novel method to train one shared Transformer network for multilingual machine translation with different layer selection posteriors for each language pair. - -## Training a multilingual model with latent depth - -Below is an example of training with latent depth in decoder for one-to-many (O2M) related languages. We use the same preprocessed (numberized and binarized) TED8 dataset as in [Balancing Training for Multilingual Neural Machine Translation (Wang et al., 2020)](https://github.com/cindyxinyiwang/multiDDS), which could be generated by [the script](https://github.com/cindyxinyiwang/multiDDS/blob/multiDDS/util_scripts/prepare_multilingual_data.sh) the author provided. -```bash -lang_pairs_str="eng-aze,eng-bel,eng-ces,eng-glg,eng-por,eng-rus,eng-slk,eng-tur" -databin_dir= - -fairseq-train ${databin_dir} \ - --user-dir examples/latent_depth/latent_depth_src \ - --lang-pairs "${lang_pairs_str}" \ - --arch multilingual_transformer_iwslt_de_en \ - --task multilingual_translation_latent_depth \ - --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \ - --share-encoders \ - --share-decoders \ - --decoder-langtok \ - --share-decoder-input-output-embed \ - --dropout 0.3 --attention-dropout 0.3 \ - --optimizer adam --adam-eps 1e-06 --adam-betas '(0.9, 0.98)' \ - --lr-scheduler inverse_sqrt --stop-min-lr 1e-9 --warmup-init-lr 1e-7 --warmup-updates 8000 \ - --max-tokens 4096 --update-freq 1 \ - --lr 0.0015 \ - --clip-norm 1.0 \ - --seed 2 \ - --ddp-backend=legacy_ddp \ - --encoder-layers 12 \ - --decoder-layers 24 \ - --decoder-latent-layer \ - --sparsity-weight 0.1 \ - --anneal-updates 5000 \ - --soft-update 500 \ - --target-layers 12 \ - --share-weight 0.1 -``` -## Inference command - -```bash -lang_pairs_str="eng-aze,eng-bel,eng-ces,eng-glg,eng-por,eng-rus,eng-slk,eng-tur" -databin_dir= -model_path= -src_lang= -tgt_lang= -gen_data= - -fairseq-generate ${databin_dir} \ - --path ${model_path} \ - --task multilingual_translation_latent_depth \ - --decoder-latent-layer \ - --lang-pairs "${lang_pairs_str}" \ - -s ${src_lang} -t ${tgt_lang} \ - --gen-subset $gen_data \ - --scoring sacrebleu \ - --remove-bpe 'sentencepiece' \ - --lenpen 1.0 \ - --beam 5 \ - --decoder-langtok \ - --max-tokens 4096 -``` - - -## Citation -```bibtex -@article{li2020deep, - title={Deep Transformers with Latent Depth}, - author={Li, Xian and Stickland, Asa Cooper and Tang, Yuqing and Kong, Xiang}, - journal={arXiv preprint arXiv:2009.13102}, - year={2020} -} -``` diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/models/text_to_speech/__init__.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/models/text_to_speech/__init__.py deleted file mode 100644 index 652fee0d685b61af47b314367037888fa640e1a7..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/models/text_to_speech/__init__.py +++ /dev/null @@ -1,8 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -from .tacotron2 import * # noqa -from .tts_transformer import * # noqa -from .fastspeech2 import * # noqa diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/optim/lr_scheduler/fixed_schedule.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/optim/lr_scheduler/fixed_schedule.py deleted file mode 100644 index d0e7e14b7e72b1151f7d7f19094430bbab64f8f0..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/optim/lr_scheduler/fixed_schedule.py +++ /dev/null @@ -1,76 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from dataclasses import dataclass, field -from typing import Optional, List -from omegaconf import II - -from fairseq.dataclass import FairseqDataclass -from fairseq.optim.lr_scheduler import FairseqLRScheduler, register_lr_scheduler - - -@dataclass -class FixedLRScheduleConfig(FairseqDataclass): - force_anneal: Optional[int] = field( - default=None, - metadata={"help": "force annealing at specified epoch"}, - ) - lr_shrink: float = field( - default=0.1, - metadata={"help": "shrink factor for annealing, lr_new = (lr * lr_shrink)"}, - ) - warmup_updates: int = field( - default=0, - metadata={"help": "warmup the learning rate linearly for the first N updates"}, - ) - lr: List[float] = II("optimization.lr") - - -@register_lr_scheduler("fixed", dataclass=FixedLRScheduleConfig) -class FixedLRSchedule(FairseqLRScheduler): - """Decay the LR on a fixed schedule.""" - - def __init__(self, cfg: FixedLRScheduleConfig, optimizer): - super().__init__(cfg, optimizer) - - self.lr = cfg.lr[0] - if cfg.warmup_updates > 0: - self.warmup_factor = 1.0 / cfg.warmup_updates - else: - self.warmup_factor = 1 - - def state_dict(self): - return {"lr": self.lr} - - def load_state_dict(self, state_dict): - if "lr" in state_dict: - self.lr = state_dict["lr"] - - def get_next_lr(self, epoch): - lrs = self.cfg.lr - if self.cfg.force_anneal is None or epoch < self.cfg.force_anneal: - # use fixed LR schedule - next_lr = lrs[min(epoch - 1, len(lrs) - 1)] - else: - # annneal based on lr_shrink - next_lr = lrs[-1] * self.cfg.lr_shrink ** ( - epoch + 1 - self.cfg.force_anneal - ) - return next_lr - - def step_begin_epoch(self, epoch): - """Update the learning rate at the beginning of the given epoch.""" - self.lr = self.get_next_lr(epoch) - self.optimizer.set_lr(self.warmup_factor * self.lr) - return self.optimizer.get_lr() - - def step_update(self, num_updates): - """Update the learning rate after each update.""" - if self.cfg.warmup_updates > 0 and num_updates < self.cfg.warmup_updates: - self.warmup_factor = (num_updates + 1) / float(self.cfg.warmup_updates) - self.optimizer.set_lr(self.warmup_factor * self.lr) - else: - self.optimizer.set_lr(self.lr) - return self.optimizer.get_lr() diff --git a/spaces/OIUGLK/bingo/tests/kblob.ts b/spaces/OIUGLK/bingo/tests/kblob.ts deleted file mode 100644 index 9e15b41c1c94a690beb61b23cdb42fc78767ccd2..0000000000000000000000000000000000000000 --- a/spaces/OIUGLK/bingo/tests/kblob.ts +++ /dev/null @@ -1,27 +0,0 @@ -import FormData from 'form-data' - -import { fetch } from '@/lib/isomorphic' - -const formData = new FormData() - -const knowledgeRequest = 
{"imageInfo":{"url":"https://www.baidu.com/img/PCfb_5bf082d29588c07f842ccde3f97243ea.png"},"knowledgeRequest":{"invokedSkills":["ImageById"],"subscriptionId":"Bing.Chat.Multimodal","invokedSkillsRequestData":{"enableFaceBlur":true},"convoData":{"convoid":"51D|BingProdUnAuthenticatedUsers|E3DCA904FF236C67C3450163BCEC64CFF3F618CC8A4AFD75FD518F5ED0ADA080","convotone":"Creative"}}} - -formData.append('knowledgeRequest', JSON.stringify(knowledgeRequest)) - - -fetch('https://bing.vcanbb.top/images/kblob', - { - method: 'POST', - body: formData.getBuffer(), - headers: { - "sec-ch-ua": "\"Not/A)Brand\";v=\"99\", \"Google Chrome\";v=\"115\", \"Chromium\";v=\"115\"", - "sec-ch-ua-mobile": "?0", - "sec-ch-ua-platform": "\"Windows\"", - "Referer": "https://bing.vcanbb.top/web/index.html", - "Referrer-Policy": "origin-when-cross-origin", - ...formData.getHeaders() - } - - } -).then(res => res.text()) -.then(res => console.log('res', res)) diff --git a/spaces/ORI-Muchim/ONFIRETTS/commons.py b/spaces/ORI-Muchim/ONFIRETTS/commons.py deleted file mode 100644 index 40fcc05364d4815971f5c6f9dbb8dcef8e3ec1e9..0000000000000000000000000000000000000000 --- a/spaces/ORI-Muchim/ONFIRETTS/commons.py +++ /dev/null @@ -1,172 +0,0 @@ -import math -import torch -from torch.nn import functional as F -import torch.jit - - -def script_method(fn, _rcb=None): - return fn - - -def script(obj, optimize=True, _frames_up=0, _rcb=None): - return obj - - -torch.jit.script_method = script_method -torch.jit.script = script - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. 
* logs_q) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d( - length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = ( - math.log(float(max_timescale) / float(min_timescale)) / - (num_timescales - 1)) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2,3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): 
- parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1. / norm_type) - return total_norm diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/grit/modeling/soft_nms.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/grit/modeling/soft_nms.py deleted file mode 100644 index 6a5aae7c4261191b8e07e0fd25055d8917f7f97d..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/grit/modeling/soft_nms.py +++ /dev/null @@ -1,177 +0,0 @@ -import torch - -from detectron2.structures import Boxes, RotatedBoxes, pairwise_iou, pairwise_iou_rotated - - -def soft_nms(boxes, scores, method, gaussian_sigma, linear_threshold, prune_threshold): - """ - Performs soft non-maximum suppression algorithm on axis aligned boxes - - Args: - boxes (Tensor[N, 5]): - boxes where NMS will be performed. They - are expected to be in (x_ctr, y_ctr, width, height, angle_degrees) format - scores (Tensor[N]): - scores for each one of the boxes - method (str): - one of ['gaussian', 'linear', 'hard'] - see paper for details. users encouraged not to use "hard", as this is the - same nms available elsewhere in detectron2 - gaussian_sigma (float): - parameter for Gaussian penalty function - linear_threshold (float): - iou threshold for applying linear decay. Nt from the paper - re-used as threshold for standard "hard" nms - prune_threshold (float): - boxes with scores below this threshold are pruned at each iteration. - Dramatically reduces computation time. Authors use values in [10e-4, 10e-2] - - Returns: - tuple(Tensor, Tensor): - [0]: int64 tensor with the indices of the elements that have been kept - by Soft NMS, sorted in decreasing order of scores - [1]: float tensor with the re-scored scores of the elements that were kept -""" - return _soft_nms( - Boxes, - pairwise_iou, - boxes, - scores, - method, - gaussian_sigma, - linear_threshold, - prune_threshold, - ) - - -def batched_soft_nms( - boxes, scores, idxs, method, gaussian_sigma, linear_threshold, prune_threshold -): - """ - Performs soft non-maximum suppression in a batched fashion. - - Each index value correspond to a category, and NMS - will not be applied between elements of different categories. - - Args: - boxes (Tensor[N, 4]): - boxes where NMS will be performed. They - are expected to be in (x1, y1, x2, y2) format - scores (Tensor[N]): - scores for each one of the boxes - idxs (Tensor[N]): - indices of the categories for each one of the boxes. - method (str): - one of ['gaussian', 'linear', 'hard'] - see paper for details. users encouraged not to use "hard", as this is the - same nms available elsewhere in detectron2 - gaussian_sigma (float): - parameter for Gaussian penalty function - linear_threshold (float): - iou threshold for applying linear decay. Nt from the paper - re-used as threshold for standard "hard" nms - prune_threshold (float): - boxes with scores below this threshold are pruned at each iteration. - Dramatically reduces computation time. 
Authors use values in [10e-4, 10e-2] - Returns: - tuple(Tensor, Tensor): - [0]: int64 tensor with the indices of the elements that have been kept - by Soft NMS, sorted in decreasing order of scores - [1]: float tensor with the re-scored scores of the elements that were kept - """ - if boxes.numel() == 0: - return ( - torch.empty((0,), dtype=torch.int64, device=boxes.device), - torch.empty((0,), dtype=torch.float32, device=scores.device), - ) - # strategy: in order to perform NMS independently per class. - # we add an offset to all the boxes. The offset is dependent - # only on the class idx, and is large enough so that boxes - # from different classes do not overlap - max_coordinate = boxes.max() - offsets = idxs.to(boxes) * (max_coordinate + 1) - boxes_for_nms = boxes + offsets[:, None] - return soft_nms( - boxes_for_nms, scores, method, gaussian_sigma, linear_threshold, prune_threshold - ) - - -def _soft_nms( - box_class, - pairwise_iou_func, - boxes, - scores, - method, - gaussian_sigma, - linear_threshold, - prune_threshold, -): - """ - Soft non-max suppression algorithm. - - Implementation of [Soft-NMS -- Improving Object Detection With One Line of Codec] - (https://arxiv.org/abs/1704.04503) - - Args: - box_class (cls): one of Box, RotatedBoxes - pairwise_iou_func (func): one of pairwise_iou, pairwise_iou_rotated - boxes (Tensor[N, ?]): - boxes where NMS will be performed - if Boxes, in (x1, y1, x2, y2) format - if RotatedBoxes, in (x_ctr, y_ctr, width, height, angle_degrees) format - scores (Tensor[N]): - scores for each one of the boxes - method (str): - one of ['gaussian', 'linear', 'hard'] - see paper for details. users encouraged not to use "hard", as this is the - same nms available elsewhere in detectron2 - gaussian_sigma (float): - parameter for Gaussian penalty function - linear_threshold (float): - iou threshold for applying linear decay. Nt from the paper - re-used as threshold for standard "hard" nms - prune_threshold (float): - boxes with scores below this threshold are pruned at each iteration. - Dramatically reduces computation time. 
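For reference, a minimal, self-contained sketch of the three score-decay rules described in this docstring (the same rules are applied inside `_soft_nms` below); the IoU and score values here are made up purely for illustration:

```python
import torch

# Hypothetical IoUs of the remaining boxes with the currently selected box,
# together with their current scores.
ious = torch.tensor([0.10, 0.45, 0.80])
scores = torch.tensor([0.90, 0.85, 0.70])
linear_threshold, gaussian_sigma = 0.3, 0.5

linear = torch.ones_like(ious)
linear[ious > linear_threshold] = 1 - ious[ious > linear_threshold]  # linear decay above Nt
gaussian = torch.exp(-ious.pow(2) / gaussian_sigma)                  # smooth Gaussian penalty
hard = (ious < linear_threshold).float()                             # classic hard NMS: keep or drop

print(scores * linear)    # overlapping boxes are down-weighted, not removed
print(scores * gaussian)
print(scores * hard)      # overlapping boxes are zeroed outright
```

Boxes whose decayed score drops below `prune_threshold` are then discarded on the next iteration, which is what keeps the loop in `_soft_nms` short.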
Authors use values in [10e-4, 10e-2] - - Returns: - tuple(Tensor, Tensor): - [0]: int64 tensor with the indices of the elements that have been kept - by Soft NMS, sorted in decreasing order of scores - [1]: float tensor with the re-scored scores of the elements that were kept - """ - boxes = boxes.clone() - scores = scores.clone() - idxs = torch.arange(scores.size()[0]) - - idxs_out = [] - scores_out = [] - - while scores.numel() > 0: - top_idx = torch.argmax(scores) - idxs_out.append(idxs[top_idx].item()) - scores_out.append(scores[top_idx].item()) - - top_box = boxes[top_idx] - ious = pairwise_iou_func(box_class(top_box.unsqueeze(0)), box_class(boxes))[0] - - if method == "linear": - decay = torch.ones_like(ious) - decay_mask = ious > linear_threshold - decay[decay_mask] = 1 - ious[decay_mask] - elif method == "gaussian": - decay = torch.exp(-torch.pow(ious, 2) / gaussian_sigma) - elif method == "hard": # standard NMS - decay = (ious < linear_threshold).float() - else: - raise NotImplementedError("{} soft nms method not implemented.".format(method)) - - scores *= decay - keep = scores > prune_threshold - keep[top_idx] = False - - boxes = boxes[keep] - scores = scores[keep] - idxs = idxs[keep] - - return torch.tensor(idxs_out).to(boxes.device), torch.tensor(scores_out).to(scores.device) \ No newline at end of file diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/inpainting_src/ldm_inpainting/ldm/modules/image_degradation/utils_image.py b/spaces/OpenGVLab/InternGPT/iGPT/models/inpainting_src/ldm_inpainting/ldm/modules/image_degradation/utils_image.py deleted file mode 100644 index 0175f155ad900ae33c3c46ed87f49b352e3faf98..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/inpainting_src/ldm_inpainting/ldm/modules/image_degradation/utils_image.py +++ /dev/null @@ -1,916 +0,0 @@ -import os -import math -import random -import numpy as np -import torch -import cv2 -from torchvision.utils import make_grid -from datetime import datetime -#import matplotlib.pyplot as plt # TODO: check with Dominik, also bsrgan.py vs bsrgan_light.py - - -os.environ["KMP_DUPLICATE_LIB_OK"]="TRUE" - - -''' -# -------------------------------------------- -# Kai Zhang (github: https://github.com/cszn) -# 03/Mar/2019 -# -------------------------------------------- -# https://github.com/twhui/SRGAN-pyTorch -# https://github.com/xinntao/BasicSR -# -------------------------------------------- -''' - - -IMG_EXTENSIONS = ['.jpg', '.JPG', '.jpeg', '.JPEG', '.png', '.PNG', '.ppm', '.PPM', '.bmp', '.BMP', '.tif'] - - -def is_image_file(filename): - return any(filename.endswith(extension) for extension in IMG_EXTENSIONS) - - -def get_timestamp(): - return datetime.now().strftime('%y%m%d-%H%M%S') - - -def imshow(x, title=None, cbar=False, figsize=None): - plt.figure(figsize=figsize) - plt.imshow(np.squeeze(x), interpolation='nearest', cmap='gray') - if title: - plt.title(title) - if cbar: - plt.colorbar() - plt.show() - - -def surf(Z, cmap='rainbow', figsize=None): - plt.figure(figsize=figsize) - ax3 = plt.axes(projection='3d') - - w, h = Z.shape[:2] - xx = np.arange(0,w,1) - yy = np.arange(0,h,1) - X, Y = np.meshgrid(xx, yy) - ax3.plot_surface(X,Y,Z,cmap=cmap) - #ax3.contour(X,Y,Z, zdim='z',offset=-2,cmap=cmap) - plt.show() - - -''' -# -------------------------------------------- -# get image pathes -# -------------------------------------------- -''' - - -def get_image_paths(dataroot): - paths = None # return None if dataroot is None - if dataroot is not None: - paths = 
sorted(_get_paths_from_images(dataroot)) - return paths - - -def _get_paths_from_images(path): - assert os.path.isdir(path), '{:s} is not a valid directory'.format(path) - images = [] - for dirpath, _, fnames in sorted(os.walk(path)): - for fname in sorted(fnames): - if is_image_file(fname): - img_path = os.path.join(dirpath, fname) - images.append(img_path) - assert images, '{:s} has no valid image file'.format(path) - return images - - -''' -# -------------------------------------------- -# split large images into small images -# -------------------------------------------- -''' - - -def patches_from_image(img, p_size=512, p_overlap=64, p_max=800): - w, h = img.shape[:2] - patches = [] - if w > p_max and h > p_max: - w1 = list(np.arange(0, w-p_size, p_size-p_overlap, dtype=np.int)) - h1 = list(np.arange(0, h-p_size, p_size-p_overlap, dtype=np.int)) - w1.append(w-p_size) - h1.append(h-p_size) -# print(w1) -# print(h1) - for i in w1: - for j in h1: - patches.append(img[i:i+p_size, j:j+p_size,:]) - else: - patches.append(img) - - return patches - - -def imssave(imgs, img_path): - """ - imgs: list, N images of size WxHxC - """ - img_name, ext = os.path.splitext(os.path.basename(img_path)) - - for i, img in enumerate(imgs): - if img.ndim == 3: - img = img[:, :, [2, 1, 0]] - new_path = os.path.join(os.path.dirname(img_path), img_name+str('_s{:04d}'.format(i))+'.png') - cv2.imwrite(new_path, img) - - -def split_imageset(original_dataroot, taget_dataroot, n_channels=3, p_size=800, p_overlap=96, p_max=1000): - """ - split the large images from original_dataroot into small overlapped images with size (p_size)x(p_size), - and save them into taget_dataroot; only the images with larger size than (p_max)x(p_max) - will be splitted. - Args: - original_dataroot: - taget_dataroot: - p_size: size of small images - p_overlap: patch size in training is a good choice - p_max: images with smaller size than (p_max)x(p_max) keep unchanged. - """ - paths = get_image_paths(original_dataroot) - for img_path in paths: - # img_name, ext = os.path.splitext(os.path.basename(img_path)) - img = imread_uint(img_path, n_channels=n_channels) - patches = patches_from_image(img, p_size, p_overlap, p_max) - imssave(patches, os.path.join(taget_dataroot,os.path.basename(img_path))) - #if original_dataroot == taget_dataroot: - #del img_path - -''' -# -------------------------------------------- -# makedir -# -------------------------------------------- -''' - - -def mkdir(path): - if not os.path.exists(path): - os.makedirs(path) - - -def mkdirs(paths): - if isinstance(paths, str): - mkdir(paths) - else: - for path in paths: - mkdir(path) - - -def mkdir_and_rename(path): - if os.path.exists(path): - new_name = path + '_archived_' + get_timestamp() - print('Path already exists. 
Rename it to [{:s}]'.format(new_name)) - os.rename(path, new_name) - os.makedirs(path) - - -''' -# -------------------------------------------- -# read image from path -# opencv is fast, but read BGR numpy image -# -------------------------------------------- -''' - - -# -------------------------------------------- -# get uint8 image of size HxWxn_channles (RGB) -# -------------------------------------------- -def imread_uint(path, n_channels=3): - # input: path - # output: HxWx3(RGB or GGG), or HxWx1 (G) - if n_channels == 1: - img = cv2.imread(path, 0) # cv2.IMREAD_GRAYSCALE - img = np.expand_dims(img, axis=2) # HxWx1 - elif n_channels == 3: - img = cv2.imread(path, cv2.IMREAD_UNCHANGED) # BGR or G - if img.ndim == 2: - img = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB) # GGG - else: - img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # RGB - return img - - -# -------------------------------------------- -# matlab's imwrite -# -------------------------------------------- -def imsave(img, img_path): - img = np.squeeze(img) - if img.ndim == 3: - img = img[:, :, [2, 1, 0]] - cv2.imwrite(img_path, img) - -def imwrite(img, img_path): - img = np.squeeze(img) - if img.ndim == 3: - img = img[:, :, [2, 1, 0]] - cv2.imwrite(img_path, img) - - - -# -------------------------------------------- -# get single image of size HxWxn_channles (BGR) -# -------------------------------------------- -def read_img(path): - # read image by cv2 - # return: Numpy float32, HWC, BGR, [0,1] - img = cv2.imread(path, cv2.IMREAD_UNCHANGED) # cv2.IMREAD_GRAYSCALE - img = img.astype(np.float32) / 255. - if img.ndim == 2: - img = np.expand_dims(img, axis=2) - # some images have 4 channels - if img.shape[2] > 3: - img = img[:, :, :3] - return img - - -''' -# -------------------------------------------- -# image format conversion -# -------------------------------------------- -# numpy(single) <---> numpy(unit) -# numpy(single) <---> tensor -# numpy(unit) <---> tensor -# -------------------------------------------- -''' - - -# -------------------------------------------- -# numpy(single) [0, 1] <---> numpy(unit) -# -------------------------------------------- - - -def uint2single(img): - - return np.float32(img/255.) - - -def single2uint(img): - - return np.uint8((img.clip(0, 1)*255.).round()) - - -def uint162single(img): - - return np.float32(img/65535.) - - -def single2uint16(img): - - return np.uint16((img.clip(0, 1)*65535.).round()) - - -# -------------------------------------------- -# numpy(unit) (HxWxC or HxW) <---> tensor -# -------------------------------------------- - - -# convert uint to 4-dimensional torch tensor -def uint2tensor4(img): - if img.ndim == 2: - img = np.expand_dims(img, axis=2) - return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float().div(255.).unsqueeze(0) - - -# convert uint to 3-dimensional torch tensor -def uint2tensor3(img): - if img.ndim == 2: - img = np.expand_dims(img, axis=2) - return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float().div(255.) 
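A quick usage sketch for the uint8/tensor helpers above (the import path `utils_image` and the random test image are assumptions for illustration):

```python
import numpy as np
import utils_image as util  # hypothetical import path for this module

img_uint8 = np.random.randint(0, 256, size=(64, 48, 3), dtype=np.uint8)  # HxWxC, RGB

t4 = util.uint2tensor4(img_uint8)  # torch.FloatTensor of shape 1x3x64x48, values in [0, 1]
t3 = util.uint2tensor3(img_uint8)  # same without the batch dimension: 3x64x48
img_back = util.tensor2uint(t4)    # back to uint8 HxWxC (tensor2uint is defined just below)

assert img_back.shape == img_uint8.shape
```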
- - -# convert 2/3/4-dimensional torch tensor to uint -def tensor2uint(img): - img = img.data.squeeze().float().clamp_(0, 1).cpu().numpy() - if img.ndim == 3: - img = np.transpose(img, (1, 2, 0)) - return np.uint8((img*255.0).round()) - - -# -------------------------------------------- -# numpy(single) (HxWxC) <---> tensor -# -------------------------------------------- - - -# convert single (HxWxC) to 3-dimensional torch tensor -def single2tensor3(img): - return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float() - - -# convert single (HxWxC) to 4-dimensional torch tensor -def single2tensor4(img): - return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float().unsqueeze(0) - - -# convert torch tensor to single -def tensor2single(img): - img = img.data.squeeze().float().cpu().numpy() - if img.ndim == 3: - img = np.transpose(img, (1, 2, 0)) - - return img - -# convert torch tensor to single -def tensor2single3(img): - img = img.data.squeeze().float().cpu().numpy() - if img.ndim == 3: - img = np.transpose(img, (1, 2, 0)) - elif img.ndim == 2: - img = np.expand_dims(img, axis=2) - return img - - -def single2tensor5(img): - return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1, 3).float().unsqueeze(0) - - -def single32tensor5(img): - return torch.from_numpy(np.ascontiguousarray(img)).float().unsqueeze(0).unsqueeze(0) - - -def single42tensor4(img): - return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1, 3).float() - - -# from skimage.io import imread, imsave -def tensor2img(tensor, out_type=np.uint8, min_max=(0, 1)): - ''' - Converts a torch Tensor into an image Numpy array of BGR channel order - Input: 4D(B,(3/1),H,W), 3D(C,H,W), or 2D(H,W), any range, RGB channel order - Output: 3D(H,W,C) or 2D(H,W), [0,255], np.uint8 (default) - ''' - tensor = tensor.squeeze().float().cpu().clamp_(*min_max) # squeeze first, then clamp - tensor = (tensor - min_max[0]) / (min_max[1] - min_max[0]) # to range [0,1] - n_dim = tensor.dim() - if n_dim == 4: - n_img = len(tensor) - img_np = make_grid(tensor, nrow=int(math.sqrt(n_img)), normalize=False).numpy() - img_np = np.transpose(img_np[[2, 1, 0], :, :], (1, 2, 0)) # HWC, BGR - elif n_dim == 3: - img_np = tensor.numpy() - img_np = np.transpose(img_np[[2, 1, 0], :, :], (1, 2, 0)) # HWC, BGR - elif n_dim == 2: - img_np = tensor.numpy() - else: - raise TypeError( - 'Only support 4D, 3D and 2D tensor. But received with dimension: {:d}'.format(n_dim)) - if out_type == np.uint8: - img_np = (img_np * 255.0).round() - # Important. Unlike matlab, numpy.unit8() WILL NOT round by default. - return img_np.astype(out_type) - - -''' -# -------------------------------------------- -# Augmentation, flipe and/or rotate -# -------------------------------------------- -# The following two are enough. 
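A short usage sketch for `tensor2img` above, assuming `tensor2img` (and the module-level `cv2` import) are in scope; the input tensors are random data and the output filename is arbitrary:

```python
import torch
import cv2  # cv2 is already imported at the top of this module

fake_batch = torch.rand(4, 3, 64, 64)           # four RGB images in [0, 1], BCHW
grid_bgr = tensor2img(fake_batch)               # 4D input -> one uint8 grid image, HxWxC, BGR
single_bgr = tensor2img(torch.rand(3, 64, 64))  # 3D input -> one uint8 image
cv2.imwrite('grid_preview.png', grid_bgr)       # BGR order matches what cv2.imwrite expects
```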
-# (1) augmet_img: numpy image of WxHxC or WxH -# (2) augment_img_tensor4: tensor image 1xCxWxH -# -------------------------------------------- -''' - - -def augment_img(img, mode=0): - '''Kai Zhang (github: https://github.com/cszn) - ''' - if mode == 0: - return img - elif mode == 1: - return np.flipud(np.rot90(img)) - elif mode == 2: - return np.flipud(img) - elif mode == 3: - return np.rot90(img, k=3) - elif mode == 4: - return np.flipud(np.rot90(img, k=2)) - elif mode == 5: - return np.rot90(img) - elif mode == 6: - return np.rot90(img, k=2) - elif mode == 7: - return np.flipud(np.rot90(img, k=3)) - - -def augment_img_tensor4(img, mode=0): - '''Kai Zhang (github: https://github.com/cszn) - ''' - if mode == 0: - return img - elif mode == 1: - return img.rot90(1, [2, 3]).flip([2]) - elif mode == 2: - return img.flip([2]) - elif mode == 3: - return img.rot90(3, [2, 3]) - elif mode == 4: - return img.rot90(2, [2, 3]).flip([2]) - elif mode == 5: - return img.rot90(1, [2, 3]) - elif mode == 6: - return img.rot90(2, [2, 3]) - elif mode == 7: - return img.rot90(3, [2, 3]).flip([2]) - - -def augment_img_tensor(img, mode=0): - '''Kai Zhang (github: https://github.com/cszn) - ''' - img_size = img.size() - img_np = img.data.cpu().numpy() - if len(img_size) == 3: - img_np = np.transpose(img_np, (1, 2, 0)) - elif len(img_size) == 4: - img_np = np.transpose(img_np, (2, 3, 1, 0)) - img_np = augment_img(img_np, mode=mode) - img_tensor = torch.from_numpy(np.ascontiguousarray(img_np)) - if len(img_size) == 3: - img_tensor = img_tensor.permute(2, 0, 1) - elif len(img_size) == 4: - img_tensor = img_tensor.permute(3, 2, 0, 1) - - return img_tensor.type_as(img) - - -def augment_img_np3(img, mode=0): - if mode == 0: - return img - elif mode == 1: - return img.transpose(1, 0, 2) - elif mode == 2: - return img[::-1, :, :] - elif mode == 3: - img = img[::-1, :, :] - img = img.transpose(1, 0, 2) - return img - elif mode == 4: - return img[:, ::-1, :] - elif mode == 5: - img = img[:, ::-1, :] - img = img.transpose(1, 0, 2) - return img - elif mode == 6: - img = img[:, ::-1, :] - img = img[::-1, :, :] - return img - elif mode == 7: - img = img[:, ::-1, :] - img = img[::-1, :, :] - img = img.transpose(1, 0, 2) - return img - - -def augment_imgs(img_list, hflip=True, rot=True): - # horizontal flip OR rotate - hflip = hflip and random.random() < 0.5 - vflip = rot and random.random() < 0.5 - rot90 = rot and random.random() < 0.5 - - def _augment(img): - if hflip: - img = img[:, ::-1, :] - if vflip: - img = img[::-1, :, :] - if rot90: - img = img.transpose(1, 0, 2) - return img - - return [_augment(img) for img in img_list] - - -''' -# -------------------------------------------- -# modcrop and shave -# -------------------------------------------- -''' - - -def modcrop(img_in, scale): - # img_in: Numpy, HWC or HW - img = np.copy(img_in) - if img.ndim == 2: - H, W = img.shape - H_r, W_r = H % scale, W % scale - img = img[:H - H_r, :W - W_r] - elif img.ndim == 3: - H, W, C = img.shape - H_r, W_r = H % scale, W % scale - img = img[:H - H_r, :W - W_r, :] - else: - raise ValueError('Wrong img ndim: [{:d}].'.format(img.ndim)) - return img - - -def shave(img_in, border=0): - # img_in: Numpy, HWC or HW - img = np.copy(img_in) - h, w = img.shape[:2] - img = img[border:h-border, border:w-border] - return img - - -''' -# -------------------------------------------- -# image processing process on numpy image -# channel_convert(in_c, tar_type, img_list): -# rgb2ycbcr(img, only_y=True): -# bgr2ycbcr(img, only_y=True): -# 
ycbcr2rgb(img): -# -------------------------------------------- -''' - - -def rgb2ycbcr(img, only_y=True): - '''same as matlab rgb2ycbcr - only_y: only return Y channel - Input: - uint8, [0, 255] - float, [0, 1] - ''' - in_img_type = img.dtype - img.astype(np.float32) - if in_img_type != np.uint8: - img *= 255. - # convert - if only_y: - rlt = np.dot(img, [65.481, 128.553, 24.966]) / 255.0 + 16.0 - else: - rlt = np.matmul(img, [[65.481, -37.797, 112.0], [128.553, -74.203, -93.786], - [24.966, 112.0, -18.214]]) / 255.0 + [16, 128, 128] - if in_img_type == np.uint8: - rlt = rlt.round() - else: - rlt /= 255. - return rlt.astype(in_img_type) - - -def ycbcr2rgb(img): - '''same as matlab ycbcr2rgb - Input: - uint8, [0, 255] - float, [0, 1] - ''' - in_img_type = img.dtype - img.astype(np.float32) - if in_img_type != np.uint8: - img *= 255. - # convert - rlt = np.matmul(img, [[0.00456621, 0.00456621, 0.00456621], [0, -0.00153632, 0.00791071], - [0.00625893, -0.00318811, 0]]) * 255.0 + [-222.921, 135.576, -276.836] - if in_img_type == np.uint8: - rlt = rlt.round() - else: - rlt /= 255. - return rlt.astype(in_img_type) - - -def bgr2ycbcr(img, only_y=True): - '''bgr version of rgb2ycbcr - only_y: only return Y channel - Input: - uint8, [0, 255] - float, [0, 1] - ''' - in_img_type = img.dtype - img.astype(np.float32) - if in_img_type != np.uint8: - img *= 255. - # convert - if only_y: - rlt = np.dot(img, [24.966, 128.553, 65.481]) / 255.0 + 16.0 - else: - rlt = np.matmul(img, [[24.966, 112.0, -18.214], [128.553, -74.203, -93.786], - [65.481, -37.797, 112.0]]) / 255.0 + [16, 128, 128] - if in_img_type == np.uint8: - rlt = rlt.round() - else: - rlt /= 255. - return rlt.astype(in_img_type) - - -def channel_convert(in_c, tar_type, img_list): - # conversion among BGR, gray and y - if in_c == 3 and tar_type == 'gray': # BGR to gray - gray_list = [cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) for img in img_list] - return [np.expand_dims(img, axis=2) for img in gray_list] - elif in_c == 3 and tar_type == 'y': # BGR to y - y_list = [bgr2ycbcr(img, only_y=True) for img in img_list] - return [np.expand_dims(img, axis=2) for img in y_list] - elif in_c == 1 and tar_type == 'RGB': # gray/y to BGR - return [cv2.cvtColor(img, cv2.COLOR_GRAY2BGR) for img in img_list] - else: - return img_list - - -''' -# -------------------------------------------- -# metric, PSNR and SSIM -# -------------------------------------------- -''' - - -# -------------------------------------------- -# PSNR -# -------------------------------------------- -def calculate_psnr(img1, img2, border=0): - # img1 and img2 have range [0, 255] - #img1 = img1.squeeze() - #img2 = img2.squeeze() - if not img1.shape == img2.shape: - raise ValueError('Input images must have the same dimensions.') - h, w = img1.shape[:2] - img1 = img1[border:h-border, border:w-border] - img2 = img2[border:h-border, border:w-border] - - img1 = img1.astype(np.float64) - img2 = img2.astype(np.float64) - mse = np.mean((img1 - img2)**2) - if mse == 0: - return float('inf') - return 20 * math.log10(255.0 / math.sqrt(mse)) - - -# -------------------------------------------- -# SSIM -# -------------------------------------------- -def calculate_ssim(img1, img2, border=0): - '''calculate SSIM - the same outputs as MATLAB's - img1, img2: [0, 255] - ''' - #img1 = img1.squeeze() - #img2 = img2.squeeze() - if not img1.shape == img2.shape: - raise ValueError('Input images must have the same dimensions.') - h, w = img1.shape[:2] - img1 = img1[border:h-border, border:w-border] - img2 = 
img2[border:h-border, border:w-border] - - if img1.ndim == 2: - return ssim(img1, img2) - elif img1.ndim == 3: - if img1.shape[2] == 3: - ssims = [] - for i in range(3): - ssims.append(ssim(img1[:,:,i], img2[:,:,i])) - return np.array(ssims).mean() - elif img1.shape[2] == 1: - return ssim(np.squeeze(img1), np.squeeze(img2)) - else: - raise ValueError('Wrong input image dimensions.') - - -def ssim(img1, img2): - C1 = (0.01 * 255)**2 - C2 = (0.03 * 255)**2 - - img1 = img1.astype(np.float64) - img2 = img2.astype(np.float64) - kernel = cv2.getGaussianKernel(11, 1.5) - window = np.outer(kernel, kernel.transpose()) - - mu1 = cv2.filter2D(img1, -1, window)[5:-5, 5:-5] # valid - mu2 = cv2.filter2D(img2, -1, window)[5:-5, 5:-5] - mu1_sq = mu1**2 - mu2_sq = mu2**2 - mu1_mu2 = mu1 * mu2 - sigma1_sq = cv2.filter2D(img1**2, -1, window)[5:-5, 5:-5] - mu1_sq - sigma2_sq = cv2.filter2D(img2**2, -1, window)[5:-5, 5:-5] - mu2_sq - sigma12 = cv2.filter2D(img1 * img2, -1, window)[5:-5, 5:-5] - mu1_mu2 - - ssim_map = ((2 * mu1_mu2 + C1) * (2 * sigma12 + C2)) / ((mu1_sq + mu2_sq + C1) * - (sigma1_sq + sigma2_sq + C2)) - return ssim_map.mean() - - -''' -# -------------------------------------------- -# matlab's bicubic imresize (numpy and torch) [0, 1] -# -------------------------------------------- -''' - - -# matlab 'imresize' function, now only support 'bicubic' -def cubic(x): - absx = torch.abs(x) - absx2 = absx**2 - absx3 = absx**3 - return (1.5*absx3 - 2.5*absx2 + 1) * ((absx <= 1).type_as(absx)) + \ - (-0.5*absx3 + 2.5*absx2 - 4*absx + 2) * (((absx > 1)*(absx <= 2)).type_as(absx)) - - -def calculate_weights_indices(in_length, out_length, scale, kernel, kernel_width, antialiasing): - if (scale < 1) and (antialiasing): - # Use a modified kernel to simultaneously interpolate and antialias- larger kernel width - kernel_width = kernel_width / scale - - # Output-space coordinates - x = torch.linspace(1, out_length, out_length) - - # Input-space coordinates. Calculate the inverse mapping such that 0.5 - # in output space maps to 0.5 in input space, and 0.5+scale in output - # space maps to 1.5 in input space. - u = x / scale + 0.5 * (1 - 1 / scale) - - # What is the left-most pixel that can be involved in the computation? - left = torch.floor(u - kernel_width / 2) - - # What is the maximum number of pixels that can be involved in the - # computation? Note: it's OK to use an extra pixel here; if the - # corresponding weights are all zero, it will be eliminated at the end - # of this function. - P = math.ceil(kernel_width) + 2 - - # The indices of the input pixels involved in computing the k-th output - # pixel are in row k of the indices matrix. - indices = left.view(out_length, 1).expand(out_length, P) + torch.linspace(0, P - 1, P).view( - 1, P).expand(out_length, P) - - # The weights used to compute the k-th output pixel are in row k of the - # weights matrix. - distance_to_center = u.view(out_length, 1).expand(out_length, P) - indices - # apply cubic kernel - if (scale < 1) and (antialiasing): - weights = scale * cubic(distance_to_center * scale) - else: - weights = cubic(distance_to_center) - # Normalize the weights matrix so that each row sums to 1. - weights_sum = torch.sum(weights, 1).view(out_length, 1) - weights = weights / weights_sum.expand(out_length, P) - - # If a column in weights is all zero, get rid of it. only consider the first and last column. 
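# For reference, `cubic` above implements the Keys bicubic kernel with a = -0.5, e.g.
# cubic(torch.tensor([0.0, 0.5, 1.0, 1.5, 2.0])) evaluates to [1.0, 0.5625, 0.0, -0.0625, 0.0];
# combined with the row-wise normalization above, the weights for each output pixel sum to 1.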
- weights_zero_tmp = torch.sum((weights == 0), 0) - if not math.isclose(weights_zero_tmp[0], 0, rel_tol=1e-6): - indices = indices.narrow(1, 1, P - 2) - weights = weights.narrow(1, 1, P - 2) - if not math.isclose(weights_zero_tmp[-1], 0, rel_tol=1e-6): - indices = indices.narrow(1, 0, P - 2) - weights = weights.narrow(1, 0, P - 2) - weights = weights.contiguous() - indices = indices.contiguous() - sym_len_s = -indices.min() + 1 - sym_len_e = indices.max() - in_length - indices = indices + sym_len_s - 1 - return weights, indices, int(sym_len_s), int(sym_len_e) - - -# -------------------------------------------- -# imresize for tensor image [0, 1] -# -------------------------------------------- -def imresize(img, scale, antialiasing=True): - # Now the scale should be the same for H and W - # input: img: pytorch tensor, CHW or HW [0,1] - # output: CHW or HW [0,1] w/o round - need_squeeze = True if img.dim() == 2 else False - if need_squeeze: - img.unsqueeze_(0) - in_C, in_H, in_W = img.size() - out_C, out_H, out_W = in_C, math.ceil(in_H * scale), math.ceil(in_W * scale) - kernel_width = 4 - kernel = 'cubic' - - # Return the desired dimension order for performing the resize. The - # strategy is to perform the resize first along the dimension with the - # smallest scale factor. - # Now we do not support this. - - # get weights and indices - weights_H, indices_H, sym_len_Hs, sym_len_He = calculate_weights_indices( - in_H, out_H, scale, kernel, kernel_width, antialiasing) - weights_W, indices_W, sym_len_Ws, sym_len_We = calculate_weights_indices( - in_W, out_W, scale, kernel, kernel_width, antialiasing) - # process H dimension - # symmetric copying - img_aug = torch.FloatTensor(in_C, in_H + sym_len_Hs + sym_len_He, in_W) - img_aug.narrow(1, sym_len_Hs, in_H).copy_(img) - - sym_patch = img[:, :sym_len_Hs, :] - inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(1, inv_idx) - img_aug.narrow(1, 0, sym_len_Hs).copy_(sym_patch_inv) - - sym_patch = img[:, -sym_len_He:, :] - inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(1, inv_idx) - img_aug.narrow(1, sym_len_Hs + in_H, sym_len_He).copy_(sym_patch_inv) - - out_1 = torch.FloatTensor(in_C, out_H, in_W) - kernel_width = weights_H.size(1) - for i in range(out_H): - idx = int(indices_H[i][0]) - for j in range(out_C): - out_1[j, i, :] = img_aug[j, idx:idx + kernel_width, :].transpose(0, 1).mv(weights_H[i]) - - # process W dimension - # symmetric copying - out_1_aug = torch.FloatTensor(in_C, out_H, in_W + sym_len_Ws + sym_len_We) - out_1_aug.narrow(2, sym_len_Ws, in_W).copy_(out_1) - - sym_patch = out_1[:, :, :sym_len_Ws] - inv_idx = torch.arange(sym_patch.size(2) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(2, inv_idx) - out_1_aug.narrow(2, 0, sym_len_Ws).copy_(sym_patch_inv) - - sym_patch = out_1[:, :, -sym_len_We:] - inv_idx = torch.arange(sym_patch.size(2) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(2, inv_idx) - out_1_aug.narrow(2, sym_len_Ws + in_W, sym_len_We).copy_(sym_patch_inv) - - out_2 = torch.FloatTensor(in_C, out_H, out_W) - kernel_width = weights_W.size(1) - for i in range(out_W): - idx = int(indices_W[i][0]) - for j in range(out_C): - out_2[j, :, i] = out_1_aug[j, :, idx:idx + kernel_width].mv(weights_W[i]) - if need_squeeze: - out_2.squeeze_() - return out_2 - - -# -------------------------------------------- -# imresize for numpy image [0, 1] -# -------------------------------------------- -def 
imresize_np(img, scale, antialiasing=True): - # Now the scale should be the same for H and W - # input: img: Numpy, HWC or HW [0,1] - # output: HWC or HW [0,1] w/o round - img = torch.from_numpy(img) - need_squeeze = True if img.dim() == 2 else False - if need_squeeze: - img.unsqueeze_(2) - - in_H, in_W, in_C = img.size() - out_C, out_H, out_W = in_C, math.ceil(in_H * scale), math.ceil(in_W * scale) - kernel_width = 4 - kernel = 'cubic' - - # Return the desired dimension order for performing the resize. The - # strategy is to perform the resize first along the dimension with the - # smallest scale factor. - # Now we do not support this. - - # get weights and indices - weights_H, indices_H, sym_len_Hs, sym_len_He = calculate_weights_indices( - in_H, out_H, scale, kernel, kernel_width, antialiasing) - weights_W, indices_W, sym_len_Ws, sym_len_We = calculate_weights_indices( - in_W, out_W, scale, kernel, kernel_width, antialiasing) - # process H dimension - # symmetric copying - img_aug = torch.FloatTensor(in_H + sym_len_Hs + sym_len_He, in_W, in_C) - img_aug.narrow(0, sym_len_Hs, in_H).copy_(img) - - sym_patch = img[:sym_len_Hs, :, :] - inv_idx = torch.arange(sym_patch.size(0) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(0, inv_idx) - img_aug.narrow(0, 0, sym_len_Hs).copy_(sym_patch_inv) - - sym_patch = img[-sym_len_He:, :, :] - inv_idx = torch.arange(sym_patch.size(0) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(0, inv_idx) - img_aug.narrow(0, sym_len_Hs + in_H, sym_len_He).copy_(sym_patch_inv) - - out_1 = torch.FloatTensor(out_H, in_W, in_C) - kernel_width = weights_H.size(1) - for i in range(out_H): - idx = int(indices_H[i][0]) - for j in range(out_C): - out_1[i, :, j] = img_aug[idx:idx + kernel_width, :, j].transpose(0, 1).mv(weights_H[i]) - - # process W dimension - # symmetric copying - out_1_aug = torch.FloatTensor(out_H, in_W + sym_len_Ws + sym_len_We, in_C) - out_1_aug.narrow(1, sym_len_Ws, in_W).copy_(out_1) - - sym_patch = out_1[:, :sym_len_Ws, :] - inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(1, inv_idx) - out_1_aug.narrow(1, 0, sym_len_Ws).copy_(sym_patch_inv) - - sym_patch = out_1[:, -sym_len_We:, :] - inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(1, inv_idx) - out_1_aug.narrow(1, sym_len_Ws + in_W, sym_len_We).copy_(sym_patch_inv) - - out_2 = torch.FloatTensor(out_H, out_W, in_C) - kernel_width = weights_W.size(1) - for i in range(out_W): - idx = int(indices_W[i][0]) - for j in range(out_C): - out_2[:, i, j] = out_1_aug[:, idx:idx + kernel_width, j].mv(weights_W[i]) - if need_squeeze: - out_2.squeeze_() - - return out_2.numpy() - - -if __name__ == '__main__': - print('---') -# img = imread_uint('test.bmp', 3) -# img = uint2single(img) -# img_bicubic = imresize_np(img, 1/4) \ No newline at end of file diff --git a/spaces/OpenGVLab/InternGPT/third-party/lama/bin/saicinpainting/training/visualizers/noop.py b/spaces/OpenGVLab/InternGPT/third-party/lama/bin/saicinpainting/training/visualizers/noop.py deleted file mode 100644 index 4175089a54a8484d51e6c879c1a99c4e4d961d15..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/third-party/lama/bin/saicinpainting/training/visualizers/noop.py +++ /dev/null @@ -1,9 +0,0 @@ -from saicinpainting.training.visualizers.base import BaseVisualizer - - -class NoopVisualizer(BaseVisualizer): - def __init__(self, *args, **kwargs): - pass - - def __call__(self, epoch_i, batch_i, batch, 
suffix='', rank=None): - pass diff --git a/spaces/Osborn-bh/ChatGLM3-6B-Osborn/tool_using/README_en.md b/spaces/Osborn-bh/ChatGLM3-6B-Osborn/tool_using/README_en.md deleted file mode 100644 index 0e53479e781ac878629526b156b1ee98404b318f..0000000000000000000000000000000000000000 --- a/spaces/Osborn-bh/ChatGLM3-6B-Osborn/tool_using/README_en.md +++ /dev/null @@ -1,75 +0,0 @@ -# Tool Invocation -This document will introduce how to use the ChatGLM3-6B for tool invocation. Currently, only the ChatGLM3-6B model supports tool invocation, while the ChatGLM3-6B-Base and ChatGLM3-6B-32K models do not support it. - -## Building System Prompt -Here are two examples of tool invocation. First, prepare the description information of the data to be built. - -```python -tools = [ - { - "name": "track", - "description": "Track the real-time price of a specified stock", - "parameters": { - "type": "object", - "properties": { - "symbol": { - "description": "The stock code that needs to be tracked" - } - }, - "required": ['symbol'] - } - }, - { - "name": "text-to-speech", - "description": "Convert text to speech", - "parameters": { - "type": "object", - "properties": { - "text": { - "description": "The text that needs to be converted into speech" - }, - "voice": { - "description": "The type of voice to use (male, female, etc.)" - }, - "speed": { - "description": "The speed of the speech (fast, medium, slow, etc.)" - } - }, - "required": ['text'] - } - } -] -system_info = {"role": "system", "content": "Answer the following questions as best as you can. You have access to the following tools:", "tools": tools} -``` - -Please ensure that the definition format of the tool is consistent with the example to obtain optimal performance. - -## Asking Questions -Note: Currently, the tool invocation of ChatGLM3-6B only supports the `chat` method and does not support the `stream_chat` method. -```python -history = [system_info] -query = "Help me inquire the price of stock 10111" -response, history = model.chat(tokenizer, query, history=history) -print(response) -``` -The expected output here is -```json -{"name": "track", "parameters": {"symbol": "10111"}} -``` -This indicates that the model needs to call the tool `track`, and the parameter `symbol` needs to be passed in. - -## Invoke Tool, Generate Response -Here, you need to implement the logic of calling the tool yourself. Assuming that the return result has been obtained, return the result to the model in json format and get a response. -```python -result = json.dumps({"price": 12412}, ensure_ascii=False) -response, history = model.chat(tokenizer, result, history=history, role="observation") -print(response) -``` -Here `role="observation"` indicates that the input is the return value of the tool invocation rather than user input, and it cannot be omitted. - -The expected output is -``` -Based on your query, after the API call, the price of stock 10111 is 12412. -``` - -This indicates that this tool invocation has ended, and the model generates a response based on the return result. For more complex questions, the model may need to make multiple tool invocations. At this time, you can judge whether the returned `response` is `str` or `dict` to determine whether the return is a generated response or a tool invocation request. 
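A minimal sketch of such a dispatch loop, following the interface shown above (the `tool_impls` registry and its entries are hypothetical placeholders; only `model.chat` and `role="observation"` come from the examples in this document):

```python
import json

# Hypothetical local implementations of the tools declared in `tools`.
tool_impls = {
    "track": lambda symbol: {"price": 12412},
    "text-to-speech": lambda text, voice="female", speed="medium": {"status": "ok"},
}

response, history = model.chat(tokenizer, "Help me inquire the price of stock 10111", history=[system_info])
while isinstance(response, dict):  # a dict means the model is requesting a tool call
    tool_fn = tool_impls[response["name"]]
    result = json.dumps(tool_fn(**response["parameters"]), ensure_ascii=False)
    response, history = model.chat(tokenizer, result, history=history, role="observation")
print(response)  # a str is the final natural-language answer
```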
\ No newline at end of file diff --git a/spaces/OscarLiu/MybingGPT/Dockerfile b/spaces/OscarLiu/MybingGPT/Dockerfile deleted file mode 100644 index ba9fd0454b8d5f9d83bad1a119a943796b6da1ed..0000000000000000000000000000000000000000 --- a/spaces/OscarLiu/MybingGPT/Dockerfile +++ /dev/null @@ -1,34 +0,0 @@ -# Build Stage -# 使用 golang:alpine 作为构建阶段的基础镜像 -FROM golang:alpine AS builder - -# 添加 git,以便之后能从GitHub克隆项目 -RUN apk --no-cache add git - -# 从 GitHub 克隆 go-proxy-bingai 项目到 /workspace/app 目录下 -RUN git clone https://github.com/Harry-zklcdc/go-proxy-bingai.git /workspace/app - -# 设置工作目录为之前克隆的项目目录 -WORKDIR /workspace/app - -# 编译 go 项目。-ldflags="-s -w" 是为了减少编译后的二进制大小 -RUN go build -ldflags="-s -w" -tags netgo -trimpath -o go-proxy-bingai main.go - -# Runtime Stage -# 使用轻量级的 alpine 镜像作为运行时的基础镜像 -FROM alpine - -# 设置工作目录 -WORKDIR /workspace/app - -# 从构建阶段复制编译后的二进制文件到运行时镜像中 -COPY --from=builder /workspace/app/go-proxy-bingai . - -# 设置环境变量,此处为随机字符 -ENV Go_Proxy_BingAI_USER_TOKEN_1="10oT6v1wb3ZgNo6yzaMfjJIEMgApo_ilf94cMAESnSr18f59SChqVt_k7BTgcTtQV6LwAUipyjXvX3A6an0dx5eTVmgg5OnDkTyDVweto5Tfg0aPHtbJ8EUUBN3YrC6SzWl0z4KhKAETxR86uqF_1E-4uFO7jSBdvqvmGxn7y7r_Xsbek6KG1qGPNFalbwrdGTEXst9uCMwKoLqaZL_DSXA" - -# 暴露8080端口 -EXPOSE 8080 - -# 容器启动时运行的命令 -CMD ["/workspace/app/go-proxy-bingai"] \ No newline at end of file diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/configs/_base_/models/cgnet.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/configs/_base_/models/cgnet.py deleted file mode 100644 index eff8d9458c877c5db894957e0b1b4597e40da6ab..0000000000000000000000000000000000000000 --- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/configs/_base_/models/cgnet.py +++ /dev/null @@ -1,35 +0,0 @@ -# model settings -norm_cfg = dict(type='SyncBN', eps=1e-03, requires_grad=True) -model = dict( - type='EncoderDecoder', - backbone=dict( - type='CGNet', - norm_cfg=norm_cfg, - in_channels=3, - num_channels=(32, 64, 128), - num_blocks=(3, 21), - dilations=(2, 4), - reductions=(8, 16)), - decode_head=dict( - type='FCNHead', - in_channels=256, - in_index=2, - channels=256, - num_convs=0, - concat_input=False, - dropout_ratio=0, - num_classes=19, - norm_cfg=norm_cfg, - loss_decode=dict( - type='CrossEntropyLoss', - use_sigmoid=False, - loss_weight=1.0, - class_weight=[ - 2.5959933, 6.7415504, 3.5354059, 9.8663225, 9.690899, 9.369352, - 10.289121, 9.953208, 4.3097677, 9.490387, 7.674431, 9.396905, - 10.347791, 6.3927646, 10.226669, 10.241062, 10.280587, - 10.396974, 10.055647 - ])), - # model training and testing settings - train_cfg=dict(sampler=None), - test_cfg=dict(mode='whole')) diff --git a/spaces/PAIR/Text2Video-Zero/text_to_video_pipeline.py b/spaces/PAIR/Text2Video-Zero/text_to_video_pipeline.py deleted file mode 100644 index 173b3985e261d00755aabcb8cccf2045e2fa2ab5..0000000000000000000000000000000000000000 --- a/spaces/PAIR/Text2Video-Zero/text_to_video_pipeline.py +++ /dev/null @@ -1,504 +0,0 @@ -from diffusers import StableDiffusionPipeline -import torch -from dataclasses import dataclass -from typing import Callable, List, Optional, Union -import numpy as np -from diffusers.utils import deprecate, logging, BaseOutput -from einops import rearrange, repeat -from torch.nn.functional import grid_sample -import torchvision.transforms as T -from transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer -from diffusers.models import AutoencoderKL, UNet2DConditionModel -from diffusers.schedulers import KarrasDiffusionSchedulers -from diffusers.pipelines.stable_diffusion import 
StableDiffusionSafetyChecker -import PIL -from PIL import Image -from kornia.morphology import dilation - - -@dataclass -class TextToVideoPipelineOutput(BaseOutput): - # videos: Union[torch.Tensor, np.ndarray] - # code: Union[torch.Tensor, np.ndarray] - images: Union[List[PIL.Image.Image], np.ndarray] - nsfw_content_detected: Optional[List[bool]] - - -def coords_grid(batch, ht, wd, device): - # Adapted from https://github.com/princeton-vl/RAFT/blob/master/core/utils/utils.py - coords = torch.meshgrid(torch.arange( - ht, device=device), torch.arange(wd, device=device)) - coords = torch.stack(coords[::-1], dim=0).float() - return coords[None].repeat(batch, 1, 1, 1) - - -class TextToVideoPipeline(StableDiffusionPipeline): - def __init__( - self, - vae: AutoencoderKL, - text_encoder: CLIPTextModel, - tokenizer: CLIPTokenizer, - unet: UNet2DConditionModel, - scheduler: KarrasDiffusionSchedulers, - safety_checker: StableDiffusionSafetyChecker, - feature_extractor: CLIPFeatureExtractor, - requires_safety_checker: bool = True, - ): - super().__init__(vae, text_encoder, tokenizer, unet, scheduler, - safety_checker, feature_extractor, requires_safety_checker) - - def DDPM_forward(self, x0, t0, tMax, generator, device, shape, text_embeddings): - rand_device = "cpu" if device.type == "mps" else device - - if x0 is None: - return torch.randn(shape, generator=generator, device=rand_device, dtype=text_embeddings.dtype).to(device) - else: - eps = torch.randn(x0.shape, dtype=text_embeddings.dtype, generator=generator, - device=rand_device) - alpha_vec = torch.prod(self.scheduler.alphas[t0:tMax]) - - xt = torch.sqrt(alpha_vec) * x0 + \ - torch.sqrt(1-alpha_vec) * eps - return xt - - def prepare_latents(self, batch_size, num_channels_latents, video_length, height, width, dtype, device, generator, latents=None): - shape = (batch_size, num_channels_latents, video_length, height // - self.vae_scale_factor, width // self.vae_scale_factor) - if isinstance(generator, list) and len(generator) != batch_size: - raise ValueError( - f"You have passed a list of generators of length {len(generator)}, but requested an effective batch" - f" size of {batch_size}. Make sure the batch size matches the length of the generators." 
- ) - - if latents is None: - rand_device = "cpu" if device.type == "mps" else device - - if isinstance(generator, list): - shape = (1,) + shape[1:] - latents = [ - torch.randn( - shape, generator=generator[i], device=rand_device, dtype=dtype) - for i in range(batch_size) - ] - latents = torch.cat(latents, dim=0).to(device) - else: - latents = torch.randn( - shape, generator=generator, device=rand_device, dtype=dtype).to(device) - else: - latents = latents.to(device) - - # scale the initial noise by the standard deviation required by the scheduler - latents = latents * self.scheduler.init_noise_sigma - return latents - - def warp_latents_independently(self, latents, reference_flow): - _, _, H, W = reference_flow.size() - b, _, f, h, w = latents.size() - assert b == 1 - coords0 = coords_grid(f, H, W, device=latents.device).to(latents.dtype) - - coords_t0 = coords0 + reference_flow - coords_t0[:, 0] /= W - coords_t0[:, 1] /= H - - coords_t0 = coords_t0 * 2.0 - 1.0 - - coords_t0 = T.Resize((h, w))(coords_t0) - - coords_t0 = rearrange(coords_t0, 'f c h w -> f h w c') - - latents_0 = rearrange(latents[0], 'c f h w -> f c h w') - warped = grid_sample(latents_0, coords_t0, - mode='nearest', padding_mode='reflection') - - warped = rearrange(warped, '(b f) c h w -> b c f h w', f=f) - return warped - - def DDIM_backward(self, num_inference_steps, timesteps, skip_t, t0, t1, do_classifier_free_guidance, null_embs, text_embeddings, latents_local, - latents_dtype, guidance_scale, guidance_stop_step, callback, callback_steps, extra_step_kwargs, num_warmup_steps): - entered = False - - f = latents_local.shape[2] - - latents_local = rearrange(latents_local, "b c f w h -> (b f) c w h") - - latents = latents_local.detach().clone() - x_t0_1 = None - x_t1_1 = None - - with self.progress_bar(total=num_inference_steps) as progress_bar: - for i, t in enumerate(timesteps): - if t > skip_t: - continue - else: - if not entered: - print( - f"Continue DDIM with i = {i}, t = {t}, latent = {latents.shape}, device = {latents.device}, type = {latents.dtype}") - entered = True - - latents = latents.detach() - # expand the latents if we are doing classifier free guidance - latent_model_input = torch.cat( - [latents] * 2) if do_classifier_free_guidance else latents - latent_model_input = self.scheduler.scale_model_input( - latent_model_input, t) - - # predict the noise residual - with torch.no_grad(): - if null_embs is not None: - text_embeddings[0] = null_embs[i][0] - te = torch.cat([repeat(text_embeddings[0, :, :], "c k -> f c k", f=f), - repeat(text_embeddings[1, :, :], "c k -> f c k", f=f)]) - noise_pred = self.unet( - latent_model_input, t, encoder_hidden_states=te).sample.to(dtype=latents_dtype) - - # perform guidance - if do_classifier_free_guidance: - noise_pred_uncond, noise_pred_text = noise_pred.chunk( - 2) - noise_pred = noise_pred_uncond + guidance_scale * \ - (noise_pred_text - noise_pred_uncond) - - if i >= guidance_stop_step * len(timesteps): - alpha = 0 - # compute the previous noisy sample x_t -> x_t-1 - latents = self.scheduler.step( - noise_pred, t, latents, **extra_step_kwargs).prev_sample - # latents = latents - alpha * grads / (torch.norm(grads) + 1e-10) - # call the callback, if provided - - if i < len(timesteps)-1 and timesteps[i+1] == t0: - x_t0_1 = latents.detach().clone() - print(f"latent t0 found at i = {i}, t = {t}") - elif i < len(timesteps)-1 and timesteps[i+1] == t1: - x_t1_1 = latents.detach().clone() - print(f"latent t1 found at i={i}, t = {t}") - - if i == len(timesteps) - 1 or ((i + 1) > 
num_warmup_steps and (i + 1) % self.scheduler.order == 0): - progress_bar.update() - if callback is not None and i % callback_steps == 0: - callback(i, t, latents) - - latents = rearrange(latents, "(b f) c w h -> b c f w h", f=f) - - res = {"x0": latents.detach().clone()} - if x_t0_1 is not None: - x_t0_1 = rearrange(x_t0_1, "(b f) c w h -> b c f w h", f=f) - res["x_t0_1"] = x_t0_1.detach().clone() - if x_t1_1 is not None: - x_t1_1 = rearrange(x_t1_1, "(b f) c w h -> b c f w h", f=f) - res["x_t1_1"] = x_t1_1.detach().clone() - return res - - def decode_latents(self, latents): - video_length = latents.shape[2] - latents = 1 / 0.18215 * latents - latents = rearrange(latents, "b c f h w -> (b f) c h w") - video = self.vae.decode(latents).sample - video = rearrange(video, "(b f) c h w -> b c f h w", f=video_length) - video = (video / 2 + 0.5).clamp(0, 1) - # we always cast to float32 as this does not cause significant overhead and is compatible with bfloa16 - video = video.detach().cpu() - return video - - def create_motion_field(self, motion_field_strength_x, motion_field_strength_y, frame_ids, video_length, latents): - - reference_flow = torch.zeros( - (video_length-1, 2, 512, 512), device=latents.device, dtype=latents.dtype) - for fr_idx, frame_id in enumerate(frame_ids): - reference_flow[fr_idx, 0, :, - :] = motion_field_strength_x*(frame_id) - reference_flow[fr_idx, 1, :, - :] = motion_field_strength_y*(frame_id) - return reference_flow - - def create_motion_field_and_warp_latents(self, motion_field_strength_x, motion_field_strength_y, frame_ids, video_length, latents): - - motion_field = self.create_motion_field(motion_field_strength_x=motion_field_strength_x, - motion_field_strength_y=motion_field_strength_y, latents=latents, video_length=video_length, frame_ids=frame_ids) - for idx, latent in enumerate(latents): - latents[idx] = self.warp_latents_independently( - latent[None], motion_field) - return motion_field, latents - - @torch.no_grad() - def __call__( - self, - prompt: Union[str, List[str]], - video_length: Optional[int], - height: Optional[int] = None, - width: Optional[int] = None, - num_inference_steps: int = 50, - guidance_scale: float = 7.5, - guidance_stop_step: float = 0.5, - negative_prompt: Optional[Union[str, List[str]]] = None, - num_videos_per_prompt: Optional[int] = 1, - eta: float = 0.0, - generator: Optional[Union[torch.Generator, - List[torch.Generator]]] = None, - xT: Optional[torch.FloatTensor] = None, - null_embs: Optional[torch.FloatTensor] = None, - motion_field_strength_x: float = 12, - motion_field_strength_y: float = 12, - output_type: Optional[str] = "tensor", - return_dict: bool = True, - callback: Optional[Callable[[ - int, int, torch.FloatTensor], None]] = None, - callback_steps: Optional[int] = 1, - use_motion_field: bool = True, - smooth_bg: bool = False, - smooth_bg_strength: float = 0.4, - t0: int = 44, - t1: int = 47, - **kwargs, - ): - frame_ids = kwargs.pop("frame_ids", list(range(video_length))) - assert t0 < t1 - assert num_videos_per_prompt == 1 - assert isinstance(prompt, list) and len(prompt) > 0 - assert isinstance(negative_prompt, list) or negative_prompt is None - - prompt_types = [prompt, negative_prompt] - - for idx, prompt_type in enumerate(prompt_types): - prompt_template = None - for prompt in prompt_type: - if prompt_template is None: - prompt_template = prompt - else: - assert prompt == prompt_template - if prompt_types[idx] is not None: - prompt_types[idx] = prompt_types[idx][0] - prompt = prompt_types[0] - negative_prompt = 
prompt_types[1] - - # Default height and width to unet - height = height or self.unet.config.sample_size * self.vae_scale_factor - width = width or self.unet.config.sample_size * self.vae_scale_factor - - # Check inputs. Raise error if not correct - self.check_inputs(prompt, height, width, callback_steps) - - # Define call parameters - batch_size = 1 if isinstance(prompt, str) else len(prompt) - device = self._execution_device - # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) - # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` - # corresponds to doing no classifier free guidance. - do_classifier_free_guidance = guidance_scale > 1.0 - - # Encode input prompt - text_embeddings = self._encode_prompt( - prompt, device, num_videos_per_prompt, do_classifier_free_guidance, negative_prompt - ) - - # Prepare timesteps - self.scheduler.set_timesteps(num_inference_steps, device=device) - timesteps = self.scheduler.timesteps - - # print(f" Latent shape = {latents.shape}") - - # Prepare latent variables - num_channels_latents = self.unet.in_channels - - xT = self.prepare_latents( - batch_size * num_videos_per_prompt, - num_channels_latents, - 1, - height, - width, - text_embeddings.dtype, - device, - generator, - xT, - ) - dtype = xT.dtype - - # when motion field is not used, augment with random latent codes - if use_motion_field: - xT = xT[:, :, :1] - else: - if xT.shape[2] < video_length: - xT_missing = self.prepare_latents( - batch_size * num_videos_per_prompt, - num_channels_latents, - video_length-xT.shape[2], - height, - width, - text_embeddings.dtype, - device, - generator, - None, - ) - xT = torch.cat([xT, xT_missing], dim=2) - - xInit = xT.clone() - - timesteps_ddpm = [981, 961, 941, 921, 901, 881, 861, 841, 821, 801, 781, 761, 741, 721, - 701, 681, 661, 641, 621, 601, 581, 561, 541, 521, 501, 481, 461, 441, - 421, 401, 381, 361, 341, 321, 301, 281, 261, 241, 221, 201, 181, 161, - 141, 121, 101, 81, 61, 41, 21, 1] - timesteps_ddpm.reverse() - - t0 = timesteps_ddpm[t0] - t1 = timesteps_ddpm[t1] - - print(f"t0 = {t0} t1 = {t1}") - x_t1_1 = None - - # Prepare extra step kwargs. 
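# Overview of the sampling performed below: (1) DDIM-denoise the single starting latent,
# keeping the intermediate latents at timesteps t1 and t0; (2) when use_motion_field is
# enabled, replicate the t0 latent across the remaining frames and warp each copy with the
# global translation field (motion_field_strength_x / motion_field_strength_y); (3) re-noise
# the warped latents from t0 back to t1 with the closed-form DDPM forward process;
# (4) DDIM-denoise all frame latents from t1 down to 0; (5) if smooth_bg is set, blend the
# warped background latents under a salient-object mask and denoise once more.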
- extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta) - # Denoising loop - num_warmup_steps = len(timesteps) - \ - num_inference_steps * self.scheduler.order - - shape = (batch_size, num_channels_latents, 1, height // - self.vae_scale_factor, width // self.vae_scale_factor) - - ddim_res = self.DDIM_backward(num_inference_steps=num_inference_steps, timesteps=timesteps, skip_t=1000, t0=t0, t1=t1, do_classifier_free_guidance=do_classifier_free_guidance, - null_embs=null_embs, text_embeddings=text_embeddings, latents_local=xT, latents_dtype=dtype, guidance_scale=guidance_scale, guidance_stop_step=guidance_stop_step, - callback=callback, callback_steps=callback_steps, extra_step_kwargs=extra_step_kwargs, num_warmup_steps=num_warmup_steps) - - x0 = ddim_res["x0"].detach() - - if "x_t0_1" in ddim_res: - x_t0_1 = ddim_res["x_t0_1"].detach() - if "x_t1_1" in ddim_res: - x_t1_1 = ddim_res["x_t1_1"].detach() - del ddim_res - del xT - if use_motion_field: - del x0 - - x_t0_k = x_t0_1[:, :, :1, :, :].repeat(1, 1, video_length-1, 1, 1) - - reference_flow, x_t0_k = self.create_motion_field_and_warp_latents( - motion_field_strength_x=motion_field_strength_x, motion_field_strength_y=motion_field_strength_y, latents=x_t0_k, video_length=video_length, frame_ids=frame_ids[1:]) - - # assuming t0=t1=1000, if t0 = 1000 - if t1 > t0: - x_t1_k = self.DDPM_forward( - x0=x_t0_k, t0=t0, tMax=t1, device=device, shape=shape, text_embeddings=text_embeddings, generator=generator) - else: - x_t1_k = x_t0_k - - if x_t1_1 is None: - raise Exception - - x_t1 = torch.cat([x_t1_1, x_t1_k], dim=2).clone().detach() - - ddim_res = self.DDIM_backward(num_inference_steps=num_inference_steps, timesteps=timesteps, skip_t=t1, t0=-1, t1=-1, do_classifier_free_guidance=do_classifier_free_guidance, - null_embs=null_embs, text_embeddings=text_embeddings, latents_local=x_t1, latents_dtype=dtype, guidance_scale=guidance_scale, - guidance_stop_step=guidance_stop_step, callback=callback, callback_steps=callback_steps, extra_step_kwargs=extra_step_kwargs, num_warmup_steps=num_warmup_steps) - - x0 = ddim_res["x0"].detach() - del ddim_res - del x_t1 - del x_t1_1 - del x_t1_k - else: - x_t1 = x_t1_1.clone() - x_t1_1 = x_t1_1[:, :, :1, :, :].clone() - x_t1_k = x_t1_1[:, :, 1:, :, :].clone() - x_t0_k = x_t0_1[:, :, 1:, :, :].clone() - x_t0_1 = x_t0_1[:, :, :1, :, :].clone() - - # smooth background - if smooth_bg: - h, w = x0.shape[3], x0.shape[4] - M_FG = torch.zeros((batch_size, video_length, h, w), - device=x0.device).to(x0.dtype) - for batch_idx, x0_b in enumerate(x0): - z0_b = self.decode_latents(x0_b[None]).detach() - z0_b = rearrange(z0_b[0], "c f h w -> f h w c") - for frame_idx, z0_f in enumerate(z0_b): - z0_f = torch.round( - z0_f * 255).cpu().numpy().astype(np.uint8) - # apply SOD detection - m_f = torch.tensor(self.sod_model.process_data( - z0_f), device=x0.device).to(x0.dtype) - mask = T.Resize( - size=(h, w), interpolation=T.InterpolationMode.NEAREST)(m_f[None]) - kernel = torch.ones(5, 5, device=x0.device, dtype=x0.dtype) - mask = dilation(mask[None].to(x0.device), kernel)[0] - M_FG[batch_idx, frame_idx, :, :] = mask - - x_t1_1_fg_masked = x_t1_1 * \ - (1 - repeat(M_FG[:, 0, :, :], - "b w h -> b c 1 w h", c=x_t1_1.shape[1])) - - x_t1_1_fg_masked_moved = [] - for batch_idx, x_t1_1_fg_masked_b in enumerate(x_t1_1_fg_masked): - x_t1_fg_masked_b = x_t1_1_fg_masked_b.clone() - - x_t1_fg_masked_b = x_t1_fg_masked_b.repeat( - 1, video_length-1, 1, 1) - if use_motion_field: - x_t1_fg_masked_b = x_t1_fg_masked_b[None] - 
x_t1_fg_masked_b = self.warp_latents_independently( - x_t1_fg_masked_b, reference_flow) - else: - x_t1_fg_masked_b = x_t1_fg_masked_b[None] - - x_t1_fg_masked_b = torch.cat( - [x_t1_1_fg_masked_b[None], x_t1_fg_masked_b], dim=2) - x_t1_1_fg_masked_moved.append(x_t1_fg_masked_b) - - x_t1_1_fg_masked_moved = torch.cat(x_t1_1_fg_masked_moved, dim=0) - - M_FG_1 = M_FG[:, :1, :, :] - - M_FG_warped = [] - for batch_idx, m_fg_1_b in enumerate(M_FG_1): - m_fg_1_b = m_fg_1_b[None, None] - m_fg_b = m_fg_1_b.repeat(1, 1, video_length-1, 1, 1) - if use_motion_field: - m_fg_b = self.warp_latents_independently( - m_fg_b.clone(), reference_flow) - M_FG_warped.append( - torch.cat([m_fg_1_b[:1, 0], m_fg_b[:1, 0]], dim=1)) - - M_FG_warped = torch.cat(M_FG_warped, dim=0) - - channels = x0.shape[1] - - M_BG = (1-M_FG) * (1 - M_FG_warped) - M_BG = repeat(M_BG, "b f h w -> b c f h w", c=channels) - a_convex = smooth_bg_strength - - latents = (1-M_BG) * x_t1 + M_BG * (a_convex * - x_t1 + (1-a_convex) * x_t1_1_fg_masked_moved) - - ddim_res = self.DDIM_backward(num_inference_steps=num_inference_steps, timesteps=timesteps, skip_t=t1, t0=-1, t1=-1, do_classifier_free_guidance=do_classifier_free_guidance, - null_embs=null_embs, text_embeddings=text_embeddings, latents_local=latents, latents_dtype=dtype, guidance_scale=guidance_scale, - guidance_stop_step=guidance_stop_step, callback=callback, callback_steps=callback_steps, extra_step_kwargs=extra_step_kwargs, num_warmup_steps=num_warmup_steps) - x0 = ddim_res["x0"].detach() - del ddim_res - del latents - - latents = x0 - - # manually for max memory savings - if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None: - self.unet.to("cpu") - torch.cuda.empty_cache() - - if output_type == "latent": - image = latents - has_nsfw_concept = None - else: - image = self.decode_latents(latents) - - # Run safety checker - image, has_nsfw_concept = self.run_safety_checker( - image, device, text_embeddings.dtype) - image = rearrange(image, "b c f h w -> (b f) h w c") - - # Offload last model to CPU - if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None: - self.final_offload_hook.offload() - - if not return_dict: - return (image, has_nsfw_concept) - - return TextToVideoPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept) diff --git a/spaces/PaSathees/FoodVision_Big/model.py b/spaces/PaSathees/FoodVision_Big/model.py deleted file mode 100644 index ee2573aa56ff53bc44833b3a1687e79a0e4811ea..0000000000000000000000000000000000000000 --- a/spaces/PaSathees/FoodVision_Big/model.py +++ /dev/null @@ -1,31 +0,0 @@ -import torch -import torchvision - -from torch import nn - -def create_effnetb2_model(num_classes:int=3, seed:int=42): - """Creates an EfficientNetB2 feature extractor model and transforms. - - Args: - num_classes (int, optional): number of classes in the classifier head. - Defaults to 3. - seed (int, optional): random seed value. Defaults to 42. - - Returns: - model (torch.nn.Module): EffNetB2 feature extractor model. - transforms (torchvision.transforms): EffNetB2 image transforms. 
- """ - weights = torchvision.models.EfficientNet_B2_Weights.DEFAULT - transforms = weights.transforms() - model = torchvision.models.efficientnet_b2(weights=weights) - - for param in model.parameters(): - param.requires_grad = False - - torch.manual_seed(seed) - model.classifier = nn.Sequential( - nn.Dropout(p=0.3, inplace=True), - nn.Linear(in_features=1408, out_features=num_classes), - ) - - return model, transforms diff --git a/spaces/PeepDaSlan9/Bark-Voice-Cloning/training/train.py b/spaces/PeepDaSlan9/Bark-Voice-Cloning/training/train.py deleted file mode 100644 index be0cccc6145b46d026831cb71f198d2292fae931..0000000000000000000000000000000000000000 --- a/spaces/PeepDaSlan9/Bark-Voice-Cloning/training/train.py +++ /dev/null @@ -1,47 +0,0 @@ -import os -import fnmatch -import shutil - -import numpy -import torchaudio -import gradio - -from bark.hubert.pre_kmeans_hubert import CustomHubert -from bark.hubert.customtokenizer import auto_train -from tqdm.auto import tqdm - - -def training_prepare_files(path, model,progress=gradio.Progress(track_tqdm=True)): - - semanticsfolder = "./training/data/output" - wavfolder = "./training/data/output_wav" - ready = os.path.join(path, 'ready') - - testfiles = fnmatch.filter(os.listdir(ready), '*.npy') - if(len(testfiles) < 1): - # prepare and copy for training - hubert_model = CustomHubert(checkpoint_path=model) - - wavfiles = fnmatch.filter(os.listdir(wavfolder), '*.wav') - for i, f in tqdm(enumerate(wavfiles), total=len(wavfiles)): - semaname = '.'.join(f.split('.')[:-1]) # Cut off the extension - semaname = f'{semaname}.npy' - semafilename = os.path.join(semanticsfolder, semaname) - if not os.path.isfile(semafilename): - print(f'Skipping {f} no semantics pair found!') - continue - - print('Processing', f) - wav, sr = torchaudio.load(os.path.join(wavfolder, f)) - if wav.shape[0] == 2: # Stereo to mono if needed - wav = wav.mean(0, keepdim=True) - output = hubert_model.forward(wav, input_sample_hz=sr) - out_array = output.cpu().numpy() - fname = f'{i}_semantic_features.npy' - numpy.save(os.path.join(ready, fname), out_array) - fname = f'{i}_semantic.npy' - shutil.copy(semafilename, os.path.join(ready, fname)) - -def train(path, save_every, max_epochs): - auto_train(path, save_epochs=save_every) - diff --git a/spaces/PeepDaSlan9/De-limiter/models/load_models.py b/spaces/PeepDaSlan9/De-limiter/models/load_models.py deleted file mode 100644 index 791112d99cb631b0eb235787902013ebdcc29838..0000000000000000000000000000000000000000 --- a/spaces/PeepDaSlan9/De-limiter/models/load_models.py +++ /dev/null @@ -1,87 +0,0 @@ -import numpy as np -import torch -import torch.nn as nn -from asteroid_filterbanks import make_enc_dec - -from asteroid.masknn import TDConvNet - -import utils -from .base_models import ( - BaseEncoderMaskerDecoderWithConfigs, - BaseEncoderMaskerDecoderWithConfigsMaskOnOutput, - BaseEncoderMaskerDecoderWithConfigsMultiChannelAsteroid, -) - - -def load_model_with_args(args): - if args.model_loss_params.architecture == "conv_tasnet_mask_on_output": - encoder, decoder = make_enc_dec( - "free", - n_filters=args.conv_tasnet_params.n_filters, - kernel_size=args.conv_tasnet_params.kernel_size, - stride=args.conv_tasnet_params.stride, - sample_rate=args.sample_rate, - ) - masker = TDConvNet( - in_chan=encoder.n_feats_out * args.data_params.nb_channels, # stereo - n_src=1, # for de-limit task. 
- out_chan=encoder.n_feats_out, - n_blocks=args.conv_tasnet_params.n_blocks, - n_repeats=args.conv_tasnet_params.n_repeats, - bn_chan=args.conv_tasnet_params.bn_chan, - hid_chan=args.conv_tasnet_params.hid_chan, - skip_chan=args.conv_tasnet_params.skip_chan, - # conv_kernel_size=args.conv_tasnet_params.conv_kernel_size, - norm_type=args.conv_tasnet_params.norm_type if args.conv_tasnet_params.norm_type else 'gLN', - mask_act=args.conv_tasnet_params.mask_act, - # causal=args.conv_tasnet_params.causal, - ) - - model = BaseEncoderMaskerDecoderWithConfigsMaskOnOutput( - encoder, - masker, - decoder, - encoder_activation=args.conv_tasnet_params.encoder_activation, - use_encoder=True, - apply_mask=True, - use_decoder=True, - decoder_activation=args.conv_tasnet_params.decoder_activation, - ) - model.use_encoder_to_target = False - - elif args.model_loss_params.architecture == "conv_tasnet": - encoder, decoder = make_enc_dec( - "free", - n_filters=args.conv_tasnet_params.n_filters, - kernel_size=args.conv_tasnet_params.kernel_size, - stride=args.conv_tasnet_params.stride, - sample_rate=args.sample_rate, - ) - masker = TDConvNet( - in_chan=encoder.n_feats_out * args.data_params.nb_channels, # stereo - n_src=args.conv_tasnet_params.n_src, # for de-limit task with the standard conv-tasnet setting. - out_chan=encoder.n_feats_out, - n_blocks=args.conv_tasnet_params.n_blocks, - n_repeats=args.conv_tasnet_params.n_repeats, - bn_chan=args.conv_tasnet_params.bn_chan, - hid_chan=args.conv_tasnet_params.hid_chan, - skip_chan=args.conv_tasnet_params.skip_chan, - # conv_kernel_size=args.conv_tasnet_params.conv_kernel_size, - norm_type=args.conv_tasnet_params.norm_type if args.conv_tasnet_params.norm_type else 'gLN', - mask_act=args.conv_tasnet_params.mask_act, - # causal=args.conv_tasnet_params.causal, - ) - - model = BaseEncoderMaskerDecoderWithConfigsMultiChannelAsteroid( - encoder, - masker, - decoder, - encoder_activation=args.conv_tasnet_params.encoder_activation, - use_encoder=True, - apply_mask=False if args.conv_tasnet_params.synthesis else True, - use_decoder=True, - decoder_activation=args.conv_tasnet_params.decoder_activation, - ) - model.use_encoder_to_target = False - - return model diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/parallel/_functions.py b/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/parallel/_functions.py deleted file mode 100644 index 9b5a8a44483ab991411d07122b22a1d027e4be8e..0000000000000000000000000000000000000000 --- a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/parallel/_functions.py +++ /dev/null @@ -1,79 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from torch.nn.parallel._functions import _get_stream - - -def scatter(input, devices, streams=None): - """Scatters tensor across multiple GPUs.""" - if streams is None: - streams = [None] * len(devices) - - if isinstance(input, list): - chunk_size = (len(input) - 1) // len(devices) + 1 - outputs = [ - scatter(input[i], [devices[i // chunk_size]], - [streams[i // chunk_size]]) for i in range(len(input)) - ] - return outputs - elif isinstance(input, torch.Tensor): - output = input.contiguous() - # TODO: copy to a pinned buffer first (if copying from CPU) - stream = streams[0] if output.numel() > 0 else None - if devices != [-1]: - with torch.cuda.device(devices[0]), torch.cuda.stream(stream): - output = output.cuda(devices[0], non_blocking=True) - else: - # unsqueeze the first dimension thus the tensor's shape is the - # same as those scattered with GPU. 
- output = output.unsqueeze(0) - return output - else: - raise Exception(f'Unknown type {type(input)}.') - - -def synchronize_stream(output, devices, streams): - if isinstance(output, list): - chunk_size = len(output) // len(devices) - for i in range(len(devices)): - for j in range(chunk_size): - synchronize_stream(output[i * chunk_size + j], [devices[i]], - [streams[i]]) - elif isinstance(output, torch.Tensor): - if output.numel() != 0: - with torch.cuda.device(devices[0]): - main_stream = torch.cuda.current_stream() - main_stream.wait_stream(streams[0]) - output.record_stream(main_stream) - else: - raise Exception(f'Unknown type {type(output)}.') - - -def get_input_device(input): - if isinstance(input, list): - for item in input: - input_device = get_input_device(item) - if input_device != -1: - return input_device - return -1 - elif isinstance(input, torch.Tensor): - return input.get_device() if input.is_cuda else -1 - else: - raise Exception(f'Unknown type {type(input)}.') - - -class Scatter: - - @staticmethod - def forward(target_gpus, input): - input_device = get_input_device(input) - streams = None - if input_device == -1 and target_gpus != [-1]: - # Perform CPU to GPU copies in a background stream - streams = [_get_stream(device) for device in target_gpus] - - outputs = scatter(input, target_gpus, streams) - # Synchronize with the copy stream - if streams is not None: - synchronize_stream(outputs, target_gpus, streams) - - return tuple(outputs) diff --git a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/modeling/rpn/atss.py b/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/modeling/rpn/atss.py deleted file mode 100644 index fe4d781afcb520970f915a561d8d59ee6f4fe010..0000000000000000000000000000000000000000 --- a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/modeling/rpn/atss.py +++ /dev/null @@ -1,233 +0,0 @@ -import math -import torch -import torch.nn.functional as F -from torch import nn - -from .inference import make_atss_postprocessor -from .loss import make_atss_loss_evaluator - -from maskrcnn_benchmark.structures.boxlist_ops import cat_boxlist -from maskrcnn_benchmark.layers import Scale, DFConv2d, DYReLU, SELayer -from .anchor_generator import make_anchor_generator_complex - - -class BoxCoder(object): - - def __init__(self, cfg): - self.cfg = cfg - - def encode(self, gt_boxes, anchors): - - TO_REMOVE = 1 # TODO remove - ex_widths = anchors[:, 2] - anchors[:, 0] + TO_REMOVE - ex_heights = anchors[:, 3] - anchors[:, 1] + TO_REMOVE - ex_ctr_x = (anchors[:, 2] + anchors[:, 0]) / 2 - ex_ctr_y = (anchors[:, 3] + anchors[:, 1]) / 2 - - gt_widths = gt_boxes[:, 2] - gt_boxes[:, 0] + TO_REMOVE - gt_heights = gt_boxes[:, 3] - gt_boxes[:, 1] + TO_REMOVE - gt_ctr_x = (gt_boxes[:, 2] + gt_boxes[:, 0]) / 2 - gt_ctr_y = (gt_boxes[:, 3] + gt_boxes[:, 1]) / 2 - - wx, wy, ww, wh = (10., 10., 5., 5.) 
- targets_dx = wx * (gt_ctr_x - ex_ctr_x) / ex_widths - targets_dy = wy * (gt_ctr_y - ex_ctr_y) / ex_heights - targets_dw = ww * torch.log(gt_widths / ex_widths) - targets_dh = wh * torch.log(gt_heights / ex_heights) - targets = torch.stack((targets_dx, targets_dy, targets_dw, targets_dh), dim=1) - - return targets - - def decode(self, preds, anchors): - - anchors = anchors.to(preds.dtype) - - TO_REMOVE = 1 # TODO remove - widths = anchors[:, 2] - anchors[:, 0] + TO_REMOVE - heights = anchors[:, 3] - anchors[:, 1] + TO_REMOVE - ctr_x = (anchors[:, 2] + anchors[:, 0]) / 2 - ctr_y = (anchors[:, 3] + anchors[:, 1]) / 2 - - wx, wy, ww, wh = (10., 10., 5., 5.) - dx = preds[:, 0::4] / wx - dy = preds[:, 1::4] / wy - dw = preds[:, 2::4] / ww - dh = preds[:, 3::4] / wh - - # Prevent sending too large values into torch.exp() - dw = torch.clamp(dw, max=math.log(1000. / 16)) - dh = torch.clamp(dh, max=math.log(1000. / 16)) - - pred_ctr_x = dx * widths[:, None] + ctr_x[:, None] - pred_ctr_y = dy * heights[:, None] + ctr_y[:, None] - pred_w = torch.exp(dw) * widths[:, None] - pred_h = torch.exp(dh) * heights[:, None] - - pred_boxes = torch.zeros_like(preds) - pred_boxes[:, 0::4] = pred_ctr_x - 0.5 * (pred_w - 1) - pred_boxes[:, 1::4] = pred_ctr_y - 0.5 * (pred_h - 1) - pred_boxes[:, 2::4] = pred_ctr_x + 0.5 * (pred_w - 1) - pred_boxes[:, 3::4] = pred_ctr_y + 0.5 * (pred_h - 1) - - return pred_boxes - - -class ATSSHead(torch.nn.Module): - def __init__(self, cfg): - super(ATSSHead, self).__init__() - self.cfg = cfg - num_classes = cfg.MODEL.ATSS.NUM_CLASSES - 1 - num_anchors = len(cfg.MODEL.RPN.ASPECT_RATIOS) * cfg.MODEL.RPN.SCALES_PER_OCTAVE - in_channels = cfg.MODEL.BACKBONE.OUT_CHANNELS - channels = cfg.MODEL.ATSS.CHANNELS - use_gn = cfg.MODEL.ATSS.USE_GN - use_bn = cfg.MODEL.ATSS.USE_BN - use_dcn_in_tower = cfg.MODEL.ATSS.USE_DFCONV - use_dyrelu = cfg.MODEL.ATSS.USE_DYRELU - use_se = cfg.MODEL.ATSS.USE_SE - - cls_tower = [] - bbox_tower = [] - for i in range(cfg.MODEL.ATSS.NUM_CONVS): - if use_dcn_in_tower and \ - i == cfg.MODEL.ATSS.NUM_CONVS - 1: - conv_func = DFConv2d - else: - conv_func = nn.Conv2d - - cls_tower.append( - conv_func( - in_channels if i==0 else channels, - channels, - kernel_size=3, - stride=1, - padding=1, - bias=True - ) - ) - if use_gn: - cls_tower.append(nn.GroupNorm(32, channels)) - if use_bn: - cls_tower.append(nn.BatchNorm2d(channels)) - if use_se: - cls_tower.append(SELayer(channels)) - if use_dyrelu: - cls_tower.append(DYReLU(channels, channels)) - else: - cls_tower.append(nn.ReLU()) - - bbox_tower.append( - conv_func( - in_channels if i == 0 else channels, - channels, - kernel_size=3, - stride=1, - padding=1, - bias=True - ) - ) - if use_gn: - bbox_tower.append(nn.GroupNorm(32, channels)) - if use_bn: - bbox_tower.append(nn.BatchNorm2d(channels)) - if use_se: - bbox_tower.append(SELayer(channels)) - if use_dyrelu: - bbox_tower.append(DYReLU(channels, channels)) - else: - bbox_tower.append(nn.ReLU()) - - self.add_module('cls_tower', nn.Sequential(*cls_tower)) - self.add_module('bbox_tower', nn.Sequential(*bbox_tower)) - self.cls_logits = nn.Conv2d( - channels, num_anchors * num_classes, kernel_size=3, stride=1, - padding=1 - ) - self.bbox_pred = nn.Conv2d( - channels, num_anchors * 4, kernel_size=3, stride=1, - padding=1 - ) - self.centerness = nn.Conv2d( - channels, num_anchors * 1, kernel_size=3, stride=1, - padding=1 - ) - - # initialization - for modules in [self.cls_tower, self.bbox_tower, - self.cls_logits, self.bbox_pred, - self.centerness]: - for l in 
modules.modules(): - if isinstance(l, nn.Conv2d): - torch.nn.init.normal_(l.weight, std=0.01) - torch.nn.init.constant_(l.bias, 0) - - # initialize the bias for focal loss - prior_prob = cfg.MODEL.ATSS.PRIOR_PROB - bias_value = -math.log((1 - prior_prob) / prior_prob) - torch.nn.init.constant_(self.cls_logits.bias, bias_value) - - self.scales = nn.ModuleList([Scale(init_value=1.0) for _ in range(5)]) - - def forward(self, x): - logits = [] - bbox_reg = [] - centerness = [] - for l, feature in enumerate(x): - cls_tower = self.cls_tower(feature) - box_tower = self.bbox_tower(feature) - - logits.append(self.cls_logits(cls_tower)) - - bbox_pred = self.scales[l](self.bbox_pred(box_tower)) - bbox_reg.append(bbox_pred) - - centerness.append(self.centerness(box_tower)) - return logits, bbox_reg, centerness - - -class ATSSModule(torch.nn.Module): - - def __init__(self, cfg): - super(ATSSModule, self).__init__() - self.cfg = cfg - self.head = ATSSHead(cfg) - box_coder = BoxCoder(cfg) - self.loss_evaluator = make_atss_loss_evaluator(cfg, box_coder) - self.box_selector_train = make_atss_postprocessor(cfg, box_coder, is_train=True) - self.box_selector_test = make_atss_postprocessor(cfg, box_coder, is_train=False) - self.anchor_generator = make_anchor_generator_complex(cfg) - - def forward(self, images, features, targets=None): - box_cls, box_regression, centerness = self.head(features) - anchors = self.anchor_generator(images, features) - - if self.training: - return self._forward_train(box_cls, box_regression, centerness, targets, anchors) - else: - return self._forward_test(box_cls, box_regression, centerness, anchors) - - def _forward_train(self, box_cls, box_regression, centerness, targets, anchors): - loss_box_cls, loss_box_reg, loss_centerness = self.loss_evaluator( - box_cls, box_regression, centerness, targets, anchors - ) - losses = { - "loss_cls": loss_box_cls, - "loss_reg": loss_box_reg, - "loss_centerness": loss_centerness - } - if self.cfg.MODEL.RPN_ONLY: - return None, losses - else: - boxes = self.box_selector_train(box_cls, box_regression, centerness, anchors) - train_boxes = [] - for b, a in zip(boxes, anchors): - a = cat_boxlist(a) - b.add_field("visibility", torch.ones(b.bbox.shape[0], dtype=torch.bool, device=b.bbox.device)) - del b.extra_fields['scores'] - del b.extra_fields['labels'] - train_boxes.append(cat_boxlist([b, a])) - return train_boxes, losses - - def _forward_test(self, box_cls, box_regression, centerness, anchors): - boxes = self.box_selector_test(box_cls, box_regression, centerness, anchors) - return boxes, {} diff --git a/spaces/Popitmania123/Open-reverse-proxy/README.md b/spaces/Popitmania123/Open-reverse-proxy/README.md deleted file mode 100644 index 77ae5a9297a51a2aa96ee0e05373c7b846d540ec..0000000000000000000000000000000000000000 --- a/spaces/Popitmania123/Open-reverse-proxy/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Open Reverse Proxy -emoji: 🐢 -colorFrom: gray -colorTo: purple -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/colorama/winterm.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/colorama/winterm.py deleted file mode 100644 index 0fdb4ec4e91090876dc3fbf207049b521fa0dd73..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/colorama/winterm.py +++ /dev/null @@ -1,169 +0,0 @@ -# 
Copyright Jonathan Hartley 2013. BSD 3-Clause license, see LICENSE file. -from . import win32 - - -# from wincon.h -class WinColor(object): - BLACK = 0 - BLUE = 1 - GREEN = 2 - CYAN = 3 - RED = 4 - MAGENTA = 5 - YELLOW = 6 - GREY = 7 - -# from wincon.h -class WinStyle(object): - NORMAL = 0x00 # dim text, dim background - BRIGHT = 0x08 # bright text, dim background - BRIGHT_BACKGROUND = 0x80 # dim text, bright background - -class WinTerm(object): - - def __init__(self): - self._default = win32.GetConsoleScreenBufferInfo(win32.STDOUT).wAttributes - self.set_attrs(self._default) - self._default_fore = self._fore - self._default_back = self._back - self._default_style = self._style - # In order to emulate LIGHT_EX in windows, we borrow the BRIGHT style. - # So that LIGHT_EX colors and BRIGHT style do not clobber each other, - # we track them separately, since LIGHT_EX is overwritten by Fore/Back - # and BRIGHT is overwritten by Style codes. - self._light = 0 - - def get_attrs(self): - return self._fore + self._back * 16 + (self._style | self._light) - - def set_attrs(self, value): - self._fore = value & 7 - self._back = (value >> 4) & 7 - self._style = value & (WinStyle.BRIGHT | WinStyle.BRIGHT_BACKGROUND) - - def reset_all(self, on_stderr=None): - self.set_attrs(self._default) - self.set_console(attrs=self._default) - self._light = 0 - - def fore(self, fore=None, light=False, on_stderr=False): - if fore is None: - fore = self._default_fore - self._fore = fore - # Emulate LIGHT_EX with BRIGHT Style - if light: - self._light |= WinStyle.BRIGHT - else: - self._light &= ~WinStyle.BRIGHT - self.set_console(on_stderr=on_stderr) - - def back(self, back=None, light=False, on_stderr=False): - if back is None: - back = self._default_back - self._back = back - # Emulate LIGHT_EX with BRIGHT_BACKGROUND Style - if light: - self._light |= WinStyle.BRIGHT_BACKGROUND - else: - self._light &= ~WinStyle.BRIGHT_BACKGROUND - self.set_console(on_stderr=on_stderr) - - def style(self, style=None, on_stderr=False): - if style is None: - style = self._default_style - self._style = style - self.set_console(on_stderr=on_stderr) - - def set_console(self, attrs=None, on_stderr=False): - if attrs is None: - attrs = self.get_attrs() - handle = win32.STDOUT - if on_stderr: - handle = win32.STDERR - win32.SetConsoleTextAttribute(handle, attrs) - - def get_position(self, handle): - position = win32.GetConsoleScreenBufferInfo(handle).dwCursorPosition - # Because Windows coordinates are 0-based, - # and win32.SetConsoleCursorPosition expects 1-based. - position.X += 1 - position.Y += 1 - return position - - def set_cursor_position(self, position=None, on_stderr=False): - if position is None: - # I'm not currently tracking the position, so there is no default. - # position = self.get_position() - return - handle = win32.STDOUT - if on_stderr: - handle = win32.STDERR - win32.SetConsoleCursorPosition(handle, position) - - def cursor_adjust(self, x, y, on_stderr=False): - handle = win32.STDOUT - if on_stderr: - handle = win32.STDERR - position = self.get_position(handle) - adjusted_position = (position.Y + y, position.X + x) - win32.SetConsoleCursorPosition(handle, adjusted_position, adjust=False) - - def erase_screen(self, mode=0, on_stderr=False): - # 0 should clear from the cursor to the end of the screen. - # 1 should clear from the cursor to the beginning of the screen. 
- # 2 should clear the entire screen, and move cursor to (1,1) - handle = win32.STDOUT - if on_stderr: - handle = win32.STDERR - csbi = win32.GetConsoleScreenBufferInfo(handle) - # get the number of character cells in the current buffer - cells_in_screen = csbi.dwSize.X * csbi.dwSize.Y - # get number of character cells before current cursor position - cells_before_cursor = csbi.dwSize.X * csbi.dwCursorPosition.Y + csbi.dwCursorPosition.X - if mode == 0: - from_coord = csbi.dwCursorPosition - cells_to_erase = cells_in_screen - cells_before_cursor - elif mode == 1: - from_coord = win32.COORD(0, 0) - cells_to_erase = cells_before_cursor - elif mode == 2: - from_coord = win32.COORD(0, 0) - cells_to_erase = cells_in_screen - else: - # invalid mode - return - # fill the entire screen with blanks - win32.FillConsoleOutputCharacter(handle, ' ', cells_to_erase, from_coord) - # now set the buffer's attributes accordingly - win32.FillConsoleOutputAttribute(handle, self.get_attrs(), cells_to_erase, from_coord) - if mode == 2: - # put the cursor where needed - win32.SetConsoleCursorPosition(handle, (1, 1)) - - def erase_line(self, mode=0, on_stderr=False): - # 0 should clear from the cursor to the end of the line. - # 1 should clear from the cursor to the beginning of the line. - # 2 should clear the entire line. - handle = win32.STDOUT - if on_stderr: - handle = win32.STDERR - csbi = win32.GetConsoleScreenBufferInfo(handle) - if mode == 0: - from_coord = csbi.dwCursorPosition - cells_to_erase = csbi.dwSize.X - csbi.dwCursorPosition.X - elif mode == 1: - from_coord = win32.COORD(0, csbi.dwCursorPosition.Y) - cells_to_erase = csbi.dwCursorPosition.X - elif mode == 2: - from_coord = win32.COORD(0, csbi.dwCursorPosition.Y) - cells_to_erase = csbi.dwSize.X - else: - # invalid mode - return - # fill the entire screen with blanks - win32.FillConsoleOutputCharacter(handle, ' ', cells_to_erase, from_coord) - # now set the buffer's attributes accordingly - win32.FillConsoleOutputAttribute(handle, self.get_attrs(), cells_to_erase, from_coord) - - def set_title(self, title): - win32.SetConsoleTitle(title) diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/configs/_base_/models/pspnet_unet_s5-d16.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/configs/_base_/models/pspnet_unet_s5-d16.py deleted file mode 100644 index fcff9ec4f41fad158344ecd77313dc14564f3682..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/configs/_base_/models/pspnet_unet_s5-d16.py +++ /dev/null @@ -1,50 +0,0 @@ -# model settings -norm_cfg = dict(type='SyncBN', requires_grad=True) -model = dict( - type='EncoderDecoder', - pretrained=None, - backbone=dict( - type='UNet', - in_channels=3, - base_channels=64, - num_stages=5, - strides=(1, 1, 1, 1, 1), - enc_num_convs=(2, 2, 2, 2, 2), - dec_num_convs=(2, 2, 2, 2), - downsamples=(True, True, True, True), - enc_dilations=(1, 1, 1, 1, 1), - dec_dilations=(1, 1, 1, 1), - with_cp=False, - conv_cfg=None, - norm_cfg=norm_cfg, - act_cfg=dict(type='ReLU'), - upsample_cfg=dict(type='InterpConv'), - norm_eval=False), - decode_head=dict( - type='PSPHead', - in_channels=64, - in_index=4, - channels=16, - pool_scales=(1, 2, 3, 6), - dropout_ratio=0.1, - num_classes=2, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)), - auxiliary_head=dict( - type='FCNHead', - in_channels=128, - in_index=3, - channels=64, - num_convs=1, - concat_input=False, - dropout_ratio=0.1, 
- num_classes=2, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)), - # model training and testing settings - train_cfg=dict(), - test_cfg=dict(mode='slide', crop_size=256, stride=170)) diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/cnn/bricks/__init__.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/cnn/bricks/__init__.py deleted file mode 100644 index 0f33124ed23fc6f27119a37bcb5ab004d3572be0..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/cnn/bricks/__init__.py +++ /dev/null @@ -1,35 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .activation import build_activation_layer -from .context_block import ContextBlock -from .conv import build_conv_layer -from .conv2d_adaptive_padding import Conv2dAdaptivePadding -from .conv_module import ConvModule -from .conv_ws import ConvAWS2d, ConvWS2d, conv_ws_2d -from .depthwise_separable_conv_module import DepthwiseSeparableConvModule -from .drop import Dropout, DropPath -from .generalized_attention import GeneralizedAttention -from .hsigmoid import HSigmoid -from .hswish import HSwish -from .non_local import NonLocal1d, NonLocal2d, NonLocal3d -from .norm import build_norm_layer, is_norm -from .padding import build_padding_layer -from .plugin import build_plugin_layer -from .registry import (ACTIVATION_LAYERS, CONV_LAYERS, NORM_LAYERS, - PADDING_LAYERS, PLUGIN_LAYERS, UPSAMPLE_LAYERS) -from .scale import Scale -from .swish import Swish -from .upsample import build_upsample_layer -from .wrappers import (Conv2d, Conv3d, ConvTranspose2d, ConvTranspose3d, - Linear, MaxPool2d, MaxPool3d) - -__all__ = [ - 'ConvModule', 'build_activation_layer', 'build_conv_layer', - 'build_norm_layer', 'build_padding_layer', 'build_upsample_layer', - 'build_plugin_layer', 'is_norm', 'HSigmoid', 'HSwish', 'NonLocal1d', - 'NonLocal2d', 'NonLocal3d', 'ContextBlock', 'GeneralizedAttention', - 'ACTIVATION_LAYERS', 'CONV_LAYERS', 'NORM_LAYERS', 'PADDING_LAYERS', - 'UPSAMPLE_LAYERS', 'PLUGIN_LAYERS', 'Scale', 'ConvAWS2d', 'ConvWS2d', - 'conv_ws_2d', 'DepthwiseSeparableConvModule', 'Swish', 'Linear', - 'Conv2dAdaptivePadding', 'Conv2d', 'ConvTranspose2d', 'MaxPool2d', - 'ConvTranspose3d', 'MaxPool3d', 'Conv3d', 'Dropout', 'DropPath' -] diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/dense_heads/retina_head.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/dense_heads/retina_head.py deleted file mode 100644 index b12416fa8332f02b9a04bbfc7926f6d13875e61b..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/dense_heads/retina_head.py +++ /dev/null @@ -1,114 +0,0 @@ -import torch.nn as nn -from mmcv.cnn import ConvModule, bias_init_with_prob, normal_init - -from ..builder import HEADS -from .anchor_head import AnchorHead - - -@HEADS.register_module() -class RetinaHead(AnchorHead): - r"""An anchor-based head used in `RetinaNet - `_. - - The head contains two subnetworks. The first classifies anchor boxes and - the second regresses deltas for the anchors. 
- - Example: - >>> import torch - >>> self = RetinaHead(11, 7) - >>> x = torch.rand(1, 7, 32, 32) - >>> cls_score, bbox_pred = self.forward_single(x) - >>> # Each anchor predicts a score for each class except background - >>> cls_per_anchor = cls_score.shape[1] / self.num_anchors - >>> box_per_anchor = bbox_pred.shape[1] / self.num_anchors - >>> assert cls_per_anchor == (self.num_classes) - >>> assert box_per_anchor == 4 - """ - - def __init__(self, - num_classes, - in_channels, - stacked_convs=4, - conv_cfg=None, - norm_cfg=None, - anchor_generator=dict( - type='AnchorGenerator', - octave_base_scale=4, - scales_per_octave=3, - ratios=[0.5, 1.0, 2.0], - strides=[8, 16, 32, 64, 128]), - **kwargs): - self.stacked_convs = stacked_convs - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - super(RetinaHead, self).__init__( - num_classes, - in_channels, - anchor_generator=anchor_generator, - **kwargs) - - def _init_layers(self): - """Initialize layers of the head.""" - self.relu = nn.ReLU(inplace=True) - self.cls_convs = nn.ModuleList() - self.reg_convs = nn.ModuleList() - for i in range(self.stacked_convs): - chn = self.in_channels if i == 0 else self.feat_channels - self.cls_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - self.reg_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - self.retina_cls = nn.Conv2d( - self.feat_channels, - self.num_anchors * self.cls_out_channels, - 3, - padding=1) - self.retina_reg = nn.Conv2d( - self.feat_channels, self.num_anchors * 4, 3, padding=1) - - def init_weights(self): - """Initialize weights of the head.""" - for m in self.cls_convs: - normal_init(m.conv, std=0.01) - for m in self.reg_convs: - normal_init(m.conv, std=0.01) - bias_cls = bias_init_with_prob(0.01) - normal_init(self.retina_cls, std=0.01, bias=bias_cls) - normal_init(self.retina_reg, std=0.01) - - def forward_single(self, x): - """Forward feature of a single scale level. - - Args: - x (Tensor): Features of a single scale level. - - Returns: - tuple: - cls_score (Tensor): Cls scores for a single scale level - the channels number is num_anchors * num_classes. - bbox_pred (Tensor): Box energies / deltas for a single scale - level, the channels number is num_anchors * 4. - """ - cls_feat = x - reg_feat = x - for cls_conv in self.cls_convs: - cls_feat = cls_conv(cls_feat) - for reg_conv in self.reg_convs: - reg_feat = reg_conv(reg_feat) - cls_score = self.retina_cls(cls_feat) - bbox_pred = self.retina_reg(reg_feat) - return cls_score, bbox_pred diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/models/decode_heads/ann_head.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/models/decode_heads/ann_head.py deleted file mode 100644 index 30aaacc2cafc568d3de71d1477b4de0dc0fea9d3..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/models/decode_heads/ann_head.py +++ /dev/null @@ -1,245 +0,0 @@ -import torch -import torch.nn as nn -from annotator.uniformer.mmcv.cnn import ConvModule - -from ..builder import HEADS -from ..utils import SelfAttentionBlock as _SelfAttentionBlock -from .decode_head import BaseDecodeHead - - -class PPMConcat(nn.ModuleList): - """Pyramid Pooling Module that only concat the features of each layer. - - Args: - pool_scales (tuple[int]): Pooling scales used in Pooling Pyramid - Module. 
- """ - - def __init__(self, pool_scales=(1, 3, 6, 8)): - super(PPMConcat, self).__init__( - [nn.AdaptiveAvgPool2d(pool_scale) for pool_scale in pool_scales]) - - def forward(self, feats): - """Forward function.""" - ppm_outs = [] - for ppm in self: - ppm_out = ppm(feats) - ppm_outs.append(ppm_out.view(*feats.shape[:2], -1)) - concat_outs = torch.cat(ppm_outs, dim=2) - return concat_outs - - -class SelfAttentionBlock(_SelfAttentionBlock): - """Make a ANN used SelfAttentionBlock. - - Args: - low_in_channels (int): Input channels of lower level feature, - which is the key feature for self-attention. - high_in_channels (int): Input channels of higher level feature, - which is the query feature for self-attention. - channels (int): Output channels of key/query transform. - out_channels (int): Output channels. - share_key_query (bool): Whether share projection weight between key - and query projection. - query_scale (int): The scale of query feature map. - key_pool_scales (tuple[int]): Pooling scales used in Pooling Pyramid - Module of key feature. - conv_cfg (dict|None): Config of conv layers. - norm_cfg (dict|None): Config of norm layers. - act_cfg (dict|None): Config of activation layers. - """ - - def __init__(self, low_in_channels, high_in_channels, channels, - out_channels, share_key_query, query_scale, key_pool_scales, - conv_cfg, norm_cfg, act_cfg): - key_psp = PPMConcat(key_pool_scales) - if query_scale > 1: - query_downsample = nn.MaxPool2d(kernel_size=query_scale) - else: - query_downsample = None - super(SelfAttentionBlock, self).__init__( - key_in_channels=low_in_channels, - query_in_channels=high_in_channels, - channels=channels, - out_channels=out_channels, - share_key_query=share_key_query, - query_downsample=query_downsample, - key_downsample=key_psp, - key_query_num_convs=1, - key_query_norm=True, - value_out_num_convs=1, - value_out_norm=False, - matmul_norm=True, - with_out=True, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - - -class AFNB(nn.Module): - """Asymmetric Fusion Non-local Block(AFNB) - - Args: - low_in_channels (int): Input channels of lower level feature, - which is the key feature for self-attention. - high_in_channels (int): Input channels of higher level feature, - which is the query feature for self-attention. - channels (int): Output channels of key/query transform. - out_channels (int): Output channels. - and query projection. - query_scales (tuple[int]): The scales of query feature map. - Default: (1,) - key_pool_scales (tuple[int]): Pooling scales used in Pooling Pyramid - Module of key feature. - conv_cfg (dict|None): Config of conv layers. - norm_cfg (dict|None): Config of norm layers. - act_cfg (dict|None): Config of activation layers. 
- """ - - def __init__(self, low_in_channels, high_in_channels, channels, - out_channels, query_scales, key_pool_scales, conv_cfg, - norm_cfg, act_cfg): - super(AFNB, self).__init__() - self.stages = nn.ModuleList() - for query_scale in query_scales: - self.stages.append( - SelfAttentionBlock( - low_in_channels=low_in_channels, - high_in_channels=high_in_channels, - channels=channels, - out_channels=out_channels, - share_key_query=False, - query_scale=query_scale, - key_pool_scales=key_pool_scales, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - self.bottleneck = ConvModule( - out_channels + high_in_channels, - out_channels, - 1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=None) - - def forward(self, low_feats, high_feats): - """Forward function.""" - priors = [stage(high_feats, low_feats) for stage in self.stages] - context = torch.stack(priors, dim=0).sum(dim=0) - output = self.bottleneck(torch.cat([context, high_feats], 1)) - return output - - -class APNB(nn.Module): - """Asymmetric Pyramid Non-local Block (APNB) - - Args: - in_channels (int): Input channels of key/query feature, - which is the key feature for self-attention. - channels (int): Output channels of key/query transform. - out_channels (int): Output channels. - query_scales (tuple[int]): The scales of query feature map. - Default: (1,) - key_pool_scales (tuple[int]): Pooling scales used in Pooling Pyramid - Module of key feature. - conv_cfg (dict|None): Config of conv layers. - norm_cfg (dict|None): Config of norm layers. - act_cfg (dict|None): Config of activation layers. - """ - - def __init__(self, in_channels, channels, out_channels, query_scales, - key_pool_scales, conv_cfg, norm_cfg, act_cfg): - super(APNB, self).__init__() - self.stages = nn.ModuleList() - for query_scale in query_scales: - self.stages.append( - SelfAttentionBlock( - low_in_channels=in_channels, - high_in_channels=in_channels, - channels=channels, - out_channels=out_channels, - share_key_query=True, - query_scale=query_scale, - key_pool_scales=key_pool_scales, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - self.bottleneck = ConvModule( - 2 * in_channels, - out_channels, - 1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - - def forward(self, feats): - """Forward function.""" - priors = [stage(feats, feats) for stage in self.stages] - context = torch.stack(priors, dim=0).sum(dim=0) - output = self.bottleneck(torch.cat([context, feats], 1)) - return output - - -@HEADS.register_module() -class ANNHead(BaseDecodeHead): - """Asymmetric Non-local Neural Networks for Semantic Segmentation. - - This head is the implementation of `ANNNet - `_. - - Args: - project_channels (int): Projection channels for Nonlocal. - query_scales (tuple[int]): The scales of query feature map. - Default: (1,) - key_pool_scales (tuple[int]): The pooling scales of key feature map. - Default: (1, 3, 6, 8). 
- """ - - def __init__(self, - project_channels, - query_scales=(1, ), - key_pool_scales=(1, 3, 6, 8), - **kwargs): - super(ANNHead, self).__init__( - input_transform='multiple_select', **kwargs) - assert len(self.in_channels) == 2 - low_in_channels, high_in_channels = self.in_channels - self.project_channels = project_channels - self.fusion = AFNB( - low_in_channels=low_in_channels, - high_in_channels=high_in_channels, - out_channels=high_in_channels, - channels=project_channels, - query_scales=query_scales, - key_pool_scales=key_pool_scales, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - self.bottleneck = ConvModule( - high_in_channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - self.context = APNB( - in_channels=self.channels, - out_channels=self.channels, - channels=project_channels, - query_scales=query_scales, - key_pool_scales=key_pool_scales, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - def forward(self, inputs): - """Forward function.""" - low_feats, high_feats = self._transform_inputs(inputs) - output = self.fusion(low_feats, high_feats) - output = self.dropout(output) - output = self.bottleneck(output) - output = self.context(output) - output = self.cls_seg(output) - - return output diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/runner/builder.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/runner/builder.py deleted file mode 100644 index 77c96ba0b2f30ead9da23f293c5dc84dd3e4a74f..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/runner/builder.py +++ /dev/null @@ -1,24 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy - -from ..utils import Registry - -RUNNERS = Registry('runner') -RUNNER_BUILDERS = Registry('runner builder') - - -def build_runner_constructor(cfg): - return RUNNER_BUILDERS.build(cfg) - - -def build_runner(cfg, default_args=None): - runner_cfg = copy.deepcopy(cfg) - constructor_type = runner_cfg.pop('constructor', - 'DefaultRunnerConstructor') - runner_constructor = build_runner_constructor( - dict( - type=constructor_type, - runner_cfg=runner_cfg, - default_args=default_args)) - runner = runner_constructor() - return runner diff --git a/spaces/Salesforce/EDICT/my_diffusers/pipelines/stable_diffusion/__init__.py b/spaces/Salesforce/EDICT/my_diffusers/pipelines/stable_diffusion/__init__.py deleted file mode 100644 index 5ffda93f172142c03298972177b9a74b85867be6..0000000000000000000000000000000000000000 --- a/spaces/Salesforce/EDICT/my_diffusers/pipelines/stable_diffusion/__init__.py +++ /dev/null @@ -1,37 +0,0 @@ -from dataclasses import dataclass -from typing import List, Union - -import numpy as np - -import PIL -from PIL import Image - -from ...utils import BaseOutput, is_onnx_available, is_transformers_available - - -@dataclass -class StableDiffusionPipelineOutput(BaseOutput): - """ - Output class for Stable Diffusion pipelines. - - Args: - images (`List[PIL.Image.Image]` or `np.ndarray`) - List of denoised PIL images of length `batch_size` or numpy array of shape `(batch_size, height, width, - num_channels)`. PIL images or numpy array present the denoised images of the diffusion pipeline. - nsfw_content_detected (`List[bool]`) - List of flags denoting whether the corresponding generated image likely represents "not-safe-for-work" - (nsfw) content. 
- """ - - images: Union[List[PIL.Image.Image], np.ndarray] - nsfw_content_detected: List[bool] - - -if is_transformers_available(): - from .pipeline_stable_diffusion import StableDiffusionPipeline - from .pipeline_stable_diffusion_img2img import StableDiffusionImg2ImgPipeline - from .pipeline_stable_diffusion_inpaint import StableDiffusionInpaintPipeline - from .safety_checker import StableDiffusionSafetyChecker - -if is_transformers_available() and is_onnx_available(): - from .pipeline_stable_diffusion_onnx import StableDiffusionOnnxPipeline diff --git a/spaces/ServerX/PorcoDiaz/julius/lowpass.py b/spaces/ServerX/PorcoDiaz/julius/lowpass.py deleted file mode 100644 index 0eb46e382b20bfc2d93482f9f027986b863de6f0..0000000000000000000000000000000000000000 --- a/spaces/ServerX/PorcoDiaz/julius/lowpass.py +++ /dev/null @@ -1,181 +0,0 @@ -# File under the MIT license, see https://github.com/adefossez/julius/LICENSE for details. -# Author: adefossez, 2020 -""" -FIR windowed sinc lowpass filters. -""" - -import math -from typing import Sequence, Optional - -import torch -from torch.nn import functional as F - -from .core import sinc -from .fftconv import fft_conv1d -from .utils import simple_repr - - -class LowPassFilters(torch.nn.Module): - """ - Bank of low pass filters. Note that a high pass or band pass filter can easily - be implemented by substracting a same signal processed with low pass filters with different - frequencies (see `julius.bands.SplitBands` for instance). - This uses a windowed sinc filter, very similar to the one used in - `julius.resample`. However, because we do not change the sample rate here, - this filter can be much more efficiently implemented using the FFT convolution from - `julius.fftconv`. - - Args: - cutoffs (list[float]): list of cutoff frequencies, in [0, 0.5] expressed as `f/f_s` where - f_s is the samplerate and `f` is the cutoff frequency. - The upper limit is 0.5, because a signal sampled at `f_s` contains only - frequencies under `f_s / 2`. - stride (int): how much to decimate the output. Keep in mind that decimation - of the output is only acceptable if the cutoff frequency is under `1/ (2 * stride)` - of the original sampling rate. - pad (bool): if True, appropriately pad the input with zero over the edge. If `stride=1`, - the output will have the same length as the input. - zeros (float): Number of zero crossings to keep. - Controls the receptive field of the Finite Impulse Response filter. - For lowpass filters with low cutoff frequency, e.g. 40Hz at 44.1kHz, - it is a bad idea to set this to a high value. - This is likely appropriate for most use. Lower values - will result in a faster filter, but with a slower attenuation around the - cutoff frequency. - fft (bool or None): if True, uses `julius.fftconv` rather than PyTorch convolutions. - If False, uses PyTorch convolutions. If None, either one will be chosen automatically - depending on the effective filter size. - - - ..warning:: - All the filters will use the same filter size, aligned on the lowest - frequency provided. If you combine a lot of filters with very diverse frequencies, it might - be more efficient to split them over multiple modules with similar frequencies. - - ..note:: - A lowpass with a cutoff frequency of 0 is defined as the null function - by convention here. This allows for a highpass with a cutoff of 0 to - be equal to identity, as defined in `julius.filters.HighPassFilters`. 
- - Shape: - - - Input: `[*, T]` - - Output: `[F, *, T']`, with `T'=T` if `pad` is True and `stride` is 1, and - `F` is the numer of cutoff frequencies. - - >>> lowpass = LowPassFilters([1/4]) - >>> x = torch.randn(4, 12, 21, 1024) - >>> list(lowpass(x).shape) - [1, 4, 12, 21, 1024] - """ - - def __init__(self, cutoffs: Sequence[float], stride: int = 1, pad: bool = True, - zeros: float = 8, fft: Optional[bool] = None): - super().__init__() - self.cutoffs = list(cutoffs) - if min(self.cutoffs) < 0: - raise ValueError("Minimum cutoff must be larger than zero.") - if max(self.cutoffs) > 0.5: - raise ValueError("A cutoff above 0.5 does not make sense.") - self.stride = stride - self.pad = pad - self.zeros = zeros - self.half_size = int(zeros / min([c for c in self.cutoffs if c > 0]) / 2) - if fft is None: - fft = self.half_size > 32 - self.fft = fft - window = torch.hann_window(2 * self.half_size + 1, periodic=False) - time = torch.arange(-self.half_size, self.half_size + 1) - filters = [] - for cutoff in cutoffs: - if cutoff == 0: - filter_ = torch.zeros_like(time) - else: - filter_ = 2 * cutoff * window * sinc(2 * cutoff * math.pi * time) - # Normalize filter to have sum = 1, otherwise we will have a small leakage - # of the constant component in the input signal. - filter_ /= filter_.sum() - filters.append(filter_) - self.register_buffer("filters", torch.stack(filters)[:, None]) - - def forward(self, input): - shape = list(input.shape) - input = input.view(-1, 1, shape[-1]) - if self.pad: - input = F.pad(input, (self.half_size, self.half_size), mode='replicate') - if self.fft: - out = fft_conv1d(input, self.filters, stride=self.stride) - else: - out = F.conv1d(input, self.filters, stride=self.stride) - shape.insert(0, len(self.cutoffs)) - shape[-1] = out.shape[-1] - return out.permute(1, 0, 2).reshape(shape) - - def __repr__(self): - return simple_repr(self) - - -class LowPassFilter(torch.nn.Module): - """ - Same as `LowPassFilters` but applies a single low pass filter. - - Shape: - - - Input: `[*, T]` - - Output: `[*, T']`, with `T'=T` if `pad` is True and `stride` is 1. - - >>> lowpass = LowPassFilter(1/4, stride=2) - >>> x = torch.randn(4, 124) - >>> list(lowpass(x).shape) - [4, 62] - """ - - def __init__(self, cutoff: float, stride: int = 1, pad: bool = True, - zeros: float = 8, fft: Optional[bool] = None): - super().__init__() - self._lowpasses = LowPassFilters([cutoff], stride, pad, zeros, fft) - - @property - def cutoff(self): - return self._lowpasses.cutoffs[0] - - @property - def stride(self): - return self._lowpasses.stride - - @property - def pad(self): - return self._lowpasses.pad - - @property - def zeros(self): - return self._lowpasses.zeros - - @property - def fft(self): - return self._lowpasses.fft - - def forward(self, input): - return self._lowpasses(input)[0] - - def __repr__(self): - return simple_repr(self) - - -def lowpass_filters(input: torch.Tensor, cutoffs: Sequence[float], - stride: int = 1, pad: bool = True, - zeros: float = 8, fft: Optional[bool] = None): - """ - Functional version of `LowPassFilters`, refer to this class for more information. - """ - return LowPassFilters(cutoffs, stride, pad, zeros, fft).to(input)(input) - - -def lowpass_filter(input: torch.Tensor, cutoff: float, - stride: int = 1, pad: bool = True, - zeros: float = 8, fft: Optional[bool] = None): - """ - Same as `lowpass_filters` but with a single cutoff frequency. - Output will not have a dimension inserted in the front. 
- """ - return lowpass_filters(input, [cutoff], stride, pad, zeros, fft)[0] diff --git a/spaces/ShawnLJW/image2coloringbook/app.py b/spaces/ShawnLJW/image2coloringbook/app.py deleted file mode 100644 index 04cd7209694cd438379eb83cf0ae1e32a5c09b8b..0000000000000000000000000000000000000000 --- a/spaces/ShawnLJW/image2coloringbook/app.py +++ /dev/null @@ -1,116 +0,0 @@ -import gradio as gr -import numpy as np -import cv2 -from tqdm import trange -from sklearn.cluster import KMeans - -class KMeansClustering(): - def __init__(self, n_clusters=8, max_iter=300): - self.n_clusters = n_clusters - self.max_iter = max_iter - - def fit(self, X): - self.inertia_ = float('inf') - - # random init of clusters - idx = np.random.choice(range(X.shape[0]), self.n_clusters, replace=False) - self.cluster_centers_ = X[idx] - - print(f'Training for {self.max_iter} epochs') - epochs = trange(self.max_iter) - for i in epochs: - distances = X[:, np.newaxis, :] - self.cluster_centers_[np.newaxis, :, :] - distances = np.linalg.norm(distances, axis=2) - self.labels_ = np.argmin(distances, axis=1) - new_inertia = np.sum(np.min(distances, axis=1) ** 2) - - epochs.set_description(f'Epoch-{i+1}, Inertia-{new_inertia}') - - if new_inertia < self.inertia_: - self.inertia_ = new_inertia - else: - epochs.close() - print('Early Stopping. Inertia has converged.') - break - - self.cluster_centers_ = np.empty_like(self.cluster_centers_) - for cluster in range(self.n_clusters): - in_cluster = (self.labels_ == cluster) - if np.any(in_cluster): - self.cluster_centers_[cluster] = np.mean(X[in_cluster], axis=0) - else: - # cluster is empty, pick random point as next centroid - self.cluster_centers_[cluster] = X[np.random.randint(0, X.shape[0])] - - return self - - def predict(self, X): - distances = X[:, np.newaxis, :] - self.cluster_centers_[np.newaxis, :, :] - distances = np.linalg.norm(distances, axis=2) - labels = np.argmin(distances, axis=1) - return labels - - def fit_predict(self, X): - return self.fit(X).labels_ - -def segment_image(image, model: KMeansClustering): - w, b, c = image.shape - image = image.reshape(w*b, c) / 255 - - idx = np.random.choice(range(image.shape[0]), image.shape[0]//5, replace=False) - image_subset = image[idx] - model.fit(image_subset) # fit model on 20% sample of image - - labels = model.predict(image) - return labels.reshape(w,b), model - -def generate_outputs(image, implementation, num_colours): - if implementation == 'custom': - model = KMeansClustering(n_clusters=num_colours, max_iter=10) - elif implementation == 'sk-learn': - model = KMeans(n_clusters=num_colours, n_init='auto') - label_map, model = segment_image(image, model) - - clustered_image = model.cluster_centers_[label_map] - clustered_image = (clustered_image * 255).astype('uint8') - clustered_image = cv2.medianBlur(clustered_image,5) - edges = 255 - cv2.Canny(clustered_image, 0, 1) - edges = cv2.cvtColor(edges, cv2.COLOR_GRAY2RGB) - - return [(edges, 'Coloring Page'), (clustered_image, 'Filled Picture')] - -with gr.Blocks() as demo: - gr.Markdown( - """ - # image2coloringbook - - (image2coloringbook)[https://github.com/ShawnLJW/image2coloringbook] is a simple tool that converts an image into a coloring book. 
- """) - with gr.Row(): - with gr.Column(): - image = gr.Image() - submit = gr.Button('Generate') - with gr.Column(): - num_colours = gr.Slider( - minimum=1, - maximum=40, - value=24, - step=1, - label='Number of colours' - ) - implementation = gr.Dropdown( - choices=['sk-learn','custom'], - value='sk-learn', - label='Implementation' - ) - with gr.Row(): - output = gr.Gallery(preview=True) - - submit.click( - generate_outputs, - inputs=[image, implementation, num_colours], - outputs=[output] - ) - -if __name__ == '__main__': - demo.launch() \ No newline at end of file diff --git a/spaces/Silentlin/DiffSinger/tasks/run.py b/spaces/Silentlin/DiffSinger/tasks/run.py deleted file mode 100644 index 82c7559cec873eebf7c2c0ab6554895e21de7e7c..0000000000000000000000000000000000000000 --- a/spaces/Silentlin/DiffSinger/tasks/run.py +++ /dev/null @@ -1,15 +0,0 @@ -import importlib -from utils.hparams import set_hparams, hparams - - -def run_task(): - assert hparams['task_cls'] != '' - pkg = ".".join(hparams["task_cls"].split(".")[:-1]) - cls_name = hparams["task_cls"].split(".")[-1] - task_cls = getattr(importlib.import_module(pkg), cls_name) - task_cls.start() - - -if __name__ == '__main__': - set_hparams() - run_task() diff --git a/spaces/SkyYeXianer/vits-uma-genshin-honkai/models.py b/spaces/SkyYeXianer/vits-uma-genshin-honkai/models.py deleted file mode 100644 index 52e15d1b9775038fd6e82b2efe6f95f51c66802d..0000000000000000000000000000000000000000 --- a/spaces/SkyYeXianer/vits-uma-genshin-honkai/models.py +++ /dev/null @@ -1,534 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import modules -import attentions -import monotonic_align - -from torch.nn import Conv1d, ConvTranspose1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from commons import init_weights, get_padding - - -class StochasticDurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0): - super().__init__() - filter_channels = in_channels # it needs to be removed from future version. 
- self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.log_flow = modules.Log() - self.flows = nn.ModuleList() - self.flows.append(modules.ElementwiseAffine(2)) - for i in range(n_flows): - self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.flows.append(modules.Flip()) - - self.post_pre = nn.Conv1d(1, filter_channels, 1) - self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - self.post_flows = nn.ModuleList() - self.post_flows.append(modules.ElementwiseAffine(2)) - for i in range(4): - self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.post_flows.append(modules.Flip()) - - self.pre = nn.Conv1d(in_channels, filter_channels, 1) - self.proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, filter_channels, 1) - - def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0): - x = torch.detach(x) - x = self.pre(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.convs(x, x_mask) - x = self.proj(x) * x_mask - - if not reverse: - flows = self.flows - assert w is not None - - logdet_tot_q = 0 - h_w = self.post_pre(w) - h_w = self.post_convs(h_w, x_mask) - h_w = self.post_proj(h_w) * x_mask - e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask - z_q = e_q - for flow in self.post_flows: - z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w)) - logdet_tot_q += logdet_q - z_u, z1 = torch.split(z_q, [1, 1], 1) - u = torch.sigmoid(z_u) * x_mask - z0 = (w - u) * x_mask - logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1,2]) - logq = torch.sum(-0.5 * (math.log(2*math.pi) + (e_q**2)) * x_mask, [1,2]) - logdet_tot_q - - logdet_tot = 0 - z0, logdet = self.log_flow(z0, x_mask) - logdet_tot += logdet - z = torch.cat([z0, z1], 1) - for flow in flows: - z, logdet = flow(z, x_mask, g=x, reverse=reverse) - logdet_tot = logdet_tot + logdet - nll = torch.sum(0.5 * (math.log(2*math.pi) + (z**2)) * x_mask, [1,2]) - logdet_tot - return nll + logq # [b] - else: - flows = list(reversed(self.flows)) - flows = flows[:-2] + [flows[-1]] # remove a useless vflow - z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale - for flow in flows: - z = flow(z, x_mask, g=x, reverse=reverse) - z0, z1 = torch.split(z, [1, 1], 1) - logw = z0 - return logw - - -class DurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0): - super().__init__() - - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - - self.drop = nn.Dropout(p_dropout) - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_1 = modules.LayerNorm(filter_channels) - self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_2 = modules.LayerNorm(filter_channels) - self.proj = nn.Conv1d(filter_channels, 1, 1) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, 
in_channels, 1) - - def forward(self, x, x_mask, g=None): - x = torch.detach(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.conv_1(x * x_mask) - x = torch.relu(x) - x = self.norm_1(x) - x = self.drop(x) - x = self.conv_2(x * x_mask) - x = torch.relu(x) - x = self.norm_2(x) - x = self.drop(x) - x = self.proj(x * x_mask) - return x * x_mask - - -class TextEncoder(nn.Module): - def __init__(self, - n_vocab, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout): - super().__init__() - self.n_vocab = n_vocab - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - - self.emb = nn.Embedding(n_vocab, hidden_channels) - nn.init.normal_(self.emb.weight, 0.0, hidden_channels**-0.5) - - self.encoder = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.proj= nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths): - x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h] - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return x, m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class PosteriorEncoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - -class Generator(torch.nn.Module): - def __init__(self, initial_channel, 
resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=0): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3) - resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append(weight_norm( - ConvTranspose1d(upsample_initial_channel//(2**i), upsample_initial_channel//(2**(i+1)), - k, u, padding=(k-u)//2))) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel//(2**(i+1)) - for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i*self.num_kernels+j](x) - else: - xs += self.resblocks[i*self.num_kernels+j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - 
norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2,3,5,7,11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - n_vocab, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - n_speakers=0, - gin_channels=0, - use_sdp=True, - **kwargs): - - super().__init__() - self.n_vocab = n_vocab - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.n_speakers = n_speakers - self.gin_channels = gin_channels - - self.use_sdp = use_sdp - - self.enc_p = TextEncoder(n_vocab, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels) - self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels) - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels) - - if use_sdp: - self.dp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels) - else: - self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels) - - if n_speakers > 1: - self.emb_g = nn.Embedding(n_speakers, gin_channels) - - def forward(self, x, x_lengths, y, y_lengths, sid=None): - - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - - with torch.no_grad(): - # negative cross-entropy - s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t] - neg_cent1 = 
torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s] - neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2), s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s] - neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4 - - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach() - - w = attn.sum(2) - if self.use_sdp: - l_length = self.dp(x, x_mask, w, g=g) - l_length = l_length / torch.sum(x_mask) - else: - logw_ = torch.log(w + 1e-6) * x_mask - logw = self.dp(x, x_mask, g=g) - l_length = torch.sum((logw - logw_)**2, [1,2]) / torch.sum(x_mask) # for averaging - - # expand prior - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) - - z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size) - o = self.dec(z_slice, g=g) - return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, x, x_lengths, sid=None, noise_scale=1, length_scale=1, noise_scale_w=1., max_len=None): - device = next(self.parameters()).device # 获取模型所在的设备 - x, m_p, logs_p, x_mask = self.enc_p(x.to(device), x_lengths.to(device)) - if self.n_speakers > 0: - g = self.emb_g(sid.to(device)).unsqueeze(-1) # [b, h, 1] - else: - g = None - - if self.use_sdp: - logw = self.dp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w) - else: - logw = self.dp(x, x_mask, g=g) - w = torch.exp(logw) * x_mask * length_scale - w_ceil = torch.ceil(w) - y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long() - y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype) - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = commons.generate_path(w_ceil, attn_mask) - - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - - z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale - z = self.flow(z_p, y_mask, g=g, reverse=True) - o = self.dec((z * y_mask)[:,:,:max_len], g=g) - return o, attn, y_mask, (z, z_p, m_p, logs_p) - - def voice_conversion(self, y, y_lengths, sid_src, sid_tgt): - assert self.n_speakers > 0, "n_speakers have to be larger than 0." 
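        # [Annotation added for readability; not part of the original file.]
        # The statements below perform voice conversion between two known
        # speakers: y is encoded with the *source* speaker embedding, pushed
        # through the normalizing flow into the speaker-independent prior
        # space, pulled back out of the flow with the *target* speaker
        # embedding, and finally decoded to audio. A hedged usage sketch
        # (variable names, shapes and speaker ids are assumptions):
        #     sid_src, sid_tgt = torch.LongTensor([0]), torch.LongTensor([3])
        #     audio, y_mask, _ = net_g.voice_conversion(spec, spec_lengths, sid_src, sid_tgt)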
- g_src = self.emb_g(sid_src).unsqueeze(-1) - g_tgt = self.emb_g(sid_tgt).unsqueeze(-1) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g_src) - z_p = self.flow(z, y_mask, g=g_src) - z_hat = self.flow(z_p, y_mask, g=g_tgt, reverse=True) - o_hat = self.dec(z_hat * y_mask, g=g_tgt) - return o_hat, y_mask, (z, z_p, z_hat) - diff --git a/spaces/SrikanthPhalgun/Cifar10_ERAV1_GradCam_Demo/models/lit_model.py b/spaces/SrikanthPhalgun/Cifar10_ERAV1_GradCam_Demo/models/lit_model.py deleted file mode 100644 index c0fbb90daa94b3c0694649d420f76681ad151044..0000000000000000000000000000000000000000 --- a/spaces/SrikanthPhalgun/Cifar10_ERAV1_GradCam_Demo/models/lit_model.py +++ /dev/null @@ -1,76 +0,0 @@ - -import torch -import torch.nn as nn -import torch.optim as optim -import torch.nn.functional as F -from torch.optim.lr_scheduler import OneCycleLR -import pytorch_lightning as pl -from torchmetrics import MaxMetric, MeanMetric -from torchmetrics.classification.accuracy import Accuracy - -class LitModule(pl.LightningModule): - - - def __init__(self,torch_model, num_classes,learning_rate): - super(LitModule, self).__init__() - self.save_hyperparameters() - self.lr = learning_rate - #self.epochs = epochs - self.criterion = torch.nn.CrossEntropyLoss() - self.train_acc = Accuracy(task="multiclass", num_classes=num_classes) - self.val_acc = Accuracy(task="multiclass", num_classes=num_classes) - - # for averaging loss across batches - self.train_loss = MeanMetric() - self.val_loss = MeanMetric() - self.net = torch_model - - def forward(self, x): - return self.net(x) - - - def configure_optimizers(self): - optimizer = optim.Adam(self.parameters(), lr=self.lr, weight_decay=1e-2) - scheduler = OneCycleLR(optimizer,max_lr= self.lr, - total_steps= self.trainer.max_epochs * self.trainer.estimated_stepping_batches, - pct_start=5/self.trainer.max_epochs, - div_factor=10, - final_div_factor=100, - anneal_strategy='linear') - lr_scheduler = {"scheduler": scheduler, "interval": "step"} - return [optimizer], [scheduler] - - def _common_step(self,batch): - images, labels = batch - output = self.forward(images) - loss = self.criterion(output,labels) - preds = torch.argmax(output,dim=1) - - return loss,preds,labels - - - - - - - def training_step(self, batch, batch_idx): - loss,preds,targets = self._common_step(batch) - self.train_loss(loss) - self.train_acc(preds,targets) - - self.log("train_acc", self.train_acc, prog_bar=True, on_epoch=True) - self.log("train_loss", self.train_loss, prog_bar=True) - return loss - - - def validation_step(self, batch, batch_idx): - loss,preds,targets = self._common_step(batch) - self.val_loss(loss) - self.val_acc(preds,targets) - - self.log("val_acc", self.val_acc, prog_bar=True, on_epoch=True) - self.log("val_loss", self.val_loss, prog_bar=True) - return loss - - def test_step(self, batch, batch_idx): - self.validation_step(batch, batch_idx) \ No newline at end of file diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/tests/test_hooks.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/tests/test_hooks.py deleted file mode 100644 index 6e0b1c152fa8bc1d5e02637f0e4197e6c6113774..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/tests/test_hooks.py +++ /dev/null @@ -1,76 +0,0 @@ -# -*- coding: utf-8 -*- -"""Tests for CommandChainDispatcher.""" - - -#----------------------------------------------------------------------------- -# Imports 
-#----------------------------------------------------------------------------- - -import pytest -from IPython.core.error import TryNext -from IPython.core.hooks import CommandChainDispatcher - -#----------------------------------------------------------------------------- -# Local utilities -#----------------------------------------------------------------------------- - -# Define two classes, one which succeeds and one which raises TryNext. Each -# sets the attribute `called` to True when it is called. -class Okay(object): - def __init__(self, message): - self.message = message - self.called = False - def __call__(self): - self.called = True - return self.message - -class Fail(object): - def __init__(self, message): - self.message = message - self.called = False - def __call__(self): - self.called = True - raise TryNext(self.message) - -#----------------------------------------------------------------------------- -# Test functions -#----------------------------------------------------------------------------- - -def test_command_chain_dispatcher_ff(): - """Test two failing hooks""" - fail1 = Fail("fail1") - fail2 = Fail("fail2") - dp = CommandChainDispatcher([(0, fail1), (10, fail2)]) - - with pytest.raises(TryNext) as e: - dp() - assert str(e.value) == "fail2" - - assert fail1.called is True - assert fail2.called is True - -def test_command_chain_dispatcher_fofo(): - """Test a mixture of failing and succeeding hooks.""" - fail1 = Fail("fail1") - fail2 = Fail("fail2") - okay1 = Okay("okay1") - okay2 = Okay("okay2") - - dp = CommandChainDispatcher([(0, fail1), - # (5, okay1), # add this later - (10, fail2), - (15, okay2)]) - dp.add(okay1, 5) - - assert dp() == "okay1" - - assert fail1.called is True - assert okay1.called is True - assert fail2.called is False - assert okay2.called is False - -def test_command_chain_dispatcher_eq_priority(): - okay1 = Okay(u'okay1') - okay2 = Okay(u'okay2') - dp = CommandChainDispatcher([(1, okay1)]) - dp.add(okay2, 1) diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/chromadb/server/__init__.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/chromadb/server/__init__.py deleted file mode 100644 index eccd02cb0681d6e0031874754a1bbe110b6b12a2..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/chromadb/server/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -from abc import ABC, abstractmethod - -from chromadb.config import Settings - - -class Server(ABC): - @abstractmethod - def __init__(self, settings: Settings): - pass diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_frame_eval/vendored/bytecode/tests/test_instr.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_frame_eval/vendored/bytecode/tests/test_instr.py deleted file mode 100644 index ca4a66a73a5250c09355c1fc02d8e20112e41661..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_frame_eval/vendored/bytecode/tests/test_instr.py +++ /dev/null @@ -1,362 +0,0 @@ - -import pytest -from tests_python.debugger_unittest import IS_PY36_OR_GREATER, IS_CPYTHON -from tests_python.debug_constants import TEST_CYTHON -pytestmark = pytest.mark.skipif(not IS_PY36_OR_GREATER or not IS_CPYTHON or not TEST_CYTHON, reason='Requires CPython >= 3.6') -#!/usr/bin/env python3 -import opcode -import unittest -from _pydevd_frame_eval.vendored.bytecode import ( - UNSET, - Label, 
- Instr, - CellVar, - FreeVar, - BasicBlock, - SetLineno, - Compare, -) -from _pydevd_frame_eval.vendored.bytecode.tests import TestCase - - -class SetLinenoTests(TestCase): - def test_lineno(self): - lineno = SetLineno(1) - self.assertEqual(lineno.lineno, 1) - - def test_equality(self): - lineno = SetLineno(1) - self.assertNotEqual(lineno, 1) - self.assertEqual(lineno, SetLineno(1)) - self.assertNotEqual(lineno, SetLineno(2)) - - -class VariableTests(TestCase): - def test_str(self): - for cls in (CellVar, FreeVar): - var = cls("a") - self.assertEqual(str(var), "a") - - def test_repr(self): - for cls in (CellVar, FreeVar): - var = cls("_a_x_a_") - r = repr(var) - self.assertIn("_a_x_a_", r) - self.assertIn(cls.__name__, r) - - def test_eq(self): - f1 = FreeVar("a") - f2 = FreeVar("b") - c1 = CellVar("a") - c2 = CellVar("b") - - for v1, v2, eq in ( - (f1, f1, True), - (f1, f2, False), - (f1, c1, False), - (c1, c1, True), - (c1, c2, False), - ): - if eq: - self.assertEqual(v1, v2) - else: - self.assertNotEqual(v1, v2) - - -class InstrTests(TestCase): - def test_constructor(self): - # invalid line number - with self.assertRaises(TypeError): - Instr("NOP", lineno="x") - with self.assertRaises(ValueError): - Instr("NOP", lineno=0) - - # invalid name - with self.assertRaises(TypeError): - Instr(1) - with self.assertRaises(ValueError): - Instr("xxx") - - def test_repr(self): - - # No arg - r = repr(Instr("NOP", lineno=10)) - self.assertIn("NOP", r) - self.assertIn("10", r) - self.assertIn("lineno", r) - - # Arg - r = repr(Instr("LOAD_FAST", "_x_", lineno=10)) - self.assertIn("LOAD_FAST", r) - self.assertIn("lineno", r) - self.assertIn("10", r) - self.assertIn("arg", r) - self.assertIn("_x_", r) - - def test_invalid_arg(self): - label = Label() - block = BasicBlock() - - # EXTENDED_ARG - self.assertRaises(ValueError, Instr, "EXTENDED_ARG", 0) - - # has_jump() - self.assertRaises(TypeError, Instr, "JUMP_ABSOLUTE", 1) - self.assertRaises(TypeError, Instr, "JUMP_ABSOLUTE", 1.0) - Instr("JUMP_ABSOLUTE", label) - Instr("JUMP_ABSOLUTE", block) - - # hasfree - self.assertRaises(TypeError, Instr, "LOAD_DEREF", "x") - Instr("LOAD_DEREF", CellVar("x")) - Instr("LOAD_DEREF", FreeVar("x")) - - # haslocal - self.assertRaises(TypeError, Instr, "LOAD_FAST", 1) - Instr("LOAD_FAST", "x") - - # hasname - self.assertRaises(TypeError, Instr, "LOAD_NAME", 1) - Instr("LOAD_NAME", "x") - - # hasconst - self.assertRaises(ValueError, Instr, "LOAD_CONST") # UNSET - self.assertRaises(ValueError, Instr, "LOAD_CONST", label) - self.assertRaises(ValueError, Instr, "LOAD_CONST", block) - Instr("LOAD_CONST", 1.0) - Instr("LOAD_CONST", object()) - - # hascompare - self.assertRaises(TypeError, Instr, "COMPARE_OP", 1) - Instr("COMPARE_OP", Compare.EQ) - - # HAVE_ARGUMENT - self.assertRaises(ValueError, Instr, "CALL_FUNCTION", -1) - self.assertRaises(TypeError, Instr, "CALL_FUNCTION", 3.0) - Instr("CALL_FUNCTION", 3) - - # test maximum argument - self.assertRaises(ValueError, Instr, "CALL_FUNCTION", 2147483647 + 1) - instr = Instr("CALL_FUNCTION", 2147483647) - self.assertEqual(instr.arg, 2147483647) - - # not HAVE_ARGUMENT - self.assertRaises(ValueError, Instr, "NOP", 0) - Instr("NOP") - - def test_require_arg(self): - i = Instr("CALL_FUNCTION", 3) - self.assertTrue(i.require_arg()) - i = Instr("NOP") - self.assertFalse(i.require_arg()) - - def test_attr(self): - instr = Instr("LOAD_CONST", 3, lineno=5) - self.assertEqual(instr.name, "LOAD_CONST") - self.assertEqual(instr.opcode, 100) - self.assertEqual(instr.arg, 3) - 
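        # [Annotation; not in the original test.] 100 is opcode.opmap["LOAD_CONST"]
        # in the CPython versions this vendored suite targets, so the assertEqual
        # on instr.opcode above confirms that Instr resolved the numeric opcode
        # from the name it was constructed with.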
self.assertEqual(instr.lineno, 5) - - # invalid values/types - self.assertRaises(ValueError, setattr, instr, "lineno", 0) - self.assertRaises(TypeError, setattr, instr, "lineno", 1.0) - self.assertRaises(TypeError, setattr, instr, "name", 5) - self.assertRaises(TypeError, setattr, instr, "opcode", 1.0) - self.assertRaises(ValueError, setattr, instr, "opcode", -1) - self.assertRaises(ValueError, setattr, instr, "opcode", 255) - - # arg can take any attribute but cannot be deleted - instr.arg = -8 - instr.arg = object() - self.assertRaises(AttributeError, delattr, instr, "arg") - - # no argument - instr = Instr("ROT_TWO") - self.assertIs(instr.arg, UNSET) - - def test_modify_op(self): - instr = Instr("LOAD_NAME", "x") - load_fast = opcode.opmap["LOAD_FAST"] - instr.opcode = load_fast - self.assertEqual(instr.name, "LOAD_FAST") - self.assertEqual(instr.opcode, load_fast) - - def test_extended_arg(self): - instr = Instr("LOAD_CONST", 0x1234ABCD) - self.assertEqual(instr.arg, 0x1234ABCD) - - def test_slots(self): - instr = Instr("NOP") - with self.assertRaises(AttributeError): - instr.myattr = 1 - - def test_compare(self): - instr = Instr("LOAD_CONST", 3, lineno=7) - self.assertEqual(instr, Instr("LOAD_CONST", 3, lineno=7)) - self.assertNotEqual(instr, 1) - - # different lineno - self.assertNotEqual(instr, Instr("LOAD_CONST", 3)) - self.assertNotEqual(instr, Instr("LOAD_CONST", 3, lineno=6)) - # different op - self.assertNotEqual(instr, Instr("LOAD_FAST", "x", lineno=7)) - # different arg - self.assertNotEqual(instr, Instr("LOAD_CONST", 4, lineno=7)) - - def test_has_jump(self): - label = Label() - jump = Instr("JUMP_ABSOLUTE", label) - self.assertTrue(jump.has_jump()) - - instr = Instr("LOAD_FAST", "x") - self.assertFalse(instr.has_jump()) - - def test_is_cond_jump(self): - label = Label() - jump = Instr("POP_JUMP_IF_TRUE", label) - self.assertTrue(jump.is_cond_jump()) - - instr = Instr("LOAD_FAST", "x") - self.assertFalse(instr.is_cond_jump()) - - def test_is_uncond_jump(self): - label = Label() - jump = Instr("JUMP_ABSOLUTE", label) - self.assertTrue(jump.is_uncond_jump()) - - instr = Instr("POP_JUMP_IF_TRUE", label) - self.assertFalse(instr.is_uncond_jump()) - - def test_const_key_not_equal(self): - def check(value): - self.assertEqual(Instr("LOAD_CONST", value), Instr("LOAD_CONST", value)) - - def func(): - pass - - check(None) - check(0) - check(0.0) - check(b"bytes") - check("text") - check(Ellipsis) - check((1, 2, 3)) - check(frozenset({1, 2, 3})) - check(func.__code__) - check(object()) - - def test_const_key_equal(self): - neg_zero = -0.0 - pos_zero = +0.0 - - # int and float: 0 == 0.0 - self.assertNotEqual(Instr("LOAD_CONST", 0), Instr("LOAD_CONST", 0.0)) - - # float: -0.0 == +0.0 - self.assertNotEqual( - Instr("LOAD_CONST", neg_zero), Instr("LOAD_CONST", pos_zero) - ) - - # complex - self.assertNotEqual( - Instr("LOAD_CONST", complex(neg_zero, 1.0)), - Instr("LOAD_CONST", complex(pos_zero, 1.0)), - ) - self.assertNotEqual( - Instr("LOAD_CONST", complex(1.0, neg_zero)), - Instr("LOAD_CONST", complex(1.0, pos_zero)), - ) - - # tuple - self.assertNotEqual(Instr("LOAD_CONST", (0,)), Instr("LOAD_CONST", (0.0,))) - nested_tuple1 = (0,) - nested_tuple1 = (nested_tuple1,) - nested_tuple2 = (0.0,) - nested_tuple2 = (nested_tuple2,) - self.assertNotEqual( - Instr("LOAD_CONST", nested_tuple1), Instr("LOAD_CONST", nested_tuple2) - ) - - # frozenset - self.assertNotEqual( - Instr("LOAD_CONST", frozenset({0})), Instr("LOAD_CONST", frozenset({0.0})) - ) - - def test_stack_effects(self): - # Verify 
all opcodes are handled and that "jump=None" really returns - # the max of the other cases. - from _pydevd_frame_eval.vendored.bytecode.concrete import ConcreteInstr - - def check(instr): - jump = instr.stack_effect(jump=True) - no_jump = instr.stack_effect(jump=False) - max_effect = instr.stack_effect(jump=None) - self.assertEqual(instr.stack_effect(), max_effect) - self.assertEqual(max_effect, max(jump, no_jump)) - - if not instr.has_jump(): - self.assertEqual(jump, no_jump) - - for name, op in opcode.opmap.items(): - with self.subTest(name): - # Use ConcreteInstr instead of Instr because it doesn't care - # what kind of argument it is constructed with. - if op < opcode.HAVE_ARGUMENT: - check(ConcreteInstr(name)) - else: - for arg in range(256): - check(ConcreteInstr(name, arg)) - - # LOAD_CONST uses a concrete python object as its oparg, however, in - # dis.stack_effect(opcode.opmap['LOAD_CONST'], oparg), - # oparg should be the index of that python object in the constants. - # - # Fortunately, for an instruction whose oparg isn't equivalent to its - # form in binary files(pyc format), the stack effect is a - # constant which does not depend on its oparg. - # - # The second argument of dis.stack_effect cannot be - # more than 2**31 - 1. If stack effect of an instruction is - # independent of its oparg, we pass 0 as the second argument - # of dis.stack_effect. - # (As a result we can calculate stack_effect for - # any LOAD_CONST instructions, even for large integers) - - for arg in 2 ** 31, 2 ** 32, 2 ** 63, 2 ** 64, -1: - self.assertEqual(Instr("LOAD_CONST", arg).stack_effect(), 1) - - def test_code_object_containing_mutable_data(self): - from _pydevd_frame_eval.vendored.bytecode import Bytecode, Instr - from types import CodeType - - def f(): - def g(): - return "value" - - return g - - f_code = Bytecode.from_code(f.__code__) - instr_load_code = None - mutable_datum = [4, 2] - - for each in f_code: - if ( - isinstance(each, Instr) - and each.name == "LOAD_CONST" - and isinstance(each.arg, CodeType) - ): - instr_load_code = each - break - - self.assertIsNotNone(instr_load_code) - - g_code = Bytecode.from_code(instr_load_code.arg) - g_code[0].arg = mutable_datum - instr_load_code.arg = g_code.to_code() - f.__code__ = f_code.to_code() - - self.assertIs(f()(), mutable_datum) - - -if __name__ == "__main__": - unittest.main() # pragma: no cover diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/typing/tensor/image/image_torch_tensor.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/typing/tensor/image/image_torch_tensor.py deleted file mode 100644 index 103a936d705f1d99e314d5d5dd9bf6323e6fec70..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/typing/tensor/image/image_torch_tensor.py +++ /dev/null @@ -1,49 +0,0 @@ -from typing import TypeVar - -from docarray.typing.proto_register import _register_proto -from docarray.typing.tensor.image.abstract_image_tensor import AbstractImageTensor -from docarray.typing.tensor.torch_tensor import TorchTensor, metaTorchAndNode - -T = TypeVar('T', bound='ImageTorchTensor') - - -@_register_proto(proto_type_name='image_torch_tensor') -class ImageTorchTensor(AbstractImageTensor, TorchTensor, metaclass=metaTorchAndNode): - """ - Subclass of [`TorchTensor`][docarray.typing.TorchTensor], to represent an image tensor. - Adds image-specific features to the tensor. 
- For instance the ability convert the tensor back to - [`ImageBytes`][docarray.typing.ImageBytes] which are - optimized to send over the wire. - - - --- - - ```python - from typing import Optional - - from docarray import BaseDoc - from docarray.typing import ImageBytes, ImageTorchTensor, ImageUrl - - - class MyImageDoc(BaseDoc): - title: str - tensor: Optional[ImageTorchTensor] - url: Optional[ImageUrl] - bytes: Optional[ImageBytes] - - - doc = MyImageDoc( - title='my_second_image_doc', - url="https://upload.wikimedia.org/wikipedia/commons/8/80/" - "Dag_Sebastian_Ahlander_at_G%C3%B6teborg_Book_Fair_2012b.jpg", - ) - - doc.tensor = doc.url.load() - doc.bytes = doc.tensor.to_bytes() - ``` - - --- - """ - - ... diff --git a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/models/decode_heads/__init__.py b/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/models/decode_heads/__init__.py deleted file mode 100644 index ac66d3cfe0ea04af45c0f3594bf135841c3812e3..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/models/decode_heads/__init__.py +++ /dev/null @@ -1,28 +0,0 @@ -from .ann_head import ANNHead -from .apc_head import APCHead -from .aspp_head import ASPPHead -from .cc_head import CCHead -from .da_head import DAHead -from .dm_head import DMHead -from .dnl_head import DNLHead -from .ema_head import EMAHead -from .enc_head import EncHead -from .fcn_head import FCNHead -from .fpn_head import FPNHead -from .gc_head import GCHead -from .lraspp_head import LRASPPHead -from .nl_head import NLHead -from .ocr_head import OCRHead -# from .point_head import PointHead -from .psa_head import PSAHead -from .psp_head import PSPHead -from .sep_aspp_head import DepthwiseSeparableASPPHead -from .sep_fcn_head import DepthwiseSeparableFCNHead -from .uper_head import UPerHead - -__all__ = [ - 'FCNHead', 'PSPHead', 'ASPPHead', 'PSAHead', 'NLHead', 'GCHead', 'CCHead', - 'UPerHead', 'DepthwiseSeparableASPPHead', 'ANNHead', 'DAHead', 'OCRHead', - 'EncHead', 'DepthwiseSeparableFCNHead', 'FPNHead', 'EMAHead', 'DNLHead', - 'APCHead', 'DMHead', 'LRASPPHead' -] diff --git a/spaces/TRI-ML/risk_biased_prediction/scripts/eval_scripts/compute_stats.py b/spaces/TRI-ML/risk_biased_prediction/scripts/eval_scripts/compute_stats.py deleted file mode 100644 index 7f7a159f41f8f1c837ca93d58d88085a88b5898a..0000000000000000000000000000000000000000 --- a/spaces/TRI-ML/risk_biased_prediction/scripts/eval_scripts/compute_stats.py +++ /dev/null @@ -1,523 +0,0 @@ -import os -import io -import pickle -import sys - -from functools import partial -from inspect import signature -import matplotlib.pyplot as plt -from tqdm import tqdm - -from einops import repeat -import fire -import numpy as np -from pytorch_lightning.utilities.seed import seed_everything -import torch - -from risk_biased.utils.config_argparse import config_argparse -from risk_biased.utils.cost import TTCCostTorch, TTCCostParams, get_cost -from risk_biased.utils.risk import get_risk_estimator - -from risk_biased.utils.load_model import load_from_config - - -def to_device(batch, device): - output = [] - for item in batch: - output.append(item.to(device)) - return output - - -class CPU_Unpickler(pickle.Unpickler): - def find_class(self, module, name): - if module == "torch.storage" and name == "_load_from_bytes": - return lambda b: torch.load(io.BytesIO(b), map_location="cpu") - else: - return super().find_class(module, name) - - -def distance(pred, truth): - """ - pred (Tensor): (..., time, xy) - truth 
(Tensor): (..., time, xy) - mask_loss (Tensor): (..., time) Defaults to None. - """ - return torch.sqrt(torch.sum(torch.square(pred[..., :2] - truth[..., :2]), -1)) - - -def compute_metrics( - predictor, - batch, - cost, - risk_levels, - risk_estimator, - dt, - unnormalizer, - n_samples_risk, - n_samples_stats, -): - - # risk_unbiased - # risk_biased - # cost - # FDE: unbiased, biased(risk_level=[0, 0.3, 0.5, 0.8, 1]) (for all samples so minFDE can be computed later) - # ADE (for all samples so minADE can be computed later) - - x, mask_x, y, mask_y, mask_loss, map, mask_map, offset, x_ego, y_ego = batch - mask_z = mask_x.any(-1) - - _, z_mean_inference, z_log_std_inference = predictor.model( - x, - mask_x, - map, - mask_map, - offset=offset, - x_ego=x_ego, - y_ego=y_ego, - risk_level=None, - ) - - latent_distribs = { - "inference": { - "mean": z_mean_inference[:, 1].detach().cpu(), - "log_std": z_log_std_inference[:, 1].detach().cpu(), - } - } - inference_distances = [] - cost_list = [] - # Cut the number of samples in packs to avoid out-of-memory problems - # Compute and store cost for all packs - for _ in range(n_samples_risk // n_samples_stats): - z_samples_inference = predictor.model.inference_encoder.sample( - z_mean_inference, - z_log_std_inference, - n_samples=n_samples_stats, - ) - - y_samples = predictor.model.decode( - z_samples=z_samples_inference, - mask_z=mask_z, - x=x, - mask_x=mask_x, - map=map, - mask_map=mask_map, - offset=offset, - ) - - mask_loss_samples = repeat(mask_loss, "b a t -> b a s t", s=n_samples_stats) - # Computing unbiased cost - cost_list.append( - get_cost( - cost, - x, - y_samples, - offset, - x_ego, - y_ego, - dt, - unnormalizer, - mask_loss_samples, - )[:, 1:2] - ) - inference_distances.append(distance(y_samples, y.unsqueeze(2))[:, 1:2]) - cost_dic = {} - cost_dic["inference"] = torch.cat(cost_list, 2).detach().cpu() - distance_dic = {} - distance_dic["inference"] = torch.cat(inference_distances, 2).detach().cpu() - - # Set up the output risk tensor - risk_dic = {} - - # Loop on risk_level values to fill the risk estimation for each value and compute stats at each risk level - for rl in risk_levels: - risk_level = ( - torch.ones( - (x.shape[0], x.shape[1]), - device=x.device, - ) - * rl - ) - risk_dic[f"biased_{rl}"] = risk_estimator( - risk_level[:, 1:2].detach().cpu(), cost_dic["inference"] - ) - - y_samples_biased, z_mean_biased, z_log_std_biased = predictor.model( - x, - mask_x, - map, - mask_map, - offset=offset, - x_ego=x_ego, - y_ego=y_ego, - risk_level=risk_level, - n_samples=n_samples_stats, - ) - latent_distribs[f"biased_{rl}"] = { - "mean": z_mean_biased[:, 1].detach().cpu(), - "log_std": z_log_std_biased[:, 1].detach().cpu(), - } - - distance_dic[f"biased_{rl}"] = ( - distance(y_samples_biased, y.unsqueeze(2))[:, 1].detach().cpu() - ) - cost_dic[f"biased_{rl}"] = ( - get_cost( - cost, - x, - y_samples_biased, - offset, - x_ego, - y_ego, - dt, - unnormalizer, - mask_loss_samples, - )[:, 1] - .detach() - .cpu() - ) - - # Return risks for the batch and all risk values - return { - "risk": risk_dic, - "cost": cost_dic, - "distance": distance_dic, - "latent_distribs": latent_distribs, - "mask": mask_loss[:, 1].detach().cpu(), - } - - -def cat_metrics_rec(metrics1, metrics2, cat_to): - for key in metrics1.keys(): - if key not in metrics2.keys(): - raise RuntimeError( - f"Trying to concatenate objects with different keys: {key} is not in second argument keys." 
- ) - elif isinstance(metrics1[key], dict): - if key not in cat_to.keys(): - cat_to[key] = {} - cat_metrics_rec(metrics1[key], metrics2[key], cat_to[key]) - elif isinstance(metrics1[key], torch.Tensor): - cat_to[key] = torch.cat((metrics1[key], metrics2[key]), 0) - - -def cat_metrics(metrics1, metrics2): - out = {} - cat_metrics_rec(metrics1, metrics2, out) - return out - - -def masked_mean_std_ste(data, mask): - mask = mask.view(data.shape) - norm = mask.sum().clamp_min(1) - mean = (data * mask).sum() / norm - std = torch.sqrt(((data - mean) * mask).square().sum() / norm) - return mean.item(), std.item(), (std / torch.sqrt(norm)).item() - - -def masked_mean_range(data, mask): - data = data[mask] - mean = data.mean() - min = torch.quantile(data, 0.05) - max = torch.quantile(data, 0.95) - return mean, min, max - - -def masked_mean_dim(data, mask, dim): - norm = mask.sum(dim).clamp_min(1) - mean = (data * mask).sum(dim) / norm - return mean - - -def plot_risk_error(metrics, risk_levels, risk_estimator, max_n_samples, path_save): - cost_inference = metrics["cost"]["inference"] - cost_biased_0 = metrics["cost"]["biased_0"] - mask = metrics["mask"].any(1) - ones_tensor = torch.ones(mask.shape[0]) - n_samples = np.minimum(cost_biased_0.shape[1], max_n_samples) - - for rl in risk_levels: - key = f"biased_{rl}" - reference_risk = metrics["risk"][key] - mean_inference_risk_error_per_samples = np.zeros(n_samples - 1) - min_inference_risk_error_per_samples = np.zeros(n_samples - 1) - max_inference_risk_error_per_samples = np.zeros(n_samples - 1) - # mean_biased_0_risk_error_per_samples = np.zeros(n_samples-1) - # min_biased_0_risk_error_per_samples = np.zeros(n_samples-1) - # max_biased_0_risk_error_per_samples = np.zeros(n_samples-1) - mean_biased_risk_error_per_samples = np.zeros(n_samples - 1) - min_biased_risk_error_per_samples = np.zeros(n_samples - 1) - max_biased_risk_error_per_samples = np.zeros(n_samples - 1) - risk_level_tensor = ones_tensor * rl - for sub_samples in range(1, n_samples): - perm = torch.randperm(metrics["cost"][key].shape[1])[:sub_samples] - risk_error_biased = metrics["cost"][key][:, perm].mean(1) - reference_risk - ( - mean_biased_risk_error_per_samples[sub_samples - 1], - min_biased_risk_error_per_samples[sub_samples - 1], - max_biased_risk_error_per_samples[sub_samples - 1], - ) = masked_mean_range(risk_error_biased, mask) - risk_error_inference = ( - risk_estimator(risk_level_tensor, cost_inference[:, :, :sub_samples]) - - reference_risk - ) - ( - mean_inference_risk_error_per_samples[sub_samples - 1], - min_inference_risk_error_per_samples[sub_samples - 1], - max_inference_risk_error_per_samples[sub_samples - 1], - ) = masked_mean_range(risk_error_inference, mask) - # risk_error_biased_0 = risk_estimator(risk_level_tensor, cost_biased_0[:, :sub_samples]) - reference_risk - # (mean_biased_0_risk_error_per_samples[sub_samples-1], min_biased_0_risk_error_per_samples[sub_samples-1], max_biased_0_risk_error_per_samples[sub_samples-1]) = masked_mean_range(risk_error_biased_0, mask) - - plt.plot( - range(1, n_samples), - mean_inference_risk_error_per_samples, - label="Inference", - ) - plt.fill_between( - range(1, n_samples), - min_inference_risk_error_per_samples, - max_inference_risk_error_per_samples, - alpha=0.3, - ) - - # plt.plot(range(1, n_samples), mean_biased_0_risk_error_per_samples, label="Unbiased") - # plt.fill_between(range(1, n_samples), min_biased_0_risk_error_per_samples, max_biased_0_risk_error_per_samples, alpha=.3) - - plt.plot( - range(1, n_samples), 
mean_biased_risk_error_per_samples, label="Biased" - ) - plt.fill_between( - range(1, n_samples), - min_biased_risk_error_per_samples, - max_biased_risk_error_per_samples, - alpha=0.3, - ) - plt.ylim( - np.min(min_inference_risk_error_per_samples), - np.max(max_biased_risk_error_per_samples), - ) - - plt.hlines(y=0, xmin=0, xmax=n_samples, colors="black", linestyles="--", lw=0.3) - - plt.xlabel("Number of samples") - plt.ylabel("Risk estimation error") - plt.legend() - plt.title(f"Risk estimation error at risk-level={rl}") - plt.gcf().set_size_inches(4, 3) - plt.legend(loc="lower right") - plt.savefig(fname=os.path.join(path_save, f"risk_level_{rl}.svg")) - plt.savefig(fname=os.path.join(path_save, f"risk_level_{rl}.png")) - plt.clf() - # plt.show() - - -def compute_stats(metrics, n_samples_mean_cost=4): - biased_risk_estimate = {} - for key in metrics["cost"].keys(): - if key == "inference": - continue - risk = metrics["risk"][key] - mean_cost = metrics["cost"][key][:, :n_samples_mean_cost].mean(1) - risk_error = mean_cost - risk - biased_risk_estimate[key] = {} - ( - biased_risk_estimate[key]["mean"], - biased_risk_estimate[key]["std"], - biased_risk_estimate[key]["ste"], - ) = masked_mean_std_ste(risk_error, metrics["mask"].any(1)) - - ( - biased_risk_estimate[key]["mean_abs"], - biased_risk_estimate[key]["std_abs"], - biased_risk_estimate[key]["ste_abs"], - ) = masked_mean_std_ste(risk_error.abs(), metrics["mask"].any(1)) - - risk_stats = {} - for key in metrics["risk"].keys(): - risk_stats[key] = {} - ( - risk_stats[key]["mean"], - risk_stats[key]["std"], - risk_stats[key]["ste"], - ) = masked_mean_std_ste(metrics["risk"][key], metrics["mask"].any(1)) - - cost_stats = {} - for key in metrics["cost"].keys(): - cost_stats[key] = {} - ( - cost_stats[key]["mean"], - cost_stats[key]["std"], - cost_stats[key]["ste"], - ) = masked_mean_std_ste( - metrics["cost"][key], metrics["mask"].any(-1, keepdim=True) - ) - - distance_stats = {} - for key in metrics["distance"].keys(): - distance_stats[key] = {"FDE": {}, "ADE": {}, "minFDE": {}, "minADE": {}} - ( - distance_stats[key]["FDE"]["mean"], - distance_stats[key]["FDE"]["std"], - distance_stats[key]["FDE"]["ste"], - ) = masked_mean_std_ste( - metrics["distance"][key][..., -1], metrics["mask"][:, None, -1] - ) - mean_dist = masked_mean_dim( - metrics["distance"][key], metrics["mask"][:, None, :], -1 - ) - ( - distance_stats[key]["ADE"]["mean"], - distance_stats[key]["ADE"]["std"], - distance_stats[key]["ADE"]["ste"], - ) = masked_mean_std_ste(mean_dist, metrics["mask"].any(-1, keepdim=True)) - for i in [6, 16, 32]: - distance_stats[key]["minFDE"][i] = {} - min_dist, _ = metrics["distance"][key][:, :i, -1].min(1) - ( - distance_stats[key]["minFDE"][i]["mean"], - distance_stats[key]["minFDE"][i]["std"], - distance_stats[key]["minFDE"][i]["ste"], - ) = masked_mean_std_ste(min_dist, metrics["mask"][:, -1]) - distance_stats[key]["minADE"][i] = {} - mean_dist, _ = masked_mean_dim( - metrics["distance"][key][:, :i], metrics["mask"][:, None, :], -1 - ).min(1) - ( - distance_stats[key]["minADE"][i]["mean"], - distance_stats[key]["minADE"][i]["std"], - distance_stats[key]["minADE"][i]["ste"], - ) = masked_mean_std_ste(mean_dist, metrics["mask"].any(-1)) - return { - "risk": risk_stats, - "biased_risk_estimate": biased_risk_estimate, - "cost": cost_stats, - "distance": distance_stats, - } - - -def print_stats(stats, n_samples_mean_cost=4): - slash = "\\" - brace_open = "{" - brace_close = "}" - print("\\begin{tabular}{lccccc}") - print("\\hline") - print( - 
f"Predictive Model & ${slash}sigma$ & minFDE(16) & FDE (1) & Risk est. error ({n_samples_mean_cost}) & Risk est. $|$error$|$ ({n_samples_mean_cost}) {slash}{slash}" - ) - print("\\hline") - - for key in stats["distance"].keys(): - strg = ( - f" ${stats['distance'][key]['minFDE'][16]['mean']:.2f}$ {slash}scriptsize{brace_open}${slash}pm {stats['distance'][key]['minFDE'][16]['ste']:.2f}${brace_close}" - + f"& ${stats['distance'][key]['FDE']['mean']:.2f}$ {slash}scriptsize{brace_open}${slash}pm {stats['distance'][key]['FDE']['ste']:.2f}${brace_close}" - ) - - if key == "inference": - strg = ( - "Unbiased CVAE & " - + f"{slash}scriptsize{brace_open}NA{brace_close} &" - + strg - + f"& {slash}scriptsize{brace_open}NA{brace_close} & {slash}scriptsize{brace_open}NA{brace_close} {slash}{slash}" - ) - print(strg) - print("\\hline") - else: - strg = ( - "Biased CVAE & " - + f"{key[7:]} & " - + strg - + f"& ${stats['biased_risk_estimate'][key]['mean']:.2f}$ {slash}scriptsize{brace_open}${slash}pm {stats['biased_risk_estimate'][key]['ste']:.2f}${brace_close}" - + f"& ${stats['biased_risk_estimate'][key]['mean_abs']:.2f}$ {slash}scriptsize{brace_open}${slash}pm {stats['biased_risk_estimate'][key]['ste_abs']:.2f}${brace_close}" - + f"{slash}{slash}" - ) - print(strg) - print("\\hline") - print("\\end{tabular}") - - -def main( - log_path, - force_recompute, - n_samples_risk=256, - n_samples_stats=32, - n_samples_plot=16, - args_to_parser=[], -): - # Overwrite sys.argv so it doesn't mess up the parser. - sys.argv = sys.argv[0:1] + args_to_parser - working_dir = os.path.dirname(os.path.realpath(__file__)) - config_path = os.path.join( - working_dir, "..", "..", "risk_biased", "config", "learning_config.py" - ) - waymo_config_path = os.path.join( - working_dir, "..", "..", "risk_biased", "config", "waymo_config.py" - ) - cfg = config_argparse([config_path, waymo_config_path]) - - file_path = os.path.join(log_path, f"metrics_{cfg.load_from}.pickle") - fig_path = os.path.join(log_path, f"plots_{cfg.load_from}") - if not os.path.exists(fig_path): - os.makedirs(fig_path) - - risk_levels = [0, 0.3, 0.5, 0.8, 0.95, 1] - cost = TTCCostTorch(TTCCostParams.from_config(cfg)) - risk_estimator = get_risk_estimator(cfg.risk_estimator) - n_samples_mean_cost = 4 - - if not os.path.exists(file_path) or force_recompute: - with torch.no_grad(): - if cfg.seed is not None: - seed_everything(cfg.seed) - - predictor, dataloaders, cfg = load_from_config(cfg) - device = torch.device(cfg.gpus[0]) - predictor = predictor.to(device) - - val_loader = dataloaders.val_dataloader(shuffle=False, drop_last=False) - - # This loops over batches in the validation dataset - beg = 0 - metrics_all = None - for val_batch in tqdm(val_loader): - end = beg + val_batch[0].shape[0] - metrics = compute_metrics( - predictor=predictor, - batch=to_device(val_batch, device), - cost=cost, - risk_levels=risk_levels, - risk_estimator=risk_estimator, - dt=cfg.dt, - unnormalizer=dataloaders.unnormalize_trajectory, - n_samples_risk=n_samples_risk, - n_samples_stats=n_samples_stats, - ) - if metrics_all is None: - metrics_all = metrics - else: - metrics_all = cat_metrics(metrics_all, metrics) - beg = end - with open(file_path, "wb") as handle: - pickle.dump(metrics_all, handle) - else: - print(f"Loading pre-computed metrics from {file_path}") - with open(file_path, "rb") as handle: - metrics_all = CPU_Unpickler(handle).load() - - stats = compute_stats(metrics_all, n_samples_mean_cost=n_samples_mean_cost) - print_stats(stats, 
n_samples_mean_cost=n_samples_mean_cost) - plot_risk_error(metrics_all, risk_levels, risk_estimator, n_samples_plot, fig_path) - - -if __name__ == "__main__": - # main("./logs/002/", False, 256, 32, 16) - # Fire turns the main function into a script, then the risk_biased module argparse reads the other arguments. - # Thus, the way to use it would be: - # >python compute_stats.py - - # This is a hack to separate the Fire script args from the argparse arguments - args_to_parser = sys.argv[len(signature(main).parameters) :] - partial_main = partial(main, args_to_parser=args_to_parser) - sys.argv = sys.argv[: len(signature(main).parameters)] - - # Runs the main as a script - fire.Fire(partial_main) diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/index/package_finder.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/index/package_finder.py deleted file mode 100644 index b6f8d57e854b77f60c04f59a7f3ff74476a5f5d6..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/index/package_finder.py +++ /dev/null @@ -1,1029 +0,0 @@ -"""Routines related to PyPI, indexes""" - -import enum -import functools -import itertools -import logging -import re -from typing import TYPE_CHECKING, FrozenSet, Iterable, List, Optional, Set, Tuple, Union - -from pip._vendor.packaging import specifiers -from pip._vendor.packaging.tags import Tag -from pip._vendor.packaging.utils import canonicalize_name -from pip._vendor.packaging.version import _BaseVersion -from pip._vendor.packaging.version import parse as parse_version - -from pip._internal.exceptions import ( - BestVersionAlreadyInstalled, - DistributionNotFound, - InvalidWheelFilename, - UnsupportedWheel, -) -from pip._internal.index.collector import LinkCollector, parse_links -from pip._internal.models.candidate import InstallationCandidate -from pip._internal.models.format_control import FormatControl -from pip._internal.models.link import Link -from pip._internal.models.search_scope import SearchScope -from pip._internal.models.selection_prefs import SelectionPreferences -from pip._internal.models.target_python import TargetPython -from pip._internal.models.wheel import Wheel -from pip._internal.req import InstallRequirement -from pip._internal.utils._log import getLogger -from pip._internal.utils.filetypes import WHEEL_EXTENSION -from pip._internal.utils.hashes import Hashes -from pip._internal.utils.logging import indent_log -from pip._internal.utils.misc import build_netloc -from pip._internal.utils.packaging import check_requires_python -from pip._internal.utils.unpacking import SUPPORTED_EXTENSIONS - -if TYPE_CHECKING: - from pip._vendor.typing_extensions import TypeGuard - -__all__ = ["FormatControl", "BestCandidateResult", "PackageFinder"] - - -logger = getLogger(__name__) - -BuildTag = Union[Tuple[()], Tuple[int, str]] -CandidateSortingKey = Tuple[int, int, int, _BaseVersion, Optional[int], BuildTag] - - -def _check_link_requires_python( - link: Link, - version_info: Tuple[int, int, int], - ignore_requires_python: bool = False, -) -> bool: - """ - Return whether the given Python version is compatible with a link's - "Requires-Python" value. - - :param version_info: A 3-tuple of ints representing the Python - major-minor-micro version to check. - :param ignore_requires_python: Whether to ignore the "Requires-Python" - value if the given Python version isn't compatible. 
- """ - try: - is_compatible = check_requires_python( - link.requires_python, - version_info=version_info, - ) - except specifiers.InvalidSpecifier: - logger.debug( - "Ignoring invalid Requires-Python (%r) for link: %s", - link.requires_python, - link, - ) - else: - if not is_compatible: - version = ".".join(map(str, version_info)) - if not ignore_requires_python: - logger.verbose( - "Link requires a different Python (%s not in: %r): %s", - version, - link.requires_python, - link, - ) - return False - - logger.debug( - "Ignoring failed Requires-Python check (%s not in: %r) for link: %s", - version, - link.requires_python, - link, - ) - - return True - - -class LinkType(enum.Enum): - candidate = enum.auto() - different_project = enum.auto() - yanked = enum.auto() - format_unsupported = enum.auto() - format_invalid = enum.auto() - platform_mismatch = enum.auto() - requires_python_mismatch = enum.auto() - - -class LinkEvaluator: - - """ - Responsible for evaluating links for a particular project. - """ - - _py_version_re = re.compile(r"-py([123]\.?[0-9]?)$") - - # Don't include an allow_yanked default value to make sure each call - # site considers whether yanked releases are allowed. This also causes - # that decision to be made explicit in the calling code, which helps - # people when reading the code. - def __init__( - self, - project_name: str, - canonical_name: str, - formats: FrozenSet[str], - target_python: TargetPython, - allow_yanked: bool, - ignore_requires_python: Optional[bool] = None, - ) -> None: - """ - :param project_name: The user supplied package name. - :param canonical_name: The canonical package name. - :param formats: The formats allowed for this package. Should be a set - with 'binary' or 'source' or both in it. - :param target_python: The target Python interpreter to use when - evaluating link compatibility. This is used, for example, to - check wheel compatibility, as well as when checking the Python - version, e.g. the Python version embedded in a link filename - (or egg fragment) and against an HTML link's optional PEP 503 - "data-requires-python" attribute. - :param allow_yanked: Whether files marked as yanked (in the sense - of PEP 592) are permitted to be candidates for install. - :param ignore_requires_python: Whether to ignore incompatible - PEP 503 "data-requires-python" values in HTML links. Defaults - to False. - """ - if ignore_requires_python is None: - ignore_requires_python = False - - self._allow_yanked = allow_yanked - self._canonical_name = canonical_name - self._ignore_requires_python = ignore_requires_python - self._formats = formats - self._target_python = target_python - - self.project_name = project_name - - def evaluate_link(self, link: Link) -> Tuple[LinkType, str]: - """ - Determine whether a link is a candidate for installation. - - :return: A tuple (result, detail), where *result* is an enum - representing whether the evaluation found a candidate, or the reason - why one is not found. If a candidate is found, *detail* will be the - candidate's version string; if one is not found, it contains the - reason the link fails to qualify. 
- """ - version = None - if link.is_yanked and not self._allow_yanked: - reason = link.yanked_reason or "" - return (LinkType.yanked, f"yanked for reason: {reason}") - - if link.egg_fragment: - egg_info = link.egg_fragment - ext = link.ext - else: - egg_info, ext = link.splitext() - if not ext: - return (LinkType.format_unsupported, "not a file") - if ext not in SUPPORTED_EXTENSIONS: - return ( - LinkType.format_unsupported, - f"unsupported archive format: {ext}", - ) - if "binary" not in self._formats and ext == WHEEL_EXTENSION: - reason = f"No binaries permitted for {self.project_name}" - return (LinkType.format_unsupported, reason) - if "macosx10" in link.path and ext == ".zip": - return (LinkType.format_unsupported, "macosx10 one") - if ext == WHEEL_EXTENSION: - try: - wheel = Wheel(link.filename) - except InvalidWheelFilename: - return ( - LinkType.format_invalid, - "invalid wheel filename", - ) - if canonicalize_name(wheel.name) != self._canonical_name: - reason = f"wrong project name (not {self.project_name})" - return (LinkType.different_project, reason) - - supported_tags = self._target_python.get_tags() - if not wheel.supported(supported_tags): - # Include the wheel's tags in the reason string to - # simplify troubleshooting compatibility issues. - file_tags = ", ".join(wheel.get_formatted_file_tags()) - reason = ( - f"none of the wheel's tags ({file_tags}) are compatible " - f"(run pip debug --verbose to show compatible tags)" - ) - return (LinkType.platform_mismatch, reason) - - version = wheel.version - - # This should be up by the self.ok_binary check, but see issue 2700. - if "source" not in self._formats and ext != WHEEL_EXTENSION: - reason = f"No sources permitted for {self.project_name}" - return (LinkType.format_unsupported, reason) - - if not version: - version = _extract_version_from_fragment( - egg_info, - self._canonical_name, - ) - if not version: - reason = f"Missing project version for {self.project_name}" - return (LinkType.format_invalid, reason) - - match = self._py_version_re.search(version) - if match: - version = version[: match.start()] - py_version = match.group(1) - if py_version != self._target_python.py_version: - return ( - LinkType.platform_mismatch, - "Python version is incorrect", - ) - - supports_python = _check_link_requires_python( - link, - version_info=self._target_python.py_version_info, - ignore_requires_python=self._ignore_requires_python, - ) - if not supports_python: - reason = f"{version} Requires-Python {link.requires_python}" - return (LinkType.requires_python_mismatch, reason) - - logger.debug("Found link %s, version: %s", link, version) - - return (LinkType.candidate, version) - - -def filter_unallowed_hashes( - candidates: List[InstallationCandidate], - hashes: Optional[Hashes], - project_name: str, -) -> List[InstallationCandidate]: - """ - Filter out candidates whose hashes aren't allowed, and return a new - list of candidates. - - If at least one candidate has an allowed hash, then all candidates with - either an allowed hash or no hash specified are returned. Otherwise, - the given candidates are returned. - - Including the candidates with no hash specified when there is a match - allows a warning to be logged if there is a more preferred candidate - with no hash specified. Returning all candidates in the case of no - matches lets pip report the hash of the candidate that would otherwise - have been installed (e.g. permitting the user to more easily update - their requirements file with the desired hash). 
- """ - if not hashes: - logger.debug( - "Given no hashes to check %s links for project %r: " - "discarding no candidates", - len(candidates), - project_name, - ) - # Make sure we're not returning back the given value. - return list(candidates) - - matches_or_no_digest = [] - # Collect the non-matches for logging purposes. - non_matches = [] - match_count = 0 - for candidate in candidates: - link = candidate.link - if not link.has_hash: - pass - elif link.is_hash_allowed(hashes=hashes): - match_count += 1 - else: - non_matches.append(candidate) - continue - - matches_or_no_digest.append(candidate) - - if match_count: - filtered = matches_or_no_digest - else: - # Make sure we're not returning back the given value. - filtered = list(candidates) - - if len(filtered) == len(candidates): - discard_message = "discarding no candidates" - else: - discard_message = "discarding {} non-matches:\n {}".format( - len(non_matches), - "\n ".join(str(candidate.link) for candidate in non_matches), - ) - - logger.debug( - "Checked %s links for project %r against %s hashes " - "(%s matches, %s no digest): %s", - len(candidates), - project_name, - hashes.digest_count, - match_count, - len(matches_or_no_digest) - match_count, - discard_message, - ) - - return filtered - - -class CandidatePreferences: - - """ - Encapsulates some of the preferences for filtering and sorting - InstallationCandidate objects. - """ - - def __init__( - self, - prefer_binary: bool = False, - allow_all_prereleases: bool = False, - ) -> None: - """ - :param allow_all_prereleases: Whether to allow all pre-releases. - """ - self.allow_all_prereleases = allow_all_prereleases - self.prefer_binary = prefer_binary - - -class BestCandidateResult: - """A collection of candidates, returned by `PackageFinder.find_best_candidate`. - - This class is only intended to be instantiated by CandidateEvaluator's - `compute_best_candidate()` method. - """ - - def __init__( - self, - candidates: List[InstallationCandidate], - applicable_candidates: List[InstallationCandidate], - best_candidate: Optional[InstallationCandidate], - ) -> None: - """ - :param candidates: A sequence of all available candidates found. - :param applicable_candidates: The applicable candidates. - :param best_candidate: The most preferred candidate found, or None - if no applicable candidates were found. - """ - assert set(applicable_candidates) <= set(candidates) - - if best_candidate is None: - assert not applicable_candidates - else: - assert best_candidate in applicable_candidates - - self._applicable_candidates = applicable_candidates - self._candidates = candidates - - self.best_candidate = best_candidate - - def iter_all(self) -> Iterable[InstallationCandidate]: - """Iterate through all candidates.""" - return iter(self._candidates) - - def iter_applicable(self) -> Iterable[InstallationCandidate]: - """Iterate through the applicable candidates.""" - return iter(self._applicable_candidates) - - -class CandidateEvaluator: - - """ - Responsible for filtering and sorting candidates for installation based - on what tags are valid. - """ - - @classmethod - def create( - cls, - project_name: str, - target_python: Optional[TargetPython] = None, - prefer_binary: bool = False, - allow_all_prereleases: bool = False, - specifier: Optional[specifiers.BaseSpecifier] = None, - hashes: Optional[Hashes] = None, - ) -> "CandidateEvaluator": - """Create a CandidateEvaluator object. - - :param target_python: The target Python interpreter to use when - checking compatibility. 
If None (the default), a TargetPython - object will be constructed from the running Python. - :param specifier: An optional object implementing `filter` - (e.g. `packaging.specifiers.SpecifierSet`) to filter applicable - versions. - :param hashes: An optional collection of allowed hashes. - """ - if target_python is None: - target_python = TargetPython() - if specifier is None: - specifier = specifiers.SpecifierSet() - - supported_tags = target_python.get_tags() - - return cls( - project_name=project_name, - supported_tags=supported_tags, - specifier=specifier, - prefer_binary=prefer_binary, - allow_all_prereleases=allow_all_prereleases, - hashes=hashes, - ) - - def __init__( - self, - project_name: str, - supported_tags: List[Tag], - specifier: specifiers.BaseSpecifier, - prefer_binary: bool = False, - allow_all_prereleases: bool = False, - hashes: Optional[Hashes] = None, - ) -> None: - """ - :param supported_tags: The PEP 425 tags supported by the target - Python in order of preference (most preferred first). - """ - self._allow_all_prereleases = allow_all_prereleases - self._hashes = hashes - self._prefer_binary = prefer_binary - self._project_name = project_name - self._specifier = specifier - self._supported_tags = supported_tags - # Since the index of the tag in the _supported_tags list is used - # as a priority, precompute a map from tag to index/priority to be - # used in wheel.find_most_preferred_tag. - self._wheel_tag_preferences = { - tag: idx for idx, tag in enumerate(supported_tags) - } - - def get_applicable_candidates( - self, - candidates: List[InstallationCandidate], - ) -> List[InstallationCandidate]: - """ - Return the applicable candidates from a list of candidates. - """ - # Using None infers from the specifier instead. - allow_prereleases = self._allow_all_prereleases or None - specifier = self._specifier - versions = { - str(v) - for v in specifier.filter( - # We turn the version object into a str here because otherwise - # when we're debundled but setuptools isn't, Python will see - # packaging.version.Version and - # pkg_resources._vendor.packaging.version.Version as different - # types. This way we'll use a str as a common data interchange - # format. If we stop using the pkg_resources provided specifier - # and start using our own, we can drop the cast to str(). - (str(c.version) for c in candidates), - prereleases=allow_prereleases, - ) - } - - # Again, converting version to str to deal with debundling. - applicable_candidates = [c for c in candidates if str(c.version) in versions] - - filtered_applicable_candidates = filter_unallowed_hashes( - candidates=applicable_candidates, - hashes=self._hashes, - project_name=self._project_name, - ) - - return sorted(filtered_applicable_candidates, key=self._sort_key) - - def _sort_key(self, candidate: InstallationCandidate) -> CandidateSortingKey: - """ - Function to pass as the `key` argument to a call to sorted() to sort - InstallationCandidates by preference. - - Returns a tuple such that tuples sorting as greater using Python's - default comparison operator are more preferred. - - The preference is as follows: - - First and foremost, candidates with allowed (matching) hashes are - always preferred over candidates without matching hashes. This is - because e.g. if the only candidate with an allowed hash is yanked, - we still want to use that candidate. 
- - Second, excepting hash considerations, candidates that have been - yanked (in the sense of PEP 592) are always less preferred than - candidates that haven't been yanked. Then: - - If not finding wheels, they are sorted by version only. - If finding wheels, then the sort order is by version, then: - 1. existing installs - 2. wheels ordered via Wheel.support_index_min(self._supported_tags) - 3. source archives - If prefer_binary was set, then all wheels are sorted above sources. - - Note: it was considered to embed this logic into the Link - comparison operators, but then different sdist links - with the same version, would have to be considered equal - """ - valid_tags = self._supported_tags - support_num = len(valid_tags) - build_tag: BuildTag = () - binary_preference = 0 - link = candidate.link - if link.is_wheel: - # can raise InvalidWheelFilename - wheel = Wheel(link.filename) - try: - pri = -( - wheel.find_most_preferred_tag( - valid_tags, self._wheel_tag_preferences - ) - ) - except ValueError: - raise UnsupportedWheel( - "{} is not a supported wheel for this platform. It " - "can't be sorted.".format(wheel.filename) - ) - if self._prefer_binary: - binary_preference = 1 - if wheel.build_tag is not None: - match = re.match(r"^(\d+)(.*)$", wheel.build_tag) - assert match is not None, "guaranteed by filename validation" - build_tag_groups = match.groups() - build_tag = (int(build_tag_groups[0]), build_tag_groups[1]) - else: # sdist - pri = -(support_num) - has_allowed_hash = int(link.is_hash_allowed(self._hashes)) - yank_value = -1 * int(link.is_yanked) # -1 for yanked. - return ( - has_allowed_hash, - yank_value, - binary_preference, - candidate.version, - pri, - build_tag, - ) - - def sort_best_candidate( - self, - candidates: List[InstallationCandidate], - ) -> Optional[InstallationCandidate]: - """ - Return the best candidate per the instance's sort order, or None if - no candidate is acceptable. - """ - if not candidates: - return None - best_candidate = max(candidates, key=self._sort_key) - return best_candidate - - def compute_best_candidate( - self, - candidates: List[InstallationCandidate], - ) -> BestCandidateResult: - """ - Compute and return a `BestCandidateResult` instance. - """ - applicable_candidates = self.get_applicable_candidates(candidates) - - best_candidate = self.sort_best_candidate(applicable_candidates) - - return BestCandidateResult( - candidates, - applicable_candidates=applicable_candidates, - best_candidate=best_candidate, - ) - - -class PackageFinder: - """This finds packages. - - This is meant to match easy_install's technique for looking for - packages, by reading pages and looking for appropriate links. - """ - - def __init__( - self, - link_collector: LinkCollector, - target_python: TargetPython, - allow_yanked: bool, - format_control: Optional[FormatControl] = None, - candidate_prefs: Optional[CandidatePreferences] = None, - ignore_requires_python: Optional[bool] = None, - ) -> None: - """ - This constructor is primarily meant to be used by the create() class - method and from tests. - - :param format_control: A FormatControl object, used to control - the selection of source packages / binary packages when consulting - the index and links. - :param candidate_prefs: Options to use when creating a - CandidateEvaluator object. 
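# A hedged, stand-alone illustration of the tuple ordering that _sort_key() above
# relies on (the tuples here are simplified, hypothetical stand-ins, not real sort
# keys): Python compares tuples left to right, so an allowed hash outranks
# everything else, then "not yanked" (0 beats -1), then the prefer-binary flag,
# and only then the version.
keys = [
    # (has_allowed_hash, yank_value, binary_preference, version)
    (0,  0, 0, (1, 2)),   # newest version, but no matching hash
    (1, -1, 0, (1, 3)),   # matching hash, yanked
    (1,  0, 1, (1, 1)),   # matching hash, not yanked, wheel with prefer_binary set
]
print(max(keys))  # (1, 0, 1, (1, 1)) -- hash and yank status outweigh newer versions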
- """ - if candidate_prefs is None: - candidate_prefs = CandidatePreferences() - - format_control = format_control or FormatControl(set(), set()) - - self._allow_yanked = allow_yanked - self._candidate_prefs = candidate_prefs - self._ignore_requires_python = ignore_requires_python - self._link_collector = link_collector - self._target_python = target_python - - self.format_control = format_control - - # These are boring links that have already been logged somehow. - self._logged_links: Set[Tuple[Link, LinkType, str]] = set() - - # Don't include an allow_yanked default value to make sure each call - # site considers whether yanked releases are allowed. This also causes - # that decision to be made explicit in the calling code, which helps - # people when reading the code. - @classmethod - def create( - cls, - link_collector: LinkCollector, - selection_prefs: SelectionPreferences, - target_python: Optional[TargetPython] = None, - ) -> "PackageFinder": - """Create a PackageFinder. - - :param selection_prefs: The candidate selection preferences, as a - SelectionPreferences object. - :param target_python: The target Python interpreter to use when - checking compatibility. If None (the default), a TargetPython - object will be constructed from the running Python. - """ - if target_python is None: - target_python = TargetPython() - - candidate_prefs = CandidatePreferences( - prefer_binary=selection_prefs.prefer_binary, - allow_all_prereleases=selection_prefs.allow_all_prereleases, - ) - - return cls( - candidate_prefs=candidate_prefs, - link_collector=link_collector, - target_python=target_python, - allow_yanked=selection_prefs.allow_yanked, - format_control=selection_prefs.format_control, - ignore_requires_python=selection_prefs.ignore_requires_python, - ) - - @property - def target_python(self) -> TargetPython: - return self._target_python - - @property - def search_scope(self) -> SearchScope: - return self._link_collector.search_scope - - @search_scope.setter - def search_scope(self, search_scope: SearchScope) -> None: - self._link_collector.search_scope = search_scope - - @property - def find_links(self) -> List[str]: - return self._link_collector.find_links - - @property - def index_urls(self) -> List[str]: - return self.search_scope.index_urls - - @property - def trusted_hosts(self) -> Iterable[str]: - for host_port in self._link_collector.session.pip_trusted_origins: - yield build_netloc(*host_port) - - @property - def allow_all_prereleases(self) -> bool: - return self._candidate_prefs.allow_all_prereleases - - def set_allow_all_prereleases(self) -> None: - self._candidate_prefs.allow_all_prereleases = True - - @property - def prefer_binary(self) -> bool: - return self._candidate_prefs.prefer_binary - - def set_prefer_binary(self) -> None: - self._candidate_prefs.prefer_binary = True - - def requires_python_skipped_reasons(self) -> List[str]: - reasons = { - detail - for _, result, detail in self._logged_links - if result == LinkType.requires_python_mismatch - } - return sorted(reasons) - - def make_link_evaluator(self, project_name: str) -> LinkEvaluator: - canonical_name = canonicalize_name(project_name) - formats = self.format_control.get_allowed_formats(canonical_name) - - return LinkEvaluator( - project_name=project_name, - canonical_name=canonical_name, - formats=formats, - target_python=self._target_python, - allow_yanked=self._allow_yanked, - ignore_requires_python=self._ignore_requires_python, - ) - - def _sort_links(self, links: Iterable[Link]) -> List[Link]: - """ - Returns 
elements of links in order, non-egg links first, egg links - second, while eliminating duplicates - """ - eggs, no_eggs = [], [] - seen: Set[Link] = set() - for link in links: - if link not in seen: - seen.add(link) - if link.egg_fragment: - eggs.append(link) - else: - no_eggs.append(link) - return no_eggs + eggs - - def _log_skipped_link(self, link: Link, result: LinkType, detail: str) -> None: - entry = (link, result, detail) - if entry not in self._logged_links: - # Put the link at the end so the reason is more visible and because - # the link string is usually very long. - logger.debug("Skipping link: %s: %s", detail, link) - self._logged_links.add(entry) - - def get_install_candidate( - self, link_evaluator: LinkEvaluator, link: Link - ) -> Optional[InstallationCandidate]: - """ - If the link is a candidate for install, convert it to an - InstallationCandidate and return it. Otherwise, return None. - """ - result, detail = link_evaluator.evaluate_link(link) - if result != LinkType.candidate: - self._log_skipped_link(link, result, detail) - return None - - return InstallationCandidate( - name=link_evaluator.project_name, - link=link, - version=detail, - ) - - def evaluate_links( - self, link_evaluator: LinkEvaluator, links: Iterable[Link] - ) -> List[InstallationCandidate]: - """ - Convert links that are candidates to InstallationCandidate objects. - """ - candidates = [] - for link in self._sort_links(links): - candidate = self.get_install_candidate(link_evaluator, link) - if candidate is not None: - candidates.append(candidate) - - return candidates - - def process_project_url( - self, project_url: Link, link_evaluator: LinkEvaluator - ) -> List[InstallationCandidate]: - logger.debug( - "Fetching project page and analyzing links: %s", - project_url, - ) - index_response = self._link_collector.fetch_response(project_url) - if index_response is None: - return [] - - page_links = list(parse_links(index_response)) - - with indent_log(): - package_links = self.evaluate_links( - link_evaluator, - links=page_links, - ) - - return package_links - - @functools.lru_cache(maxsize=None) - def find_all_candidates(self, project_name: str) -> List[InstallationCandidate]: - """Find all available InstallationCandidate for project_name - - This checks index_urls and find_links. - All versions found are returned as an InstallationCandidate list. - - See LinkEvaluator.evaluate_link() for details on which files - are accepted. 
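# A small sketch of the ordering contract documented for _sort_links() above, with
# plain strings as hypothetical stand-ins for Link objects: duplicates are dropped
# (first occurrence wins) and egg-fragment links sort after all other links.
def _sort_links_sketch(links, is_egg):
    eggs, no_eggs, seen = [], [], set()
    for link in links:
        if link not in seen:
            seen.add(link)
            (eggs if is_egg(link) else no_eggs).append(link)
    return no_eggs + eggs

print(_sort_links_sketch(
    ["a-1.0#egg=a", "b-1.0.tar.gz", "a-1.0#egg=a", "c-1.0.whl"],
    is_egg=lambda link: "#egg=" in link,
))
# ['b-1.0.tar.gz', 'c-1.0.whl', 'a-1.0#egg=a']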
- """ - link_evaluator = self.make_link_evaluator(project_name) - - collected_sources = self._link_collector.collect_sources( - project_name=project_name, - candidates_from_page=functools.partial( - self.process_project_url, - link_evaluator=link_evaluator, - ), - ) - - page_candidates_it = itertools.chain.from_iterable( - source.page_candidates() - for sources in collected_sources - for source in sources - if source is not None - ) - page_candidates = list(page_candidates_it) - - file_links_it = itertools.chain.from_iterable( - source.file_links() - for sources in collected_sources - for source in sources - if source is not None - ) - file_candidates = self.evaluate_links( - link_evaluator, - sorted(file_links_it, reverse=True), - ) - - if logger.isEnabledFor(logging.DEBUG) and file_candidates: - paths = [] - for candidate in file_candidates: - assert candidate.link.url # we need to have a URL - try: - paths.append(candidate.link.file_path) - except Exception: - paths.append(candidate.link.url) # it's not a local file - - logger.debug("Local files found: %s", ", ".join(paths)) - - # This is an intentional priority ordering - return file_candidates + page_candidates - - def make_candidate_evaluator( - self, - project_name: str, - specifier: Optional[specifiers.BaseSpecifier] = None, - hashes: Optional[Hashes] = None, - ) -> CandidateEvaluator: - """Create a CandidateEvaluator object to use.""" - candidate_prefs = self._candidate_prefs - return CandidateEvaluator.create( - project_name=project_name, - target_python=self._target_python, - prefer_binary=candidate_prefs.prefer_binary, - allow_all_prereleases=candidate_prefs.allow_all_prereleases, - specifier=specifier, - hashes=hashes, - ) - - @functools.lru_cache(maxsize=None) - def find_best_candidate( - self, - project_name: str, - specifier: Optional[specifiers.BaseSpecifier] = None, - hashes: Optional[Hashes] = None, - ) -> BestCandidateResult: - """Find matches for the given project and specifier. - - :param specifier: An optional object implementing `filter` - (e.g. `packaging.specifiers.SpecifierSet`) to filter applicable - versions. - - :return: A `BestCandidateResult` instance. - """ - candidates = self.find_all_candidates(project_name) - candidate_evaluator = self.make_candidate_evaluator( - project_name=project_name, - specifier=specifier, - hashes=hashes, - ) - return candidate_evaluator.compute_best_candidate(candidates) - - def find_requirement( - self, req: InstallRequirement, upgrade: bool - ) -> Optional[InstallationCandidate]: - """Try to find a Link matching req - - Expects req, an InstallRequirement and upgrade, a boolean - Returns a InstallationCandidate if found, - Raises DistributionNotFound or BestVersionAlreadyInstalled otherwise - """ - hashes = req.hashes(trust_internet=False) - best_candidate_result = self.find_best_candidate( - req.name, - specifier=req.specifier, - hashes=hashes, - ) - best_candidate = best_candidate_result.best_candidate - - installed_version: Optional[_BaseVersion] = None - if req.satisfied_by is not None: - installed_version = req.satisfied_by.version - - def _format_versions(cand_iter: Iterable[InstallationCandidate]) -> str: - # This repeated parse_version and str() conversion is needed to - # handle different vendoring sources from pip and pkg_resources. - # If we stop using the pkg_resources provided specifier and start - # using our own, we can drop the cast to str(). 
- return ( - ", ".join( - sorted( - {str(c.version) for c in cand_iter}, - key=parse_version, - ) - ) - or "none" - ) - - if installed_version is None and best_candidate is None: - logger.critical( - "Could not find a version that satisfies the requirement %s " - "(from versions: %s)", - req, - _format_versions(best_candidate_result.iter_all()), - ) - - raise DistributionNotFound( - "No matching distribution found for {}".format(req) - ) - - def _should_install_candidate( - candidate: Optional[InstallationCandidate], - ) -> "TypeGuard[InstallationCandidate]": - if installed_version is None: - return True - if best_candidate is None: - return False - return best_candidate.version > installed_version - - if not upgrade and installed_version is not None: - if _should_install_candidate(best_candidate): - logger.debug( - "Existing installed version (%s) satisfies requirement " - "(most up-to-date version is %s)", - installed_version, - best_candidate.version, - ) - else: - logger.debug( - "Existing installed version (%s) is most up-to-date and " - "satisfies requirement", - installed_version, - ) - return None - - if _should_install_candidate(best_candidate): - logger.debug( - "Using version %s (newest of versions: %s)", - best_candidate.version, - _format_versions(best_candidate_result.iter_applicable()), - ) - return best_candidate - - # We have an existing version, and its the best version - logger.debug( - "Installed version (%s) is most up-to-date (past versions: %s)", - installed_version, - _format_versions(best_candidate_result.iter_applicable()), - ) - raise BestVersionAlreadyInstalled - - -def _find_name_version_sep(fragment: str, canonical_name: str) -> int: - """Find the separator's index based on the package's canonical name. - - :param fragment: A + filename "fragment" (stem) or - egg fragment. - :param canonical_name: The package's canonical name. - - This function is needed since the canonicalized name does not necessarily - have the same length as the egg info's name part. An example:: - - >>> fragment = 'foo__bar-1.0' - >>> canonical_name = 'foo-bar' - >>> _find_name_version_sep(fragment, canonical_name) - 8 - """ - # Project name and version must be separated by one single dash. Find all - # occurrences of dashes; if the string in front of it matches the canonical - # name, this is the one separating the name and version parts. - for i, c in enumerate(fragment): - if c != "-": - continue - if canonicalize_name(fragment[:i]) == canonical_name: - return i - raise ValueError(f"{fragment} does not match {canonical_name}") - - -def _extract_version_from_fragment(fragment: str, canonical_name: str) -> Optional[str]: - """Parse the version string from a + filename - "fragment" (stem) or egg fragment. - - :param fragment: The string to parse. E.g. foo-2.1 - :param canonical_name: The canonicalized name of the package this - belongs to. 
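# A self-contained sketch of the name/version split documented above; it mirrors the
# behaviour of _find_name_version_sep()/_extract_version_from_fragment(), but calls
# packaging.utils.canonicalize_name directly (an assumption -- the real module uses
# pip's vendored copy). The split happens at the dash whose prefix canonicalizes to
# the project's canonical name.
from packaging.utils import canonicalize_name

def split_fragment_sketch(fragment, canonical_name):
    for i, ch in enumerate(fragment):
        if ch == "-" and canonicalize_name(fragment[:i]) == canonical_name:
            return fragment[:i], fragment[i + 1:]
    return None

print(split_fragment_sketch("foo-bar-1.0", "foo-bar"))     # ('foo-bar', '1.0')
print(split_fragment_sketch("Foo_Bar-2.0rc1", "foo-bar"))  # ('Foo_Bar', '2.0rc1')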
- """ - try: - version_start = _find_name_version_sep(fragment, canonical_name) + 1 - except ValueError: - return None - version = fragment[version_start:] - if not version: - return None - return version diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/config/_validate_pyproject/fastjsonschema_exceptions.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/config/_validate_pyproject/fastjsonschema_exceptions.py deleted file mode 100644 index d2dddd6a106f021a4723c1e8f5953ccc09e55e1f..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/config/_validate_pyproject/fastjsonschema_exceptions.py +++ /dev/null @@ -1,51 +0,0 @@ -import re - - -SPLIT_RE = re.compile(r'[\.\[\]]+') - - -class JsonSchemaException(ValueError): - """ - Base exception of ``fastjsonschema`` library. - """ - - -class JsonSchemaValueException(JsonSchemaException): - """ - Exception raised by validation function. Available properties: - - * ``message`` containing human-readable information what is wrong (e.g. ``data.property[index] must be smaller than or equal to 42``), - * invalid ``value`` (e.g. ``60``), - * ``name`` of a path in the data structure (e.g. ``data.property[index]``), - * ``path`` as an array in the data structure (e.g. ``['data', 'property', 'index']``), - * the whole ``definition`` which the ``value`` has to fulfil (e.g. ``{'type': 'number', 'maximum': 42}``), - * ``rule`` which the ``value`` is breaking (e.g. ``maximum``) - * and ``rule_definition`` (e.g. ``42``). - - .. versionchanged:: 2.14.0 - Added all extra properties. - """ - - def __init__(self, message, value=None, name=None, definition=None, rule=None): - super().__init__(message) - self.message = message - self.value = value - self.name = name - self.definition = definition - self.rule = rule - - @property - def path(self): - return [item for item in SPLIT_RE.split(self.name) if item != ''] - - @property - def rule_definition(self): - if not self.rule or not self.definition: - return None - return self.definition.get(self.rule) - - -class JsonSchemaDefinitionException(JsonSchemaException): - """ - Exception raised by generator of validation function. - """ diff --git a/spaces/XzJosh/Azusa-Bert-VITS2/monotonic_align/core.py b/spaces/XzJosh/Azusa-Bert-VITS2/monotonic_align/core.py deleted file mode 100644 index dddc688d76172b880054e544b7a217acd013f14f..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Azusa-Bert-VITS2/monotonic_align/core.py +++ /dev/null @@ -1,35 +0,0 @@ -import numba - - -@numba.jit(numba.void(numba.int32[:,:,::1], numba.float32[:,:,::1], numba.int32[::1], numba.int32[::1]), nopython=True, nogil=True) -def maximum_path_jit(paths, values, t_ys, t_xs): - b = paths.shape[0] - max_neg_val=-1e9 - for i in range(int(b)): - path = paths[i] - value = values[i] - t_y = t_ys[i] - t_x = t_xs[i] - - v_prev = v_cur = 0.0 - index = t_x - 1 - - for y in range(t_y): - for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)): - if x == y: - v_cur = max_neg_val - else: - v_cur = value[y-1, x] - if x == 0: - if y == 0: - v_prev = 0. 
- else: - v_prev = max_neg_val - else: - v_prev = value[y-1, x-1] - value[y, x] += max(v_prev, v_cur) - - for y in range(t_y - 1, -1, -1): - path[y, index] = 1 - if index != 0 and (index == y or value[y-1, index] < value[y-1, index-1]): - index = index - 1 diff --git a/spaces/XzJosh/JM-Bert-VITS2/text/__init__.py b/spaces/XzJosh/JM-Bert-VITS2/text/__init__.py deleted file mode 100644 index 7566bf351ca9b95af9cdc6d729557a9da083800f..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/JM-Bert-VITS2/text/__init__.py +++ /dev/null @@ -1,28 +0,0 @@ -from text.symbols import * - - -_symbol_to_id = {s: i for i, s in enumerate(symbols)} - -def cleaned_text_to_sequence(cleaned_text, tones, language): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. - Args: - text: string to convert to a sequence - Returns: - List of integers corresponding to the symbols in the text - ''' - phones = [_symbol_to_id[symbol] for symbol in cleaned_text] - tone_start = language_tone_start_map[language] - tones = [i + tone_start for i in tones] - lang_id = language_id_map[language] - lang_ids = [lang_id for i in phones] - return phones, tones, lang_ids - -def get_bert(norm_text, word2ph, language): - from .chinese_bert import get_bert_feature as zh_bert - from .english_bert_mock import get_bert_feature as en_bert - lang_bert_func_map = { - 'ZH': zh_bert, - 'EN': en_bert - } - bert = lang_bert_func_map[language](norm_text, word2ph) - return bert diff --git a/spaces/XzJosh/Jianmo-Bert-VITS2/preprocess_text.py b/spaces/XzJosh/Jianmo-Bert-VITS2/preprocess_text.py deleted file mode 100644 index 5eb0f3b9e929fcbe91dcbeb653391227a2518a15..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Jianmo-Bert-VITS2/preprocess_text.py +++ /dev/null @@ -1,64 +0,0 @@ -import json -from random import shuffle - -import tqdm -from text.cleaner import clean_text -from collections import defaultdict -stage = [1,2,3] - -transcription_path = 'filelists/genshin.list' -train_path = 'filelists/train.list' -val_path = 'filelists/val.list' -config_path = "configs/config.json" -val_per_spk = 4 -max_val_total = 8 - -if 1 in stage: - with open( transcription_path+'.cleaned', 'w', encoding='utf-8') as f: - for line in tqdm.tqdm(open(transcription_path, encoding='utf-8').readlines()): - try: - utt, spk, language, text = line.strip().split('|') - norm_text, phones, tones, word2ph = clean_text(text, language) - f.write('{}|{}|{}|{}|{}|{}|{}\n'.format(utt, spk, language, norm_text, ' '.join(phones), - " ".join([str(i) for i in tones]), - " ".join([str(i) for i in word2ph]))) - except Exception as error : - print("err!", utt, error) - -if 2 in stage: - spk_utt_map = defaultdict(list) - spk_id_map = {} - current_sid = 0 - - with open( transcription_path+'.cleaned', encoding='utf-8') as f: - for line in f.readlines(): - utt, spk, language, text, phones, tones, word2ph = line.strip().split('|') - spk_utt_map[spk].append(line) - if spk not in spk_id_map.keys(): - spk_id_map[spk] = current_sid - current_sid += 1 - train_list = [] - val_list = [] - - for spk, utts in spk_utt_map.items(): - shuffle(utts) - val_list+=utts[:val_per_spk] - train_list+=utts[val_per_spk:] - if len(val_list) > max_val_total: - train_list+=val_list[max_val_total:] - val_list = val_list[:max_val_total] - - with open( train_path,"w", encoding='utf-8') as f: - for line in train_list: - f.write(line) - - with open(val_path, "w", encoding='utf-8') as f: - for line in val_list: - f.write(line) - -if 3 in stage: - assert 2 in stage - 
config = json.load(open(config_path, encoding='utf-8')) - config["data"]['spk2id'] = spk_id_map - with open(config_path, 'w', encoding='utf-8') as f: - json.dump(config, f, indent=2, ensure_ascii=False) diff --git a/spaces/XzJosh/ranran-Bert-VITS2/commons.py b/spaces/XzJosh/ranran-Bert-VITS2/commons.py deleted file mode 100644 index 9ad0444b61cbadaa388619986c2889c707d873ce..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/ranran-Bert-VITS2/commons.py +++ /dev/null @@ -1,161 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. * logs_q) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d( - length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = ( - math.log(float(max_timescale) / float(min_timescale)) / - (num_timescales - 1)) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, 
n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2,3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1. / norm_type) - return total_norm diff --git a/spaces/YUANAI/DiffspeechResearch/inference/tts/ds.py b/spaces/YUANAI/DiffspeechResearch/inference/tts/ds.py deleted file mode 100644 index 04b5b4925bfcbfc0e05732054fd3746f1e89bf02..0000000000000000000000000000000000000000 --- a/spaces/YUANAI/DiffspeechResearch/inference/tts/ds.py +++ /dev/null @@ -1,30 +0,0 @@ -import torch -# from inference.tts.fs import FastSpeechInfer -# from modules.tts.fs2_orig import FastSpeech2Orig -from inference.tts.base_tts_infer import BaseTTSInfer -from modules.tts.diffspeech.shallow_diffusion_tts import GaussianDiffusion -from utils.commons.ckpt_utils import load_ckpt -from utils.commons.hparams import hparams - - -class DiffSpeechInfer(BaseTTSInfer): - def build_model(self): - dict_size = len(self.ph_encoder) - model = GaussianDiffusion(dict_size, self.hparams) - model.eval() - load_ckpt(model, hparams['work_dir'], 'model') - return model - - def forward_model(self, inp): - sample = self.input_to_batch(inp) - txt_tokens = sample['txt_tokens'] # [B, T_t] - spk_id = sample.get('spk_ids') - with torch.no_grad(): - output = self.model(txt_tokens, spk_id=spk_id, ref_mels=None, infer=True) - mel_out = output['mel_out'] - wav_out = self.run_vocoder(mel_out) - wav_out = wav_out.cpu().numpy() - return wav_out[0] - -if __name__ == '__main__': - DiffSpeechInfer.example_run() diff --git a/spaces/YangHao520/testShare/app.py b/spaces/YangHao520/testShare/app.py deleted file mode 100644 index 151795c1bbbd93e664e42306e0eb8f2ed8f51816..0000000000000000000000000000000000000000 --- a/spaces/YangHao520/testShare/app.py +++ /dev/null @@ -1,30 +0,0 @@ -import gradio as gr -def sketch_recognition(img): - pass -def question_answer(context,question): - return context+'ASD',question+'SSS' -def img2(img): - return img - -def detect(): - 
gr.Interface(fn=sketch_recognition,inputs='sketchpad',outputs='label').launch() - -def QA(): - gra=gr.Interface(fn=question_answer,inputs=['text','text'],outputs=['textbox','text']) - gra.launch() -def imageTest(): - gra=gr.Interface(fn=img2,inputs='image',outputs='image') - gra.launch() -def image_classifier(inp): - return {'cat': 0.3, 'dog': 0.7} -def greet(name): - return "Hello"+name - -def main(): - #detect() - #QA() #文本 - #imageTest() - demo = gr.Interface(fn=image_classifier, inputs="image", outputs="label",live=True,title='Test for gradio',description='test') - demo.launch(auth=("admin","123"),auth_message='欢迎来到这里',inbrowser=True,enable_queue=False) -if __name__=="__main__": - main() \ No newline at end of file diff --git a/spaces/YanzBotz/YanzBotz-Models/lib/infer_pack/commons.py b/spaces/YanzBotz/YanzBotz-Models/lib/infer_pack/commons.py deleted file mode 100644 index 54470986f37825b35d90d7efa7437d1c26b87215..0000000000000000000000000000000000000000 --- a/spaces/YanzBotz/YanzBotz-Models/lib/infer_pack/commons.py +++ /dev/null @@ -1,166 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size * dilation - dilation) / 2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += ( - 0.5 * (torch.exp(2.0 * logs_p) + ((m_p - m_q) ** 2)) * torch.exp(-2.0 * logs_q) - ) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def slice_segments2(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d(length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = math.log(float(max_timescale) / float(min_timescale)) / ( - num_timescales - 1 - ) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment - ) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def 
add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2, 3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1.0 / norm_type) - return total_norm diff --git a/spaces/Yudha515/Rvc-Models/tests/modules/test_seanet.py b/spaces/Yudha515/Rvc-Models/tests/modules/test_seanet.py deleted file mode 100644 index e5c51b340a2f94fb2828b14daf83d5fad645073d..0000000000000000000000000000000000000000 --- a/spaces/Yudha515/Rvc-Models/tests/modules/test_seanet.py +++ /dev/null @@ -1,115 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
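# A short, hedged illustration of the sequence_mask() helper defined in the commons
# modules above (requires torch): lengths [2, 4] produce a boolean mask over
# max(length) positions via broadcasting, exactly as the helper computes it.
import torch

lengths = torch.tensor([2, 4])
positions = torch.arange(lengths.max(), dtype=lengths.dtype)
mask = positions.unsqueeze(0) < lengths.unsqueeze(1)
print(mask)
# tensor([[ True,  True, False, False],
#         [ True,  True,  True,  True]])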
- -from itertools import product - -import pytest -import torch - -from audiocraft.modules.seanet import SEANetEncoder, SEANetDecoder, SEANetResnetBlock -from audiocraft.modules import StreamableConv1d, StreamableConvTranspose1d - - -class TestSEANetModel: - - def test_base(self): - encoder = SEANetEncoder() - decoder = SEANetDecoder() - - x = torch.randn(1, 1, 24000) - z = encoder(x) - assert list(z.shape) == [1, 128, 75], z.shape - y = decoder(z) - assert y.shape == x.shape, (x.shape, y.shape) - - def test_causal(self): - encoder = SEANetEncoder(causal=True) - decoder = SEANetDecoder(causal=True) - x = torch.randn(1, 1, 24000) - - z = encoder(x) - assert list(z.shape) == [1, 128, 75], z.shape - y = decoder(z) - assert y.shape == x.shape, (x.shape, y.shape) - - def test_conv_skip_connection(self): - encoder = SEANetEncoder(true_skip=False) - decoder = SEANetDecoder(true_skip=False) - - x = torch.randn(1, 1, 24000) - z = encoder(x) - assert list(z.shape) == [1, 128, 75], z.shape - y = decoder(z) - assert y.shape == x.shape, (x.shape, y.shape) - - def test_seanet_encoder_decoder_final_act(self): - encoder = SEANetEncoder(true_skip=False) - decoder = SEANetDecoder(true_skip=False, final_activation='Tanh') - - x = torch.randn(1, 1, 24000) - z = encoder(x) - assert list(z.shape) == [1, 128, 75], z.shape - y = decoder(z) - assert y.shape == x.shape, (x.shape, y.shape) - - def _check_encoder_blocks_norm(self, encoder: SEANetEncoder, n_disable_blocks: int, norm: str): - n_blocks = 0 - for layer in encoder.model: - if isinstance(layer, StreamableConv1d): - n_blocks += 1 - assert layer.conv.norm_type == 'none' if n_blocks <= n_disable_blocks else norm - elif isinstance(layer, SEANetResnetBlock): - for resnet_layer in layer.block: - if isinstance(resnet_layer, StreamableConv1d): - # here we add + 1 to n_blocks as we increment n_blocks just after the block - assert resnet_layer.conv.norm_type == 'none' if (n_blocks + 1) <= n_disable_blocks else norm - - def test_encoder_disable_norm(self): - n_residuals = [0, 1, 3] - disable_blocks = [0, 1, 2, 3, 4, 5, 6] - norms = ['weight_norm', 'none'] - for n_res, disable_blocks, norm in product(n_residuals, disable_blocks, norms): - encoder = SEANetEncoder(n_residual_layers=n_res, norm=norm, - disable_norm_outer_blocks=disable_blocks) - self._check_encoder_blocks_norm(encoder, disable_blocks, norm) - - def _check_decoder_blocks_norm(self, decoder: SEANetDecoder, n_disable_blocks: int, norm: str): - n_blocks = 0 - for layer in decoder.model: - if isinstance(layer, StreamableConv1d): - n_blocks += 1 - assert layer.conv.norm_type == 'none' if (decoder.n_blocks - n_blocks) < n_disable_blocks else norm - elif isinstance(layer, StreamableConvTranspose1d): - n_blocks += 1 - assert layer.convtr.norm_type == 'none' if (decoder.n_blocks - n_blocks) < n_disable_blocks else norm - elif isinstance(layer, SEANetResnetBlock): - for resnet_layer in layer.block: - if isinstance(resnet_layer, StreamableConv1d): - assert resnet_layer.conv.norm_type == 'none' \ - if (decoder.n_blocks - n_blocks) < n_disable_blocks else norm - - def test_decoder_disable_norm(self): - n_residuals = [0, 1, 3] - disable_blocks = [0, 1, 2, 3, 4, 5, 6] - norms = ['weight_norm', 'none'] - for n_res, disable_blocks, norm in product(n_residuals, disable_blocks, norms): - decoder = SEANetDecoder(n_residual_layers=n_res, norm=norm, - disable_norm_outer_blocks=disable_blocks) - self._check_decoder_blocks_norm(decoder, disable_blocks, norm) - - def test_disable_norm_raises_exception(self): - # Invalid 
disable_norm_outer_blocks values raise exceptions - with pytest.raises(AssertionError): - SEANetEncoder(disable_norm_outer_blocks=-1) - - with pytest.raises(AssertionError): - SEANetEncoder(ratios=[1, 1, 2, 2], disable_norm_outer_blocks=7) - - with pytest.raises(AssertionError): - SEANetDecoder(disable_norm_outer_blocks=-1) - - with pytest.raises(AssertionError): - SEANetDecoder(ratios=[1, 1, 2, 2], disable_norm_outer_blocks=7) diff --git a/spaces/Zengwengen/nb/Dockerfile b/spaces/Zengwengen/nb/Dockerfile deleted file mode 100644 index d7e7c7bffc3869c8deee43f15ee92f32ea3d40fb..0000000000000000000000000000000000000000 --- a/spaces/Zengwengen/nb/Dockerfile +++ /dev/null @@ -1,35 +0,0 @@ -# Build Stage -# 使用 golang:alpine 作为构建阶段的基础镜像 -FROM golang:alpine AS builder - -# 添加 git,以便之后能从GitHub克隆项目 -RUN apk --no-cache add git - -# 从 GitHub 克隆 go-proxy-bingai 项目到 /workspace/app 目录下 -RUN git clone https://github.com/Harry-zklcdc/go-proxy-bingai.git /workspace/app - -# 设置工作目录 -WORKDIR /workspace/app - -# 编译go项目。 -RUN go build -ldflags="-s -w" -tags netgo -trimpath -o go-proxy-bingai main.go - -# Runtime Stage -# 使用轻量级alpine镜像作为基础镜像 - -FROM alpine - -# 设置工作目录 -WORKDIR /workspace/app - -# 从构建阶段复制编译后的二进制文件到运行时镜像 -COPY --from=builder /workspace/app/go-proxy-bingai . - -# 设置环境变量 -ENV Go_Proxy_BingAI_USER_TOKEN_1="0a078d7eae634397b7603622ce1a7c30" - -# 暴露8080端口 -EXPOSE 8080 - -# 容器启动命令 -CMD ["/workspace/app/go-proxy-bingai"] \ No newline at end of file diff --git a/spaces/aaronb/Anything2Image/app.py b/spaces/aaronb/Anything2Image/app.py deleted file mode 100644 index 7256a601406d4e0cefb9c9ee2d58322f4a563a67..0000000000000000000000000000000000000000 --- a/spaces/aaronb/Anything2Image/app.py +++ /dev/null @@ -1,82 +0,0 @@ -import os -import gradio as gr -from anything2image.api import Anything2Image - - -anything2img = Anything2Image(imagebind_download_dir='checkpoints') - -with gr.Blocks() as demo: - gr.HTML( - """ -

-        Anything To Image
-        Generate image from anything with ImageBind's unified latent space and stable-diffusion-2-1-unclip.
-        https://github.com/Zeqiang-Lai/Anything2Image

- """ - ) - with gr.Tab('Audio to Image'): - wav_dir = 'assets/wav' - def audio2image(audio): return anything2img(audio=audio) - gr.Interface( - fn=audio2image, - inputs="audio", - outputs="image", - examples=[os.path.join(wav_dir, name) for name in os.listdir(wav_dir)], - ) - with gr.Tab('Audio+Text to Image'): - wav_dir = 'assets/wav' - def audiotext2image(prompt, audio): return anything2img(prompt=prompt, audio=audio) - gr.Interface( - fn=audiotext2image, - inputs=["text","audio"], - outputs="image", - examples=[ - ['A painting', 'assets/wav/cat.wav'], - ['A photo', 'assets/wav/cat.wav'], - ['A painting', 'assets/wav/dog_audio.wav'], - ['A photo', 'assets/wav/dog_audio.wav'], - ], - ) - with gr.Tab('Audio+Image to Image'): - wav_dir = 'assets/wav' - def audioimage2image(audio, image): return anything2img(image=image, audio=audio) - gr.Interface( - fn=audioimage2image, - inputs=["audio","image"], - outputs="image", - examples=[ - ['assets/wav/wave.wav', 'assets/image/bird.png'], - ['assets/wav/wave.wav', 'assets/image/dog_image.jpg'], - ['assets/wav/wave.wav', 'assets/image/room.png'], - ['assets/wav/rain.wav', 'assets/image/room.png'], - ], - ) - with gr.Tab('Image to Image'): - image_dir = 'assets/image' - def image2image(image): return anything2img(image=image) - gr.Interface( - fn=image2image, - inputs=["image"], - outputs="image", - examples=[os.path.join(image_dir, name) for name in os.listdir(image_dir)], - ) - with gr.Tab('Text to Image'): - def text2image(text): return anything2img(text=text) - gr.Interface( - fn=text2image, - inputs=["text"], - outputs="image", - examples=['A sunset over the ocean.', - 'A photo of a car', - "A bird's-eye view of a cityscape.", - "A close-up of a flower."], - ) - with gr.Tab('Text+Any to Image'): - def textany2image(prompt, image, audio): return anything2img(prompt=prompt, image=image, audio=audio) - gr.Interface( - fn=textany2image, - inputs=["text", "image", "audio"], - outputs="image", - examples=[['A painting.', 'assets/image/bird.png', 'assets/wav/wave.wav']], - ) - -demo.queue(1).launch() \ No newline at end of file diff --git a/spaces/abhishek/sketch-to-image/annotator/midas/__init__.py b/spaces/abhishek/sketch-to-image/annotator/midas/__init__.py deleted file mode 100644 index 426c7b2328f9cb475c344e80aeb828c866559aba..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/midas/__init__.py +++ /dev/null @@ -1,52 +0,0 @@ -''' - * Copyright (c) 2023 Salesforce, Inc. - * All rights reserved. 
- * SPDX-License-Identifier: Apache License 2.0 - * For full license text, see LICENSE.txt file in the repo root or http://www.apache.org/licenses/ - * By Can Qin - * Modified from ControlNet repo: https://github.com/lllyasviel/ControlNet - * Copyright (c) 2023 Lvmin Zhang and Maneesh Agrawala -''' - -# Midas Depth Estimation -# From https://github.com/isl-org/MiDaS -# MIT LICENSE - -import cv2 -import numpy as np -import torch - -from einops import rearrange -from .api import MiDaSInference - - -class MidasDetector: - def __init__(self): - self.model = MiDaSInference(model_type="dpt_hybrid").cuda() - - def __call__(self, input_image, a=np.pi * 2.0, bg_th=0.1): - assert input_image.ndim == 3 - image_depth = input_image - with torch.no_grad(): - image_depth = torch.from_numpy(image_depth).float().cuda() - image_depth = image_depth / 127.5 - 1.0 - image_depth = rearrange(image_depth, 'h w c -> 1 c h w') - depth = self.model(image_depth)[0] - - depth_pt = depth.clone() - depth_pt -= torch.min(depth_pt) - depth_pt /= torch.max(depth_pt) - depth_pt = depth_pt.cpu().numpy() - depth_image = (depth_pt * 255.0).clip(0, 255).astype(np.uint8) - - depth_np = depth.cpu().numpy() - x = cv2.Sobel(depth_np, cv2.CV_32F, 1, 0, ksize=3) - y = cv2.Sobel(depth_np, cv2.CV_32F, 0, 1, ksize=3) - z = np.ones_like(x) * a - x[depth_pt < bg_th] = 0 - y[depth_pt < bg_th] = 0 - normal = np.stack([x, y, z], axis=2) - normal /= np.sum(normal ** 2.0, axis=2, keepdims=True) ** 0.5 - normal_image = (normal * 127.5 + 127.5).clip(0, 255).astype(np.uint8) - - return depth_image, normal_image diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/image/io.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/image/io.py deleted file mode 100644 index d3fa2e8cc06b1a7b0b69de6406980b15d61a1e5d..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/image/io.py +++ /dev/null @@ -1,258 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import io -import os.path as osp -from pathlib import Path - -import cv2 -import numpy as np -from cv2 import (IMREAD_COLOR, IMREAD_GRAYSCALE, IMREAD_IGNORE_ORIENTATION, - IMREAD_UNCHANGED) - -from annotator.uniformer.mmcv.utils import check_file_exist, is_str, mkdir_or_exist - -try: - from turbojpeg import TJCS_RGB, TJPF_BGR, TJPF_GRAY, TurboJPEG -except ImportError: - TJCS_RGB = TJPF_GRAY = TJPF_BGR = TurboJPEG = None - -try: - from PIL import Image, ImageOps -except ImportError: - Image = None - -try: - import tifffile -except ImportError: - tifffile = None - -jpeg = None -supported_backends = ['cv2', 'turbojpeg', 'pillow', 'tifffile'] - -imread_flags = { - 'color': IMREAD_COLOR, - 'grayscale': IMREAD_GRAYSCALE, - 'unchanged': IMREAD_UNCHANGED, - 'color_ignore_orientation': IMREAD_IGNORE_ORIENTATION | IMREAD_COLOR, - 'grayscale_ignore_orientation': - IMREAD_IGNORE_ORIENTATION | IMREAD_GRAYSCALE -} - -imread_backend = 'cv2' - - -def use_backend(backend): - """Select a backend for image decoding. - - Args: - backend (str): The image decoding backend type. Options are `cv2`, - `pillow`, `turbojpeg` (see https://github.com/lilohuang/PyTurboJPEG) - and `tifffile`. `turbojpeg` is faster but it only supports `.jpeg` - file format. 
- """ - assert backend in supported_backends - global imread_backend - imread_backend = backend - if imread_backend == 'turbojpeg': - if TurboJPEG is None: - raise ImportError('`PyTurboJPEG` is not installed') - global jpeg - if jpeg is None: - jpeg = TurboJPEG() - elif imread_backend == 'pillow': - if Image is None: - raise ImportError('`Pillow` is not installed') - elif imread_backend == 'tifffile': - if tifffile is None: - raise ImportError('`tifffile` is not installed') - - -def _jpegflag(flag='color', channel_order='bgr'): - channel_order = channel_order.lower() - if channel_order not in ['rgb', 'bgr']: - raise ValueError('channel order must be either "rgb" or "bgr"') - - if flag == 'color': - if channel_order == 'bgr': - return TJPF_BGR - elif channel_order == 'rgb': - return TJCS_RGB - elif flag == 'grayscale': - return TJPF_GRAY - else: - raise ValueError('flag must be "color" or "grayscale"') - - -def _pillow2array(img, flag='color', channel_order='bgr'): - """Convert a pillow image to numpy array. - - Args: - img (:obj:`PIL.Image.Image`): The image loaded using PIL - flag (str): Flags specifying the color type of a loaded image, - candidates are 'color', 'grayscale' and 'unchanged'. - Default to 'color'. - channel_order (str): The channel order of the output image array, - candidates are 'bgr' and 'rgb'. Default to 'bgr'. - - Returns: - np.ndarray: The converted numpy array - """ - channel_order = channel_order.lower() - if channel_order not in ['rgb', 'bgr']: - raise ValueError('channel order must be either "rgb" or "bgr"') - - if flag == 'unchanged': - array = np.array(img) - if array.ndim >= 3 and array.shape[2] >= 3: # color image - array[:, :, :3] = array[:, :, (2, 1, 0)] # RGB to BGR - else: - # Handle exif orientation tag - if flag in ['color', 'grayscale']: - img = ImageOps.exif_transpose(img) - # If the image mode is not 'RGB', convert it to 'RGB' first. - if img.mode != 'RGB': - if img.mode != 'LA': - # Most formats except 'LA' can be directly converted to RGB - img = img.convert('RGB') - else: - # When the mode is 'LA', the default conversion will fill in - # the canvas with black, which sometimes shadows black objects - # in the foreground. - # - # Therefore, a random color (124, 117, 104) is used for canvas - img_rgba = img.convert('RGBA') - img = Image.new('RGB', img_rgba.size, (124, 117, 104)) - img.paste(img_rgba, mask=img_rgba.split()[3]) # 3 is alpha - if flag in ['color', 'color_ignore_orientation']: - array = np.array(img) - if channel_order != 'rgb': - array = array[:, :, ::-1] # RGB to BGR - elif flag in ['grayscale', 'grayscale_ignore_orientation']: - img = img.convert('L') - array = np.array(img) - else: - raise ValueError( - 'flag must be "color", "grayscale", "unchanged", ' - f'"color_ignore_orientation" or "grayscale_ignore_orientation"' - f' but got {flag}') - return array - - -def imread(img_or_path, flag='color', channel_order='bgr', backend=None): - """Read an image. - - Args: - img_or_path (ndarray or str or Path): Either a numpy array or str or - pathlib.Path. If it is a numpy array (loaded image), then - it will be returned as is. - flag (str): Flags specifying the color type of a loaded image, - candidates are `color`, `grayscale`, `unchanged`, - `color_ignore_orientation` and `grayscale_ignore_orientation`. - By default, `cv2` and `pillow` backend would rotate the image - according to its EXIF info unless called with `unchanged` or - `*_ignore_orientation` flags. 
`turbojpeg` and `tifffile` backend - always ignore image's EXIF info regardless of the flag. - The `turbojpeg` backend only supports `color` and `grayscale`. - channel_order (str): Order of channel, candidates are `bgr` and `rgb`. - backend (str | None): The image decoding backend type. Options are - `cv2`, `pillow`, `turbojpeg`, `tifffile`, `None`. - If backend is None, the global imread_backend specified by - ``mmcv.use_backend()`` will be used. Default: None. - - Returns: - ndarray: Loaded image array. - """ - - if backend is None: - backend = imread_backend - if backend not in supported_backends: - raise ValueError(f'backend: {backend} is not supported. Supported ' - "backends are 'cv2', 'turbojpeg', 'pillow'") - if isinstance(img_or_path, Path): - img_or_path = str(img_or_path) - - if isinstance(img_or_path, np.ndarray): - return img_or_path - elif is_str(img_or_path): - check_file_exist(img_or_path, - f'img file does not exist: {img_or_path}') - if backend == 'turbojpeg': - with open(img_or_path, 'rb') as in_file: - img = jpeg.decode(in_file.read(), - _jpegflag(flag, channel_order)) - if img.shape[-1] == 1: - img = img[:, :, 0] - return img - elif backend == 'pillow': - img = Image.open(img_or_path) - img = _pillow2array(img, flag, channel_order) - return img - elif backend == 'tifffile': - img = tifffile.imread(img_or_path) - return img - else: - flag = imread_flags[flag] if is_str(flag) else flag - img = cv2.imread(img_or_path, flag) - if flag == IMREAD_COLOR and channel_order == 'rgb': - cv2.cvtColor(img, cv2.COLOR_BGR2RGB, img) - return img - else: - raise TypeError('"img" must be a numpy array or a str or ' - 'a pathlib.Path object') - - -def imfrombytes(content, flag='color', channel_order='bgr', backend=None): - """Read an image from bytes. - - Args: - content (bytes): Image bytes got from files or other streams. - flag (str): Same as :func:`imread`. - backend (str | None): The image decoding backend type. Options are - `cv2`, `pillow`, `turbojpeg`, `None`. If backend is None, the - global imread_backend specified by ``mmcv.use_backend()`` will be - used. Default: None. - - Returns: - ndarray: Loaded image array. - """ - - if backend is None: - backend = imread_backend - if backend not in supported_backends: - raise ValueError(f'backend: {backend} is not supported. Supported ' - "backends are 'cv2', 'turbojpeg', 'pillow'") - if backend == 'turbojpeg': - img = jpeg.decode(content, _jpegflag(flag, channel_order)) - if img.shape[-1] == 1: - img = img[:, :, 0] - return img - elif backend == 'pillow': - buff = io.BytesIO(content) - img = Image.open(buff) - img = _pillow2array(img, flag, channel_order) - return img - else: - img_np = np.frombuffer(content, np.uint8) - flag = imread_flags[flag] if is_str(flag) else flag - img = cv2.imdecode(img_np, flag) - if flag == IMREAD_COLOR and channel_order == 'rgb': - cv2.cvtColor(img, cv2.COLOR_BGR2RGB, img) - return img - - -def imwrite(img, file_path, params=None, auto_mkdir=True): - """Write image to file. - - Args: - img (ndarray): Image array to be written. - file_path (str): Image file path. - params (None or list): Same as opencv :func:`imwrite` interface. - auto_mkdir (bool): If the parent folder of `file_path` does not exist, - whether to create it automatically. - - Returns: - bool: Successful or not. 
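# A hedged usage sketch for the imread()/imwrite() helpers defined in this module;
# the import path below is an assumption about how this vendored copy is laid out,
# and numpy plus OpenCV must be available for it to run.
import numpy as np
from annotator.uniformer.mmcv.image.io import imread, imwrite

img = np.zeros((16, 16, 3), dtype=np.uint8)
ok = imwrite(img, '/tmp/mmcv_io_example.png')          # parent dir is created if missing
loaded = imread('/tmp/mmcv_io_example.png',
                flag='color', channel_order='rgb')     # read back as RGB instead of BGR
print(ok, loaded.shape)                                # True (16, 16, 3)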
- """ - if auto_mkdir: - dir_name = osp.abspath(osp.dirname(file_path)) - mkdir_or_exist(dir_name) - return cv2.imwrite(file_path, img, params) diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/dense_heads/guided_anchor_head.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/dense_heads/guided_anchor_head.py deleted file mode 100644 index 997ebb751ade2ebae3fce335a08c46f596c60913..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/dense_heads/guided_anchor_head.py +++ /dev/null @@ -1,860 +0,0 @@ -import torch -import torch.nn as nn -from mmcv.cnn import bias_init_with_prob, normal_init -from mmcv.ops import DeformConv2d, MaskedConv2d -from mmcv.runner import force_fp32 - -from mmdet.core import (anchor_inside_flags, build_anchor_generator, - build_assigner, build_bbox_coder, build_sampler, - calc_region, images_to_levels, multi_apply, - multiclass_nms, unmap) -from ..builder import HEADS, build_loss -from .anchor_head import AnchorHead - - -class FeatureAdaption(nn.Module): - """Feature Adaption Module. - - Feature Adaption Module is implemented based on DCN v1. - It uses anchor shape prediction rather than feature map to - predict offsets of deform conv layer. - - Args: - in_channels (int): Number of channels in the input feature map. - out_channels (int): Number of channels in the output feature map. - kernel_size (int): Deformable conv kernel size. - deform_groups (int): Deformable conv group size. - """ - - def __init__(self, - in_channels, - out_channels, - kernel_size=3, - deform_groups=4): - super(FeatureAdaption, self).__init__() - offset_channels = kernel_size * kernel_size * 2 - self.conv_offset = nn.Conv2d( - 2, deform_groups * offset_channels, 1, bias=False) - self.conv_adaption = DeformConv2d( - in_channels, - out_channels, - kernel_size=kernel_size, - padding=(kernel_size - 1) // 2, - deform_groups=deform_groups) - self.relu = nn.ReLU(inplace=True) - - def init_weights(self): - normal_init(self.conv_offset, std=0.1) - normal_init(self.conv_adaption, std=0.01) - - def forward(self, x, shape): - offset = self.conv_offset(shape.detach()) - x = self.relu(self.conv_adaption(x, offset)) - return x - - -@HEADS.register_module() -class GuidedAnchorHead(AnchorHead): - """Guided-Anchor-based head (GA-RPN, GA-RetinaNet, etc.). - - This GuidedAnchorHead will predict high-quality feature guided - anchors and locations where anchors will be kept in inference. - There are mainly 3 categories of bounding-boxes. - - - Sampled 9 pairs for target assignment. (approxes) - - The square boxes where the predicted anchors are based on. (squares) - - Guided anchors. - - Please refer to https://arxiv.org/abs/1901.03278 for more details. - - Args: - num_classes (int): Number of classes. - in_channels (int): Number of channels in the input feature map. - feat_channels (int): Number of hidden channels. - approx_anchor_generator (dict): Config dict for approx generator - square_anchor_generator (dict): Config dict for square generator - anchor_coder (dict): Config dict for anchor coder - bbox_coder (dict): Config dict for bbox coder - reg_decoded_bbox (bool): If true, the regression loss would be - applied directly on decoded bounding boxes, converting both - the predicted boxes and regression targets to absolute - coordinates format. Default False. It should be `True` when - using `IoULoss`, `GIoULoss`, or `DIoULoss` in the bbox head. 
- deform_groups: (int): Group number of DCN in - FeatureAdaption module. - loc_filter_thr (float): Threshold to filter out unconcerned regions. - loss_loc (dict): Config of location loss. - loss_shape (dict): Config of anchor shape loss. - loss_cls (dict): Config of classification loss. - loss_bbox (dict): Config of bbox regression loss. - """ - - def __init__( - self, - num_classes, - in_channels, - feat_channels=256, - approx_anchor_generator=dict( - type='AnchorGenerator', - octave_base_scale=8, - scales_per_octave=3, - ratios=[0.5, 1.0, 2.0], - strides=[4, 8, 16, 32, 64]), - square_anchor_generator=dict( - type='AnchorGenerator', - ratios=[1.0], - scales=[8], - strides=[4, 8, 16, 32, 64]), - anchor_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - target_stds=[1.0, 1.0, 1.0, 1.0] - ), - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - target_stds=[1.0, 1.0, 1.0, 1.0] - ), - reg_decoded_bbox=False, - deform_groups=4, - loc_filter_thr=0.01, - train_cfg=None, - test_cfg=None, - loss_loc=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_shape=dict(type='BoundedIoULoss', beta=0.2, loss_weight=1.0), - loss_cls=dict( - type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0, - loss_weight=1.0)): # yapf: disable - super(AnchorHead, self).__init__() - self.in_channels = in_channels - self.num_classes = num_classes - self.feat_channels = feat_channels - self.deform_groups = deform_groups - self.loc_filter_thr = loc_filter_thr - - # build approx_anchor_generator and square_anchor_generator - assert (approx_anchor_generator['octave_base_scale'] == - square_anchor_generator['scales'][0]) - assert (approx_anchor_generator['strides'] == - square_anchor_generator['strides']) - self.approx_anchor_generator = build_anchor_generator( - approx_anchor_generator) - self.square_anchor_generator = build_anchor_generator( - square_anchor_generator) - self.approxs_per_octave = self.approx_anchor_generator \ - .num_base_anchors[0] - - self.reg_decoded_bbox = reg_decoded_bbox - - # one anchor per location - self.num_anchors = 1 - self.use_sigmoid_cls = loss_cls.get('use_sigmoid', False) - self.loc_focal_loss = loss_loc['type'] in ['FocalLoss'] - self.sampling = loss_cls['type'] not in ['FocalLoss'] - self.ga_sampling = train_cfg is not None and hasattr( - train_cfg, 'ga_sampler') - if self.use_sigmoid_cls: - self.cls_out_channels = self.num_classes - else: - self.cls_out_channels = self.num_classes + 1 - - # build bbox_coder - self.anchor_coder = build_bbox_coder(anchor_coder) - self.bbox_coder = build_bbox_coder(bbox_coder) - - # build losses - self.loss_loc = build_loss(loss_loc) - self.loss_shape = build_loss(loss_shape) - self.loss_cls = build_loss(loss_cls) - self.loss_bbox = build_loss(loss_bbox) - - self.train_cfg = train_cfg - self.test_cfg = test_cfg - - if self.train_cfg: - self.assigner = build_assigner(self.train_cfg.assigner) - # use PseudoSampler when sampling is False - if self.sampling and hasattr(self.train_cfg, 'sampler'): - sampler_cfg = self.train_cfg.sampler - else: - sampler_cfg = dict(type='PseudoSampler') - self.sampler = build_sampler(sampler_cfg, context=self) - - self.ga_assigner = build_assigner(self.train_cfg.ga_assigner) - if self.ga_sampling: - ga_sampler_cfg = self.train_cfg.ga_sampler - else: - ga_sampler_cfg = dict(type='PseudoSampler') - self.ga_sampler = build_sampler(ga_sampler_cfg, context=self) - - self.fp16_enabled = 
False - - self._init_layers() - - def _init_layers(self): - self.relu = nn.ReLU(inplace=True) - self.conv_loc = nn.Conv2d(self.in_channels, 1, 1) - self.conv_shape = nn.Conv2d(self.in_channels, self.num_anchors * 2, 1) - self.feature_adaption = FeatureAdaption( - self.in_channels, - self.feat_channels, - kernel_size=3, - deform_groups=self.deform_groups) - self.conv_cls = MaskedConv2d(self.feat_channels, - self.num_anchors * self.cls_out_channels, - 1) - self.conv_reg = MaskedConv2d(self.feat_channels, self.num_anchors * 4, - 1) - - def init_weights(self): - normal_init(self.conv_cls, std=0.01) - normal_init(self.conv_reg, std=0.01) - - bias_cls = bias_init_with_prob(0.01) - normal_init(self.conv_loc, std=0.01, bias=bias_cls) - normal_init(self.conv_shape, std=0.01) - - self.feature_adaption.init_weights() - - def forward_single(self, x): - loc_pred = self.conv_loc(x) - shape_pred = self.conv_shape(x) - x = self.feature_adaption(x, shape_pred) - # masked conv is only used during inference for speed-up - if not self.training: - mask = loc_pred.sigmoid()[0] >= self.loc_filter_thr - else: - mask = None - cls_score = self.conv_cls(x, mask) - bbox_pred = self.conv_reg(x, mask) - return cls_score, bbox_pred, shape_pred, loc_pred - - def forward(self, feats): - return multi_apply(self.forward_single, feats) - - def get_sampled_approxs(self, featmap_sizes, img_metas, device='cuda'): - """Get sampled approxs and inside flags according to feature map sizes. - - Args: - featmap_sizes (list[tuple]): Multi-level feature map sizes. - img_metas (list[dict]): Image meta info. - device (torch.device | str): device for returned tensors - - Returns: - tuple: approxes of each image, inside flags of each image - """ - num_imgs = len(img_metas) - - # since feature map sizes of all images are the same, we only compute - # approxes for one time - multi_level_approxs = self.approx_anchor_generator.grid_anchors( - featmap_sizes, device=device) - approxs_list = [multi_level_approxs for _ in range(num_imgs)] - - # for each image, we compute inside flags of multi level approxes - inside_flag_list = [] - for img_id, img_meta in enumerate(img_metas): - multi_level_flags = [] - multi_level_approxs = approxs_list[img_id] - - # obtain valid flags for each approx first - multi_level_approx_flags = self.approx_anchor_generator \ - .valid_flags(featmap_sizes, - img_meta['pad_shape'], - device=device) - - for i, flags in enumerate(multi_level_approx_flags): - approxs = multi_level_approxs[i] - inside_flags_list = [] - for i in range(self.approxs_per_octave): - split_valid_flags = flags[i::self.approxs_per_octave] - split_approxs = approxs[i::self.approxs_per_octave, :] - inside_flags = anchor_inside_flags( - split_approxs, split_valid_flags, - img_meta['img_shape'][:2], - self.train_cfg.allowed_border) - inside_flags_list.append(inside_flags) - # inside_flag for a position is true if any anchor in this - # position is true - inside_flags = ( - torch.stack(inside_flags_list, 0).sum(dim=0) > 0) - multi_level_flags.append(inside_flags) - inside_flag_list.append(multi_level_flags) - return approxs_list, inside_flag_list - - def get_anchors(self, - featmap_sizes, - shape_preds, - loc_preds, - img_metas, - use_loc_filter=False, - device='cuda'): - """Get squares according to feature map sizes and guided anchors. - - Args: - featmap_sizes (list[tuple]): Multi-level feature map sizes. - shape_preds (list[tensor]): Multi-level shape predictions. - loc_preds (list[tensor]): Multi-level location predictions. 
- img_metas (list[dict]): Image meta info. - use_loc_filter (bool): Use loc filter or not. - device (torch.device | str): device for returned tensors - - Returns: - tuple: square approxs of each image, guided anchors of each image, - loc masks of each image - """ - num_imgs = len(img_metas) - num_levels = len(featmap_sizes) - - # since feature map sizes of all images are the same, we only compute - # squares for one time - multi_level_squares = self.square_anchor_generator.grid_anchors( - featmap_sizes, device=device) - squares_list = [multi_level_squares for _ in range(num_imgs)] - - # for each image, we compute multi level guided anchors - guided_anchors_list = [] - loc_mask_list = [] - for img_id, img_meta in enumerate(img_metas): - multi_level_guided_anchors = [] - multi_level_loc_mask = [] - for i in range(num_levels): - squares = squares_list[img_id][i] - shape_pred = shape_preds[i][img_id] - loc_pred = loc_preds[i][img_id] - guided_anchors, loc_mask = self._get_guided_anchors_single( - squares, - shape_pred, - loc_pred, - use_loc_filter=use_loc_filter) - multi_level_guided_anchors.append(guided_anchors) - multi_level_loc_mask.append(loc_mask) - guided_anchors_list.append(multi_level_guided_anchors) - loc_mask_list.append(multi_level_loc_mask) - return squares_list, guided_anchors_list, loc_mask_list - - def _get_guided_anchors_single(self, - squares, - shape_pred, - loc_pred, - use_loc_filter=False): - """Get guided anchors and loc masks for a single level. - - Args: - square (tensor): Squares of a single level. - shape_pred (tensor): Shape predections of a single level. - loc_pred (tensor): Loc predections of a single level. - use_loc_filter (list[tensor]): Use loc filter or not. - - Returns: - tuple: guided anchors, location masks - """ - # calculate location filtering mask - loc_pred = loc_pred.sigmoid().detach() - if use_loc_filter: - loc_mask = loc_pred >= self.loc_filter_thr - else: - loc_mask = loc_pred >= 0.0 - mask = loc_mask.permute(1, 2, 0).expand(-1, -1, self.num_anchors) - mask = mask.contiguous().view(-1) - # calculate guided anchors - squares = squares[mask] - anchor_deltas = shape_pred.permute(1, 2, 0).contiguous().view( - -1, 2).detach()[mask] - bbox_deltas = anchor_deltas.new_full(squares.size(), 0) - bbox_deltas[:, 2:] = anchor_deltas - guided_anchors = self.anchor_coder.decode( - squares, bbox_deltas, wh_ratio_clip=1e-6) - return guided_anchors, mask - - def ga_loc_targets(self, gt_bboxes_list, featmap_sizes): - """Compute location targets for guided anchoring. - - Each feature map is divided into positive, negative and ignore regions. - - positive regions: target 1, weight 1 - - ignore regions: target 0, weight 0 - - negative regions: target 0, weight 0.1 - - Args: - gt_bboxes_list (list[Tensor]): Gt bboxes of each image. - featmap_sizes (list[tuple]): Multi level sizes of each feature - maps. - - Returns: - tuple - """ - anchor_scale = self.approx_anchor_generator.octave_base_scale - anchor_strides = self.approx_anchor_generator.strides - # Currently only supports same stride in x and y direction. 
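# [Editor's illustrative sketch; not part of the original mmdet file.]
# The ratios computed just below carve each ground-truth box into a positive
# (center) region, an ignore region, and a negative region. A minimal numeric
# example, assuming the commonly used guided-anchoring settings
# center_ratio=0.2 and ignore_ratio=0.5 (hypothetical train_cfg values) and
# assuming calc_region shrinks a box symmetrically by the given ratio:
#
#     center_ratio, ignore_ratio = 0.2, 0.5
#     r1 = (1 - center_ratio) / 2       # 0.4  -> central 20% of the box is positive
#     r2 = (1 - ignore_ratio) / 2       # 0.25 -> central 50% is positive + ignore
#     x1, x2 = 0.0, 12.5                # gt box edges rescaled to a stride-8 level
#     ctr_x1 = (1 - r1) * x1 + r1 * x2  # 5.0
#     ctr_x2 = r1 * x1 + (1 - r1) * x2  # 7.5   -> positive span of width 2.5
#     ign_x1 = (1 - r2) * x1 + r2 * x2  # 3.125
#     ign_x2 = r2 * x1 + (1 - r2) * x2  # 9.375 -> ignore span; everything else keeps weight 0.1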
- for stride in anchor_strides: - assert (stride[0] == stride[1]) - anchor_strides = [stride[0] for stride in anchor_strides] - - center_ratio = self.train_cfg.center_ratio - ignore_ratio = self.train_cfg.ignore_ratio - img_per_gpu = len(gt_bboxes_list) - num_lvls = len(featmap_sizes) - r1 = (1 - center_ratio) / 2 - r2 = (1 - ignore_ratio) / 2 - all_loc_targets = [] - all_loc_weights = [] - all_ignore_map = [] - for lvl_id in range(num_lvls): - h, w = featmap_sizes[lvl_id] - loc_targets = torch.zeros( - img_per_gpu, - 1, - h, - w, - device=gt_bboxes_list[0].device, - dtype=torch.float32) - loc_weights = torch.full_like(loc_targets, -1) - ignore_map = torch.zeros_like(loc_targets) - all_loc_targets.append(loc_targets) - all_loc_weights.append(loc_weights) - all_ignore_map.append(ignore_map) - for img_id in range(img_per_gpu): - gt_bboxes = gt_bboxes_list[img_id] - scale = torch.sqrt((gt_bboxes[:, 2] - gt_bboxes[:, 0]) * - (gt_bboxes[:, 3] - gt_bboxes[:, 1])) - min_anchor_size = scale.new_full( - (1, ), float(anchor_scale * anchor_strides[0])) - # assign gt bboxes to different feature levels w.r.t. their scales - target_lvls = torch.floor( - torch.log2(scale) - torch.log2(min_anchor_size) + 0.5) - target_lvls = target_lvls.clamp(min=0, max=num_lvls - 1).long() - for gt_id in range(gt_bboxes.size(0)): - lvl = target_lvls[gt_id].item() - # rescaled to corresponding feature map - gt_ = gt_bboxes[gt_id, :4] / anchor_strides[lvl] - # calculate ignore regions - ignore_x1, ignore_y1, ignore_x2, ignore_y2 = calc_region( - gt_, r2, featmap_sizes[lvl]) - # calculate positive (center) regions - ctr_x1, ctr_y1, ctr_x2, ctr_y2 = calc_region( - gt_, r1, featmap_sizes[lvl]) - all_loc_targets[lvl][img_id, 0, ctr_y1:ctr_y2 + 1, - ctr_x1:ctr_x2 + 1] = 1 - all_loc_weights[lvl][img_id, 0, ignore_y1:ignore_y2 + 1, - ignore_x1:ignore_x2 + 1] = 0 - all_loc_weights[lvl][img_id, 0, ctr_y1:ctr_y2 + 1, - ctr_x1:ctr_x2 + 1] = 1 - # calculate ignore map on nearby low level feature - if lvl > 0: - d_lvl = lvl - 1 - # rescaled to corresponding feature map - gt_ = gt_bboxes[gt_id, :4] / anchor_strides[d_lvl] - ignore_x1, ignore_y1, ignore_x2, ignore_y2 = calc_region( - gt_, r2, featmap_sizes[d_lvl]) - all_ignore_map[d_lvl][img_id, 0, ignore_y1:ignore_y2 + 1, - ignore_x1:ignore_x2 + 1] = 1 - # calculate ignore map on nearby high level feature - if lvl < num_lvls - 1: - u_lvl = lvl + 1 - # rescaled to corresponding feature map - gt_ = gt_bboxes[gt_id, :4] / anchor_strides[u_lvl] - ignore_x1, ignore_y1, ignore_x2, ignore_y2 = calc_region( - gt_, r2, featmap_sizes[u_lvl]) - all_ignore_map[u_lvl][img_id, 0, ignore_y1:ignore_y2 + 1, - ignore_x1:ignore_x2 + 1] = 1 - for lvl_id in range(num_lvls): - # ignore negative regions w.r.t. ignore map - all_loc_weights[lvl_id][(all_loc_weights[lvl_id] < 0) - & (all_ignore_map[lvl_id] > 0)] = 0 - # set negative regions with weight 0.1 - all_loc_weights[lvl_id][all_loc_weights[lvl_id] < 0] = 0.1 - # loc average factor to balance loss - loc_avg_factor = sum( - [t.size(0) * t.size(-1) * t.size(-2) - for t in all_loc_targets]) / 200 - return all_loc_targets, all_loc_weights, loc_avg_factor - - def _ga_shape_target_single(self, - flat_approxs, - inside_flags, - flat_squares, - gt_bboxes, - gt_bboxes_ignore, - img_meta, - unmap_outputs=True): - """Compute guided anchoring targets. - - This function returns sampled anchors and gt bboxes directly - rather than calculates regression targets. 
- - Args: - flat_approxs (Tensor): flat approxs of a single image, - shape (n, 4) - inside_flags (Tensor): inside flags of a single image, - shape (n, ). - flat_squares (Tensor): flat squares of a single image, - shape (approxs_per_octave * n, 4) - gt_bboxes (Tensor): Ground truth bboxes of a single image. - img_meta (dict): Meta info of a single image. - approxs_per_octave (int): number of approxs per octave - cfg (dict): RPN train configs. - unmap_outputs (bool): unmap outputs or not. - - Returns: - tuple - """ - if not inside_flags.any(): - return (None, ) * 5 - # assign gt and sample anchors - expand_inside_flags = inside_flags[:, None].expand( - -1, self.approxs_per_octave).reshape(-1) - approxs = flat_approxs[expand_inside_flags, :] - squares = flat_squares[inside_flags, :] - - assign_result = self.ga_assigner.assign(approxs, squares, - self.approxs_per_octave, - gt_bboxes, gt_bboxes_ignore) - sampling_result = self.ga_sampler.sample(assign_result, squares, - gt_bboxes) - - bbox_anchors = torch.zeros_like(squares) - bbox_gts = torch.zeros_like(squares) - bbox_weights = torch.zeros_like(squares) - - pos_inds = sampling_result.pos_inds - neg_inds = sampling_result.neg_inds - if len(pos_inds) > 0: - bbox_anchors[pos_inds, :] = sampling_result.pos_bboxes - bbox_gts[pos_inds, :] = sampling_result.pos_gt_bboxes - bbox_weights[pos_inds, :] = 1.0 - - # map up to original set of anchors - if unmap_outputs: - num_total_anchors = flat_squares.size(0) - bbox_anchors = unmap(bbox_anchors, num_total_anchors, inside_flags) - bbox_gts = unmap(bbox_gts, num_total_anchors, inside_flags) - bbox_weights = unmap(bbox_weights, num_total_anchors, inside_flags) - - return (bbox_anchors, bbox_gts, bbox_weights, pos_inds, neg_inds) - - def ga_shape_targets(self, - approx_list, - inside_flag_list, - square_list, - gt_bboxes_list, - img_metas, - gt_bboxes_ignore_list=None, - unmap_outputs=True): - """Compute guided anchoring targets. - - Args: - approx_list (list[list]): Multi level approxs of each image. - inside_flag_list (list[list]): Multi level inside flags of each - image. - square_list (list[list]): Multi level squares of each image. - gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image. - img_metas (list[dict]): Meta info of each image. - gt_bboxes_ignore_list (list[Tensor]): ignore list of gt bboxes. - unmap_outputs (bool): unmap outputs or not. 
- - Returns: - tuple - """ - num_imgs = len(img_metas) - assert len(approx_list) == len(inside_flag_list) == len( - square_list) == num_imgs - # anchor number of multi levels - num_level_squares = [squares.size(0) for squares in square_list[0]] - # concat all level anchors and flags to a single tensor - inside_flag_flat_list = [] - approx_flat_list = [] - square_flat_list = [] - for i in range(num_imgs): - assert len(square_list[i]) == len(inside_flag_list[i]) - inside_flag_flat_list.append(torch.cat(inside_flag_list[i])) - approx_flat_list.append(torch.cat(approx_list[i])) - square_flat_list.append(torch.cat(square_list[i])) - - # compute targets for each image - if gt_bboxes_ignore_list is None: - gt_bboxes_ignore_list = [None for _ in range(num_imgs)] - (all_bbox_anchors, all_bbox_gts, all_bbox_weights, pos_inds_list, - neg_inds_list) = multi_apply( - self._ga_shape_target_single, - approx_flat_list, - inside_flag_flat_list, - square_flat_list, - gt_bboxes_list, - gt_bboxes_ignore_list, - img_metas, - unmap_outputs=unmap_outputs) - # no valid anchors - if any([bbox_anchors is None for bbox_anchors in all_bbox_anchors]): - return None - # sampled anchors of all images - num_total_pos = sum([max(inds.numel(), 1) for inds in pos_inds_list]) - num_total_neg = sum([max(inds.numel(), 1) for inds in neg_inds_list]) - # split targets to a list w.r.t. multiple levels - bbox_anchors_list = images_to_levels(all_bbox_anchors, - num_level_squares) - bbox_gts_list = images_to_levels(all_bbox_gts, num_level_squares) - bbox_weights_list = images_to_levels(all_bbox_weights, - num_level_squares) - return (bbox_anchors_list, bbox_gts_list, bbox_weights_list, - num_total_pos, num_total_neg) - - def loss_shape_single(self, shape_pred, bbox_anchors, bbox_gts, - anchor_weights, anchor_total_num): - shape_pred = shape_pred.permute(0, 2, 3, 1).contiguous().view(-1, 2) - bbox_anchors = bbox_anchors.contiguous().view(-1, 4) - bbox_gts = bbox_gts.contiguous().view(-1, 4) - anchor_weights = anchor_weights.contiguous().view(-1, 4) - bbox_deltas = bbox_anchors.new_full(bbox_anchors.size(), 0) - bbox_deltas[:, 2:] += shape_pred - # filter out negative samples to speed-up weighted_bounded_iou_loss - inds = torch.nonzero( - anchor_weights[:, 0] > 0, as_tuple=False).squeeze(1) - bbox_deltas_ = bbox_deltas[inds] - bbox_anchors_ = bbox_anchors[inds] - bbox_gts_ = bbox_gts[inds] - anchor_weights_ = anchor_weights[inds] - pred_anchors_ = self.anchor_coder.decode( - bbox_anchors_, bbox_deltas_, wh_ratio_clip=1e-6) - loss_shape = self.loss_shape( - pred_anchors_, - bbox_gts_, - anchor_weights_, - avg_factor=anchor_total_num) - return loss_shape - - def loss_loc_single(self, loc_pred, loc_target, loc_weight, - loc_avg_factor): - loss_loc = self.loss_loc( - loc_pred.reshape(-1, 1), - loc_target.reshape(-1).long(), - loc_weight.reshape(-1), - avg_factor=loc_avg_factor) - return loss_loc - - @force_fp32( - apply_to=('cls_scores', 'bbox_preds', 'shape_preds', 'loc_preds')) - def loss(self, - cls_scores, - bbox_preds, - shape_preds, - loc_preds, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - assert len(featmap_sizes) == self.approx_anchor_generator.num_levels - - device = cls_scores[0].device - - # get loc targets - loc_targets, loc_weights, loc_avg_factor = self.ga_loc_targets( - gt_bboxes, featmap_sizes) - - # get sampled approxes - approxs_list, inside_flag_list = self.get_sampled_approxs( - featmap_sizes, img_metas, device=device) - # get 
squares and guided anchors - squares_list, guided_anchors_list, _ = self.get_anchors( - featmap_sizes, shape_preds, loc_preds, img_metas, device=device) - - # get shape targets - shape_targets = self.ga_shape_targets(approxs_list, inside_flag_list, - squares_list, gt_bboxes, - img_metas) - if shape_targets is None: - return None - (bbox_anchors_list, bbox_gts_list, anchor_weights_list, anchor_fg_num, - anchor_bg_num) = shape_targets - anchor_total_num = ( - anchor_fg_num if not self.ga_sampling else anchor_fg_num + - anchor_bg_num) - - # get anchor targets - label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1 - cls_reg_targets = self.get_targets( - guided_anchors_list, - inside_flag_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - label_channels=label_channels) - if cls_reg_targets is None: - return None - (labels_list, label_weights_list, bbox_targets_list, bbox_weights_list, - num_total_pos, num_total_neg) = cls_reg_targets - num_total_samples = ( - num_total_pos + num_total_neg if self.sampling else num_total_pos) - - # anchor number of multi levels - num_level_anchors = [ - anchors.size(0) for anchors in guided_anchors_list[0] - ] - # concat all level anchors to a single tensor - concat_anchor_list = [] - for i in range(len(guided_anchors_list)): - concat_anchor_list.append(torch.cat(guided_anchors_list[i])) - all_anchor_list = images_to_levels(concat_anchor_list, - num_level_anchors) - - # get classification and bbox regression losses - losses_cls, losses_bbox = multi_apply( - self.loss_single, - cls_scores, - bbox_preds, - all_anchor_list, - labels_list, - label_weights_list, - bbox_targets_list, - bbox_weights_list, - num_total_samples=num_total_samples) - - # get anchor location loss - losses_loc = [] - for i in range(len(loc_preds)): - loss_loc = self.loss_loc_single( - loc_preds[i], - loc_targets[i], - loc_weights[i], - loc_avg_factor=loc_avg_factor) - losses_loc.append(loss_loc) - - # get anchor shape loss - losses_shape = [] - for i in range(len(shape_preds)): - loss_shape = self.loss_shape_single( - shape_preds[i], - bbox_anchors_list[i], - bbox_gts_list[i], - anchor_weights_list[i], - anchor_total_num=anchor_total_num) - losses_shape.append(loss_shape) - - return dict( - loss_cls=losses_cls, - loss_bbox=losses_bbox, - loss_shape=losses_shape, - loss_loc=losses_loc) - - @force_fp32( - apply_to=('cls_scores', 'bbox_preds', 'shape_preds', 'loc_preds')) - def get_bboxes(self, - cls_scores, - bbox_preds, - shape_preds, - loc_preds, - img_metas, - cfg=None, - rescale=False): - assert len(cls_scores) == len(bbox_preds) == len(shape_preds) == len( - loc_preds) - num_levels = len(cls_scores) - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - device = cls_scores[0].device - # get guided anchors - _, guided_anchors, loc_masks = self.get_anchors( - featmap_sizes, - shape_preds, - loc_preds, - img_metas, - use_loc_filter=not self.training, - device=device) - result_list = [] - for img_id in range(len(img_metas)): - cls_score_list = [ - cls_scores[i][img_id].detach() for i in range(num_levels) - ] - bbox_pred_list = [ - bbox_preds[i][img_id].detach() for i in range(num_levels) - ] - guided_anchor_list = [ - guided_anchors[img_id][i].detach() for i in range(num_levels) - ] - loc_mask_list = [ - loc_masks[img_id][i].detach() for i in range(num_levels) - ] - img_shape = img_metas[img_id]['img_shape'] - scale_factor = img_metas[img_id]['scale_factor'] - proposals = self._get_bboxes_single(cls_score_list, 
bbox_pred_list, - guided_anchor_list, - loc_mask_list, img_shape, - scale_factor, cfg, rescale) - result_list.append(proposals) - return result_list - - def _get_bboxes_single(self, - cls_scores, - bbox_preds, - mlvl_anchors, - mlvl_masks, - img_shape, - scale_factor, - cfg, - rescale=False): - cfg = self.test_cfg if cfg is None else cfg - assert len(cls_scores) == len(bbox_preds) == len(mlvl_anchors) - mlvl_bboxes = [] - mlvl_scores = [] - for cls_score, bbox_pred, anchors, mask in zip(cls_scores, bbox_preds, - mlvl_anchors, - mlvl_masks): - assert cls_score.size()[-2:] == bbox_pred.size()[-2:] - # if no location is kept, end. - if mask.sum() == 0: - continue - # reshape scores and bbox_pred - cls_score = cls_score.permute(1, 2, - 0).reshape(-1, self.cls_out_channels) - if self.use_sigmoid_cls: - scores = cls_score.sigmoid() - else: - scores = cls_score.softmax(-1) - bbox_pred = bbox_pred.permute(1, 2, 0).reshape(-1, 4) - # filter scores, bbox_pred w.r.t. mask. - # anchors are filtered in get_anchors() beforehand. - scores = scores[mask, :] - bbox_pred = bbox_pred[mask, :] - if scores.dim() == 0: - anchors = anchors.unsqueeze(0) - scores = scores.unsqueeze(0) - bbox_pred = bbox_pred.unsqueeze(0) - # filter anchors, bbox_pred, scores w.r.t. scores - nms_pre = cfg.get('nms_pre', -1) - if nms_pre > 0 and scores.shape[0] > nms_pre: - if self.use_sigmoid_cls: - max_scores, _ = scores.max(dim=1) - else: - # remind that we set FG labels to [0, num_class-1] - # since mmdet v2.0 - # BG cat_id: num_class - max_scores, _ = scores[:, :-1].max(dim=1) - _, topk_inds = max_scores.topk(nms_pre) - anchors = anchors[topk_inds, :] - bbox_pred = bbox_pred[topk_inds, :] - scores = scores[topk_inds, :] - bboxes = self.bbox_coder.decode( - anchors, bbox_pred, max_shape=img_shape) - mlvl_bboxes.append(bboxes) - mlvl_scores.append(scores) - mlvl_bboxes = torch.cat(mlvl_bboxes) - if rescale: - mlvl_bboxes /= mlvl_bboxes.new_tensor(scale_factor) - mlvl_scores = torch.cat(mlvl_scores) - if self.use_sigmoid_cls: - # Add a dummy background class to the backend when using sigmoid - # remind that we set FG labels to [0, num_class-1] since mmdet v2.0 - # BG cat_id: num_class - padding = mlvl_scores.new_zeros(mlvl_scores.shape[0], 1) - mlvl_scores = torch.cat([mlvl_scores, padding], dim=1) - # multi class NMS - det_bboxes, det_labels = multiclass_nms(mlvl_bboxes, mlvl_scores, - cfg.score_thr, cfg.nms, - cfg.max_per_img) - return det_bboxes, det_labels diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/detectors/fsaf.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/detectors/fsaf.py deleted file mode 100644 index 9f10fa1ae10f31e6cb5de65505b14a4fc97dd022..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/detectors/fsaf.py +++ /dev/null @@ -1,17 +0,0 @@ -from ..builder import DETECTORS -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class FSAF(SingleStageDetector): - """Implementation of `FSAF `_""" - - def __init__(self, - backbone, - neck, - bbox_head, - train_cfg=None, - test_cfg=None, - pretrained=None): - super(FSAF, self).__init__(backbone, neck, bbox_head, train_cfg, - test_cfg, pretrained) diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/utils/builder.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/utils/builder.py deleted file mode 100644 index 
f362d1c92ca9d4ed95a2b3d28d3e6baedd14e462..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/utils/builder.py +++ /dev/null @@ -1,14 +0,0 @@ -from mmcv.utils import Registry, build_from_cfg - -TRANSFORMER = Registry('Transformer') -POSITIONAL_ENCODING = Registry('Position encoding') - - -def build_transformer(cfg, default_args=None): - """Builder for Transformer.""" - return build_from_cfg(cfg, TRANSFORMER, default_args) - - -def build_positional_encoding(cfg, default_args=None): - """Builder for Position Encoding.""" - return build_from_cfg(cfg, POSITIONAL_ENCODING, default_args) diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/core/bbox/assigners/hungarian_assigner.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/core/bbox/assigners/hungarian_assigner.py deleted file mode 100644 index e10cc14afac4ddfcb9395c1a250ece1fbfe3263c..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/core/bbox/assigners/hungarian_assigner.py +++ /dev/null @@ -1,145 +0,0 @@ -import torch - -from ..builder import BBOX_ASSIGNERS -from ..match_costs import build_match_cost -from ..transforms import bbox_cxcywh_to_xyxy -from .assign_result import AssignResult -from .base_assigner import BaseAssigner - -try: - from scipy.optimize import linear_sum_assignment -except ImportError: - linear_sum_assignment = None - - -@BBOX_ASSIGNERS.register_module() -class HungarianAssigner(BaseAssigner): - """Computes one-to-one matching between predictions and ground truth. - - This class computes an assignment between the targets and the predictions - based on the costs. The costs are weighted sum of three components: - classification cost, regression L1 cost and regression iou cost. The - targets don't include the no_object, so generally there are more - predictions than targets. After the one-to-one matching, the un-matched - are treated as backgrounds. Thus each query prediction will be assigned - with `0` or a positive integer indicating the ground truth index: - - - 0: negative sample, no assigned gt - - positive integer: positive sample, index (1-based) of assigned gt - - Args: - cls_weight (int | float, optional): The scale factor for classification - cost. Default 1.0. - bbox_weight (int | float, optional): The scale factor for regression - L1 cost. Default 1.0. - iou_weight (int | float, optional): The scale factor for regression - iou cost. Default 1.0. - iou_calculator (dict | optional): The config for the iou calculation. - Default type `BboxOverlaps2D`. - iou_mode (str | optional): "iou" (intersection over union), "iof" - (intersection over foreground), or "giou" (generalized - intersection over union). Default "giou". - """ - - def __init__(self, - cls_cost=dict(type='ClassificationCost', weight=1.), - reg_cost=dict(type='BBoxL1Cost', weight=1.0), - iou_cost=dict(type='IoUCost', iou_mode='giou', weight=1.0)): - self.cls_cost = build_match_cost(cls_cost) - self.reg_cost = build_match_cost(reg_cost) - self.iou_cost = build_match_cost(iou_cost) - - def assign(self, - bbox_pred, - cls_pred, - gt_bboxes, - gt_labels, - img_meta, - gt_bboxes_ignore=None, - eps=1e-7): - """Computes one-to-one matching based on the weighted costs. - - This method assign each query prediction to a ground truth or - background. The `assigned_gt_inds` with -1 means don't care, - 0 means negative sample, and positive number is the index (1-based) - of assigned gt. 
- The assignment is done in the following steps, the order matters. - - 1. assign every prediction to -1 - 2. compute the weighted costs - 3. do Hungarian matching on CPU based on the costs - 4. assign all to 0 (background) first, then for each matched pair - between predictions and gts, treat this prediction as foreground - and assign the corresponding gt index (plus 1) to it. - - Args: - bbox_pred (Tensor): Predicted boxes with normalized coordinates - (cx, cy, w, h), which are all in range [0, 1]. Shape - [num_query, 4]. - cls_pred (Tensor): Predicted classification logits, shape - [num_query, num_class]. - gt_bboxes (Tensor): Ground truth boxes with unnormalized - coordinates (x1, y1, x2, y2). Shape [num_gt, 4]. - gt_labels (Tensor): Label of `gt_bboxes`, shape (num_gt,). - img_meta (dict): Meta information for current image. - gt_bboxes_ignore (Tensor, optional): Ground truth bboxes that are - labelled as `ignored`. Default None. - eps (int | float, optional): A value added to the denominator for - numerical stability. Default 1e-7. - - Returns: - :obj:`AssignResult`: The assigned result. - """ - assert gt_bboxes_ignore is None, \ - 'Only case when gt_bboxes_ignore is None is supported.' - num_gts, num_bboxes = gt_bboxes.size(0), bbox_pred.size(0) - - # 1. assign -1 by default - assigned_gt_inds = bbox_pred.new_full((num_bboxes, ), - -1, - dtype=torch.long) - assigned_labels = bbox_pred.new_full((num_bboxes, ), - -1, - dtype=torch.long) - if num_gts == 0 or num_bboxes == 0: - # No ground truth or boxes, return empty assignment - if num_gts == 0: - # No ground truth, assign all to background - assigned_gt_inds[:] = 0 - return AssignResult( - num_gts, assigned_gt_inds, None, labels=assigned_labels) - img_h, img_w, _ = img_meta['img_shape'] - factor = gt_bboxes.new_tensor([img_w, img_h, img_w, - img_h]).unsqueeze(0) - - # 2. compute the weighted costs - # classification and bboxcost. - cls_cost = self.cls_cost(cls_pred, gt_labels) - # regression L1 cost - normalize_gt_bboxes = gt_bboxes / factor - reg_cost = self.reg_cost(bbox_pred, normalize_gt_bboxes) - # regression iou cost, defaultly giou is used in official DETR. - bboxes = bbox_cxcywh_to_xyxy(bbox_pred) * factor - iou_cost = self.iou_cost(bboxes, gt_bboxes) - # weighted sum of above three costs - cost = cls_cost + reg_cost + iou_cost - - # 3. do Hungarian matching on CPU using linear_sum_assignment - cost = cost.detach().cpu() - if linear_sum_assignment is None: - raise ImportError('Please run "pip install scipy" ' - 'to install scipy first.') - matched_row_inds, matched_col_inds = linear_sum_assignment(cost) - matched_row_inds = torch.from_numpy(matched_row_inds).to( - bbox_pred.device) - matched_col_inds = torch.from_numpy(matched_col_inds).to( - bbox_pred.device) - - # 4. 
assign backgrounds and foregrounds - # assign all indices to backgrounds first - assigned_gt_inds[:] = 0 - # assign foregrounds based on matching results - assigned_gt_inds[matched_row_inds] = matched_col_inds + 1 - assigned_labels[matched_row_inds] = gt_labels[matched_col_inds] - return AssignResult( - num_gts, assigned_gt_inds, None, labels=assigned_labels) diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/configs/_base_/models/fpn_uniformer.py b/spaces/abhishek/sketch-to-image/annotator/uniformer_base/configs/_base_/models/fpn_uniformer.py deleted file mode 100644 index 8aae98c5991055bfcc08e82ccdc09f8b1d9f8a8d..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/configs/_base_/models/fpn_uniformer.py +++ /dev/null @@ -1,35 +0,0 @@ -# model settings -norm_cfg = dict(type='SyncBN', requires_grad=True) -model = dict( - type='EncoderDecoder', - backbone=dict( - type='UniFormer', - embed_dim=[64, 128, 320, 512], - layers=[3, 4, 8, 3], - head_dim=64, - mlp_ratio=4., - qkv_bias=True, - drop_rate=0., - attn_drop_rate=0., - drop_path_rate=0.1), - neck=dict( - type='FPN', - in_channels=[64, 128, 320, 512], - out_channels=256, - num_outs=4), - decode_head=dict( - type='FPNHead', - in_channels=[256, 256, 256, 256], - in_index=[0, 1, 2, 3], - feature_strides=[4, 8, 16, 32], - channels=128, - dropout_ratio=0.1, - num_classes=150, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)), - # model training and testing settings - train_cfg=dict(), - test_cfg=dict(mode='whole') -) diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/runner/hooks/memory.py b/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/runner/hooks/memory.py deleted file mode 100644 index 70cf9a838fb314e3bd3c07aadbc00921a81e83ed..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/runner/hooks/memory.py +++ /dev/null @@ -1,25 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
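# [Editor's illustrative note; not part of the original file.] The EmptyCacheHook
# defined below simply calls torch.cuda.empty_cache() at the configured points of
# a training run. A hedged registration sketch, assuming an mmcv-style runner
# object named `runner` is available (hypothetical variable):
#
#     runner.register_hook(EmptyCacheHook(before_epoch=False, after_epoch=True))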
-import torch - -from .hook import HOOKS, Hook - - -@HOOKS.register_module() -class EmptyCacheHook(Hook): - - def __init__(self, before_epoch=False, after_epoch=True, after_iter=False): - self._before_epoch = before_epoch - self._after_epoch = after_epoch - self._after_iter = after_iter - - def after_iter(self, runner): - if self._after_iter: - torch.cuda.empty_cache() - - def before_epoch(self, runner): - if self._before_epoch: - torch.cuda.empty_cache() - - def after_epoch(self, runner): - if self._after_epoch: - torch.cuda.empty_cache() diff --git a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/app/cocoa.py b/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/app/cocoa.py deleted file mode 100644 index f3cbe84f3b690cf33b870a0e1da872136265af15..0000000000000000000000000000000000000000 --- a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/app/cocoa.py +++ /dev/null @@ -1,159 +0,0 @@ -from pyglet.app.base import PlatformEventLoop -from pyglet.libs.darwin import cocoapy - -NSApplication = cocoapy.ObjCClass('NSApplication') -NSMenu = cocoapy.ObjCClass('NSMenu') -NSMenuItem = cocoapy.ObjCClass('NSMenuItem') -NSAutoreleasePool = cocoapy.ObjCClass('NSAutoreleasePool') -NSDate = cocoapy.ObjCClass('NSDate') -NSEvent = cocoapy.ObjCClass('NSEvent') -NSUserDefaults = cocoapy.ObjCClass('NSUserDefaults') - - -class AutoReleasePool: - def __enter__(self): - self.pool = NSAutoreleasePool.alloc().init() - return self.pool - - def __exit__(self, exc_type, exc_value, traceback): - self.pool.drain() - del self.pool - - -def add_menu_item(menu, title, action, key): - with AutoReleasePool(): - title = cocoapy.CFSTR(title) - action = cocoapy.get_selector(action) - key = cocoapy.CFSTR(key) - menuItem = NSMenuItem.alloc().initWithTitle_action_keyEquivalent_( - title, action, key) - menu.addItem_(menuItem) - - # cleanup - title.release() - key.release() - menuItem.release() - - -def create_menu(): - with AutoReleasePool(): - appMenu = NSMenu.alloc().init() - - # Hide still doesn't work!? - add_menu_item(appMenu, 'Hide!', 'hide:', 'h') - appMenu.addItem_(NSMenuItem.separatorItem()) - add_menu_item(appMenu, 'Quit!', 'terminate:', 'q') - - menubar = NSMenu.alloc().init() - appMenuItem = NSMenuItem.alloc().init() - appMenuItem.setSubmenu_(appMenu) - menubar.addItem_(appMenuItem) - NSApp = NSApplication.sharedApplication() - NSApp.setMainMenu_(menubar) - - # cleanup - appMenu.release() - menubar.release() - appMenuItem.release() - - -class CocoaEventLoop(PlatformEventLoop): - - def __init__(self): - super(CocoaEventLoop, self).__init__() - with AutoReleasePool(): - # Prepare the default application. - self.NSApp = NSApplication.sharedApplication() - if self.NSApp.isRunning(): - # Application was already started by GUI library (e.g. wxPython). - return - if not self.NSApp.mainMenu(): - create_menu() - self.NSApp.setActivationPolicy_(cocoapy.NSApplicationActivationPolicyRegular) - # Prevent Lion / Mountain Lion from automatically saving application state. - # If we don't do this, new windows will not display on 10.8 after finishLaunching - # has been called. 
- defaults = NSUserDefaults.standardUserDefaults() - ignoreState = cocoapy.CFSTR("ApplePersistenceIgnoreState") - if not defaults.objectForKey_(ignoreState): - defaults.setBool_forKey_(True, ignoreState) - self._finished_launching = False - - def start(self): - with AutoReleasePool(): - if not self.NSApp.isRunning() and not self._finished_launching: - # finishLaunching should be called only once. However isRunning will not - # guard this, as we are not using the normal event loop. - self.NSApp.finishLaunching() - self.NSApp.activateIgnoringOtherApps_(True) - self._finished_launching = True - - def step(self, timeout=None): - with AutoReleasePool(): - self.dispatch_posted_events() - - # Determine the timeout date. - if timeout is None: - # Using distantFuture as untilDate means that nextEventMatchingMask - # will wait until the next event comes along. - timeout_date = NSDate.distantFuture() - else: - timeout_date = NSDate.dateWithTimeIntervalSinceNow_(timeout) - - # Retrieve the next event (if any). We wait for an event to show up - # and then process it, or if timeout_date expires we simply return. - # We only process one event per call of step(). - self._is_running.set() - event = self.NSApp.nextEventMatchingMask_untilDate_inMode_dequeue_( - cocoapy.NSAnyEventMask, timeout_date, cocoapy.NSDefaultRunLoopMode, True) - - # Dispatch the event (if any). - if event is not None: - event_type = event.type() - if event_type != cocoapy.NSApplicationDefined: - # Send out event as normal. Responders will still receive - # keyUp:, keyDown:, and flagsChanged: events. - self.NSApp.sendEvent_(event) - - # Resend key events as special pyglet-specific messages - # which supplant the keyDown:, keyUp:, and flagsChanged: messages - # because NSApplication translates multiple key presses into key - # equivalents before sending them on, which means that some keyUp: - # messages are never sent for individual keys. Our pyglet-specific - # replacements ensure that we see all the raw key presses & releases. - # We also filter out key-down repeats since pyglet only sends one - # on_key_press event per key press. 
- if event_type == cocoapy.NSKeyDown and not event.isARepeat(): - self.NSApp.sendAction_to_from_(cocoapy.get_selector("pygletKeyDown:"), None, event) - elif event_type == cocoapy.NSKeyUp: - self.NSApp.sendAction_to_from_(cocoapy.get_selector("pygletKeyUp:"), None, event) - elif event_type == cocoapy.NSFlagsChanged: - self.NSApp.sendAction_to_from_(cocoapy.get_selector("pygletFlagsChanged:"), None, event) - - self.NSApp.updateWindows() - did_time_out = False - else: - did_time_out = True - - self._is_running.clear() - - return did_time_out - - def stop(self): - pass - - def notify(self): - with AutoReleasePool(): - notifyEvent = NSEvent.otherEventWithType_location_modifierFlags_timestamp_windowNumber_context_subtype_data1_data2_( - cocoapy.NSApplicationDefined, # type - cocoapy.NSPoint(0.0, 0.0), # location - 0, # modifierFlags - 0, # timestamp - 0, # windowNumber - None, # graphicsContext - 0, # subtype - 0, # data1 - 0, # data2 - ) - - self.NSApp.postEvent_atStart_(notifyEvent, False) diff --git a/spaces/adirik/kakao-brain-vit/backbone/.ipynb_checkpoints/classification-checkpoint.py b/spaces/adirik/kakao-brain-vit/backbone/.ipynb_checkpoints/classification-checkpoint.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/aijack/Track/app.py b/spaces/aijack/Track/app.py deleted file mode 100644 index 58169a7025054ed5f17856fd86f08b83ff761aa6..0000000000000000000000000000000000000000 --- a/spaces/aijack/Track/app.py +++ /dev/null @@ -1,90 +0,0 @@ -import os -os.system("pip3 install cython_bbox gdown 'git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI'") -from torchyolo import YoloHub -import gradio as gr -from utils import attempt_download_from_hub - - - -def object_tracker( - source: str, - model_type: str, - model_path: str, - tracker_type: str, - tracker_config_path: str, - StrongSort_OsNet_Path: str = None, -): - model = YoloHub( - config_path="default_config.yaml", - model_type=model_type, - model_path=model_path, - ) - - if tracker_type == "STRONGSORT": - StrongSort_OsNet_Path = attempt_download_from_hub(StrongSort_OsNet_Path) - - model.predict( - source=source, - tracker_type=tracker_type, - tracker_weight_path=StrongSort_OsNet_Path, - tracker_config_path=tracker_config_path, - ) - - return 'output.mp4' - - -inputs = [ - gr.Video(), - gr.inputs.Dropdown( - label="Model Type", - choices=["yolov5"], - default="yolov5", - ), - gr.inputs.Dropdown( - label="Model Path", - choices=[ - "aijack/v5s" ], - default="aijack/v5s", - ), - gr.inputs.Dropdown( - label="Tracker Type", - choices=["OCSORT"], - default="OCSORT", - ), - gr.inputs.Dropdown( - label="Tracker Config Path", - choices=[ - "tracker/oc_sort.yaml", - ], - default="tracker/oc_sort.yaml", - ), - gr.inputs.Dropdown( - label="Tracker Weight Path", - choices=[ - "aijack/osnet" - ], - default="aijack/osnet", - ), -] -examples = [ - [ - "01.mp4", - "yolov5", - "aijack/v5s", - "OCSORT", - "tracker/oc_sort.yaml", - ] -] -outputs = gr.Video() -title = "YOLOV5 Object Detection and Track Algorithm Library" -article = "

Claireye | 2023

" -demo_app = gr.Interface( - fn=object_tracker, - inputs=inputs, - examples=examples, - outputs=outputs, - title=title, - article = article, - cache_examples=False -) -demo_app.launch(debug=True, enable_queue=True) \ No newline at end of file diff --git a/spaces/akhaliq/Counterfeit-V2.5/app.py b/spaces/akhaliq/Counterfeit-V2.5/app.py deleted file mode 100644 index 440be88b0c3bef643167853e139905c56573ec82..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/Counterfeit-V2.5/app.py +++ /dev/null @@ -1,137 +0,0 @@ -from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline, DPMSolverMultistepScheduler -import gradio as gr -import torch -from PIL import Image - -model_id = 'gsdf/Counterfeit-V2.5' -prefix = '' - -scheduler = DPMSolverMultistepScheduler.from_pretrained(model_id, subfolder="scheduler") - -pipe = StableDiffusionPipeline.from_pretrained( - model_id, - torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32, - scheduler=scheduler) - -pipe_i2i = StableDiffusionImg2ImgPipeline.from_pretrained( - model_id, - torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32, - scheduler=scheduler) - -if torch.cuda.is_available(): - pipe = pipe.to("cuda") - pipe_i2i = pipe_i2i.to("cuda") - -def error_str(error, title="Error"): - return f"""#### {title} - {error}""" if error else "" - -def inference(prompt, guidance, steps, width=512, height=512, seed=0, img=None, strength=0.5, neg_prompt="", auto_prefix=False): - - generator = torch.Generator('cuda').manual_seed(seed) if seed != 0 else None - prompt = f"{prefix} {prompt}" if auto_prefix else prompt - - try: - if img is not None: - return img_to_img(prompt, neg_prompt, img, strength, guidance, steps, width, height, generator), None - else: - return txt_to_img(prompt, neg_prompt, guidance, steps, width, height, generator), None - except Exception as e: - return None, error_str(e) - -def txt_to_img(prompt, neg_prompt, guidance, steps, width, height, generator): - - result = pipe( - prompt, - negative_prompt = neg_prompt, - num_inference_steps = int(steps), - guidance_scale = guidance, - width = width, - height = height, - generator = generator) - - return result.images[0] - -def img_to_img(prompt, neg_prompt, img, strength, guidance, steps, width, height, generator): - - ratio = min(height / img.height, width / img.width) - img = img.resize((int(img.width * ratio), int(img.height * ratio)), Image.LANCZOS) - result = pipe_i2i( - prompt, - negative_prompt = neg_prompt, - init_image = img, - num_inference_steps = int(steps), - strength = strength, - guidance_scale = guidance, - width = width, - height = height, - generator = generator) - - return result.images[0] - -css = """.main-div div{display:inline-flex;align-items:center;gap:.8rem;font-size:1.75rem}.main-div div h1{font-weight:900;margin-bottom:7px}.main-div p{margin-bottom:10px;font-size:94%}a{text-decoration:underline}.tabs{margin-top:0;margin-bottom:0}#gallery{min-height:20rem} -""" -with gr.Blocks(css=css) as demo: - gr.HTML( - f""" -
Counterfeit V2.5

- Demo for Counterfeit V2.5 Stable Diffusion model.
- {"Add the following tokens to your prompts for the model to work properly: prefix" if prefix else ""} -

- Running on {"GPU 🔥" if torch.cuda.is_available() else f"CPU 🥶. For faster inference it is recommended to upgrade to GPU in Settings"} after duplicating the space

- Duplicate Space -
- """ - ) - with gr.Row(): - - with gr.Column(scale=55): - with gr.Group(): - with gr.Row(): - prompt = gr.Textbox(label="Prompt", show_label=False, max_lines=2,placeholder=f"{prefix} [your prompt]").style(container=False) - generate = gr.Button(value="Generate").style(rounded=(False, True, True, False)) - - image_out = gr.Image(height=512) - error_output = gr.Markdown() - - with gr.Column(scale=45): - with gr.Tab("Options"): - with gr.Group(): - neg_prompt = gr.Textbox(label="Negative prompt", placeholder="What to exclude from the image") - auto_prefix = gr.Checkbox(label="Prefix styling tokens automatically ()", value=prefix, visible=prefix) - - with gr.Row(): - guidance = gr.Slider(label="Guidance scale", value=7.5, maximum=15) - steps = gr.Slider(label="Steps", value=25, minimum=2, maximum=75, step=1) - - with gr.Row(): - width = gr.Slider(label="Width", value=512, minimum=64, maximum=1024, step=8) - height = gr.Slider(label="Height", value=512, minimum=64, maximum=1024, step=8) - - seed = gr.Slider(0, 2147483647, label='Seed (0 = random)', value=0, step=1) - - with gr.Tab("Image to image"): - with gr.Group(): - image = gr.Image(label="Image", height=256, tool="editor", type="pil") - strength = gr.Slider(label="Transformation strength", minimum=0, maximum=1, step=0.01, value=0.5) - - auto_prefix.change(lambda x: gr.update(placeholder=f"{prefix} [your prompt]" if x else "[Your prompt]"), inputs=auto_prefix, outputs=prompt, queue=False) - - inputs = [prompt, guidance, steps, width, height, seed, image, strength, neg_prompt, auto_prefix] - outputs = [image_out, error_output] - prompt.submit(inference, inputs=inputs, outputs=outputs) - generate.click(inference, inputs=inputs, outputs=outputs) - - gr.HTML(""" -
This space was created using SD Space Creator.

- """) - -demo.queue(concurrency_count=1) -demo.launch() diff --git a/spaces/aliabd/SummerTime/model/third_party/HMNet/ThirdParty/ROUGE/ROUGE-1.5.5/XML/DOM/Document.pod b/spaces/aliabd/SummerTime/model/third_party/HMNet/ThirdParty/ROUGE/ROUGE-1.5.5/XML/DOM/Document.pod deleted file mode 100644 index f8e7b81c9e6c53689280b4e0976c9c832b2f02d5..0000000000000000000000000000000000000000 --- a/spaces/aliabd/SummerTime/model/third_party/HMNet/ThirdParty/ROUGE/ROUGE-1.5.5/XML/DOM/Document.pod +++ /dev/null @@ -1,220 +0,0 @@ -=head1 NAME - -XML::DOM::Document - An XML document node in XML::DOM - -=head1 DESCRIPTION - -XML::DOM::Document extends L. - -It is the main root of the XML document structure as returned by -XML::DOM::Parser::parse and XML::DOM::Parser::parsefile. - -Since elements, text nodes, comments, processing instructions, etc. -cannot exist outside the context of a Document, the Document interface -also contains the factory methods needed to create these objects. The -Node objects created have a getOwnerDocument method which associates -them with the Document within whose context they were created. - -=head2 METHODS - -=over 4 - -=item getDocumentElement - -This is a convenience method that allows direct access to -the child node that is the root Element of the document. - -=item getDoctype - -The Document Type Declaration (see DocumentType) associated -with this document. For HTML documents as well as XML -documents without a document type declaration this returns -undef. The DOM Level 1 does not support editing the Document -Type Declaration. - -B: This implementation allows editing the doctype. -See I for details. - -=item getImplementation - -The DOMImplementation object that handles this document. A -DOM application may use objects from multiple implementations. - -=item createElement (tagName) - -Creates an element of the type specified. Note that the -instance returned implements the Element interface, so -attributes can be specified directly on the returned object. - -DOMExceptions: - -=over 4 - -=item * INVALID_CHARACTER_ERR - -Raised if the tagName does not conform to the XML spec. - -=back - -=item createTextNode (data) - -Creates a Text node given the specified string. - -=item createComment (data) - -Creates a Comment node given the specified string. - -=item createCDATASection (data) - -Creates a CDATASection node given the specified string. - -=item createAttribute (name [, value [, specified ]]) - -Creates an Attr of the given name. Note that the Attr -instance can then be set on an Element using the setAttribute method. - -B: The DOM Spec does not allow passing the value or the -specified property in this method. In this implementation they are optional. - -Parameters: - I The attribute's value. See Attr::setValue for details. - If the value is not supplied, the specified property is set to 0. - I Whether the attribute value was specified or whether the default - value was used. If not supplied, it's assumed to be 1. - -DOMExceptions: - -=over 4 - -=item * INVALID_CHARACTER_ERR - -Raised if the name does not conform to the XML spec. - -=back - -=item createProcessingInstruction (target, data) - -Creates a ProcessingInstruction node given the specified name and data strings. - -Parameters: - I The target part of the processing instruction. - I The data for the node. - -DOMExceptions: - -=over 4 - -=item * INVALID_CHARACTER_ERR - -Raised if the target does not conform to the XML spec. - -=back - -=item createDocumentFragment - -Creates an empty DocumentFragment object. 
- -=item createEntityReference (name) - -Creates an EntityReference object. - -=back - -=head2 Additional methods not in the DOM Spec - -=over 4 - -=item getXMLDecl and setXMLDecl (xmlDecl) - -Returns the XMLDecl for this Document or undef if none was specified. -Note that XMLDecl is not part of the list of child nodes. - -=item setDoctype (doctype) - -Sets or replaces the DocumentType. -B: Don't use appendChild or insertBefore to set the DocumentType. -Even though doctype will be part of the list of child nodes, it is handled -specially. - -=item getDefaultAttrValue (elem, attr) - -Returns the default attribute value as a string or undef, if none is available. - -Parameters: - I The element tagName. - I The attribute name. - -=item getEntity (name) - -Returns the Entity with the specified name. - -=item createXMLDecl (version, encoding, standalone) - -Creates an XMLDecl object. All parameters may be undefined. - -=item createDocumentType (name, sysId, pubId) - -Creates a DocumentType object. SysId and pubId may be undefined. - -=item createNotation (name, base, sysId, pubId) - -Creates a new Notation object. Consider using -XML::DOM::DocumentType::addNotation! - -=item createEntity (parameter, notationName, value, sysId, pubId, ndata) - -Creates an Entity object. Consider using XML::DOM::DocumentType::addEntity! - -=item createElementDecl (name, model) - -Creates an ElementDecl object. - -DOMExceptions: - -=over 4 - -=item * INVALID_CHARACTER_ERR - -Raised if the element name (tagName) does not conform to the XML spec. - -=back - -=item createAttlistDecl (name) - -Creates an AttlistDecl object. - -DOMExceptions: - -=over 4 - -=item * INVALID_CHARACTER_ERR - -Raised if the element name (tagName) does not conform to the XML spec. - -=back - -=item expandEntity (entity [, parameter]) - -Expands the specified entity or parameter entity (if parameter=1) and returns -its value as a string, or undef if the entity does not exist. -(The entity name should not contain the '%', '&' or ';' delimiters.) - -=item check ( [$checker] ) - -Uses the specified L to validate the document. -If no XML::Checker is supplied, a new XML::Checker is created. -See L for details. - -=item check_sax ( [$checker] ) - -Similar to check() except it uses the SAX interface to XML::Checker instead of -the expat interface. This method may disappear or replace check() at some time. - -=item createChecker () - -Creates an XML::Checker based on the document's DTD. -The $checker can be reused to check any elements within the document. -Create a new L whenever the DOCTYPE section of the document -is altered! 
- -=back diff --git a/spaces/aliceoq/vozes-da-loirinha/utils.py b/spaces/aliceoq/vozes-da-loirinha/utils.py deleted file mode 100644 index 62be8d03a8e8b839f8747310ef0ec0e82fb8ff0a..0000000000000000000000000000000000000000 --- a/spaces/aliceoq/vozes-da-loirinha/utils.py +++ /dev/null @@ -1,151 +0,0 @@ -import ffmpeg -import numpy as np - -# import praatio -# import praatio.praat_scripts -import os -import sys - -import random - -import csv - -platform_stft_mapping = { - "linux": "stftpitchshift", - "darwin": "stftpitchshift", - "win32": "stftpitchshift.exe", -} - -stft = platform_stft_mapping.get(sys.platform) -# praatEXE = join('.',os.path.abspath(os.getcwd()) + r"\Praat.exe") - - -def CSVutil(file, rw, type, *args): - if type == "formanting": - if rw == "r": - with open(file) as fileCSVread: - csv_reader = list(csv.reader(fileCSVread)) - return ( - (csv_reader[0][0], csv_reader[0][1], csv_reader[0][2]) - if csv_reader is not None - else (lambda: exec('raise ValueError("No data")'))() - ) - else: - if args: - doformnt = args[0] - else: - doformnt = False - qfr = args[1] if len(args) > 1 else 1.0 - tmb = args[2] if len(args) > 2 else 1.0 - with open(file, rw, newline="") as fileCSVwrite: - csv_writer = csv.writer(fileCSVwrite, delimiter=",") - csv_writer.writerow([doformnt, qfr, tmb]) - elif type == "stop": - stop = args[0] if args else False - with open(file, rw, newline="") as fileCSVwrite: - csv_writer = csv.writer(fileCSVwrite, delimiter=",") - csv_writer.writerow([stop]) - - -def load_audio(file, sr, DoFormant, Quefrency, Timbre): - converted = False - DoFormant, Quefrency, Timbre = CSVutil("csvdb/formanting.csv", "r", "formanting") - try: - # https://github.com/openai/whisper/blob/main/whisper/audio.py#L26 - # This launches a subprocess to decode audio while down-mixing and resampling as necessary. - # Requires the ffmpeg CLI and `ffmpeg-python` package to be installed. 
- file = ( - file.strip(" ").strip('"').strip("\n").strip('"').strip(" ") - ) # 防止小白拷路径头尾带了空格和"和回车 - file_formanted = file.strip(" ").strip('"').strip("\n").strip('"').strip(" ") - - # print(f"dofor={bool(DoFormant)} timbr={Timbre} quef={Quefrency}\n") - - if ( - lambda DoFormant: True - if DoFormant.lower() == "true" - else (False if DoFormant.lower() == "false" else DoFormant) - )(DoFormant): - numerator = round(random.uniform(1, 4), 4) - # os.system(f"stftpitchshift -i {file} -q {Quefrency} -t {Timbre} -o {file_formanted}") - # print('stftpitchshift -i "%s" -p 1.0 --rms -w 128 -v 8 -q %s -t %s -o "%s"' % (file, Quefrency, Timbre, file_formanted)) - - if not file.endswith(".wav"): - if not os.path.isfile(f"{file_formanted}.wav"): - converted = True - # print(f"\nfile = {file}\n") - # print(f"\nfile_formanted = {file_formanted}\n") - converting = ( - ffmpeg.input(file_formanted, threads=0) - .output(f"{file_formanted}.wav") - .run( - cmd=["ffmpeg", "-nostdin"], - capture_stdout=True, - capture_stderr=True, - ) - ) - else: - pass - - file_formanted = ( - f"{file_formanted}.wav" - if not file_formanted.endswith(".wav") - else file_formanted - ) - - print(f" · Formanting {file_formanted}...\n") - - os.system( - '%s -i "%s" -q "%s" -t "%s" -o "%sFORMANTED_%s.wav"' - % ( - stft, - file_formanted, - Quefrency, - Timbre, - file_formanted, - str(numerator), - ) - ) - - print(f" · Formanted {file_formanted}!\n") - - # filepraat = (os.path.abspath(os.getcwd()) + '\\' + file).replace('/','\\') - # file_formantedpraat = ('"' + os.path.abspath(os.getcwd()) + '/' + 'formanted'.join(file_formanted) + '"').replace('/','\\') - # print("%sFORMANTED_%s.wav" % (file_formanted, str(numerator))) - - out, _ = ( - ffmpeg.input( - "%sFORMANTED_%s.wav" % (file_formanted, str(numerator)), threads=0 - ) - .output("-", format="f32le", acodec="pcm_f32le", ac=1, ar=sr) - .run( - cmd=["ffmpeg", "-nostdin"], capture_stdout=True, capture_stderr=True - ) - ) - - try: - os.remove("%sFORMANTED_%s.wav" % (file_formanted, str(numerator))) - except Exception: - pass - print("couldn't remove formanted type of file") - - else: - out, _ = ( - ffmpeg.input(file, threads=0) - .output("-", format="f32le", acodec="pcm_f32le", ac=1, ar=sr) - .run( - cmd=["ffmpeg", "-nostdin"], capture_stdout=True, capture_stderr=True - ) - ) - except Exception as e: - raise RuntimeError(f"Failed to load audio: {e}") - - if converted: - try: - os.remove(file_formanted) - except Exception: - pass - print("couldn't remove converted type of file") - converted = False - - return np.frombuffer(out, np.float32).flatten() diff --git a/spaces/allknowingroger/Image-Models-Test172/README.md b/spaces/allknowingroger/Image-Models-Test172/README.md deleted file mode 100644 index f91e4b31ab345f987b425de029c057bfb69d9e1b..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/Image-Models-Test172/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: More Image Models -emoji: 😻 -colorFrom: red -colorTo: gray -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: true -duplicated_from: allknowingroger/Image-Models-Test ---- - - \ No newline at end of file diff --git a/spaces/allknowingroger/Image-Models-Test191/README.md b/spaces/allknowingroger/Image-Models-Test191/README.md deleted file mode 100644 index f91e4b31ab345f987b425de029c057bfb69d9e1b..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/Image-Models-Test191/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: More Image Models -emoji: 😻 -colorFrom: red -colorTo: gray 
-sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: true -duplicated_from: allknowingroger/Image-Models-Test ---- - - \ No newline at end of file diff --git a/spaces/alvanlii/domain-expansion/torch_utils/persistence.py b/spaces/alvanlii/domain-expansion/torch_utils/persistence.py deleted file mode 100644 index 0186cfd97bca0fcb397a7b73643520c1d1105a02..0000000000000000000000000000000000000000 --- a/spaces/alvanlii/domain-expansion/torch_utils/persistence.py +++ /dev/null @@ -1,251 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Facilities for pickling Python code alongside other data. - -The pickled code is automatically imported into a separate Python module -during unpickling. This way, any previously exported pickles will remain -usable even if the original code is no longer available, or if the current -version of the code is not consistent with what was originally pickled.""" - -import sys -import pickle -import io -import inspect -import copy -import uuid -import types -import dnnlib - -#---------------------------------------------------------------------------- - -_version = 6 # internal version number -_decorators = set() # {decorator_class, ...} -_import_hooks = [] # [hook_function, ...] -_module_to_src_dict = dict() # {module: src, ...} -_src_to_module_dict = dict() # {src: module, ...} - -#---------------------------------------------------------------------------- - -def persistent_class(orig_class): - r"""Class decorator that extends a given class to save its source code - when pickled. - - Example: - - from torch_utils import persistence - - @persistence.persistent_class - class MyNetwork(torch.nn.Module): - def __init__(self, num_inputs, num_outputs): - super().__init__() - self.fc = MyLayer(num_inputs, num_outputs) - ... - - @persistence.persistent_class - class MyLayer(torch.nn.Module): - ... - - When pickled, any instance of `MyNetwork` and `MyLayer` will save its - source code alongside other internal state (e.g., parameters, buffers, - and submodules). This way, any previously exported pickle will remain - usable even if the class definitions have been modified or are no - longer available. - - The decorator saves the source code of the entire Python module - containing the decorated class. It does *not* save the source code of - any imported modules. Thus, the imported modules must be available - during unpickling, also including `torch_utils.persistence` itself. - - It is ok to call functions defined in the same module from the - decorated class. However, if the decorated class depends on other - classes defined in the same module, they must be decorated as well. - This is illustrated in the above example in the case of `MyLayer`. - - It is also possible to employ the decorator just-in-time before - calling the constructor. For example: - - cls = MyLayer - if want_to_make_it_persistent: - cls = persistence.persistent_class(cls) - layer = cls(num_inputs, num_outputs) - - As an additional feature, the decorator also keeps track of the - arguments that were used to construct each instance of the decorated - class. 
The arguments can be queried via `obj.init_args` and - `obj.init_kwargs`, and they are automatically pickled alongside other - object state. A typical use case is to first unpickle a previous - instance of a persistent class, and then upgrade it to use the latest - version of the source code: - - with open('old_pickle.pkl', 'rb') as f: - old_net = pickle.load(f) - new_net = MyNetwork(*old_obj.init_args, **old_obj.init_kwargs) - misc.copy_params_and_buffers(old_net, new_net, require_all=True) - """ - assert isinstance(orig_class, type) - if is_persistent(orig_class): - return orig_class - - assert orig_class.__module__ in sys.modules - orig_module = sys.modules[orig_class.__module__] - orig_module_src = _module_to_src(orig_module) - - class Decorator(orig_class): - _orig_module_src = orig_module_src - _orig_class_name = orig_class.__name__ - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - self._init_args = copy.deepcopy(args) - self._init_kwargs = copy.deepcopy(kwargs) - assert orig_class.__name__ in orig_module.__dict__ - _check_pickleable(self.__reduce__()) - - @property - def init_args(self): - return copy.deepcopy(self._init_args) - - @property - def init_kwargs(self): - return dnnlib.EasyDict(copy.deepcopy(self._init_kwargs)) - - def __reduce__(self): - fields = list(super().__reduce__()) - fields += [None] * max(3 - len(fields), 0) - if fields[0] is not _reconstruct_persistent_obj: - meta = dict(type='class', version=_version, module_src=self._orig_module_src, class_name=self._orig_class_name, state=fields[2]) - fields[0] = _reconstruct_persistent_obj # reconstruct func - fields[1] = (meta,) # reconstruct args - fields[2] = None # state dict - return tuple(fields) - - Decorator.__name__ = orig_class.__name__ - _decorators.add(Decorator) - return Decorator - -#---------------------------------------------------------------------------- - -def is_persistent(obj): - r"""Test whether the given object or class is persistent, i.e., - whether it will save its source code when pickled. - """ - try: - if obj in _decorators: - return True - except TypeError: - pass - return type(obj) in _decorators # pylint: disable=unidiomatic-typecheck - -#---------------------------------------------------------------------------- - -def import_hook(hook): - r"""Register an import hook that is called whenever a persistent object - is being unpickled. A typical use case is to patch the pickled source - code to avoid errors and inconsistencies when the API of some imported - module has changed. - - The hook should have the following signature: - - hook(meta) -> modified meta - - `meta` is an instance of `dnnlib.EasyDict` with the following fields: - - type: Type of the persistent object, e.g. `'class'`. - version: Internal version number of `torch_utils.persistence`. - module_src Original source code of the Python module. - class_name: Class name in the original Python module. - state: Internal state of the object. - - Example: - - @persistence.import_hook - def wreck_my_network(meta): - if meta.class_name == 'MyNetwork': - print('MyNetwork is being imported. I will wreck it!') - meta.module_src = meta.module_src.replace("True", "False") - return meta - """ - assert callable(hook) - _import_hooks.append(hook) - -#---------------------------------------------------------------------------- - -def _reconstruct_persistent_obj(meta): - r"""Hook that is called internally by the `pickle` module to unpickle - a persistent object. 
- """ - meta = dnnlib.EasyDict(meta) - meta.state = dnnlib.EasyDict(meta.state) - for hook in _import_hooks: - meta = hook(meta) - assert meta is not None - - assert meta.version == _version - module = _src_to_module(meta.module_src) - - assert meta.type == 'class' - orig_class = module.__dict__[meta.class_name] - decorator_class = persistent_class(orig_class) - obj = decorator_class.__new__(decorator_class) - - setstate = getattr(obj, '__setstate__', None) - if callable(setstate): - setstate(meta.state) # pylint: disable=not-callable - else: - obj.__dict__.update(meta.state) - return obj - -#---------------------------------------------------------------------------- - -def _module_to_src(module): - r"""Query the source code of a given Python module. - """ - src = _module_to_src_dict.get(module, None) - if src is None: - src = inspect.getsource(module) - _module_to_src_dict[module] = src - _src_to_module_dict[src] = module - return src - -def _src_to_module(src): - r"""Get or create a Python module for the given source code. - """ - module = _src_to_module_dict.get(src, None) - if module is None: - module_name = "_imported_module_" + uuid.uuid4().hex - module = types.ModuleType(module_name) - sys.modules[module_name] = module - _module_to_src_dict[module] = src - _src_to_module_dict[src] = module - exec(src, module.__dict__) # pylint: disable=exec-used - return module - -#---------------------------------------------------------------------------- - -def _check_pickleable(obj): - r"""Check that the given object is pickleable, raising an exception if - it is not. This function is expected to be considerably more efficient - than actually pickling the object. - """ - def recurse(obj): - if isinstance(obj, (list, tuple, set)): - return [recurse(x) for x in obj] - if isinstance(obj, dict): - return [[recurse(x), recurse(y)] for x, y in obj.items()] - if isinstance(obj, (str, int, float, bool, bytes, bytearray)): - return None # Python primitive types are pickleable. - if f'{type(obj).__module__}.{type(obj).__name__}' in ['numpy.ndarray', 'torch.Tensor']: - return None # NumPy arrays and PyTorch tensors are pickleable. - if is_persistent(obj): - return None # Persistent objects are pickleable, by virtue of the constructor check. 
- return obj - with io.BytesIO() as f: - pickle.dump(recurse(obj), f) - -#---------------------------------------------------------------------------- diff --git a/spaces/anhnv125/recipe_generation/README.md b/spaces/anhnv125/recipe_generation/README.md deleted file mode 100644 index d1045a6a970ac70dc7c1f6214eaf6b0bcae61e40..0000000000000000000000000000000000000000 --- a/spaces/anhnv125/recipe_generation/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Recipe Generation -emoji: 🧑‍🍳 -colorFrom: blue -python_version: 3.9.13 -colorTo: purple -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false -license: cc-by-nc-4.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/antonovmaxim/text-generation-webui-space/modules/RWKV.py b/spaces/antonovmaxim/text-generation-webui-space/modules/RWKV.py deleted file mode 100644 index bb6bab50c7d644f499e1ada84ec8b09d6adadf7e..0000000000000000000000000000000000000000 --- a/spaces/antonovmaxim/text-generation-webui-space/modules/RWKV.py +++ /dev/null @@ -1,145 +0,0 @@ -import copy -import os -from pathlib import Path - -import numpy as np -from tokenizers import Tokenizer - -import modules.shared as shared -from modules.callbacks import Iteratorize - -np.set_printoptions(precision=4, suppress=True, linewidth=200) - -os.environ['RWKV_JIT_ON'] = '1' -os.environ["RWKV_CUDA_ON"] = '1' if shared.args.rwkv_cuda_on else '0' # use CUDA kernel for seq mode (much faster) - -from rwkv.model import RWKV -from rwkv.utils import PIPELINE, PIPELINE_ARGS - - -class RWKVModel: - def __init__(self): - pass - - @classmethod - def from_pretrained(self, path, dtype="fp16", device="cuda"): - tokenizer_path = Path(f"{path.parent}/20B_tokenizer.json") - if shared.args.rwkv_strategy is None: - model = RWKV(model=str(path), strategy=f'{device} {dtype}') - else: - model = RWKV(model=str(path), strategy=shared.args.rwkv_strategy) - - pipeline = PIPELINE(model, str(tokenizer_path)) - result = self() - result.pipeline = pipeline - result.model = model - result.cached_context = "" - result.cached_model_state = None - result.cached_output_logits = None - return result - - def generate(self, context="", token_count=20, temperature=1, top_p=1, top_k=50, repetition_penalty=None, alpha_frequency=0.1, alpha_presence=0.1, token_ban=None, token_stop=None, callback=None): - args = PIPELINE_ARGS( - temperature=temperature, - top_p=top_p, - top_k=top_k, - alpha_frequency=alpha_frequency, # Frequency Penalty (as in GPT-3) - alpha_presence=alpha_presence, # Presence Penalty (as in GPT-3) - token_ban=token_ban or [0], # ban the generation of some tokens - token_stop=token_stop or [] - ) - - if self.cached_context != "": - if context.startswith(self.cached_context): - context = context[len(self.cached_context):] - else: - self.cached_context = "" - self.cached_model_state = None - self.cached_output_logits = None - - # out = self.pipeline.generate(context, token_count=token_count, args=args, callback=callback) - out = self.generate_from_cached_state(context, token_count=token_count, args=args, callback=callback) - return out - - def generate_with_streaming(self, **kwargs): - with Iteratorize(self.generate, kwargs, callback=None) as generator: - reply = '' - for token in generator: - reply += token - yield reply - - # Similar to the PIPELINE.generate, but lets us maintain the cached_model_state - def generate_from_cached_state(self, ctx="", token_count=20, args=None, callback=None): - all_tokens = [] - out_str = 
'' - occurrence = {} - state = copy.deepcopy(self.cached_model_state) if self.cached_model_state is not None else None - - # if we ended up with an empty context, just reuse the cached logits - # this can happen if a user undoes a message and then sends the exact message again - # in that case the full context ends up being the same as the cached_context, so the remaining context is empty. - if ctx == "": - out = self.cached_output_logits - - for i in range(token_count): - # forward - tokens = self.pipeline.encode(ctx) if i == 0 else [token] - while len(tokens) > 0: - out, state = self.model.forward(tokens[:args.chunk_len], state) - tokens = tokens[args.chunk_len:] - - # cache the model state after scanning the context - # we don't cache the state after processing our own generated tokens because - # the output string might be post-processed arbitrarily. Therefore, what's fed into the model - # on the next round of chat might be slightly different what what it output on the previous round - if i == 0: - self.cached_context += ctx - self.cached_model_state = copy.deepcopy(state) - self.cached_output_logits = copy.deepcopy(out) - - # adjust probabilities - for n in args.token_ban: - out[n] = -float('inf') - - for n in occurrence: - out[n] -= (args.alpha_presence + occurrence[n] * args.alpha_frequency) - - # sampler - token = self.pipeline.sample_logits(out, temperature=args.temperature, top_p=args.top_p, top_k=args.top_k) - if token in args.token_stop: - break - - all_tokens += [token] - if token not in occurrence: - occurrence[token] = 1 - else: - occurrence[token] += 1 - - # output - tmp = self.pipeline.decode([token]) - if '\ufffd' not in tmp: # is valid utf-8 string? - if callback: - callback(tmp) - - out_str += tmp - - return out_str - - -class RWKVTokenizer: - def __init__(self): - pass - - @classmethod - def from_pretrained(self, path): - tokenizer_path = path / "20B_tokenizer.json" - tokenizer = Tokenizer.from_file(str(tokenizer_path)) - result = self() - result.tokenizer = tokenizer - return result - - def encode(self, prompt): - return self.tokenizer.encode(prompt).ids - - def decode(self, ids): - return self.tokenizer.decode(ids) diff --git a/spaces/antonovmaxim/text-generation-webui-space/text-generation-webui-main/extensions/google_translate/script.py b/spaces/antonovmaxim/text-generation-webui-space/text-generation-webui-main/extensions/google_translate/script.py deleted file mode 100644 index 63226107b2c2afe086fc343c7b7f7df78bef3f8a..0000000000000000000000000000000000000000 --- a/spaces/antonovmaxim/text-generation-webui-space/text-generation-webui-main/extensions/google_translate/script.py +++ /dev/null @@ -1,46 +0,0 @@ -import gradio as gr -from deep_translator import GoogleTranslator - -params = { - "language string": "ja", -} - -language_codes = {'Afrikaans': 'af', 'Albanian': 'sq', 'Amharic': 'am', 'Arabic': 'ar', 'Armenian': 'hy', 'Azerbaijani': 'az', 'Basque': 'eu', 'Belarusian': 'be', 'Bengali': 'bn', 'Bosnian': 'bs', 'Bulgarian': 'bg', 'Catalan': 'ca', 'Cebuano': 'ceb', 'Chinese (Simplified)': 'zh-CN', 'Chinese (Traditional)': 'zh-TW', 'Corsican': 'co', 'Croatian': 'hr', 'Czech': 'cs', 'Danish': 'da', 'Dutch': 'nl', 'English': 'en', 'Esperanto': 'eo', 'Estonian': 'et', 'Finnish': 'fi', 'French': 'fr', 'Frisian': 'fy', 'Galician': 'gl', 'Georgian': 'ka', 'German': 'de', 'Greek': 'el', 'Gujarati': 'gu', 'Haitian Creole': 'ht', 'Hausa': 'ha', 'Hawaiian': 'haw', 'Hebrew': 'iw', 'Hindi': 'hi', 'Hmong': 'hmn', 'Hungarian': 'hu', 'Icelandic': 'is', 'Igbo': 'ig', 
'Indonesian': 'id', 'Irish': 'ga', 'Italian': 'it', 'Japanese': 'ja', 'Javanese': 'jw', 'Kannada': 'kn', 'Kazakh': 'kk', 'Khmer': 'km', 'Korean': 'ko', 'Kurdish': 'ku', 'Kyrgyz': 'ky', 'Lao': 'lo', 'Latin': 'la', 'Latvian': 'lv', 'Lithuanian': 'lt', 'Luxembourgish': 'lb', 'Macedonian': 'mk', 'Malagasy': 'mg', 'Malay': 'ms', 'Malayalam': 'ml', 'Maltese': 'mt', 'Maori': 'mi', 'Marathi': 'mr', 'Mongolian': 'mn', 'Myanmar (Burmese)': 'my', 'Nepali': 'ne', 'Norwegian': 'no', 'Nyanja (Chichewa)': 'ny', 'Pashto': 'ps', 'Persian': 'fa', 'Polish': 'pl', 'Portuguese (Portugal, Brazil)': 'pt', 'Punjabi': 'pa', 'Romanian': 'ro', 'Russian': 'ru', 'Samoan': 'sm', 'Scots Gaelic': 'gd', 'Serbian': 'sr', 'Sesotho': 'st', 'Shona': 'sn', 'Sindhi': 'sd', 'Sinhala (Sinhalese)': 'si', 'Slovak': 'sk', 'Slovenian': 'sl', 'Somali': 'so', 'Spanish': 'es', 'Sundanese': 'su', 'Swahili': 'sw', 'Swedish': 'sv', 'Tagalog (Filipino)': 'tl', 'Tajik': 'tg', 'Tamil': 'ta', 'Telugu': 'te', 'Thai': 'th', 'Turkish': 'tr', 'Ukrainian': 'uk', 'Urdu': 'ur', 'Uzbek': 'uz', 'Vietnamese': 'vi', 'Welsh': 'cy', 'Xhosa': 'xh', 'Yiddish': 'yi', 'Yoruba': 'yo', 'Zulu': 'zu'} - - -def input_modifier(string): - """ - This function is applied to your text inputs before - they are fed into the model. - """ - - return GoogleTranslator(source=params['language string'], target='en').translate(string) - - -def output_modifier(string): - """ - This function is applied to the model outputs. - """ - - return GoogleTranslator(source='en', target=params['language string']).translate(string) - - -def bot_prefix_modifier(string): - """ - This function is only applied in chat mode. It modifies - the prefix text for the Bot and can be used to bias its - behavior. - """ - - return string - - -def ui(): - # Finding the language name from the language code to use as the default value - language_name = list(language_codes.keys())[list(language_codes.values()).index(params['language string'])] - - # Gradio elements - language = gr.Dropdown(value=language_name, choices=[k for k in language_codes], label='Language') - - # Event functions to update the parameters in the backend - language.change(lambda x: params.update({"language string": language_codes[x]}), language, None) diff --git a/spaces/archietram/Multiple_Object_Detector_PASCAL_2007/README.md b/spaces/archietram/Multiple_Object_Detector_PASCAL_2007/README.md deleted file mode 100644 index 3f0fc655d4ff6526550b4e57d46cbfcebec12d3c..0000000000000000000000000000000000000000 --- a/spaces/archietram/Multiple_Object_Detector_PASCAL_2007/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Multiple Object Detector PASCAL 2007 -emoji: 💻 -colorFrom: blue -colorTo: blue -sdk: gradio -sdk_version: 3.5 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/artificialguybr/video-dubbing/TTS/TTS/utils/audio/torch_transforms.py b/spaces/artificialguybr/video-dubbing/TTS/TTS/utils/audio/torch_transforms.py deleted file mode 100644 index fd40ebb048b915a836ba0d84dc22054d23b1d886..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/TTS/utils/audio/torch_transforms.py +++ /dev/null @@ -1,165 +0,0 @@ -import librosa -import torch -from torch import nn - - -class TorchSTFT(nn.Module): # pylint: disable=abstract-method - """Some of the audio processing funtions using Torch for faster batch processing. - - Args: - - n_fft (int): - FFT window size for STFT. 
- - hop_length (int): - number of frames between STFT columns. - - win_length (int, optional): - STFT window length. - - pad_wav (bool, optional): - If True pad the audio with (n_fft - hop_length) / 2). Defaults to False. - - window (str, optional): - The name of a function to create a window tensor that is applied/multiplied to each frame/window. Defaults to "hann_window" - - sample_rate (int, optional): - target audio sampling rate. Defaults to None. - - mel_fmin (int, optional): - minimum filter frequency for computing melspectrograms. Defaults to None. - - mel_fmax (int, optional): - maximum filter frequency for computing melspectrograms. Defaults to None. - - n_mels (int, optional): - number of melspectrogram dimensions. Defaults to None. - - use_mel (bool, optional): - If True compute the melspectrograms otherwise. Defaults to False. - - do_amp_to_db_linear (bool, optional): - enable/disable amplitude to dB conversion of linear spectrograms. Defaults to False. - - spec_gain (float, optional): - gain applied when converting amplitude to DB. Defaults to 1.0. - - power (float, optional): - Exponent for the magnitude spectrogram, e.g., 1 for energy, 2 for power, etc. Defaults to None. - - use_htk (bool, optional): - Use HTK formula in mel filter instead of Slaney. - - mel_norm (None, 'slaney', or number, optional): - If 'slaney', divide the triangular mel weights by the width of the mel band - (area normalization). - - If numeric, use `librosa.util.normalize` to normalize each filter by to unit l_p norm. - See `librosa.util.normalize` for a full description of supported norm values - (including `+-np.inf`). - - Otherwise, leave all the triangles aiming for a peak value of 1.0. Defaults to "slaney". - """ - - def __init__( - self, - n_fft, - hop_length, - win_length, - pad_wav=False, - window="hann_window", - sample_rate=None, - mel_fmin=0, - mel_fmax=None, - n_mels=80, - use_mel=False, - do_amp_to_db=False, - spec_gain=1.0, - power=None, - use_htk=False, - mel_norm="slaney", - normalized=False, - ): - super().__init__() - self.n_fft = n_fft - self.hop_length = hop_length - self.win_length = win_length - self.pad_wav = pad_wav - self.sample_rate = sample_rate - self.mel_fmin = mel_fmin - self.mel_fmax = mel_fmax - self.n_mels = n_mels - self.use_mel = use_mel - self.do_amp_to_db = do_amp_to_db - self.spec_gain = spec_gain - self.power = power - self.use_htk = use_htk - self.mel_norm = mel_norm - self.window = nn.Parameter(getattr(torch, window)(win_length), requires_grad=False) - self.mel_basis = None - self.normalized = normalized - if use_mel: - self._build_mel_basis() - - def __call__(self, x): - """Compute spectrogram frames by torch based stft. - - Args: - x (Tensor): input waveform - - Returns: - Tensor: spectrogram frames. 
- - Shapes: - x: [B x T] or [:math:`[B, 1, T]`] - """ - if x.ndim == 2: - x = x.unsqueeze(1) - if self.pad_wav: - padding = int((self.n_fft - self.hop_length) / 2) - x = torch.nn.functional.pad(x, (padding, padding), mode="reflect") - # B x D x T x 2 - o = torch.stft( - x.squeeze(1), - self.n_fft, - self.hop_length, - self.win_length, - self.window, - center=True, - pad_mode="reflect", # compatible with audio.py - normalized=self.normalized, - onesided=True, - return_complex=False, - ) - M = o[:, :, :, 0] - P = o[:, :, :, 1] - S = torch.sqrt(torch.clamp(M**2 + P**2, min=1e-8)) - - if self.power is not None: - S = S**self.power - - if self.use_mel: - S = torch.matmul(self.mel_basis.to(x), S) - if self.do_amp_to_db: - S = self._amp_to_db(S, spec_gain=self.spec_gain) - return S - - def _build_mel_basis(self): - mel_basis = librosa.filters.mel( - sr=self.sample_rate, - n_fft=self.n_fft, - n_mels=self.n_mels, - fmin=self.mel_fmin, - fmax=self.mel_fmax, - htk=self.use_htk, - norm=self.mel_norm, - ) - self.mel_basis = torch.from_numpy(mel_basis).float() - - @staticmethod - def _amp_to_db(x, spec_gain=1.0): - return torch.log(torch.clamp(x, min=1e-5) * spec_gain) - - @staticmethod - def _db_to_amp(x, spec_gain=1.0): - return torch.exp(x) / spec_gain diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Cython/Compiler/FusedNode.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Cython/Compiler/FusedNode.py deleted file mode 100644 index 26d6ffd3d65e0194982791e8c0ca97f1a5ea7277..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Cython/Compiler/FusedNode.py +++ /dev/null @@ -1,901 +0,0 @@ -from __future__ import absolute_import - -import copy - -from . import (ExprNodes, PyrexTypes, MemoryView, - ParseTreeTransforms, StringEncoding, Errors) -from .ExprNodes import CloneNode, ProxyNode, TupleNode -from .Nodes import FuncDefNode, CFuncDefNode, StatListNode, DefNode -from ..Utils import OrderedSet - - -class FusedCFuncDefNode(StatListNode): - """ - This node replaces a function with fused arguments. It deep-copies the - function for every permutation of fused types, and allocates a new local - scope for it. It keeps track of the original function in self.node, and - the entry of the original function in the symbol table is given the - 'fused_cfunction' attribute which points back to us. - Then when a function lookup occurs (to e.g. call it), the call can be - dispatched to the right function. - - node FuncDefNode the original function - nodes [FuncDefNode] list of copies of node with different specific types - py_func DefNode the fused python function subscriptable from - Python space - __signatures__ A DictNode mapping signature specialization strings - to PyCFunction nodes - resulting_fused_function PyCFunction for the fused DefNode that delegates - to specializations - fused_func_assignment Assignment of the fused function to the function name - defaults_tuple TupleNode of defaults (letting PyCFunctionNode build - defaults would result in many different tuples) - specialized_pycfuncs List of synthesized pycfunction nodes for the - specializations - code_object CodeObjectNode shared by all specializations and the - fused function - - fused_compound_types All fused (compound) types (e.g. 
floating[:]) - """ - - __signatures__ = None - resulting_fused_function = None - fused_func_assignment = None - defaults_tuple = None - decorators = None - - child_attrs = StatListNode.child_attrs + [ - '__signatures__', 'resulting_fused_function', 'fused_func_assignment'] - - def __init__(self, node, env): - super(FusedCFuncDefNode, self).__init__(node.pos) - - self.nodes = [] - self.node = node - - is_def = isinstance(self.node, DefNode) - if is_def: - # self.node.decorators = [] - self.copy_def(env) - else: - self.copy_cdef(env) - - # Perform some sanity checks. If anything fails, it's a bug - for n in self.nodes: - assert not n.entry.type.is_fused - assert not n.local_scope.return_type.is_fused - if node.return_type.is_fused: - assert not n.return_type.is_fused - - if not is_def and n.cfunc_declarator.optional_arg_count: - assert n.type.op_arg_struct - - node.entry.fused_cfunction = self - # Copy the nodes as AnalyseDeclarationsTransform will prepend - # self.py_func to self.stats, as we only want specialized - # CFuncDefNodes in self.nodes - self.stats = self.nodes[:] - - def copy_def(self, env): - """ - Create a copy of the original def or lambda function for specialized - versions. - """ - fused_compound_types = PyrexTypes.unique( - [arg.type for arg in self.node.args if arg.type.is_fused]) - fused_types = self._get_fused_base_types(fused_compound_types) - permutations = PyrexTypes.get_all_specialized_permutations(fused_types) - - self.fused_compound_types = fused_compound_types - - if self.node.entry in env.pyfunc_entries: - env.pyfunc_entries.remove(self.node.entry) - - for cname, fused_to_specific in permutations: - copied_node = copy.deepcopy(self.node) - # keep signature object identity for special casing in DefNode.analyse_declarations() - copied_node.entry.signature = self.node.entry.signature - - self._specialize_function_args(copied_node.args, fused_to_specific) - copied_node.return_type = self.node.return_type.specialize( - fused_to_specific) - - copied_node.analyse_declarations(env) - # copied_node.is_staticmethod = self.node.is_staticmethod - # copied_node.is_classmethod = self.node.is_classmethod - self.create_new_local_scope(copied_node, env, fused_to_specific) - self.specialize_copied_def(copied_node, cname, self.node.entry, - fused_to_specific, fused_compound_types) - - PyrexTypes.specialize_entry(copied_node.entry, cname) - copied_node.entry.used = True - env.entries[copied_node.entry.name] = copied_node.entry - - if not self.replace_fused_typechecks(copied_node): - break - - self.orig_py_func = self.node - self.py_func = self.make_fused_cpdef(self.node, env, is_def=True) - - def copy_cdef(self, env): - """ - Create a copy of the original c(p)def function for all specialized - versions. - """ - permutations = self.node.type.get_all_specialized_permutations() - # print 'Node %s has %d specializations:' % (self.node.entry.name, - # len(permutations)) - # import pprint; pprint.pprint([d for cname, d in permutations]) - - # Prevent copying of the python function - self.orig_py_func = orig_py_func = self.node.py_func - self.node.py_func = None - if orig_py_func: - env.pyfunc_entries.remove(orig_py_func.entry) - - fused_types = self.node.type.get_fused_types() - self.fused_compound_types = fused_types - - new_cfunc_entries = [] - for cname, fused_to_specific in permutations: - copied_node = copy.deepcopy(self.node) - - # Make the types in our CFuncType specific. 
- type = copied_node.type.specialize(fused_to_specific) - entry = copied_node.entry - type.specialize_entry(entry, cname) - - # Reuse existing Entries (e.g. from .pxd files). - for i, orig_entry in enumerate(env.cfunc_entries): - if entry.cname == orig_entry.cname and type.same_as_resolved_type(orig_entry.type): - copied_node.entry = env.cfunc_entries[i] - if not copied_node.entry.func_cname: - copied_node.entry.func_cname = entry.func_cname - entry = copied_node.entry - type = entry.type - break - else: - new_cfunc_entries.append(entry) - - copied_node.type = type - entry.type, type.entry = type, entry - - entry.used = (entry.used or - self.node.entry.defined_in_pxd or - env.is_c_class_scope or - entry.is_cmethod) - - if self.node.cfunc_declarator.optional_arg_count: - self.node.cfunc_declarator.declare_optional_arg_struct( - type, env, fused_cname=cname) - - copied_node.return_type = type.return_type - self.create_new_local_scope(copied_node, env, fused_to_specific) - - # Make the argument types in the CFuncDeclarator specific - self._specialize_function_args(copied_node.cfunc_declarator.args, - fused_to_specific) - - # If a cpdef, declare all specialized cpdefs (this - # also calls analyse_declarations) - copied_node.declare_cpdef_wrapper(env) - if copied_node.py_func: - env.pyfunc_entries.remove(copied_node.py_func.entry) - - self.specialize_copied_def( - copied_node.py_func, cname, self.node.entry.as_variable, - fused_to_specific, fused_types) - - if not self.replace_fused_typechecks(copied_node): - break - - # replace old entry with new entries - try: - cindex = env.cfunc_entries.index(self.node.entry) - except ValueError: - env.cfunc_entries.extend(new_cfunc_entries) - else: - env.cfunc_entries[cindex:cindex+1] = new_cfunc_entries - - if orig_py_func: - self.py_func = self.make_fused_cpdef(orig_py_func, env, - is_def=False) - else: - self.py_func = orig_py_func - - def _get_fused_base_types(self, fused_compound_types): - """ - Get a list of unique basic fused types, from a list of - (possibly) compound fused types. - """ - base_types = [] - seen = set() - for fused_type in fused_compound_types: - fused_type.get_fused_types(result=base_types, seen=seen) - return base_types - - def _specialize_function_args(self, args, fused_to_specific): - for arg in args: - if arg.type.is_fused: - arg.type = arg.type.specialize(fused_to_specific) - if arg.type.is_memoryviewslice: - arg.type.validate_memslice_dtype(arg.pos) - - def create_new_local_scope(self, node, env, f2s): - """ - Create a new local scope for the copied node and append it to - self.nodes. A new local scope is needed because the arguments with the - fused types are already in the local scope, and we need the specialized - entries created after analyse_declarations on each specialized version - of the (CFunc)DefNode. 
- f2s is a dict mapping each fused type to its specialized version - """ - node.create_local_scope(env) - node.local_scope.fused_to_specific = f2s - - # This is copied from the original function, set it to false to - # stop recursion - node.has_fused_arguments = False - self.nodes.append(node) - - def specialize_copied_def(self, node, cname, py_entry, f2s, fused_compound_types): - """Specialize the copy of a DefNode given the copied node, - the specialization cname and the original DefNode entry""" - fused_types = self._get_fused_base_types(fused_compound_types) - type_strings = [ - PyrexTypes.specialization_signature_string(fused_type, f2s) - for fused_type in fused_types - ] - - node.specialized_signature_string = '|'.join(type_strings) - - node.entry.pymethdef_cname = PyrexTypes.get_fused_cname( - cname, node.entry.pymethdef_cname) - node.entry.doc = py_entry.doc - node.entry.doc_cname = py_entry.doc_cname - - def replace_fused_typechecks(self, copied_node): - """ - Branch-prune fused type checks like - - if fused_t is int: - ... - - Returns whether an error was issued and whether we should stop in - in order to prevent a flood of errors. - """ - num_errors = Errors.num_errors - transform = ParseTreeTransforms.ReplaceFusedTypeChecks( - copied_node.local_scope) - transform(copied_node) - - if Errors.num_errors > num_errors: - return False - - return True - - def _fused_instance_checks(self, normal_types, pyx_code, env): - """ - Generate Cython code for instance checks, matching an object to - specialized types. - """ - for specialized_type in normal_types: - # all_numeric = all_numeric and specialized_type.is_numeric - pyx_code.context.update( - py_type_name=specialized_type.py_type_name(), - specialized_type_name=specialized_type.specialization_string, - ) - pyx_code.put_chunk( - u""" - if isinstance(arg, {{py_type_name}}): - dest_sig[{{dest_sig_idx}}] = '{{specialized_type_name}}'; break - """) - - def _dtype_name(self, dtype): - if dtype.is_typedef: - return '___pyx_%s' % dtype - return str(dtype).replace(' ', '_') - - def _dtype_type(self, dtype): - if dtype.is_typedef: - return self._dtype_name(dtype) - return str(dtype) - - def _sizeof_dtype(self, dtype): - if dtype.is_pyobject: - return 'sizeof(void *)' - else: - return "sizeof(%s)" % self._dtype_type(dtype) - - def _buffer_check_numpy_dtype_setup_cases(self, pyx_code): - "Setup some common cases to match dtypes against specializations" - if pyx_code.indenter("if kind in b'iu':"): - pyx_code.putln("pass") - pyx_code.named_insertion_point("dtype_int") - pyx_code.dedent() - - if pyx_code.indenter("elif kind == b'f':"): - pyx_code.putln("pass") - pyx_code.named_insertion_point("dtype_float") - pyx_code.dedent() - - if pyx_code.indenter("elif kind == b'c':"): - pyx_code.putln("pass") - pyx_code.named_insertion_point("dtype_complex") - pyx_code.dedent() - - if pyx_code.indenter("elif kind == b'O':"): - pyx_code.putln("pass") - pyx_code.named_insertion_point("dtype_object") - pyx_code.dedent() - - match = "dest_sig[{{dest_sig_idx}}] = '{{specialized_type_name}}'" - no_match = "dest_sig[{{dest_sig_idx}}] = None" - def _buffer_check_numpy_dtype(self, pyx_code, specialized_buffer_types, pythran_types): - """ - Match a numpy dtype object to the individual specializations. 
- """ - self._buffer_check_numpy_dtype_setup_cases(pyx_code) - - for specialized_type in pythran_types+specialized_buffer_types: - final_type = specialized_type - if specialized_type.is_pythran_expr: - specialized_type = specialized_type.org_buffer - dtype = specialized_type.dtype - pyx_code.context.update( - itemsize_match=self._sizeof_dtype(dtype) + " == itemsize", - signed_match="not (%s_is_signed ^ dtype_signed)" % self._dtype_name(dtype), - dtype=dtype, - specialized_type_name=final_type.specialization_string) - - dtypes = [ - (dtype.is_int, pyx_code.dtype_int), - (dtype.is_float, pyx_code.dtype_float), - (dtype.is_complex, pyx_code.dtype_complex) - ] - - for dtype_category, codewriter in dtypes: - if dtype_category: - cond = '{{itemsize_match}} and (arg.ndim) == %d' % ( - specialized_type.ndim,) - if dtype.is_int: - cond += ' and {{signed_match}}' - - if final_type.is_pythran_expr: - cond += ' and arg_is_pythran_compatible' - - if codewriter.indenter("if %s:" % cond): - #codewriter.putln("print 'buffer match found based on numpy dtype'") - codewriter.putln(self.match) - codewriter.putln("break") - codewriter.dedent() - - def _buffer_parse_format_string_check(self, pyx_code, decl_code, - specialized_type, env): - """ - For each specialized type, try to coerce the object to a memoryview - slice of that type. This means obtaining a buffer and parsing the - format string. - TODO: separate buffer acquisition from format parsing - """ - dtype = specialized_type.dtype - if specialized_type.is_buffer: - axes = [('direct', 'strided')] * specialized_type.ndim - else: - axes = specialized_type.axes - - memslice_type = PyrexTypes.MemoryViewSliceType(dtype, axes) - memslice_type.create_from_py_utility_code(env) - pyx_code.context.update( - coerce_from_py_func=memslice_type.from_py_function, - dtype=dtype) - decl_code.putln( - "{{memviewslice_cname}} {{coerce_from_py_func}}(object, int)") - - pyx_code.context.update( - specialized_type_name=specialized_type.specialization_string, - sizeof_dtype=self._sizeof_dtype(dtype)) - - pyx_code.put_chunk( - u""" - # try {{dtype}} - if itemsize == -1 or itemsize == {{sizeof_dtype}}: - memslice = {{coerce_from_py_func}}(arg, 0) - if memslice.memview: - __PYX_XDEC_MEMVIEW(&memslice, 1) - # print 'found a match for the buffer through format parsing' - %s - break - else: - __pyx_PyErr_Clear() - """ % self.match) - - def _buffer_checks(self, buffer_types, pythran_types, pyx_code, decl_code, env): - """ - Generate Cython code to match objects to buffer specializations. - First try to get a numpy dtype object and match it against the individual - specializations. If that fails, try naively to coerce the object - to each specialization, which obtains the buffer each time and tries - to match the format string. 
- """ - # The first thing to find a match in this loop breaks out of the loop - pyx_code.put_chunk( - u""" - """ + (u"arg_is_pythran_compatible = False" if pythran_types else u"") + u""" - if ndarray is not None: - if isinstance(arg, ndarray): - dtype = arg.dtype - """ + (u"arg_is_pythran_compatible = True" if pythran_types else u"") + u""" - elif __pyx_memoryview_check(arg): - arg_base = arg.base - if isinstance(arg_base, ndarray): - dtype = arg_base.dtype - else: - dtype = None - else: - dtype = None - - itemsize = -1 - if dtype is not None: - itemsize = dtype.itemsize - kind = ord(dtype.kind) - dtype_signed = kind == 'i' - """) - pyx_code.indent(2) - if pythran_types: - pyx_code.put_chunk( - u""" - # Pythran only supports the endianness of the current compiler - byteorder = dtype.byteorder - if byteorder == "<" and not __Pyx_Is_Little_Endian(): - arg_is_pythran_compatible = False - elif byteorder == ">" and __Pyx_Is_Little_Endian(): - arg_is_pythran_compatible = False - if arg_is_pythran_compatible: - cur_stride = itemsize - shape = arg.shape - strides = arg.strides - for i in range(arg.ndim-1, -1, -1): - if (strides[i]) != cur_stride: - arg_is_pythran_compatible = False - break - cur_stride *= shape[i] - else: - arg_is_pythran_compatible = not (arg.flags.f_contiguous and (arg.ndim) > 1) - """) - pyx_code.named_insertion_point("numpy_dtype_checks") - self._buffer_check_numpy_dtype(pyx_code, buffer_types, pythran_types) - pyx_code.dedent(2) - - for specialized_type in buffer_types: - self._buffer_parse_format_string_check( - pyx_code, decl_code, specialized_type, env) - - def _buffer_declarations(self, pyx_code, decl_code, all_buffer_types, pythran_types): - """ - If we have any buffer specializations, write out some variable - declarations and imports. - """ - decl_code.put_chunk( - u""" - ctypedef struct {{memviewslice_cname}}: - void *memview - - void __PYX_XDEC_MEMVIEW({{memviewslice_cname}} *, int have_gil) - bint __pyx_memoryview_check(object) - """) - - pyx_code.local_variable_declarations.put_chunk( - u""" - cdef {{memviewslice_cname}} memslice - cdef Py_ssize_t itemsize - cdef bint dtype_signed - cdef char kind - - itemsize = -1 - """) - - if pythran_types: - pyx_code.local_variable_declarations.put_chunk(u""" - cdef bint arg_is_pythran_compatible - cdef Py_ssize_t cur_stride - """) - - pyx_code.imports.put_chunk( - u""" - cdef type ndarray - ndarray = __Pyx_ImportNumPyArrayTypeIfAvailable() - """) - - seen_typedefs = set() - seen_int_dtypes = set() - for buffer_type in all_buffer_types: - dtype = buffer_type.dtype - dtype_name = self._dtype_name(dtype) - if dtype.is_typedef: - if dtype_name not in seen_typedefs: - seen_typedefs.add(dtype_name) - decl_code.putln( - 'ctypedef %s %s "%s"' % (dtype.resolve(), dtype_name, - dtype.empty_declaration_code())) - - if buffer_type.dtype.is_int: - if str(dtype) not in seen_int_dtypes: - seen_int_dtypes.add(str(dtype)) - pyx_code.context.update(dtype_name=dtype_name, - dtype_type=self._dtype_type(dtype)) - pyx_code.local_variable_declarations.put_chunk( - u""" - cdef bint {{dtype_name}}_is_signed - {{dtype_name}}_is_signed = not (<{{dtype_type}}> -1 > 0) - """) - - def _split_fused_types(self, arg): - """ - Specialize fused types and split into normal types and buffer types. 
- """ - specialized_types = PyrexTypes.get_specialized_types(arg.type) - - # Prefer long over int, etc by sorting (see type classes in PyrexTypes.py) - specialized_types.sort() - - seen_py_type_names = set() - normal_types, buffer_types, pythran_types = [], [], [] - has_object_fallback = False - for specialized_type in specialized_types: - py_type_name = specialized_type.py_type_name() - if py_type_name: - if py_type_name in seen_py_type_names: - continue - seen_py_type_names.add(py_type_name) - if py_type_name == 'object': - has_object_fallback = True - else: - normal_types.append(specialized_type) - elif specialized_type.is_pythran_expr: - pythran_types.append(specialized_type) - elif specialized_type.is_buffer or specialized_type.is_memoryviewslice: - buffer_types.append(specialized_type) - - return normal_types, buffer_types, pythran_types, has_object_fallback - - def _unpack_argument(self, pyx_code): - pyx_code.put_chunk( - u""" - # PROCESSING ARGUMENT {{arg_tuple_idx}} - if {{arg_tuple_idx}} < len(args): - arg = (args)[{{arg_tuple_idx}}] - elif kwargs is not None and '{{arg.name}}' in kwargs: - arg = (kwargs)['{{arg.name}}'] - else: - {{if arg.default}} - arg = (defaults)[{{default_idx}}] - {{else}} - {{if arg_tuple_idx < min_positional_args}} - raise TypeError("Expected at least %d argument%s, got %d" % ( - {{min_positional_args}}, {{'"s"' if min_positional_args != 1 else '""'}}, len(args))) - {{else}} - raise TypeError("Missing keyword-only argument: '%s'" % "{{arg.default}}") - {{endif}} - {{endif}} - """) - - def make_fused_cpdef(self, orig_py_func, env, is_def): - """ - This creates the function that is indexable from Python and does - runtime dispatch based on the argument types. The function gets the - arg tuple and kwargs dict (or None) and the defaults tuple - as arguments from the Binding Fused Function's tp_call. - """ - from . import TreeFragment, Code, UtilityCode - - fused_types = self._get_fused_base_types([ - arg.type for arg in self.node.args if arg.type.is_fused]) - - context = { - 'memviewslice_cname': MemoryView.memviewslice_cname, - 'func_args': self.node.args, - 'n_fused': len(fused_types), - 'min_positional_args': - self.node.num_required_args - self.node.num_required_kw_args - if is_def else - sum(1 for arg in self.node.args if arg.default is None), - 'name': orig_py_func.entry.name, - } - - pyx_code = Code.PyxCodeWriter(context=context) - decl_code = Code.PyxCodeWriter(context=context) - decl_code.put_chunk( - u""" - cdef extern from *: - void __pyx_PyErr_Clear "PyErr_Clear" () - type __Pyx_ImportNumPyArrayTypeIfAvailable() - int __Pyx_Is_Little_Endian() - """) - decl_code.indent() - - pyx_code.put_chunk( - u""" - def __pyx_fused_cpdef(signatures, args, kwargs, defaults): - # FIXME: use a typed signature - currently fails badly because - # default arguments inherit the types we specify here! 
- - dest_sig = [None] * {{n_fused}} - - if kwargs is not None and not kwargs: - kwargs = None - - cdef Py_ssize_t i - - # instance check body - """) - - pyx_code.indent() # indent following code to function body - pyx_code.named_insertion_point("imports") - pyx_code.named_insertion_point("func_defs") - pyx_code.named_insertion_point("local_variable_declarations") - - fused_index = 0 - default_idx = 0 - all_buffer_types = OrderedSet() - seen_fused_types = set() - for i, arg in enumerate(self.node.args): - if arg.type.is_fused: - arg_fused_types = arg.type.get_fused_types() - if len(arg_fused_types) > 1: - raise NotImplementedError("Determination of more than one fused base " - "type per argument is not implemented.") - fused_type = arg_fused_types[0] - - if arg.type.is_fused and fused_type not in seen_fused_types: - seen_fused_types.add(fused_type) - - context.update( - arg_tuple_idx=i, - arg=arg, - dest_sig_idx=fused_index, - default_idx=default_idx, - ) - - normal_types, buffer_types, pythran_types, has_object_fallback = self._split_fused_types(arg) - self._unpack_argument(pyx_code) - - # 'unrolled' loop, first match breaks out of it - if pyx_code.indenter("while 1:"): - if normal_types: - self._fused_instance_checks(normal_types, pyx_code, env) - if buffer_types or pythran_types: - env.use_utility_code(Code.UtilityCode.load_cached("IsLittleEndian", "ModuleSetupCode.c")) - self._buffer_checks(buffer_types, pythran_types, pyx_code, decl_code, env) - if has_object_fallback: - pyx_code.context.update(specialized_type_name='object') - pyx_code.putln(self.match) - else: - pyx_code.putln(self.no_match) - pyx_code.putln("break") - pyx_code.dedent() - - fused_index += 1 - all_buffer_types.update(buffer_types) - all_buffer_types.update(ty.org_buffer for ty in pythran_types) - - if arg.default: - default_idx += 1 - - if all_buffer_types: - self._buffer_declarations(pyx_code, decl_code, all_buffer_types, pythran_types) - env.use_utility_code(Code.UtilityCode.load_cached("Import", "ImportExport.c")) - env.use_utility_code(Code.UtilityCode.load_cached("ImportNumPyArray", "ImportExport.c")) - - pyx_code.put_chunk( - u""" - candidates = [] - for sig in signatures: - match_found = False - src_sig = sig.strip('()').split('|') - for i in range(len(dest_sig)): - dst_type = dest_sig[i] - if dst_type is not None: - if src_sig[i] == dst_type: - match_found = True - else: - match_found = False - break - - if match_found: - candidates.append(sig) - - if not candidates: - raise TypeError("No matching signature found") - elif len(candidates) > 1: - raise TypeError("Function call with ambiguous argument types") - else: - return (signatures)[candidates[0]] - """) - - fragment_code = pyx_code.getvalue() - # print decl_code.getvalue() - # print fragment_code - from .Optimize import ConstantFolding - fragment = TreeFragment.TreeFragment( - fragment_code, level='module', pipeline=[ConstantFolding()]) - ast = TreeFragment.SetPosTransform(self.node.pos)(fragment.root) - UtilityCode.declare_declarations_in_scope( - decl_code.getvalue(), env.global_scope()) - ast.scope = env - # FIXME: for static methods of cdef classes, we build the wrong signature here: first arg becomes 'self' - ast.analyse_declarations(env) - py_func = ast.stats[-1] # the DefNode - self.fragment_scope = ast.scope - - if isinstance(self.node, DefNode): - py_func.specialized_cpdefs = self.nodes[:] - else: - py_func.specialized_cpdefs = [n.py_func for n in self.nodes] - - return py_func - - def update_fused_defnode_entry(self, env): - copy_attributes = ( - 
'name', 'pos', 'cname', 'func_cname', 'pyfunc_cname', - 'pymethdef_cname', 'doc', 'doc_cname', 'is_member', - 'scope' - ) - - entry = self.py_func.entry - - for attr in copy_attributes: - setattr(entry, attr, - getattr(self.orig_py_func.entry, attr)) - - self.py_func.name = self.orig_py_func.name - self.py_func.doc = self.orig_py_func.doc - - env.entries.pop('__pyx_fused_cpdef', None) - if isinstance(self.node, DefNode): - env.entries[entry.name] = entry - else: - env.entries[entry.name].as_variable = entry - - env.pyfunc_entries.append(entry) - - self.py_func.entry.fused_cfunction = self - for node in self.nodes: - if isinstance(self.node, DefNode): - node.fused_py_func = self.py_func - else: - node.py_func.fused_py_func = self.py_func - node.entry.as_variable = entry - - self.synthesize_defnodes() - self.stats.append(self.__signatures__) - - def analyse_expressions(self, env): - """ - Analyse the expressions. Take care to only evaluate default arguments - once and clone the result for all specializations - """ - for fused_compound_type in self.fused_compound_types: - for fused_type in fused_compound_type.get_fused_types(): - for specialization_type in fused_type.types: - if specialization_type.is_complex: - specialization_type.create_declaration_utility_code(env) - - if self.py_func: - self.__signatures__ = self.__signatures__.analyse_expressions(env) - self.py_func = self.py_func.analyse_expressions(env) - self.resulting_fused_function = self.resulting_fused_function.analyse_expressions(env) - self.fused_func_assignment = self.fused_func_assignment.analyse_expressions(env) - - self.defaults = defaults = [] - - for arg in self.node.args: - if arg.default: - arg.default = arg.default.analyse_expressions(env) - defaults.append(ProxyNode(arg.default)) - else: - defaults.append(None) - - for i, stat in enumerate(self.stats): - stat = self.stats[i] = stat.analyse_expressions(env) - if isinstance(stat, FuncDefNode): - for arg, default in zip(stat.args, defaults): - if default is not None: - arg.default = CloneNode(default).coerce_to(arg.type, env) - - if self.py_func: - args = [CloneNode(default) for default in defaults if default] - self.defaults_tuple = TupleNode(self.pos, args=args) - self.defaults_tuple = self.defaults_tuple.analyse_types(env, skip_children=True).coerce_to_pyobject(env) - self.defaults_tuple = ProxyNode(self.defaults_tuple) - self.code_object = ProxyNode(self.specialized_pycfuncs[0].code_object) - - fused_func = self.resulting_fused_function.arg - fused_func.defaults_tuple = CloneNode(self.defaults_tuple) - fused_func.code_object = CloneNode(self.code_object) - - for i, pycfunc in enumerate(self.specialized_pycfuncs): - pycfunc.code_object = CloneNode(self.code_object) - pycfunc = self.specialized_pycfuncs[i] = pycfunc.analyse_types(env) - pycfunc.defaults_tuple = CloneNode(self.defaults_tuple) - return self - - def synthesize_defnodes(self): - """ - Create the __signatures__ dict of PyCFunctionNode specializations. 
- """ - if isinstance(self.nodes[0], CFuncDefNode): - nodes = [node.py_func for node in self.nodes] - else: - nodes = self.nodes - - signatures = [StringEncoding.EncodedString(node.specialized_signature_string) - for node in nodes] - keys = [ExprNodes.StringNode(node.pos, value=sig) - for node, sig in zip(nodes, signatures)] - values = [ExprNodes.PyCFunctionNode.from_defnode(node, binding=True) - for node in nodes] - - self.__signatures__ = ExprNodes.DictNode.from_pairs(self.pos, zip(keys, values)) - - self.specialized_pycfuncs = values - for pycfuncnode in values: - pycfuncnode.is_specialization = True - - def generate_function_definitions(self, env, code): - if self.py_func: - self.py_func.pymethdef_required = True - self.fused_func_assignment.generate_function_definitions(env, code) - - for stat in self.stats: - if isinstance(stat, FuncDefNode) and stat.entry.used: - code.mark_pos(stat.pos) - stat.generate_function_definitions(env, code) - - def generate_execution_code(self, code): - # Note: all def function specialization are wrapped in PyCFunction - # nodes in the self.__signatures__ dictnode. - for default in self.defaults: - if default is not None: - default.generate_evaluation_code(code) - - if self.py_func: - self.defaults_tuple.generate_evaluation_code(code) - self.code_object.generate_evaluation_code(code) - - for stat in self.stats: - code.mark_pos(stat.pos) - if isinstance(stat, ExprNodes.ExprNode): - stat.generate_evaluation_code(code) - else: - stat.generate_execution_code(code) - - if self.__signatures__: - self.resulting_fused_function.generate_evaluation_code(code) - - code.putln( - "((__pyx_FusedFunctionObject *) %s)->__signatures__ = %s;" % - (self.resulting_fused_function.result(), - self.__signatures__.result())) - code.put_giveref(self.__signatures__.result()) - self.__signatures__.generate_post_assignment_code(code) - self.__signatures__.free_temps(code) - - self.fused_func_assignment.generate_execution_code(code) - - # Dispose of results - self.resulting_fused_function.generate_disposal_code(code) - self.resulting_fused_function.free_temps(code) - self.defaults_tuple.generate_disposal_code(code) - self.defaults_tuple.free_temps(code) - self.code_object.generate_disposal_code(code) - self.code_object.free_temps(code) - - for default in self.defaults: - if default is not None: - default.generate_disposal_code(code) - default.free_temps(code) - - def annotate(self, code): - for stat in self.stats: - stat.annotate(code) diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/vegalite/v3/tests/test_data.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/vegalite/v3/tests/test_data.py deleted file mode 100644 index 8eae11c868c6bc0ba14edb9cc7bae6d588f1d5aa..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/vegalite/v3/tests/test_data.py +++ /dev/null @@ -1,33 +0,0 @@ -import os - -import pandas as pd -import pytest - -from .. import data as alt - - -@pytest.fixture -def sample_data(): - return pd.DataFrame({"x": range(10), "y": range(10)}) - - -def test_disable_max_rows(sample_data): - with alt.data_transformers.enable("default", max_rows=5): - # Ensure max rows error is raised. - with pytest.raises(alt.MaxRowsError): - alt.data_transformers.get()(sample_data) - - # Ensure that max rows error is properly disabled. 
- with alt.data_transformers.disable_max_rows(): - alt.data_transformers.get()(sample_data) - - try: - with alt.data_transformers.enable("json"): - # Ensure that there is no TypeError for non-max_rows transformers. - with alt.data_transformers.disable_max_rows(): - jsonfile = alt.data_transformers.get()(sample_data) - except TypeError: - jsonfile = {} - finally: - if jsonfile: - os.remove(jsonfile["url"]) diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/charset_normalizer/version.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/charset_normalizer/version.py deleted file mode 100644 index 64c0dbdeffa42572ce7a2e1b8dfaca43f4ec5025..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/charset_normalizer/version.py +++ /dev/null @@ -1,6 +0,0 @@ -""" -Expose version -""" - -__version__ = "2.1.1" -VERSION = __version__.split(".") diff --git a/spaces/asyafiqe/pdfGPT-chat/app.py b/spaces/asyafiqe/pdfGPT-chat/app.py deleted file mode 100644 index bd75c55594188b1c2aa851d9dd95fb3096addde8..0000000000000000000000000000000000000000 --- a/spaces/asyafiqe/pdfGPT-chat/app.py +++ /dev/null @@ -1,395 +0,0 @@ -# %% -import os -import json -import urllib.parse -from tempfile import _TemporaryFileWrapper - -import pandas as pd -import requests -import streamlit as st -from streamlit_chat import message -from streamlit_extras.add_vertical_space import add_vertical_space -from streamlit_extras.colored_header import colored_header - -st.set_page_config( - layout="wide", - page_title="pdfGPT-chat. Ask your PDF!", - page_icon=":robot_face:", -) - - -def main(): - @st.cache_data - def convert_df(df): - return df.to_csv(index=False).encode("utf-8") - - def pdf_change(): - st.session_state["pdf_change"] = True - - def check_api(api_key): - return api_key.startswith("sk-") and len(api_key) == 51 - - def check_url(url): - parsed_url = urllib.parse.urlparse(url) - return all([parsed_url.scheme, parsed_url.netloc]) - - def result_to_dict(r, start): - result = r.json()["result"] - result = result.split("###")[start:] - keys = ["prompt", "answer", "token_used", "gpt_model"] - # Error in OpenAI server also gives status_code 200 - if len(result) >= 0: - result.extend([result, 0, gpt_model]) - return dict(zip(keys, result)) - - def load_pdf(): - if file is None and len(pdf_url) == 0: - return st.error("Both URL and PDF is empty. Provide at least one.") - elif len(pdf_url) > 0: - if not check_url(pdf_url): - return st.error("Please enter valid URL.") - elif file is not None: - return st.error( - "Both URL and PDF is provided. Please provide only one (either URL or PDF)." 
- ) - # load pdf from url - else: - r = requests.post( - f"{LCSERVE_HOST}/load_url", - json={ - "url": pdf_url, - "rebuild_embedding": st.session_state["pdf_change"], - "embedding_model": embedding_model, - "gpt_model": gpt_model, - "envs": { - "OPENAI_API_KEY": OPENAI_API_KEY, - } - }, - ) - # load file - else: - _data = { - "rebuild_embedding": st.session_state["pdf_change"], - "embedding_model": embedding_model, - "gpt_model": gpt_model, - "envs": { - "OPENAI_API_KEY": OPENAI_API_KEY, - } - } - - r = requests.post( - f"{LCSERVE_HOST}/load_file", - params={"input_data": json.dumps(_data)}, - files={"file": file}, - ) - if r.status_code != 200: - if "error" in r.json(): - if "message" in r.json()["error"]: - return st.error(r.json()["error"]["message"]) - else: - return str(r.json()) - elif r.json()["result"].startswith("Corpus Loaded."): - st.session_state["loaded"] = True - st.session_state["pdf_change"] = False - # extract result - result = result_to_dict(r, 1) - - # concatenate reply - reply_summary = "Hello there. I'm pdfGPT-chat.\nHere is a summary of your PDF:\n\n" - reply_summary += result["answer"] - reply_summary += "\n\nDo you have any question about your PDF?" - - if len(st.session_state["past"]) == 1: - st.session_state["generated"][0] = reply_summary - else: - st.session_state["past"].append("Hi") - st.session_state["generated"].append(reply_summary) - - # calculate cost - calculate_cost(result["token_used"], result["gpt_model"]) - return st.success("The PDF file has been loaded.") - else: - return st.info(r.json()["result"]) - - def generate_response( - lcserve_host: str, - url: str, - file: _TemporaryFileWrapper, - question: str, - ) -> dict: - if question.strip() == "": - return "[ERROR]: Question field is empty" - - _data = { - "question": question, - "rebuild_embedding": st.session_state["pdf_change"], - "embedding_model": embedding_model, - "gpt_model": gpt_model, - "envs": { - "OPENAI_API_KEY": OPENAI_API_KEY, - }, - } - - if url.strip() != "": - r = requests.post( - f"{LCSERVE_HOST}/ask_url", - json={"url": url, **_data}, - ) - - else: - r = requests.post( - f"{LCSERVE_HOST}/ask_file", - params={"input_data": json.dumps(_data)}, - files={"file": file}, - ) - - if r.status_code != 200: - content = r.content.decode() # Convert bytes to string - with open("langchainlog.txt", "w") as file: - file.write(content) - return f"[ERROR]: {r.text}" - - result_dict = result_to_dict(r, 0) - return result_dict - - def calculate_cost(token_used, gpt_model): - st.session_state["total_token"] += int(token_used) - if "gpt-3" in gpt_model: - current_cost = st.session_state["total_token"] * 0.002 / 1000 - else: - current_cost = st.session_state["total_token"] * 0.06 / 1000 - st.session_state["total_cost"] += current_cost - - # %% - # main page layout - header = st.container() - welcome_page = st.container() - response_container = st.container() - input_container = st.container() - cost_container = st.container() - load_pdf_popup = st.container() - - # sidebar layout - input_details = st.sidebar.container() - preferences = st.sidebar.container() - chat_download = st.sidebar.container() - # %% - # instantiate session states - if "api_key" not in st.session_state: - st.session_state["api_key"] = False - - if "generated" not in st.session_state: - st.session_state["generated"] = ["Hello there. I'm pdfGPT-chat. 
Do you have any question about your PDF?"] - - if "loaded" not in st.session_state: - st.session_state["loaded"] = False - - if "past" not in st.session_state: - st.session_state["past"] = ["Hi"] - - if "pdf_change" not in st.session_state: - st.session_state["pdf_change"] = True - - if "total_cost" not in st.session_state: - st.session_state["total_cost"] = 0 - - if "total_token" not in st.session_state: - st.session_state["total_token"] = 0 - - # %% - # constants - E5_URL = "https://github.com/microsoft/unilm/tree/master/e5" - EMBEDDING_CHOICES = { - "multilingual-e5-base": "Multilingual-E5 (default)", - "e5-small-v2": "English-E5-small (faster)", - } - GPT_CHOICES = { - "gpt-3.5-turbo": "GPT-3.5-turbo (default)", - "gpt-4": "GPT-4 (smarter, costlier)", - } - LCSERVE_HOST = "http://localhost:8080" - OPENAI_API_KEY = os.environ.get("OPENAI_API_KEY") - PDFGPT_URL = "https://github.com/bhaskatripathi/pdfGPT" - SIGNATURE = """ - - -""" - - with header: - st.title(":page_facing_up: pdfGPT-chat") - with st.expander( - "A fork of [pdfGPT](%s) with several improvements. With pdfGPT-chat, you can chat with your PDF files using [**Microsoft E5 Multilingual Text Embeddings**](%s) and **OpenAI**." - % (PDFGPT_URL, E5_URL) - ): - st.markdown( - "Compared to other tools, pdfGPT-chat provides **hallucinations-free** response, thanks to its superior embeddings and tailored prompt.
The generated responses from pdfGPT-chat include **citations** in square brackets ([]), indicating the **page numbers** where the relevant information is found.
This feature not only enhances the credibility of the responses but also aids in swiftly locating the pertinent information within the PDF file.", - unsafe_allow_html=True, - ) - - colored_header( - label="", - description="", - color_name="blue-40", - ) - - with preferences: - colored_header( - label="", - description="", - color_name="blue-40", - ) - st.write("**Preferences**") - embedding_model = st.selectbox( - "Embedding", - EMBEDDING_CHOICES.keys(), - help="""[Multilingual-E5](%s) supports 100 languages. - E5-small is much faster and suitable for PC without GPU.""" - % E5_URL, - on_change=pdf_change, - format_func=lambda x: EMBEDDING_CHOICES[x], - ) - gpt_model = st.selectbox( - "GPT Model", - GPT_CHOICES.keys(), - help="For GPT-4 you might have to join the waitlist: https://openai.com/waitlist/gpt-4-api", - format_func=lambda x: GPT_CHOICES[x], - ) - - # %% - # sidebar - with input_details: - # sidebar - pdf_url = st.text_input( - ":globe_with_meridians: Enter PDF URL here", on_change=pdf_change - ) - - st.markdown( - "

OR

", - unsafe_allow_html=True, - ) - - file = st.file_uploader( - ":page_facing_up: Upload your PDF/ Research Paper / Book here", - type=["pdf"], - on_change=pdf_change, - ) - - if st.button("Load PDF"): - st.session_state["loaded"] = True - with st.spinner("Loading PDF"): - with load_pdf_popup: - load_pdf() - - # %% - - # main tab - if st.session_state["loaded"]: - with input_container: - with st.form(key="input_form", clear_on_submit=True): - user_input = st.text_area("Question:", key="input", height=100) - submit_button = st.form_submit_button(label="Send") - - if user_input and submit_button: - with st.spinner("Processing your question"): - response = generate_response( - LCSERVE_HOST, - pdf_url, - file, - user_input, - ) - st.session_state.past.append(user_input) - st.session_state.generated.append(response["answer"]) - - # calculate cost - calculate_cost(response["token_used"], response["gpt_model"]) - - if not user_input and submit_button: - st.error("Please write your question.") - - with response_container: - if st.session_state["generated"]: - for i in range(len(st.session_state["generated"])): - message( - st.session_state["past"][i], is_user=True, key=str(i) + "_user" - ) - message(st.session_state["generated"][i], key=str(i)) - - cost_container.caption( - f"Estimated cost: $ {st.session_state['total_cost']:.4f}" - ) - - else: - with welcome_page: - st.write("") - st.subheader( - """:arrow_left: To start please fill input details in the sidebar and click **Load PDF**""" - ) - # %% - # placed in the end to include the last conversation - with chat_download: - chat_history = pd.DataFrame( - { - "Question": st.session_state["past"], - "Answer": st.session_state["generated"], - } - ) - - csv = convert_df(chat_history) - - st.download_button( - label="Download chat history", - data=csv, - file_name="chat history.csv", - mime="text/csv", - ) - add_vertical_space(2) - st.markdown(SIGNATURE, unsafe_allow_html=True) - - # %% - # # javascript - # - # # scroll halfway through the page - js = f""" - - """ - st.components.v1.html(js) - - # reduce main top padding - st.markdown( - "", - unsafe_allow_html=True, - ) - # reduce sidebar top padding - st.markdown( - "", - unsafe_allow_html=True, - ) - - -if __name__ == "__main__": - main() diff --git a/spaces/awacke1/BlenderbotGradioChatbotSOTA/README.md b/spaces/awacke1/BlenderbotGradioChatbotSOTA/README.md deleted file mode 100644 index 6c18bb978bb7f98a5b012e64847c01b0ec1e7074..0000000000000000000000000000000000000000 --- a/spaces/awacke1/BlenderbotGradioChatbotSOTA/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: BlenderbotGradioChatbotSOTA -emoji: 🗣️💬 -colorFrom: blue -colorTo: yellow -sdk: gradio -sdk_version: 3.4 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git "a/spaces/awacke1/CardWriterPro/pages/14_\360\237\223\232_Glossary.py" "b/spaces/awacke1/CardWriterPro/pages/14_\360\237\223\232_Glossary.py" deleted file mode 100644 index e118cdc5a1304e0113cad2f032c4f998d0c127de..0000000000000000000000000000000000000000 --- "a/spaces/awacke1/CardWriterPro/pages/14_\360\237\223\232_Glossary.py" +++ /dev/null @@ -1,24 +0,0 @@ -import streamlit as st -from persist import persist, load_widget_state - - -global variable_output - -def main(): - cs_body() - - - -def cs_body(): - - st.markdown('# Glossary [optional]') - st.text_area("Terms used in the model card that need to be clearly defined in order to be accessible across 
audiences go here.",height = 200, key=persist("Glossary")) - - - - - - -if __name__ == '__main__': - load_widget_state() - main() \ No newline at end of file diff --git a/spaces/awacke1/HTML5.3D.Flight.with.Gravity/README.md b/spaces/awacke1/HTML5.3D.Flight.with.Gravity/README.md deleted file mode 100644 index 31e78f070898e7654f39a41ba45b9bcdb1a195b2..0000000000000000000000000000000000000000 --- a/spaces/awacke1/HTML5.3D.Flight.with.Gravity/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: HTML5.3D.Flight.with.Gravity -emoji: 💻 -colorFrom: indigo -colorTo: blue -sdk: static -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/awacke1/HTML5.Wordle.Solver/index.html b/spaces/awacke1/HTML5.Wordle.Solver/index.html deleted file mode 100644 index 7a64734de7f54c964d298d5bab6524720031e3e0..0000000000000000000000000000000000000000 --- a/spaces/awacke1/HTML5.Wordle.Solver/index.html +++ /dev/null @@ -1,53 +0,0 @@ - - - - - Wordle Solver - - - -
-  Wordle Solver
-  Guess the word
-  Description | VIDEO
-  Example for the company GCC: the image must contain any personal protective equipment against accidents, such as:
-    • Safety helmets
-    • Safety boots
-    • Safety gloves
-    • Safety harness
-    • Safety glasses
-    • Safety clothing
-    • Face masks
-  307136886-514139580194714-961127180922884176-n
-  Score
-  1  2  3  4  5  6
- - - - - - - diff --git a/spaces/awacke1/Streamlit.Data.Editor/README.md b/spaces/awacke1/Streamlit.Data.Editor/README.md deleted file mode 100644 index 6654b1503a99f4d1e3f36303d72e20ca124293d3..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Streamlit.Data.Editor/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Streamlit.Data.Editor -emoji: 💻 -colorFrom: blue -colorTo: gray -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/bigslime/stablediffusion-infinity/js/keyboard.js b/spaces/bigslime/stablediffusion-infinity/js/keyboard.js deleted file mode 100644 index cf9757878c3c12b6b178a129860c12ca2b68b5be..0000000000000000000000000000000000000000 --- a/spaces/bigslime/stablediffusion-infinity/js/keyboard.js +++ /dev/null @@ -1,37 +0,0 @@ - -window.my_setup_keyboard=setInterval(function(){ - let app=document.querySelector("gradio-app"); - app=app.shadowRoot??app; - let frame=app.querySelector("#sdinfframe").contentWindow; - console.log("Check iframe..."); - if(frame.setup_shortcut) - { - frame.setup_shortcut(json); - clearInterval(window.my_setup_keyboard); - } -}, 1000); -var config=JSON.parse(json); -var key_map={}; -Object.keys(config.shortcut).forEach(k=>{ - key_map[config.shortcut[k]]=k; -}); -document.addEventListener("keydown", e => { - if(e.target.tagName!="INPUT"&&e.target.tagName!="GRADIO-APP"&&e.target.tagName!="TEXTAREA") - { - let key=e.key; - if(e.ctrlKey) - { - key="Ctrl+"+e.key; - if(key in key_map) - { - e.preventDefault(); - } - } - let app=document.querySelector("gradio-app"); - app=app.shadowRoot??app; - let frame=app.querySelector("#sdinfframe").contentDocument; - frame.dispatchEvent( - new KeyboardEvent("keydown", {key: e.key, ctrlKey: e.ctrlKey}) - ); - } -}) \ No newline at end of file diff --git a/spaces/bioriAsaeru/text-to-voice/Email Hacker Activation Code Keygen Hack Any Email in Minutes with This Simple Tool.md b/spaces/bioriAsaeru/text-to-voice/Email Hacker Activation Code Keygen Hack Any Email in Minutes with This Simple Tool.md deleted file mode 100644 index 9b46d67fe44d82c866acf741ca61d2b62d20d54b..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Email Hacker Activation Code Keygen Hack Any Email in Minutes with This Simple Tool.md +++ /dev/null @@ -1,31 +0,0 @@ - -

In any case, you might start your search for the activation key message in bp-core/bp-core-signup.php. The function bp_core_signup_user() creates the activation key and hands it off to bp_core_signup_send_validation_email(), which creates the email. The content of that email message is filtered, so that you should be able to change the message without hacking the core files.
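For illustration, here is a minimal sketch of that filter approach. It assumes the stock BuddyPress hook name bp_core_signup_send_validation_email_message (the exact hook name and the arguments it passes can differ between BuddyPress versions), placed in a theme's functions.php or a small plugin:

```php
<?php
// Minimal sketch: reword the BuddyPress activation email without touching core files.
// Assumes the 'bp_core_signup_send_validation_email_message' filter; check your
// BuddyPress version for the exact hook name and argument list.
function my_custom_activation_message( $message ) {
    // $message is the body built by bp_core_signup_send_validation_email(),
    // including the activation link; prepend, append, or rewrite it here.
    return "Thanks for signing up!\n\n" . $message;
}
add_filter( 'bp_core_signup_send_validation_email_message', 'my_custom_activation_message' );
```

There is usually a companion filter for the subject line as well, so the whole message can be customized the same way.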

-

email hacker activation code keygen





-

I have a similar issue where users sign up, get the activation email, but are sent to a page asking for an activation key to be entered in an input field. One member on this site told me the key is in the URL. Sure, I realize that. But how do the users know that?

-

Deactivate all plugins and switch to the twentytwenty theme so that only core WordPress is in effect. The registration process should work as expected. Following the email link should result in a password input form. Restore your theme, then your plugins, one at a time, testing after each activation. When key validation fails again, the last activated module is the cause.

-

For payment methods such as bank transfers or checks, you will receive an e-mail with your activation code after the payment is confirmed. Bank transfers are typically processed in 2-3 working days from the time the wire payment was sent. If paying by check, it could take up to 2 weeks to receive your payment.

-

Once you enable Facebook's two-factor authentication, which we strongly recommend, Facebook will ask you for a security or confirmation code to log in from a new location or device. Read our guide on two-factor authentication and why you should use it to learn more about this security method. Without two-factor authentication, you only need your username or email address and password to log into your Facebook account.

-

You can also access the above recovery route through any browser, on desktop or mobile. Go to m.facebook.com and log into your account using your mobile number, email, or username and password. When prompted for the login code, click Having trouble? > I don't have my phone > Continue.

-

So, don't let it get that far. It only takes a few minutes to enable two-factor authentication and save your Facebook recovery codes. While you're at it, update the email address and phone number linked to your Facebook account. Then, if disaster strikes again, you will be able to recover your Facebook account.

-

-

If you choose to use this less secure option, enter a phone number at which you can receive phone calls or text messages. If you only have a landline, you must receive your one-time code by phone call. Login.gov cannot send one-time codes to extensions or voicemails.

-

The other part of that reason makes much more sense: managing the licit use of GlassWire when many people are involved. It is mainly Pro and Elite users who would have multiple users of each activation code. Basic GlassWire users will normally be using it on one device, so there are no other users of each activation code. But one group with multiple activations to manage would be purchasers of multiple Basic licenses who have issued them to other people. The GlassWire team should be able to tell what proportion of users these are.

-

Tenorshare 4uKey for Android is capable of solving this "forgot Android password" issue. To work fully on your devices, a 4uKey for Android registration code is needed along with a licensed email. So, this article will introduce the Tenorshare 4uKey for Android licensed email and registration code in detail.

-

What if the above Tenorshare licensed email and registration code free is invalid? For that, we would suggest a premium version purchase. Buying a premium package not only guarantees your process but also saves you the hassle of free code trials and errors.

-

After downloading and installing 4uKey for Android, it is time to get your registration code. Go to the purchase page and buy a suitable license. When you finish this process, you will receive an email with your 4uKey for Android registration code.

-

We can conclude that it's safe to use the Tenorshare 4uKey for Android licensed email and registration code free provided above. If they do not work on your case, you can use the coupon codes to purchase at a discounted price.

-

If you did not receive an activation code, please contact Experian. We have established a dedicated call center available toll-free in the U.S. at (866) 904-6220 from 6:00 AM to 8:00 PM PT Monday through Friday and from 8:00 AM to 5:00 PM PT Saturday and Sunday.

-

If you lost your activation code, please contact Experian. We have established a dedicated call center available toll-free in the U.S. at +1 (866) 904-6220 from 6:00 AM to 8:00 PM PT Monday through Friday and from 8:00 AM to 5:00 PM PT Saturday and Sunday.

-

The University is working to identify the community members whose information was impacted. These investigations take time, and the University is working deliberately to provide accurate information as quickly as it can. As of June 30 and July 1, 2021, the University sent the appropriate individual notifications via Experian. UC community members are being notified via USPS mail where current physical addresses are available. If the University does not have a physical address, but does have an email address, it will send the notification via email. These notifications also included credit monitoring and identity theft protection activation codes for Experian IdentityWorks.

-

These individuals were notified between May 12-14, 2021. An activation code is contained in the email to sign up for Experian IdentityWorks. The former universal activation code (JCZGTC333) may no longer be used for new activations.

-

As of June 30 and July 1, 2021, the University sent the appropriate individual notifications via Experian. UC community members are being notified via USPS mail where current physical addresses are available. If the University does not have a physical address, but does have an email address, it will send the notification via email. These notifications also included credit monitoring and identity theft protection activation codes for Experian IdentityWorks.

-

Individuals eligible for credit monitoring and identity theft protection services were notified between May 12-14, 2021. An activation code is contained in the email to sign up for Experian IdentityWorks. The former universal activation code (JCZGTC333) may no longer be used for new activations.

-

Individuals eligible for credit monitoring and identity theft protection services were notified between May 12-14, 2021. As of June 30 and July 1, 2021, the University sent the appropriate individual notifications via Experian. These notifications also included credit monitoring and identity theft protection activation codes for Experian IdentityWorks.

-

As of June 30 and July 1, 2021, the University sent the appropriate individual notifications via Experian. UC community members are being notified via USPS mail where current physical addresses are available. If the University does not have a physical address, but does have an email address, it will send the notification via email. These notifications also included credit monitoring and an activation code is contained in the letter or email to sign up for Experian.

-

UC community members are being notified via USPS mail where current physical addresses are available. If the University does not have a physical address, but does have an email address, it will send the notification via email. These notifications also included credit monitoring and identity theft protection activation codes for Experian IdentityWorks.

-

A Social Security number is required to sign up for credit monitoring. Adults without a Social Security number are eligible for Experian IdentityWorks Global. The activation code in individual notifications sent between May 12 - May 14 and June 30 - July 1, 2021, also works for Experian IdentityWorks Global.

-

Hi. My name is ____________. My email is ____________, and my Facebook ID is ______________. My Facebook account was hacked on ___date___. While I was able to reset my password after I confirmed my identity, I believe the hacker has set up 2FA, preventing me from logging in to my own account and accessing the code to log in. I am attaching an image of my ID as proof of my identity. I would appreciate you turning off the 2FA on my account so that I can log in again. Thank you.

-

WondershareFilmora 9 Activation Key eliminates unwanted background noise very easily. Import images and clips directly from Facebook or other social media platforms. Jump through your voice and video tracks one frame at a time for precise editing. Here is the working list of Filmora 9 activation keys and Filmora 10 free activation code for 2022.

-

Filmora is the most popular video editing software used by most vloggers to edit their videos in 2022. It is available in a free version with basic features. Still, if you want high-quality premium features, you have to buy them or use them by using the free activation codes and keys available in the article. You can download the original version from the official website of Filmora and use these keys to unlock the incredible editing features and let us know in the comment box which keys are working 100%.

-
-
\ No newline at end of file diff --git a/spaces/botlik100/kaki/lib/infer_pack/modules/F0Predictor/F0Predictor.py b/spaces/botlik100/kaki/lib/infer_pack/modules/F0Predictor/F0Predictor.py deleted file mode 100644 index f56e49e7f0e6eab3babf0711cae2933371b9f9cc..0000000000000000000000000000000000000000 --- a/spaces/botlik100/kaki/lib/infer_pack/modules/F0Predictor/F0Predictor.py +++ /dev/null @@ -1,16 +0,0 @@ -class F0Predictor(object): - def compute_f0(self, wav, p_len): - """ - input: wav:[signal_length] - p_len:int - output: f0:[signal_length//hop_length] - """ - pass - - def compute_f0_uv(self, wav, p_len): - """ - input: wav:[signal_length] - p_len:int - output: f0:[signal_length//hop_length],uv:[signal_length//hop_length] - """ - pass diff --git a/spaces/bradarrML/stablediffusion-infinity/js/proceed.js b/spaces/bradarrML/stablediffusion-infinity/js/proceed.js deleted file mode 100644 index 11a726fa0902382e2515f3b32147bdaa0179aa9a..0000000000000000000000000000000000000000 --- a/spaces/bradarrML/stablediffusion-infinity/js/proceed.js +++ /dev/null @@ -1,42 +0,0 @@ -function(sel_buffer_str, - prompt_text, - negative_prompt_text, - strength, - guidance, - step, - resize_check, - fill_mode, - enable_safety, - use_correction, - enable_img2img, - use_seed, - seed_val, - generate_num, - scheduler, - scheduler_eta, - state){ - let app=document.querySelector("gradio-app"); - app=app.shadowRoot??app; - sel_buffer=app.querySelector("#input textarea").value; - let use_correction_bak=false; - ({resize_check,enable_safety,use_correction_bak,enable_img2img,use_seed,seed_val}=window.config_obj); - return [ - sel_buffer, - prompt_text, - negative_prompt_text, - strength, - guidance, - step, - resize_check, - fill_mode, - enable_safety, - use_correction, - enable_img2img, - use_seed, - seed_val, - generate_num, - scheduler, - scheduler_eta, - state, - ] -} \ No newline at end of file diff --git a/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/adversarial/discriminators/msd.py b/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/adversarial/discriminators/msd.py deleted file mode 100644 index c4e67e29b46ab22f6ffeec85ffc64d8b99800b1b..0000000000000000000000000000000000000000 --- a/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/adversarial/discriminators/msd.py +++ /dev/null @@ -1,126 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import typing as tp - -import numpy as np -import torch -import torch.nn as nn - -from ...modules import NormConv1d -from .base import MultiDiscriminator, MultiDiscriminatorOutputType - - -class ScaleDiscriminator(nn.Module): - """Waveform sub-discriminator. - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - kernel_sizes (Sequence[int]): Kernel sizes for first and last convolutions. - filters (int): Number of initial filters for convolutions. - max_filters (int): Maximum number of filters. - downsample_scales (Sequence[int]): Scale for downsampling implemented as strided convolutions. - inner_kernel_sizes (Sequence[int] or None): Kernel sizes for inner convolutions. - groups (Sequence[int] or None): Groups for inner convolutions. - strides (Sequence[int] or None): Strides for inner convolutions. - paddings (Sequence[int] or None): Paddings for inner convolutions. - norm (str): Normalization method. 
- activation (str): Activation function. - activation_params (dict): Parameters to provide to the activation function. - pad (str): Padding for initial convolution. - pad_params (dict): Parameters to provide to the padding module. - """ - def __init__(self, in_channels=1, out_channels=1, kernel_sizes: tp.Sequence[int] = [5, 3], - filters: int = 16, max_filters: int = 1024, downsample_scales: tp.Sequence[int] = [4, 4, 4, 4], - inner_kernel_sizes: tp.Optional[tp.Sequence[int]] = None, groups: tp.Optional[tp.Sequence[int]] = None, - strides: tp.Optional[tp.Sequence[int]] = None, paddings: tp.Optional[tp.Sequence[int]] = None, - norm: str = 'weight_norm', activation: str = 'LeakyReLU', - activation_params: dict = {'negative_slope': 0.2}, pad: str = 'ReflectionPad1d', - pad_params: dict = {}): - super().__init__() - assert len(kernel_sizes) == 2 - assert kernel_sizes[0] % 2 == 1 - assert kernel_sizes[1] % 2 == 1 - assert (inner_kernel_sizes is None or len(inner_kernel_sizes) == len(downsample_scales)) - assert (groups is None or len(groups) == len(downsample_scales)) - assert (strides is None or len(strides) == len(downsample_scales)) - assert (paddings is None or len(paddings) == len(downsample_scales)) - self.activation = getattr(torch.nn, activation)(**activation_params) - self.convs = nn.ModuleList() - self.convs.append( - nn.Sequential( - getattr(torch.nn, pad)((np.prod(kernel_sizes) - 1) // 2, **pad_params), - NormConv1d(in_channels, filters, kernel_size=np.prod(kernel_sizes), stride=1, norm=norm) - ) - ) - - in_chs = filters - for i, downsample_scale in enumerate(downsample_scales): - out_chs = min(in_chs * downsample_scale, max_filters) - default_kernel_size = downsample_scale * 10 + 1 - default_stride = downsample_scale - default_padding = (default_kernel_size - 1) // 2 - default_groups = in_chs // 4 - self.convs.append( - NormConv1d(in_chs, out_chs, - kernel_size=inner_kernel_sizes[i] if inner_kernel_sizes else default_kernel_size, - stride=strides[i] if strides else default_stride, - groups=groups[i] if groups else default_groups, - padding=paddings[i] if paddings else default_padding, - norm=norm)) - in_chs = out_chs - - out_chs = min(in_chs * 2, max_filters) - self.convs.append(NormConv1d(in_chs, out_chs, kernel_size=kernel_sizes[0], stride=1, - padding=(kernel_sizes[0] - 1) // 2, norm=norm)) - self.conv_post = NormConv1d(out_chs, out_channels, kernel_size=kernel_sizes[1], stride=1, - padding=(kernel_sizes[1] - 1) // 2, norm=norm) - - def forward(self, x: torch.Tensor): - fmap = [] - for layer in self.convs: - x = layer(x) - x = self.activation(x) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - # x = torch.flatten(x, 1, -1) - return x, fmap - - -class MultiScaleDiscriminator(MultiDiscriminator): - """Multi-Scale (MSD) Discriminator, - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - downsample_factor (int): Downsampling factor between the different scales. - scale_norms (Sequence[str]): Normalization for each sub-discriminator. - **kwargs: Additional args for ScaleDiscriminator. 
- """ - def __init__(self, in_channels: int = 1, out_channels: int = 1, downsample_factor: int = 2, - scale_norms: tp.Sequence[str] = ['weight_norm', 'weight_norm', 'weight_norm'], **kwargs): - super().__init__() - self.discriminators = nn.ModuleList([ - ScaleDiscriminator(in_channels, out_channels, norm=norm, **kwargs) for norm in scale_norms - ]) - self.downsample = nn.AvgPool1d(downsample_factor * 2, downsample_factor, padding=downsample_factor) - - @property - def num_discriminators(self): - return len(self.discriminators) - - def forward(self, x: torch.Tensor) -> MultiDiscriminatorOutputType: - logits = [] - fmaps = [] - for i, disc in enumerate(self.discriminators): - if i != 0: - self.downsample(x) - logit, fmap = disc(x) - logits.append(logit) - fmaps.append(fmap) - return logits, fmaps diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/data/samplers/__init__.py b/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/data/samplers/__init__.py deleted file mode 100644 index 85c9f1a9df8a4038fbd4246239b699402e382309..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/data/samplers/__init__.py +++ /dev/null @@ -1,17 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -from .distributed_sampler import ( - InferenceSampler, - RandomSubsetTrainingSampler, - RepeatFactorTrainingSampler, - TrainingSampler, -) - -from .grouped_batch_sampler import GroupedBatchSampler - -__all__ = [ - "GroupedBatchSampler", - "TrainingSampler", - "RandomSubsetTrainingSampler", - "InferenceSampler", - "RepeatFactorTrainingSampler", -] diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/tools/__init__.py b/spaces/brjathu/HMR2.0/vendor/detectron2/tools/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/cakiki/facets-dive/Makefile b/spaces/cakiki/facets-dive/Makefile deleted file mode 100644 index a6f28a7a554e30a25202cca5f276d7d10b24dc26..0000000000000000000000000000000000000000 --- a/spaces/cakiki/facets-dive/Makefile +++ /dev/null @@ -1,13 +0,0 @@ -VERSION := 0.0.1 -NAME := facets-dive -REPO := cakiki - -build: - docker build -f Dockerfile -t ${REPO}/${NAME}:${VERSION} -t ${REPO}/${NAME}:latest . - -run: build - docker run --rm -it -p 8888:8888 --mount type=bind,source=${PWD},target=/home/jovyan/work --name ${NAME} --workdir=/home/jovyan/work ${REPO}/${NAME}:${VERSION} - -push: build - docker push ${REPO}/${NAME}:${VERSION} && docker push ${REPO}/${NAME}:latest - diff --git a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/Image.py b/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/Image.py deleted file mode 100644 index a519a28af3689d46f1c26d00ffc7204958da7a7e..0000000000000000000000000000000000000000 --- a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/Image.py +++ /dev/null @@ -1,3910 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# the Image class wrapper -# -# partial release history: -# 1995-09-09 fl Created -# 1996-03-11 fl PIL release 0.0 (proof of concept) -# 1996-04-30 fl PIL release 0.1b1 -# 1999-07-28 fl PIL release 1.0 final -# 2000-06-07 fl PIL release 1.1 -# 2000-10-20 fl PIL release 1.1.1 -# 2001-05-07 fl PIL release 1.1.2 -# 2002-03-15 fl PIL release 1.1.3 -# 2003-05-10 fl PIL release 1.1.4 -# 2005-03-28 fl PIL release 1.1.5 -# 2006-12-02 fl PIL release 1.1.6 -# 2009-11-15 fl PIL release 1.1.7 -# -# Copyright (c) 1997-2009 by Secret Labs AB. All rights reserved. 
-# Copyright (c) 1995-2009 by Fredrik Lundh. -# -# See the README file for information on usage and redistribution. -# - -import atexit -import builtins -import io -import logging -import math -import os -import re -import struct -import sys -import tempfile -import warnings -from collections.abc import Callable, MutableMapping -from enum import IntEnum -from pathlib import Path - -try: - import defusedxml.ElementTree as ElementTree -except ImportError: - ElementTree = None - -# VERSION was removed in Pillow 6.0.0. -# PILLOW_VERSION was removed in Pillow 9.0.0. -# Use __version__ instead. -from . import ( - ExifTags, - ImageMode, - TiffTags, - UnidentifiedImageError, - __version__, - _plugins, -) -from ._binary import i32le, o32be, o32le -from ._util import DeferredError, is_path - -logger = logging.getLogger(__name__) - - -class DecompressionBombWarning(RuntimeWarning): - pass - - -class DecompressionBombError(Exception): - pass - - -# Limit to around a quarter gigabyte for a 24-bit (3 bpp) image -MAX_IMAGE_PIXELS = int(1024 * 1024 * 1024 // 4 // 3) - - -try: - # If the _imaging C module is not present, Pillow will not load. - # Note that other modules should not refer to _imaging directly; - # import Image and use the Image.core variable instead. - # Also note that Image.core is not a publicly documented interface, - # and should be considered private and subject to change. - from . import _imaging as core - - if __version__ != getattr(core, "PILLOW_VERSION", None): - msg = ( - "The _imaging extension was built for another version of Pillow or PIL:\n" - f"Core version: {getattr(core, 'PILLOW_VERSION', None)}\n" - f"Pillow version: {__version__}" - ) - raise ImportError(msg) - -except ImportError as v: - core = DeferredError(ImportError("The _imaging C module is not installed.")) - # Explanations for ways that we know we might have an import error - if str(v).startswith("Module use of python"): - # The _imaging C module is present, but not compiled for - # the right version (windows only). Print a warning, if - # possible. - warnings.warn( - "The _imaging extension was built for another version of Python.", - RuntimeWarning, - ) - elif str(v).startswith("The _imaging extension"): - warnings.warn(str(v), RuntimeWarning) - # Fail here anyway. Don't let people run with a mostly broken Pillow. - # see docs/porting.rst - raise - - -USE_CFFI_ACCESS = False -try: - import cffi -except ImportError: - cffi = None - - -def isImageType(t): - """ - Checks if an object is an image object. - - .. warning:: - - This function is for internal use only. 
- - :param t: object to check if it's an image - :returns: True if the object is an image - """ - return hasattr(t, "im") - - -# -# Constants - - -# transpose -class Transpose(IntEnum): - FLIP_LEFT_RIGHT = 0 - FLIP_TOP_BOTTOM = 1 - ROTATE_90 = 2 - ROTATE_180 = 3 - ROTATE_270 = 4 - TRANSPOSE = 5 - TRANSVERSE = 6 - - -# transforms (also defined in Imaging.h) -class Transform(IntEnum): - AFFINE = 0 - EXTENT = 1 - PERSPECTIVE = 2 - QUAD = 3 - MESH = 4 - - -# resampling filters (also defined in Imaging.h) -class Resampling(IntEnum): - NEAREST = 0 - BOX = 4 - BILINEAR = 2 - HAMMING = 5 - BICUBIC = 3 - LANCZOS = 1 - - -_filters_support = { - Resampling.BOX: 0.5, - Resampling.BILINEAR: 1.0, - Resampling.HAMMING: 1.0, - Resampling.BICUBIC: 2.0, - Resampling.LANCZOS: 3.0, -} - - -# dithers -class Dither(IntEnum): - NONE = 0 - ORDERED = 1 # Not yet implemented - RASTERIZE = 2 # Not yet implemented - FLOYDSTEINBERG = 3 # default - - -# palettes/quantizers -class Palette(IntEnum): - WEB = 0 - ADAPTIVE = 1 - - -class Quantize(IntEnum): - MEDIANCUT = 0 - MAXCOVERAGE = 1 - FASTOCTREE = 2 - LIBIMAGEQUANT = 3 - - -module = sys.modules[__name__] -for enum in (Transpose, Transform, Resampling, Dither, Palette, Quantize): - for item in enum: - setattr(module, item.name, item.value) - - -if hasattr(core, "DEFAULT_STRATEGY"): - DEFAULT_STRATEGY = core.DEFAULT_STRATEGY - FILTERED = core.FILTERED - HUFFMAN_ONLY = core.HUFFMAN_ONLY - RLE = core.RLE - FIXED = core.FIXED - - -# -------------------------------------------------------------------- -# Registries - -ID = [] -OPEN = {} -MIME = {} -SAVE = {} -SAVE_ALL = {} -EXTENSION = {} -DECODERS = {} -ENCODERS = {} - -# -------------------------------------------------------------------- -# Modes - -_ENDIAN = "<" if sys.byteorder == "little" else ">" - - -def _conv_type_shape(im): - m = ImageMode.getmode(im.mode) - shape = (im.height, im.width) - extra = len(m.bands) - if extra != 1: - shape += (extra,) - return shape, m.typestr - - -MODES = ["1", "CMYK", "F", "HSV", "I", "L", "LAB", "P", "RGB", "RGBA", "RGBX", "YCbCr"] - -# raw modes that may be memory mapped. NOTE: if you change this, you -# may have to modify the stride calculation in map.c too! -_MAPMODES = ("L", "P", "RGBX", "RGBA", "CMYK", "I;16", "I;16L", "I;16B") - - -def getmodebase(mode): - """ - Gets the "base" mode for given mode. This function returns "L" for - images that contain grayscale data, and "RGB" for images that - contain color data. - - :param mode: Input mode. - :returns: "L" or "RGB". - :exception KeyError: If the input mode was not a standard mode. - """ - return ImageMode.getmode(mode).basemode - - -def getmodetype(mode): - """ - Gets the storage type mode. Given a mode, this function returns a - single-layer mode suitable for storing individual bands. - - :param mode: Input mode. - :returns: "L", "I", or "F". - :exception KeyError: If the input mode was not a standard mode. - """ - return ImageMode.getmode(mode).basetype - - -def getmodebandnames(mode): - """ - Gets a list of individual band names. Given a mode, this function returns - a tuple containing the names of individual bands (use - :py:method:`~PIL.Image.getmodetype` to get the mode used to store each - individual band. - - :param mode: Input mode. - :returns: A tuple containing band names. The length of the tuple - gives the number of bands in an image of the given mode. - :exception KeyError: If the input mode was not a standard mode. 
- """ - return ImageMode.getmode(mode).bands - - -def getmodebands(mode): - """ - Gets the number of individual bands for this mode. - - :param mode: Input mode. - :returns: The number of bands in this mode. - :exception KeyError: If the input mode was not a standard mode. - """ - return len(ImageMode.getmode(mode).bands) - - -# -------------------------------------------------------------------- -# Helpers - -_initialized = 0 - - -def preinit(): - """Explicitly load standard file format drivers.""" - - global _initialized - if _initialized >= 1: - return - - try: - from . import BmpImagePlugin - - assert BmpImagePlugin - except ImportError: - pass - try: - from . import GifImagePlugin - - assert GifImagePlugin - except ImportError: - pass - try: - from . import JpegImagePlugin - - assert JpegImagePlugin - except ImportError: - pass - try: - from . import PpmImagePlugin - - assert PpmImagePlugin - except ImportError: - pass - try: - from . import PngImagePlugin - - assert PngImagePlugin - except ImportError: - pass - # try: - # import TiffImagePlugin - # assert TiffImagePlugin - # except ImportError: - # pass - - _initialized = 1 - - -def init(): - """ - Explicitly initializes the Python Imaging Library. This function - loads all available file format drivers. - """ - - global _initialized - if _initialized >= 2: - return 0 - - for plugin in _plugins: - try: - logger.debug("Importing %s", plugin) - __import__(f"PIL.{plugin}", globals(), locals(), []) - except ImportError as e: - logger.debug("Image: failed to import %s: %s", plugin, e) - - if OPEN or SAVE: - _initialized = 2 - return 1 - - -# -------------------------------------------------------------------- -# Codec factories (used by tobytes/frombytes and ImageFile.load) - - -def _getdecoder(mode, decoder_name, args, extra=()): - # tweak arguments - if args is None: - args = () - elif not isinstance(args, tuple): - args = (args,) - - try: - decoder = DECODERS[decoder_name] - except KeyError: - pass - else: - return decoder(mode, *args + extra) - - try: - # get decoder - decoder = getattr(core, decoder_name + "_decoder") - except AttributeError as e: - msg = f"decoder {decoder_name} not available" - raise OSError(msg) from e - return decoder(mode, *args + extra) - - -def _getencoder(mode, encoder_name, args, extra=()): - # tweak arguments - if args is None: - args = () - elif not isinstance(args, tuple): - args = (args,) - - try: - encoder = ENCODERS[encoder_name] - except KeyError: - pass - else: - return encoder(mode, *args + extra) - - try: - # get encoder - encoder = getattr(core, encoder_name + "_encoder") - except AttributeError as e: - msg = f"encoder {encoder_name} not available" - raise OSError(msg) from e - return encoder(mode, *args + extra) - - -# -------------------------------------------------------------------- -# Simple expression analyzer - - -class _E: - def __init__(self, scale, offset): - self.scale = scale - self.offset = offset - - def __neg__(self): - return _E(-self.scale, -self.offset) - - def __add__(self, other): - if isinstance(other, _E): - return _E(self.scale + other.scale, self.offset + other.offset) - return _E(self.scale, self.offset + other) - - __radd__ = __add__ - - def __sub__(self, other): - return self + -other - - def __rsub__(self, other): - return other + -self - - def __mul__(self, other): - if isinstance(other, _E): - return NotImplemented - return _E(self.scale * other, self.offset * other) - - __rmul__ = __mul__ - - def __truediv__(self, other): - if isinstance(other, _E): - return 
NotImplemented - return _E(self.scale / other, self.offset / other) - - -def _getscaleoffset(expr): - a = expr(_E(1, 0)) - return (a.scale, a.offset) if isinstance(a, _E) else (0, a) - - -# -------------------------------------------------------------------- -# Implementation wrapper - - -class Image: - """ - This class represents an image object. To create - :py:class:`~PIL.Image.Image` objects, use the appropriate factory - functions. There's hardly ever any reason to call the Image constructor - directly. - - * :py:func:`~PIL.Image.open` - * :py:func:`~PIL.Image.new` - * :py:func:`~PIL.Image.frombytes` - """ - - format = None - format_description = None - _close_exclusive_fp_after_loading = True - - def __init__(self): - # FIXME: take "new" parameters / other image? - # FIXME: turn mode and size into delegating properties? - self.im = None - self.mode = "" - self._size = (0, 0) - self.palette = None - self.info = {} - self.readonly = 0 - self.pyaccess = None - self._exif = None - - @property - def width(self): - return self.size[0] - - @property - def height(self): - return self.size[1] - - @property - def size(self): - return self._size - - def _new(self, im): - new = Image() - new.im = im - new.mode = im.mode - new._size = im.size - if im.mode in ("P", "PA"): - if self.palette: - new.palette = self.palette.copy() - else: - from . import ImagePalette - - new.palette = ImagePalette.ImagePalette() - new.info = self.info.copy() - return new - - # Context manager support - def __enter__(self): - return self - - def __exit__(self, *args): - if hasattr(self, "fp") and getattr(self, "_exclusive_fp", False): - if getattr(self, "_fp", False): - if self._fp != self.fp: - self._fp.close() - self._fp = DeferredError(ValueError("Operation on closed image")) - if self.fp: - self.fp.close() - self.fp = None - - def close(self): - """ - Closes the file pointer, if possible. - - This operation will destroy the image core and release its memory. - The image data will be unusable afterward. - - This function is required to close images that have multiple frames or - have not had their file read and closed by the - :py:meth:`~PIL.Image.Image.load` method. See :ref:`file-handling` for - more information. - """ - try: - if getattr(self, "_fp", False): - if self._fp != self.fp: - self._fp.close() - self._fp = DeferredError(ValueError("Operation on closed image")) - if self.fp: - self.fp.close() - self.fp = None - except Exception as msg: - logger.debug("Error closing: %s", msg) - - if getattr(self, "map", None): - self.map = None - - # Instead of simply setting to None, we're setting up a - # deferred error that will better explain that the core image - # object is gone. - self.im = DeferredError(ValueError("Operation on closed image")) - - def _copy(self): - self.load() - self.im = self.im.copy() - self.pyaccess = None - self.readonly = 0 - - def _ensure_mutable(self): - if self.readonly: - self._copy() - else: - self.load() - - def _dump(self, file=None, format=None, **options): - suffix = "" - if format: - suffix = "." 
+ format - - if not file: - f, filename = tempfile.mkstemp(suffix) - os.close(f) - else: - filename = file - if not filename.endswith(suffix): - filename = filename + suffix - - self.load() - - if not format or format == "PPM": - self.im.save_ppm(filename) - else: - self.save(filename, format, **options) - - return filename - - def __eq__(self, other): - return ( - self.__class__ is other.__class__ - and self.mode == other.mode - and self.size == other.size - and self.info == other.info - and self.getpalette() == other.getpalette() - and self.tobytes() == other.tobytes() - ) - - def __repr__(self): - return "<%s.%s image mode=%s size=%dx%d at 0x%X>" % ( - self.__class__.__module__, - self.__class__.__name__, - self.mode, - self.size[0], - self.size[1], - id(self), - ) - - def _repr_pretty_(self, p, cycle): - """IPython plain text display support""" - - # Same as __repr__ but without unpredictable id(self), - # to keep Jupyter notebook `text/plain` output stable. - p.text( - "<%s.%s image mode=%s size=%dx%d>" - % ( - self.__class__.__module__, - self.__class__.__name__, - self.mode, - self.size[0], - self.size[1], - ) - ) - - def _repr_image(self, image_format, **kwargs): - """Helper function for iPython display hook. - - :param image_format: Image format. - :returns: image as bytes, saved into the given format. - """ - b = io.BytesIO() - try: - self.save(b, image_format, **kwargs) - except Exception as e: - msg = f"Could not save to {image_format} for display" - raise ValueError(msg) from e - return b.getvalue() - - def _repr_png_(self): - """iPython display hook support for PNG format. - - :returns: PNG version of the image as bytes - """ - return self._repr_image("PNG", compress_level=1) - - def _repr_jpeg_(self): - """iPython display hook support for JPEG format. - - :returns: JPEG version of the image as bytes - """ - return self._repr_image("JPEG") - - @property - def __array_interface__(self): - # numpy array interface support - new = {"version": 3} - try: - if self.mode == "1": - # Binary images need to be extended from bits to bytes - # See: https://github.com/python-pillow/Pillow/issues/350 - new["data"] = self.tobytes("raw", "L") - else: - new["data"] = self.tobytes() - except Exception as e: - if not isinstance(e, (MemoryError, RecursionError)): - try: - import numpy - from packaging.version import parse as parse_version - except ImportError: - pass - else: - if parse_version(numpy.__version__) < parse_version("1.23"): - warnings.warn(e) - raise - new["shape"], new["typestr"] = _conv_type_shape(self) - return new - - def __getstate__(self): - im_data = self.tobytes() # load image first - return [self.info, self.mode, self.size, self.getpalette(), im_data] - - def __setstate__(self, state): - Image.__init__(self) - info, mode, size, palette, data = state - self.info = info - self.mode = mode - self._size = size - self.im = core.new(mode, size) - if mode in ("L", "LA", "P", "PA") and palette: - self.putpalette(palette) - self.frombytes(data) - - def tobytes(self, encoder_name="raw", *args): - """ - Return image as a bytes object. - - .. warning:: - - This method returns the raw image data from the internal - storage. For compressed image data (e.g. PNG, JPEG) use - :meth:`~.save`, with a BytesIO parameter for in-memory - data. - - :param encoder_name: What encoder to use. The default is to - use the standard "raw" encoder. - - A list of C encoders can be seen under - codecs section of the function array in - :file:`_imaging.c`. 
Python encoders are - registered within the relevant plugins. - :param args: Extra arguments to the encoder. - :returns: A :py:class:`bytes` object. - """ - - # may pass tuple instead of argument list - if len(args) == 1 and isinstance(args[0], tuple): - args = args[0] - - if encoder_name == "raw" and args == (): - args = self.mode - - self.load() - - if self.width == 0 or self.height == 0: - return b"" - - # unpack data - e = _getencoder(self.mode, encoder_name, args) - e.setimage(self.im) - - bufsize = max(65536, self.size[0] * 4) # see RawEncode.c - - output = [] - while True: - bytes_consumed, errcode, data = e.encode(bufsize) - output.append(data) - if errcode: - break - if errcode < 0: - msg = f"encoder error {errcode} in tobytes" - raise RuntimeError(msg) - - return b"".join(output) - - def tobitmap(self, name="image"): - """ - Returns the image converted to an X11 bitmap. - - .. note:: This method only works for mode "1" images. - - :param name: The name prefix to use for the bitmap variables. - :returns: A string containing an X11 bitmap. - :raises ValueError: If the mode is not "1" - """ - - self.load() - if self.mode != "1": - msg = "not a bitmap" - raise ValueError(msg) - data = self.tobytes("xbm") - return b"".join( - [ - f"#define {name}_width {self.size[0]}\n".encode("ascii"), - f"#define {name}_height {self.size[1]}\n".encode("ascii"), - f"static char {name}_bits[] = {{\n".encode("ascii"), - data, - b"};", - ] - ) - - def frombytes(self, data, decoder_name="raw", *args): - """ - Loads this image with pixel data from a bytes object. - - This method is similar to the :py:func:`~PIL.Image.frombytes` function, - but loads data into this image instead of creating a new image object. - """ - - # may pass tuple instead of argument list - if len(args) == 1 and isinstance(args[0], tuple): - args = args[0] - - # default format - if decoder_name == "raw" and args == (): - args = self.mode - - # unpack data - d = _getdecoder(self.mode, decoder_name, args) - d.setimage(self.im) - s = d.decode(data) - - if s[0] >= 0: - msg = "not enough image data" - raise ValueError(msg) - if s[1] != 0: - msg = "cannot decode image data" - raise ValueError(msg) - - def load(self): - """ - Allocates storage for the image and loads the pixel data. In - normal cases, you don't need to call this method, since the - Image class automatically loads an opened image when it is - accessed for the first time. - - If the file associated with the image was opened by Pillow, then this - method will close it. The exception to this is if the image has - multiple frames, in which case the file will be left open for seek - operations. See :ref:`file-handling` for more information. - - :returns: An image access object. - :rtype: :ref:`PixelAccess` or :py:class:`PIL.PyAccess` - """ - if self.im is not None and self.palette and self.palette.dirty: - # realize palette - mode, arr = self.palette.getdata() - self.im.putpalette(mode, arr) - self.palette.dirty = 0 - self.palette.rawmode = None - if "transparency" in self.info and mode in ("LA", "PA"): - if isinstance(self.info["transparency"], int): - self.im.putpalettealpha(self.info["transparency"], 0) - else: - self.im.putpalettealphas(self.info["transparency"]) - self.palette.mode = "RGBA" - else: - palette_mode = "RGBA" if mode.startswith("RGBA") else "RGB" - self.palette.mode = palette_mode - self.palette.palette = self.im.getpalette(palette_mode, palette_mode) - - if self.im is not None: - if cffi and USE_CFFI_ACCESS: - if self.pyaccess: - return self.pyaccess - from . 
import PyAccess - - self.pyaccess = PyAccess.new(self, self.readonly) - if self.pyaccess: - return self.pyaccess - return self.im.pixel_access(self.readonly) - - def verify(self): - """ - Verifies the contents of a file. For data read from a file, this - method attempts to determine if the file is broken, without - actually decoding the image data. If this method finds any - problems, it raises suitable exceptions. If you need to load - the image after using this method, you must reopen the image - file. - """ - pass - - def convert( - self, mode=None, matrix=None, dither=None, palette=Palette.WEB, colors=256 - ): - """ - Returns a converted copy of this image. For the "P" mode, this - method translates pixels through the palette. If mode is - omitted, a mode is chosen so that all information in the image - and the palette can be represented without a palette. - - The current version supports all possible conversions between - "L", "RGB" and "CMYK". The ``matrix`` argument only supports "L" - and "RGB". - - When translating a color image to greyscale (mode "L"), - the library uses the ITU-R 601-2 luma transform:: - - L = R * 299/1000 + G * 587/1000 + B * 114/1000 - - The default method of converting a greyscale ("L") or "RGB" - image into a bilevel (mode "1") image uses Floyd-Steinberg - dither to approximate the original image luminosity levels. If - dither is ``None``, all values larger than 127 are set to 255 (white), - all other values to 0 (black). To use other thresholds, use the - :py:meth:`~PIL.Image.Image.point` method. - - When converting from "RGBA" to "P" without a ``matrix`` argument, - this passes the operation to :py:meth:`~PIL.Image.Image.quantize`, - and ``dither`` and ``palette`` are ignored. - - When converting from "PA", if an "RGBA" palette is present, the alpha - channel from the image will be used instead of the values from the palette. - - :param mode: The requested mode. See: :ref:`concept-modes`. - :param matrix: An optional conversion matrix. If given, this - should be 4- or 12-tuple containing floating point values. - :param dither: Dithering method, used when converting from - mode "RGB" to "P" or from "RGB" or "L" to "1". - Available methods are :data:`Dither.NONE` or :data:`Dither.FLOYDSTEINBERG` - (default). Note that this is not used when ``matrix`` is supplied. - :param palette: Palette to use when converting from mode "RGB" - to "P". Available palettes are :data:`Palette.WEB` or - :data:`Palette.ADAPTIVE`. - :param colors: Number of colors to use for the :data:`Palette.ADAPTIVE` - palette. Defaults to 256. - :rtype: :py:class:`~PIL.Image.Image` - :returns: An :py:class:`~PIL.Image.Image` object. 
- """ - - self.load() - - has_transparency = self.info.get("transparency") is not None - if not mode and self.mode == "P": - # determine default mode - if self.palette: - mode = self.palette.mode - else: - mode = "RGB" - if mode == "RGB" and has_transparency: - mode = "RGBA" - if not mode or (mode == self.mode and not matrix): - return self.copy() - - if matrix: - # matrix conversion - if mode not in ("L", "RGB"): - msg = "illegal conversion" - raise ValueError(msg) - im = self.im.convert_matrix(mode, matrix) - new = self._new(im) - if has_transparency and self.im.bands == 3: - transparency = new.info["transparency"] - - def convert_transparency(m, v): - v = m[0] * v[0] + m[1] * v[1] + m[2] * v[2] + m[3] * 0.5 - return max(0, min(255, int(v))) - - if mode == "L": - transparency = convert_transparency(matrix, transparency) - elif len(mode) == 3: - transparency = tuple( - convert_transparency(matrix[i * 4 : i * 4 + 4], transparency) - for i in range(0, len(transparency)) - ) - new.info["transparency"] = transparency - return new - - if mode == "P" and self.mode == "RGBA": - return self.quantize(colors) - - trns = None - delete_trns = False - # transparency handling - if has_transparency: - if (self.mode in ("1", "L", "I") and mode in ("LA", "RGBA")) or ( - self.mode == "RGB" and mode == "RGBA" - ): - # Use transparent conversion to promote from transparent - # color to an alpha channel. - new_im = self._new( - self.im.convert_transparent(mode, self.info["transparency"]) - ) - del new_im.info["transparency"] - return new_im - elif self.mode in ("L", "RGB", "P") and mode in ("L", "RGB", "P"): - t = self.info["transparency"] - if isinstance(t, bytes): - # Dragons. This can't be represented by a single color - warnings.warn( - "Palette images with Transparency expressed in bytes should be " - "converted to RGBA images" - ) - delete_trns = True - else: - # get the new transparency color. - # use existing conversions - trns_im = Image()._new(core.new(self.mode, (1, 1))) - if self.mode == "P": - trns_im.putpalette(self.palette) - if isinstance(t, tuple): - err = "Couldn't allocate a palette color for transparency" - try: - t = trns_im.palette.getcolor(t, self) - except ValueError as e: - if str(e) == "cannot allocate more than 256 colors": - # If all 256 colors are in use, - # then there is no need for transparency - t = None - else: - raise ValueError(err) from e - if t is None: - trns = None - else: - trns_im.putpixel((0, 0), t) - - if mode in ("L", "RGB"): - trns_im = trns_im.convert(mode) - else: - # can't just retrieve the palette number, got to do it - # after quantization. - trns_im = trns_im.convert("RGB") - trns = trns_im.getpixel((0, 0)) - - elif self.mode == "P" and mode in ("LA", "PA", "RGBA"): - t = self.info["transparency"] - delete_trns = True - - if isinstance(t, bytes): - self.im.putpalettealphas(t) - elif isinstance(t, int): - self.im.putpalettealpha(t, 0) - else: - msg = "Transparency for P mode should be bytes or int" - raise ValueError(msg) - - if mode == "P" and palette == Palette.ADAPTIVE: - im = self.im.quantize(colors) - new = self._new(im) - from . import ImagePalette - - new.palette = ImagePalette.ImagePalette("RGB", new.im.getpalette("RGB")) - if delete_trns: - # This could possibly happen if we requantize to fewer colors. - # The transparency would be totally off in that case. 
- del new.info["transparency"] - if trns is not None: - try: - new.info["transparency"] = new.palette.getcolor(trns, new) - except Exception: - # if we can't make a transparent color, don't leave the old - # transparency hanging around to mess us up. - del new.info["transparency"] - warnings.warn("Couldn't allocate palette entry for transparency") - return new - - if "LAB" in (self.mode, mode): - other_mode = mode if self.mode == "LAB" else self.mode - if other_mode in ("RGB", "RGBA", "RGBX"): - from . import ImageCms - - srgb = ImageCms.createProfile("sRGB") - lab = ImageCms.createProfile("LAB") - profiles = [lab, srgb] if self.mode == "LAB" else [srgb, lab] - transform = ImageCms.buildTransform( - profiles[0], profiles[1], self.mode, mode - ) - return transform.apply(self) - - # colorspace conversion - if dither is None: - dither = Dither.FLOYDSTEINBERG - - try: - im = self.im.convert(mode, dither) - except ValueError: - try: - # normalize source image and try again - modebase = getmodebase(self.mode) - if modebase == self.mode: - raise - im = self.im.convert(modebase) - im = im.convert(mode, dither) - except KeyError as e: - msg = "illegal conversion" - raise ValueError(msg) from e - - new_im = self._new(im) - if mode == "P" and palette != Palette.ADAPTIVE: - from . import ImagePalette - - new_im.palette = ImagePalette.ImagePalette("RGB", list(range(256)) * 3) - if delete_trns: - # crash fail if we leave a bytes transparency in an rgb/l mode. - del new_im.info["transparency"] - if trns is not None: - if new_im.mode == "P": - try: - new_im.info["transparency"] = new_im.palette.getcolor(trns, new_im) - except ValueError as e: - del new_im.info["transparency"] - if str(e) != "cannot allocate more than 256 colors": - # If all 256 colors are in use, - # then there is no need for transparency - warnings.warn( - "Couldn't allocate palette entry for transparency" - ) - else: - new_im.info["transparency"] = trns - return new_im - - def quantize( - self, - colors=256, - method=None, - kmeans=0, - palette=None, - dither=Dither.FLOYDSTEINBERG, - ): - """ - Convert the image to 'P' mode with the specified number - of colors. - - :param colors: The desired number of colors, <= 256 - :param method: :data:`Quantize.MEDIANCUT` (median cut), - :data:`Quantize.MAXCOVERAGE` (maximum coverage), - :data:`Quantize.FASTOCTREE` (fast octree), - :data:`Quantize.LIBIMAGEQUANT` (libimagequant; check support - using :py:func:`PIL.features.check_feature` with - ``feature="libimagequant"``). - - By default, :data:`Quantize.MEDIANCUT` will be used. - - The exception to this is RGBA images. :data:`Quantize.MEDIANCUT` - and :data:`Quantize.MAXCOVERAGE` do not support RGBA images, so - :data:`Quantize.FASTOCTREE` is used by default instead. - :param kmeans: Integer - :param palette: Quantize to the palette of given - :py:class:`PIL.Image.Image`. - :param dither: Dithering method, used when converting from - mode "RGB" to "P" or from "RGB" or "L" to "1". - Available methods are :data:`Dither.NONE` or :data:`Dither.FLOYDSTEINBERG` - (default). - :returns: A new image - """ - - self.load() - - if method is None: - # defaults: - method = Quantize.MEDIANCUT - if self.mode == "RGBA": - method = Quantize.FASTOCTREE - - if self.mode == "RGBA" and method not in ( - Quantize.FASTOCTREE, - Quantize.LIBIMAGEQUANT, - ): - # Caller specified an invalid mode. 
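# Usage sketch for Image.quantize() (illustrative only; "photo.png" is a
# hypothetical local file).
from PIL import Image

im = Image.open("photo.png").convert("RGB")
q = im.quantize(colors=64, method=Image.Quantize.MEDIANCUT)
print(q.mode)                                     # "P"
q_rgba = im.convert("RGBA").quantize(colors=64)   # RGBA falls back to FASTOCTREE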
- msg = ( - "Fast Octree (method == 2) and libimagequant (method == 3) " - "are the only valid methods for quantizing RGBA images" - ) - raise ValueError(msg) - - if palette: - # use palette from reference image - palette.load() - if palette.mode != "P": - msg = "bad mode for palette image" - raise ValueError(msg) - if self.mode != "RGB" and self.mode != "L": - msg = "only RGB or L mode images can be quantized to a palette" - raise ValueError(msg) - im = self.im.convert("P", dither, palette.im) - new_im = self._new(im) - new_im.palette = palette.palette.copy() - return new_im - - im = self._new(self.im.quantize(colors, method, kmeans)) - - from . import ImagePalette - - mode = im.im.getpalettemode() - palette = im.im.getpalette(mode, mode)[: colors * len(mode)] - im.palette = ImagePalette.ImagePalette(mode, palette) - - return im - - def copy(self): - """ - Copies this image. Use this method if you wish to paste things - into an image, but still retain the original. - - :rtype: :py:class:`~PIL.Image.Image` - :returns: An :py:class:`~PIL.Image.Image` object. - """ - self.load() - return self._new(self.im.copy()) - - __copy__ = copy - - def crop(self, box=None): - """ - Returns a rectangular region from this image. The box is a - 4-tuple defining the left, upper, right, and lower pixel - coordinate. See :ref:`coordinate-system`. - - Note: Prior to Pillow 3.4.0, this was a lazy operation. - - :param box: The crop rectangle, as a (left, upper, right, lower)-tuple. - :rtype: :py:class:`~PIL.Image.Image` - :returns: An :py:class:`~PIL.Image.Image` object. - """ - - if box is None: - return self.copy() - - if box[2] < box[0]: - msg = "Coordinate 'right' is less than 'left'" - raise ValueError(msg) - elif box[3] < box[1]: - msg = "Coordinate 'lower' is less than 'upper'" - raise ValueError(msg) - - self.load() - return self._new(self._crop(self.im, box)) - - def _crop(self, im, box): - """ - Returns a rectangular region from the core image object im. - - This is equivalent to calling im.crop((x0, y0, x1, y1)), but - includes additional sanity checks. - - :param im: a core image object - :param box: The crop rectangle, as a (left, upper, right, lower)-tuple. - :returns: A core image object. - """ - - x0, y0, x1, y1 = map(int, map(round, box)) - - absolute_values = (abs(x1 - x0), abs(y1 - y0)) - - _decompression_bomb_check(absolute_values) - - return im.crop((x0, y0, x1, y1)) - - def draft(self, mode, size): - """ - Configures the image file loader so it returns a version of the - image that as closely as possible matches the given mode and - size. For example, you can use this method to convert a color - JPEG to greyscale while loading it. - - If any changes are made, returns a tuple with the chosen ``mode`` and - ``box`` with coordinates of the original image within the altered one. - - Note that this method modifies the :py:class:`~PIL.Image.Image` object - in place. If the image has already been loaded, this method has no - effect. - - Note: This method is not implemented for most images. It is - currently implemented only for JPEG and MPO images. - - :param mode: The requested mode. - :param size: The requested size in pixels, as a 2-tuple: - (width, height). - """ - pass - - def _expand(self, xmargin, ymargin=None): - if ymargin is None: - ymargin = xmargin - self.load() - return self._new(self.im.expand(xmargin, ymargin)) - - def filter(self, filter): - """ - Filters this image using the given filter. For a list of - available filters, see the :py:mod:`~PIL.ImageFilter` module. 
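# Usage sketch for Image.crop(): the box is (left, upper, right, lower) with
# (0, 0) at the top-left corner ("photo.png" is a hypothetical file).
from PIL import Image

im = Image.open("photo.png")
tile = im.crop((10, 20, 110, 120))   # 100x100 region
print(tile.size)                     # (100, 100)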
- - :param filter: Filter kernel. - :returns: An :py:class:`~PIL.Image.Image` object.""" - - from . import ImageFilter - - self.load() - - if isinstance(filter, Callable): - filter = filter() - if not hasattr(filter, "filter"): - msg = "filter argument should be ImageFilter.Filter instance or class" - raise TypeError(msg) - - multiband = isinstance(filter, ImageFilter.MultibandFilter) - if self.im.bands == 1 or multiband: - return self._new(filter.filter(self.im)) - - ims = [] - for c in range(self.im.bands): - ims.append(self._new(filter.filter(self.im.getband(c)))) - return merge(self.mode, ims) - - def getbands(self): - """ - Returns a tuple containing the name of each band in this image. - For example, ``getbands`` on an RGB image returns ("R", "G", "B"). - - :returns: A tuple containing band names. - :rtype: tuple - """ - return ImageMode.getmode(self.mode).bands - - def getbbox(self, *, alpha_only=True): - """ - Calculates the bounding box of the non-zero regions in the - image. - - :param alpha_only: Optional flag, defaulting to ``True``. - If ``True`` and the image has an alpha channel, trim transparent pixels. - Otherwise, trim pixels when all channels are zero. - Keyword-only argument. - :returns: The bounding box is returned as a 4-tuple defining the - left, upper, right, and lower pixel coordinate. See - :ref:`coordinate-system`. If the image is completely empty, this - method returns None. - - """ - - self.load() - return self.im.getbbox(alpha_only) - - def getcolors(self, maxcolors=256): - """ - Returns a list of colors used in this image. - - The colors will be in the image's mode. For example, an RGB image will - return a tuple of (red, green, blue) color values, and a P image will - return the index of the color in the palette. - - :param maxcolors: Maximum number of colors. If this number is - exceeded, this method returns None. The default limit is - 256 colors. - :returns: An unsorted list of (count, pixel) values. - """ - - self.load() - if self.mode in ("1", "L", "P"): - h = self.im.histogram() - out = [] - for i in range(256): - if h[i]: - out.append((h[i], i)) - if len(out) > maxcolors: - return None - return out - return self.im.getcolors(maxcolors) - - def getdata(self, band=None): - """ - Returns the contents of this image as a sequence object - containing pixel values. The sequence object is flattened, so - that values for line one follow directly after the values of - line zero, and so on. - - Note that the sequence object returned by this method is an - internal PIL data type, which only supports certain sequence - operations. To convert it to an ordinary sequence (e.g. for - printing), use ``list(im.getdata())``. - - :param band: What band to return. The default is to return - all bands. To return a single band, pass in the index - value (e.g. 0 to get the "R" band from an "RGB" image). - :returns: A sequence-like object. - """ - - self.load() - if band is not None: - return self.im.getband(band) - return self.im # could be abused - - def getextrema(self): - """ - Gets the minimum and maximum pixel values for each band in - the image. - - :returns: For a single-band image, a 2-tuple containing the - minimum and maximum pixel value. For a multi-band image, - a tuple containing one 2-tuple for each band. 
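# Usage sketch for filter() and the inspection helpers above; "sprite.png" is
# a hypothetical image with an alpha channel.
from PIL import Image, ImageFilter

im = Image.open("sprite.png").convert("RGBA")
print(im.getbands())                 # ('R', 'G', 'B', 'A')
bbox = im.getbbox()                  # trims fully transparent border pixels
if bbox:
    im = im.crop(bbox)
blurred = im.filter(ImageFilter.GaussianBlur(2))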
- """ - - self.load() - if self.im.bands > 1: - extrema = [] - for i in range(self.im.bands): - extrema.append(self.im.getband(i).getextrema()) - return tuple(extrema) - return self.im.getextrema() - - def _getxmp(self, xmp_tags): - def get_name(tag): - return tag.split("}")[1] - - def get_value(element): - value = {get_name(k): v for k, v in element.attrib.items()} - children = list(element) - if children: - for child in children: - name = get_name(child.tag) - child_value = get_value(child) - if name in value: - if not isinstance(value[name], list): - value[name] = [value[name]] - value[name].append(child_value) - else: - value[name] = child_value - elif value: - if element.text: - value["text"] = element.text - else: - return element.text - return value - - if ElementTree is None: - warnings.warn("XMP data cannot be read without defusedxml dependency") - return {} - else: - root = ElementTree.fromstring(xmp_tags) - return {get_name(root.tag): get_value(root)} - - def getexif(self): - """ - Gets EXIF data from the image. - - :returns: an :py:class:`~PIL.Image.Exif` object. - """ - if self._exif is None: - self._exif = Exif() - self._exif._loaded = False - elif self._exif._loaded: - return self._exif - self._exif._loaded = True - - exif_info = self.info.get("exif") - if exif_info is None: - if "Raw profile type exif" in self.info: - exif_info = bytes.fromhex( - "".join(self.info["Raw profile type exif"].split("\n")[3:]) - ) - elif hasattr(self, "tag_v2"): - self._exif.bigtiff = self.tag_v2._bigtiff - self._exif.endian = self.tag_v2._endian - self._exif.load_from_fp(self.fp, self.tag_v2._offset) - if exif_info is not None: - self._exif.load(exif_info) - - # XMP tags - if ExifTags.Base.Orientation not in self._exif: - xmp_tags = self.info.get("XML:com.adobe.xmp") - if xmp_tags: - match = re.search(r'tiff:Orientation(="|>)([0-9])', xmp_tags) - if match: - self._exif[ExifTags.Base.Orientation] = int(match[2]) - - return self._exif - - def _reload_exif(self): - if self._exif is None or not self._exif._loaded: - return - self._exif._loaded = False - self.getexif() - - def get_child_images(self): - child_images = [] - exif = self.getexif() - ifds = [] - if ExifTags.Base.SubIFDs in exif: - subifd_offsets = exif[ExifTags.Base.SubIFDs] - if subifd_offsets: - if not isinstance(subifd_offsets, tuple): - subifd_offsets = (subifd_offsets,) - for subifd_offset in subifd_offsets: - ifds.append((exif._get_ifd_dict(subifd_offset), subifd_offset)) - ifd1 = exif.get_ifd(ExifTags.IFD.IFD1) - if ifd1 and ifd1.get(513): - ifds.append((ifd1, exif._info.next)) - - offset = None - for ifd, ifd_offset in ifds: - current_offset = self.fp.tell() - if offset is None: - offset = current_offset - - fp = self.fp - thumbnail_offset = ifd.get(513) - if thumbnail_offset is not None: - try: - thumbnail_offset += self._exif_offset - except AttributeError: - pass - self.fp.seek(thumbnail_offset) - data = self.fp.read(ifd.get(514)) - fp = io.BytesIO(data) - - with open(fp) as im: - if thumbnail_offset is None: - im._frame_pos = [ifd_offset] - im._seek(0) - im.load() - child_images.append(im) - - if offset is not None: - self.fp.seek(offset) - return child_images - - def getim(self): - """ - Returns a capsule that points to the internal image memory. - - :returns: A capsule object. - """ - - self.load() - return self.im.ptr - - def getpalette(self, rawmode="RGB"): - """ - Returns the image palette as a list. - - :param rawmode: The mode in which to return the palette. ``None`` will - return the palette in its current mode. 
- - .. versionadded:: 9.1.0 - - :returns: A list of color values [r, g, b, ...], or None if the - image has no palette. - """ - - self.load() - try: - mode = self.im.getpalettemode() - except ValueError: - return None # no palette - if rawmode is None: - rawmode = mode - return list(self.im.getpalette(mode, rawmode)) - - def apply_transparency(self): - """ - If a P mode image has a "transparency" key in the info dictionary, - remove the key and instead apply the transparency to the palette. - Otherwise, the image is unchanged. - """ - if self.mode != "P" or "transparency" not in self.info: - return - - from . import ImagePalette - - palette = self.getpalette("RGBA") - transparency = self.info["transparency"] - if isinstance(transparency, bytes): - for i, alpha in enumerate(transparency): - palette[i * 4 + 3] = alpha - else: - palette[transparency * 4 + 3] = 0 - self.palette = ImagePalette.ImagePalette("RGBA", bytes(palette)) - self.palette.dirty = 1 - - del self.info["transparency"] - - def getpixel(self, xy): - """ - Returns the pixel value at a given position. - - :param xy: The coordinate, given as (x, y). See - :ref:`coordinate-system`. - :returns: The pixel value. If the image is a multi-layer image, - this method returns a tuple. - """ - - self.load() - if self.pyaccess: - return self.pyaccess.getpixel(xy) - return self.im.getpixel(xy) - - def getprojection(self): - """ - Get projection to x and y axes - - :returns: Two sequences, indicating where there are non-zero - pixels along the X-axis and the Y-axis, respectively. - """ - - self.load() - x, y = self.im.getprojection() - return list(x), list(y) - - def histogram(self, mask=None, extrema=None): - """ - Returns a histogram for the image. The histogram is returned as a - list of pixel counts, one for each pixel value in the source - image. Counts are grouped into 256 bins for each band, even if - the image has more than 8 bits per band. If the image has more - than one band, the histograms for all bands are concatenated (for - example, the histogram for an "RGB" image contains 768 values). - - A bilevel image (mode "1") is treated as a greyscale ("L") image - by this method. - - If a mask is provided, the method returns a histogram for those - parts of the image where the mask image is non-zero. The mask - image must have the same size as the image, and be either a - bi-level image (mode "1") or a greyscale image ("L"). - - :param mask: An optional mask. - :param extrema: An optional tuple of manually-specified extrema. - :returns: A list containing pixel counts. - """ - self.load() - if mask: - mask.load() - return self.im.histogram((0, 0), mask.im) - if self.mode in ("I", "F"): - if extrema is None: - extrema = self.getextrema() - return self.im.histogram(extrema) - return self.im.histogram() - - def entropy(self, mask=None, extrema=None): - """ - Calculates and returns the entropy for the image. - - A bilevel image (mode "1") is treated as a greyscale ("L") - image by this method. - - If a mask is provided, the method employs the histogram for - those parts of the image where the mask image is non-zero. - The mask image must have the same size as the image, and be - either a bi-level image (mode "1") or a greyscale image ("L"). - - :param mask: An optional mask. - :param extrema: An optional tuple of manually-specified extrema. 
- :returns: A float value representing the image entropy - """ - self.load() - if mask: - mask.load() - return self.im.entropy((0, 0), mask.im) - if self.mode in ("I", "F"): - if extrema is None: - extrema = self.getextrema() - return self.im.entropy(extrema) - return self.im.entropy() - - def paste(self, im, box=None, mask=None): - """ - Pastes another image into this image. The box argument is either - a 2-tuple giving the upper left corner, a 4-tuple defining the - left, upper, right, and lower pixel coordinate, or None (same as - (0, 0)). See :ref:`coordinate-system`. If a 4-tuple is given, the size - of the pasted image must match the size of the region. - - If the modes don't match, the pasted image is converted to the mode of - this image (see the :py:meth:`~PIL.Image.Image.convert` method for - details). - - Instead of an image, the source can be a integer or tuple - containing pixel values. The method then fills the region - with the given color. When creating RGB images, you can - also use color strings as supported by the ImageColor module. - - If a mask is given, this method updates only the regions - indicated by the mask. You can use either "1", "L", "LA", "RGBA" - or "RGBa" images (if present, the alpha band is used as mask). - Where the mask is 255, the given image is copied as is. Where - the mask is 0, the current value is preserved. Intermediate - values will mix the two images together, including their alpha - channels if they have them. - - See :py:meth:`~PIL.Image.Image.alpha_composite` if you want to - combine images with respect to their alpha channels. - - :param im: Source image or pixel value (integer or tuple). - :param box: An optional 4-tuple giving the region to paste into. - If a 2-tuple is used instead, it's treated as the upper left - corner. If omitted or None, the source is pasted into the - upper left corner. - - If an image is given as the second argument and there is no - third, the box defaults to (0, 0), and the second argument - is interpreted as a mask image. - :param mask: An optional mask image. - """ - - if isImageType(box) and mask is None: - # abbreviated paste(im, mask) syntax - mask = box - box = None - - if box is None: - box = (0, 0) - - if len(box) == 2: - # upper left corner given; get size from image or mask - if isImageType(im): - size = im.size - elif isImageType(mask): - size = mask.size - else: - # FIXME: use self.size here? - msg = "cannot determine region size; use 4-item box" - raise ValueError(msg) - box += (box[0] + size[0], box[1] + size[1]) - - if isinstance(im, str): - from . import ImageColor - - im = ImageColor.getcolor(im, self.mode) - - elif isImageType(im): - im.load() - if self.mode != im.mode: - if self.mode != "RGB" or im.mode not in ("LA", "RGBA", "RGBa"): - # should use an adapter for this! - im = im.convert(self.mode) - im = im.im - - self._ensure_mutable() - - if mask: - mask.load() - self.im.paste(im, box, mask.im) - else: - self.im.paste(im, box) - - def alpha_composite(self, im, dest=(0, 0), source=(0, 0)): - """'In-place' analog of Image.alpha_composite. Composites an image - onto this image. - - :param im: image to composite over this one - :param dest: Optional 2 tuple (left, top) specifying the upper - left corner in this (destination) image. - :param source: Optional 2 (left, top) tuple for the upper left - corner in the overlay source image, or 4 tuple (left, top, right, - bottom) for the bounds of the source rectangle - - Performance Note: Not currently implemented in-place in the core layer. 
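# Usage sketch for paste() with a mask; "background.png" and "logo.png" are
# hypothetical files, and the logo's own alpha band is used as the mask.
from PIL import Image

bg = Image.open("background.png").convert("RGB")
logo = Image.open("logo.png").convert("RGBA")
bg.paste(logo, (10, 10), mask=logo)  # where alpha is 0, the background is kept
bg.save("composited.png")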
- """ - - if not isinstance(source, (list, tuple)): - msg = "Source must be a tuple" - raise ValueError(msg) - if not isinstance(dest, (list, tuple)): - msg = "Destination must be a tuple" - raise ValueError(msg) - if len(source) not in (2, 4): - msg = "Source must be a 2 or 4-tuple" - raise ValueError(msg) - if not len(dest) == 2: - msg = "Destination must be a 2-tuple" - raise ValueError(msg) - if min(source) < 0: - msg = "Source must be non-negative" - raise ValueError(msg) - - if len(source) == 2: - source = source + im.size - - # over image, crop if it's not the whole thing. - if source == (0, 0) + im.size: - overlay = im - else: - overlay = im.crop(source) - - # target for the paste - box = dest + (dest[0] + overlay.width, dest[1] + overlay.height) - - # destination image. don't copy if we're using the whole image. - if box == (0, 0) + self.size: - background = self - else: - background = self.crop(box) - - result = alpha_composite(background, overlay) - self.paste(result, box) - - def point(self, lut, mode=None): - """ - Maps this image through a lookup table or function. - - :param lut: A lookup table, containing 256 (or 65536 if - self.mode=="I" and mode == "L") values per band in the - image. A function can be used instead, it should take a - single argument. The function is called once for each - possible pixel value, and the resulting table is applied to - all bands of the image. - - It may also be an :py:class:`~PIL.Image.ImagePointHandler` - object:: - - class Example(Image.ImagePointHandler): - def point(self, data): - # Return result - :param mode: Output mode (default is same as input). In the - current version, this can only be used if the source image - has mode "L" or "P", and the output has mode "1" or the - source image mode is "I" and the output mode is "L". - :returns: An :py:class:`~PIL.Image.Image` object. - """ - - self.load() - - if isinstance(lut, ImagePointHandler): - return lut.point(self) - - if callable(lut): - # if it isn't a list, it should be a function - if self.mode in ("I", "I;16", "F"): - # check if the function can be used with point_transform - # UNDONE wiredfool -- I think this prevents us from ever doing - # a gamma function point transform on > 8bit images. - scale, offset = _getscaleoffset(lut) - return self._new(self.im.point_transform(scale, offset)) - # for other modes, convert the function to a table - lut = [lut(i) for i in range(256)] * self.im.bands - - if self.mode == "F": - # FIXME: _imaging returns a confusing error message for this case - msg = "point operation not supported for this mode" - raise ValueError(msg) - - if mode != "F": - lut = [round(i) for i in lut] - return self._new(self.im.point(lut, mode)) - - def putalpha(self, alpha): - """ - Adds or replaces the alpha layer in this image. If the image - does not have an alpha layer, it's converted to "LA" or "RGBA". - The new layer must be either "L" or "1". - - :param alpha: The new alpha layer. This can either be an "L" or "1" - image having the same size as this image, or an integer or - other color value. 
- """ - - self._ensure_mutable() - - if self.mode not in ("LA", "PA", "RGBA"): - # attempt to promote self to a matching alpha mode - try: - mode = getmodebase(self.mode) + "A" - try: - self.im.setmode(mode) - except (AttributeError, ValueError) as e: - # do things the hard way - im = self.im.convert(mode) - if im.mode not in ("LA", "PA", "RGBA"): - raise ValueError from e # sanity check - self.im = im - self.pyaccess = None - self.mode = self.im.mode - except KeyError as e: - msg = "illegal image mode" - raise ValueError(msg) from e - - if self.mode in ("LA", "PA"): - band = 1 - else: - band = 3 - - if isImageType(alpha): - # alpha layer - if alpha.mode not in ("1", "L"): - msg = "illegal image mode" - raise ValueError(msg) - alpha.load() - if alpha.mode == "1": - alpha = alpha.convert("L") - else: - # constant alpha - try: - self.im.fillband(band, alpha) - except (AttributeError, ValueError): - # do things the hard way - alpha = new("L", self.size, alpha) - else: - return - - self.im.putband(alpha.im, band) - - def putdata(self, data, scale=1.0, offset=0.0): - """ - Copies pixel data from a flattened sequence object into the image. The - values should start at the upper left corner (0, 0), continue to the - end of the line, followed directly by the first value of the second - line, and so on. Data will be read until either the image or the - sequence ends. The scale and offset values are used to adjust the - sequence values: **pixel = value*scale + offset**. - - :param data: A flattened sequence object. - :param scale: An optional scale value. The default is 1.0. - :param offset: An optional offset value. The default is 0.0. - """ - - self._ensure_mutable() - - self.im.putdata(data, scale, offset) - - def putpalette(self, data, rawmode="RGB"): - """ - Attaches a palette to this image. The image must be a "P", "PA", "L" - or "LA" image. - - The palette sequence must contain at most 256 colors, made up of one - integer value for each channel in the raw mode. - For example, if the raw mode is "RGB", then it can contain at most 768 - values, made up of red, green and blue values for the corresponding pixel - index in the 256 colors. - If the raw mode is "RGBA", then it can contain at most 1024 values, - containing red, green, blue and alpha values. - - Alternatively, an 8-bit string may be used instead of an integer sequence. - - :param data: A palette sequence (either a list or a string). - :param rawmode: The raw mode of the palette. Either "RGB", "RGBA", or a mode - that can be transformed to "RGB" or "RGBA" (e.g. "R", "BGR;15", "RGBA;L"). - """ - from . import ImagePalette - - if self.mode not in ("L", "LA", "P", "PA"): - msg = "illegal image mode" - raise ValueError(msg) - if isinstance(data, ImagePalette.ImagePalette): - palette = ImagePalette.raw(data.rawmode, data.palette) - else: - if not isinstance(data, bytes): - data = bytes(data) - palette = ImagePalette.raw(rawmode, data) - self.mode = "PA" if "A" in self.mode else "P" - self.palette = palette - self.palette.mode = "RGB" - self.load() # install new palette - - def putpixel(self, xy, value): - """ - Modifies the pixel at the given position. The color is given as - a single numerical value for single-band images, and a tuple for - multi-band images. In addition to this, RGB and RGBA tuples are - accepted for P and PA images. - - Note that this method is relatively slow. For more extensive changes, - use :py:meth:`~PIL.Image.Image.paste` or the :py:mod:`~PIL.ImageDraw` - module instead. 
- - See: - - * :py:meth:`~PIL.Image.Image.paste` - * :py:meth:`~PIL.Image.Image.putdata` - * :py:mod:`~PIL.ImageDraw` - - :param xy: The pixel coordinate, given as (x, y). See - :ref:`coordinate-system`. - :param value: The pixel value. - """ - - if self.readonly: - self._copy() - self.load() - - if self.pyaccess: - return self.pyaccess.putpixel(xy, value) - - if ( - self.mode in ("P", "PA") - and isinstance(value, (list, tuple)) - and len(value) in [3, 4] - ): - # RGB or RGBA value for a P or PA image - if self.mode == "PA": - alpha = value[3] if len(value) == 4 else 255 - value = value[:3] - value = self.palette.getcolor(value, self) - if self.mode == "PA": - value = (value, alpha) - return self.im.putpixel(xy, value) - - def remap_palette(self, dest_map, source_palette=None): - """ - Rewrites the image to reorder the palette. - - :param dest_map: A list of indexes into the original palette. - e.g. ``[1,0]`` would swap a two item palette, and ``list(range(256))`` - is the identity transform. - :param source_palette: Bytes or None. - :returns: An :py:class:`~PIL.Image.Image` object. - - """ - from . import ImagePalette - - if self.mode not in ("L", "P"): - msg = "illegal image mode" - raise ValueError(msg) - - bands = 3 - palette_mode = "RGB" - if source_palette is None: - if self.mode == "P": - self.load() - palette_mode = self.im.getpalettemode() - if palette_mode == "RGBA": - bands = 4 - source_palette = self.im.getpalette(palette_mode, palette_mode) - else: # L-mode - source_palette = bytearray(i // 3 for i in range(768)) - - palette_bytes = b"" - new_positions = [0] * 256 - - # pick only the used colors from the palette - for i, oldPosition in enumerate(dest_map): - palette_bytes += source_palette[ - oldPosition * bands : oldPosition * bands + bands - ] - new_positions[oldPosition] = i - - # replace the palette color id of all pixel with the new id - - # Palette images are [0..255], mapped through a 1 or 3 - # byte/color map. We need to remap the whole image - # from palette 1 to palette 2. New_positions is - # an array of indexes into palette 1. Palette 2 is - # palette 1 with any holes removed. - - # We're going to leverage the convert mechanism to use the - # C code to remap the image from palette 1 to palette 2, - # by forcing the source image into 'L' mode and adding a - # mapping 'L' mode palette, then converting back to 'L' - # sans palette thus converting the image bytes, then - # assigning the optimized RGB palette. - - # perf reference, 9500x4000 gif, w/~135 colors - # 14 sec prepatch, 1 sec postpatch with optimization forced. - - mapping_palette = bytearray(new_positions) - - m_im = self.copy() - m_im.mode = "P" - - m_im.palette = ImagePalette.ImagePalette( - palette_mode, palette=mapping_palette * bands - ) - # possibly set palette dirty, then - # m_im.putpalette(mapping_palette, 'L') # converts to 'P' - # or just force it. 
- # UNDONE -- this is part of the general issue with palettes - m_im.im.putpalette(palette_mode + ";L", m_im.palette.tobytes()) - - m_im = m_im.convert("L") - - m_im.putpalette(palette_bytes, palette_mode) - m_im.palette = ImagePalette.ImagePalette(palette_mode, palette=palette_bytes) - - if "transparency" in self.info: - try: - m_im.info["transparency"] = dest_map.index(self.info["transparency"]) - except ValueError: - if "transparency" in m_im.info: - del m_im.info["transparency"] - - return m_im - - def _get_safe_box(self, size, resample, box): - """Expands the box so it includes adjacent pixels - that may be used by resampling with the given resampling filter. - """ - filter_support = _filters_support[resample] - 0.5 - scale_x = (box[2] - box[0]) / size[0] - scale_y = (box[3] - box[1]) / size[1] - support_x = filter_support * scale_x - support_y = filter_support * scale_y - - return ( - max(0, int(box[0] - support_x)), - max(0, int(box[1] - support_y)), - min(self.size[0], math.ceil(box[2] + support_x)), - min(self.size[1], math.ceil(box[3] + support_y)), - ) - - def resize(self, size, resample=None, box=None, reducing_gap=None): - """ - Returns a resized copy of this image. - - :param size: The requested size in pixels, as a 2-tuple: - (width, height). - :param resample: An optional resampling filter. This can be - one of :py:data:`Resampling.NEAREST`, :py:data:`Resampling.BOX`, - :py:data:`Resampling.BILINEAR`, :py:data:`Resampling.HAMMING`, - :py:data:`Resampling.BICUBIC` or :py:data:`Resampling.LANCZOS`. - If the image has mode "1" or "P", it is always set to - :py:data:`Resampling.NEAREST`. If the image mode specifies a number - of bits, such as "I;16", then the default filter is - :py:data:`Resampling.NEAREST`. Otherwise, the default filter is - :py:data:`Resampling.BICUBIC`. See: :ref:`concept-filters`. - :param box: An optional 4-tuple of floats providing - the source image region to be scaled. - The values must be within (0, 0, width, height) rectangle. - If omitted or None, the entire source is used. - :param reducing_gap: Apply optimization by resizing the image - in two steps. First, reducing the image by integer times - using :py:meth:`~PIL.Image.Image.reduce`. - Second, resizing using regular resampling. The last step - changes size no less than by ``reducing_gap`` times. - ``reducing_gap`` may be None (no first step is performed) - or should be greater than 1.0. The bigger ``reducing_gap``, - the closer the result to the fair resampling. - The smaller ``reducing_gap``, the faster resizing. - With ``reducing_gap`` greater or equal to 3.0, the result is - indistinguishable from fair resampling in most cases. - The default value is None (no optimization). - :returns: An :py:class:`~PIL.Image.Image` object. - """ - - if resample is None: - type_special = ";" in self.mode - resample = Resampling.NEAREST if type_special else Resampling.BICUBIC - elif resample not in ( - Resampling.NEAREST, - Resampling.BILINEAR, - Resampling.BICUBIC, - Resampling.LANCZOS, - Resampling.BOX, - Resampling.HAMMING, - ): - msg = f"Unknown resampling filter ({resample})." 
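# Usage sketch for resize(); "photo.png" is hypothetical and the box values
# assume the source image is at least 740x500 pixels.
from PIL import Image

im = Image.open("photo.png")
small = im.resize((320, 200), resample=Image.Resampling.LANCZOS)
detail = im.resize((320, 200), box=(100, 100, 740, 500), reducing_gap=3.0)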
- - filters = [ - f"{filter[1]} ({filter[0]})" - for filter in ( - (Resampling.NEAREST, "Image.Resampling.NEAREST"), - (Resampling.LANCZOS, "Image.Resampling.LANCZOS"), - (Resampling.BILINEAR, "Image.Resampling.BILINEAR"), - (Resampling.BICUBIC, "Image.Resampling.BICUBIC"), - (Resampling.BOX, "Image.Resampling.BOX"), - (Resampling.HAMMING, "Image.Resampling.HAMMING"), - ) - ] - msg += " Use " + ", ".join(filters[:-1]) + " or " + filters[-1] - raise ValueError(msg) - - if reducing_gap is not None and reducing_gap < 1.0: - msg = "reducing_gap must be 1.0 or greater" - raise ValueError(msg) - - size = tuple(size) - - self.load() - if box is None: - box = (0, 0) + self.size - else: - box = tuple(box) - - if self.size == size and box == (0, 0) + self.size: - return self.copy() - - if self.mode in ("1", "P"): - resample = Resampling.NEAREST - - if self.mode in ["LA", "RGBA"] and resample != Resampling.NEAREST: - im = self.convert({"LA": "La", "RGBA": "RGBa"}[self.mode]) - im = im.resize(size, resample, box) - return im.convert(self.mode) - - self.load() - - if reducing_gap is not None and resample != Resampling.NEAREST: - factor_x = int((box[2] - box[0]) / size[0] / reducing_gap) or 1 - factor_y = int((box[3] - box[1]) / size[1] / reducing_gap) or 1 - if factor_x > 1 or factor_y > 1: - reduce_box = self._get_safe_box(size, resample, box) - factor = (factor_x, factor_y) - if callable(self.reduce): - self = self.reduce(factor, box=reduce_box) - else: - self = Image.reduce(self, factor, box=reduce_box) - box = ( - (box[0] - reduce_box[0]) / factor_x, - (box[1] - reduce_box[1]) / factor_y, - (box[2] - reduce_box[0]) / factor_x, - (box[3] - reduce_box[1]) / factor_y, - ) - - return self._new(self.im.resize(size, resample, box)) - - def reduce(self, factor, box=None): - """ - Returns a copy of the image reduced ``factor`` times. - If the size of the image is not dividable by ``factor``, - the resulting size will be rounded up. - - :param factor: A greater than 0 integer or tuple of two integers - for width and height separately. - :param box: An optional 4-tuple of ints providing - the source image region to be reduced. - The values must be within ``(0, 0, width, height)`` rectangle. - If omitted or ``None``, the entire source is used. - """ - if not isinstance(factor, (list, tuple)): - factor = (factor, factor) - - if box is None: - box = (0, 0) + self.size - else: - box = tuple(box) - - if factor == (1, 1) and box == (0, 0) + self.size: - return self.copy() - - if self.mode in ["LA", "RGBA"]: - im = self.convert({"LA": "La", "RGBA": "RGBa"}[self.mode]) - im = im.reduce(factor, box) - return im.convert(self.mode) - - self.load() - - return self._new(self.im.reduce(factor, box)) - - def rotate( - self, - angle, - resample=Resampling.NEAREST, - expand=0, - center=None, - translate=None, - fillcolor=None, - ): - """ - Returns a rotated copy of this image. This method returns a - copy of this image, rotated the given number of degrees counter - clockwise around its centre. - - :param angle: In degrees counter clockwise. - :param resample: An optional resampling filter. This can be - one of :py:data:`Resampling.NEAREST` (use nearest neighbour), - :py:data:`Resampling.BILINEAR` (linear interpolation in a 2x2 - environment), or :py:data:`Resampling.BICUBIC` (cubic spline - interpolation in a 4x4 environment). If omitted, or if the image has - mode "1" or "P", it is set to :py:data:`Resampling.NEAREST`. - See :ref:`concept-filters`. - :param expand: Optional expansion flag. 
If true, expands the output - image to make it large enough to hold the entire rotated image. - If false or omitted, make the output image the same size as the - input image. Note that the expand flag assumes rotation around - the center and no translation. - :param center: Optional center of rotation (a 2-tuple). Origin is - the upper left corner. Default is the center of the image. - :param translate: An optional post-rotate translation (a 2-tuple). - :param fillcolor: An optional color for area outside the rotated image. - :returns: An :py:class:`~PIL.Image.Image` object. - """ - - angle = angle % 360.0 - - # Fast paths regardless of filter, as long as we're not - # translating or changing the center. - if not (center or translate): - if angle == 0: - return self.copy() - if angle == 180: - return self.transpose(Transpose.ROTATE_180) - if angle in (90, 270) and (expand or self.width == self.height): - return self.transpose( - Transpose.ROTATE_90 if angle == 90 else Transpose.ROTATE_270 - ) - - # Calculate the affine matrix. Note that this is the reverse - # transformation (from destination image to source) because we - # want to interpolate the (discrete) destination pixel from - # the local area around the (floating) source pixel. - - # The matrix we actually want (note that it operates from the right): - # (1, 0, tx) (1, 0, cx) ( cos a, sin a, 0) (1, 0, -cx) - # (0, 1, ty) * (0, 1, cy) * (-sin a, cos a, 0) * (0, 1, -cy) - # (0, 0, 1) (0, 0, 1) ( 0, 0, 1) (0, 0, 1) - - # The reverse matrix is thus: - # (1, 0, cx) ( cos -a, sin -a, 0) (1, 0, -cx) (1, 0, -tx) - # (0, 1, cy) * (-sin -a, cos -a, 0) * (0, 1, -cy) * (0, 1, -ty) - # (0, 0, 1) ( 0, 0, 1) (0, 0, 1) (0, 0, 1) - - # In any case, the final translation may be updated at the end to - # compensate for the expand flag. - - w, h = self.size - - if translate is None: - post_trans = (0, 0) - else: - post_trans = translate - if center is None: - # FIXME These should be rounded to ints? - rotn_center = (w / 2.0, h / 2.0) - else: - rotn_center = center - - angle = -math.radians(angle) - matrix = [ - round(math.cos(angle), 15), - round(math.sin(angle), 15), - 0.0, - round(-math.sin(angle), 15), - round(math.cos(angle), 15), - 0.0, - ] - - def transform(x, y, matrix): - (a, b, c, d, e, f) = matrix - return a * x + b * y + c, d * x + e * y + f - - matrix[2], matrix[5] = transform( - -rotn_center[0] - post_trans[0], -rotn_center[1] - post_trans[1], matrix - ) - matrix[2] += rotn_center[0] - matrix[5] += rotn_center[1] - - if expand: - # calculate output size - xx = [] - yy = [] - for x, y in ((0, 0), (w, 0), (w, h), (0, h)): - x, y = transform(x, y, matrix) - xx.append(x) - yy.append(y) - nw = math.ceil(max(xx)) - math.floor(min(xx)) - nh = math.ceil(max(yy)) - math.floor(min(yy)) - - # We multiply a translation matrix from the right. Because of its - # special form, this is the same as taking the image of the - # translation vector as new translation vector. - matrix[2], matrix[5] = transform(-(nw - w) / 2.0, -(nh - h) / 2.0, matrix) - w, h = nw, nh - - return self.transform( - (w, h), Transform.AFFINE, matrix, resample, fillcolor=fillcolor - ) - - def save(self, fp, format=None, **params): - """ - Saves this image under the given filename. If no format is - specified, the format to use is determined from the filename - extension, if possible. - - Keyword options can be used to provide additional instructions - to the writer. If a writer doesn't recognise an option, it is - silently ignored. 
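# Usage sketch for rotate(); "photo.png" is hypothetical. expand=True grows the
# canvas to hold the whole rotated image, fillcolor paints the uncovered corners.
from PIL import Image

im = Image.open("photo.png").convert("RGB")
turned = im.rotate(45, resample=Image.Resampling.BICUBIC, expand=True,
                   fillcolor=(255, 255, 255))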
The available options are described in the - :doc:`image format documentation - <../handbook/image-file-formats>` for each writer. - - You can use a file object instead of a filename. In this case, - you must always specify the format. The file object must - implement the ``seek``, ``tell``, and ``write`` - methods, and be opened in binary mode. - - :param fp: A filename (string), pathlib.Path object or file object. - :param format: Optional format override. If omitted, the - format to use is determined from the filename extension. - If a file object was used instead of a filename, this - parameter should always be used. - :param params: Extra parameters to the image writer. - :returns: None - :exception ValueError: If the output format could not be determined - from the file name. Use the format option to solve this. - :exception OSError: If the file could not be written. The file - may have been created, and may contain partial data. - """ - - filename = "" - open_fp = False - if isinstance(fp, Path): - filename = str(fp) - open_fp = True - elif is_path(fp): - filename = fp - open_fp = True - elif fp == sys.stdout: - try: - fp = sys.stdout.buffer - except AttributeError: - pass - if not filename and hasattr(fp, "name") and is_path(fp.name): - # only set the name for metadata purposes - filename = fp.name - - # may mutate self! - self._ensure_mutable() - - save_all = params.pop("save_all", False) - self.encoderinfo = params - self.encoderconfig = () - - preinit() - - ext = os.path.splitext(filename)[1].lower() - - if not format: - if ext not in EXTENSION: - init() - try: - format = EXTENSION[ext] - except KeyError as e: - msg = f"unknown file extension: {ext}" - raise ValueError(msg) from e - - if format.upper() not in SAVE: - init() - if save_all: - save_handler = SAVE_ALL[format.upper()] - else: - save_handler = SAVE[format.upper()] - - created = False - if open_fp: - created = not os.path.exists(filename) - if params.get("append", False): - # Open also for reading ("+"), because TIFF save_all - # writer needs to go back and edit the written data. - fp = builtins.open(filename, "r+b") - else: - fp = builtins.open(filename, "w+b") - - try: - save_handler(self, fp, filename) - except Exception: - if open_fp: - fp.close() - if created: - try: - os.remove(filename) - except PermissionError: - pass - raise - if open_fp: - fp.close() - - def seek(self, frame): - """ - Seeks to the given frame in this sequence file. If you seek - beyond the end of the sequence, the method raises an - ``EOFError`` exception. When a sequence file is opened, the - library automatically seeks to frame 0. - - See :py:meth:`~PIL.Image.Image.tell`. - - If defined, :attr:`~PIL.Image.Image.n_frames` refers to the - number of available frames. - - :param frame: Frame number, starting at 0. - :exception EOFError: If the call attempts to seek beyond the end - of the sequence. - """ - - # overridden by file handlers - if frame != 0: - raise EOFError - - def show(self, title=None): - """ - Displays this image. This method is mainly intended for debugging purposes. - - This method calls :py:func:`PIL.ImageShow.show` internally. You can use - :py:func:`PIL.ImageShow.register` to override its default behaviour. - - The image is first saved to a temporary file. By default, it will be in - PNG format. - - On Unix, the image is then opened using the **xdg-open**, **display**, - **gm**, **eog** or **xv** utility, depending on which one can be found. - - On macOS, the image is opened with the native Preview application. 
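# Usage sketch for save(); filenames are hypothetical. The format is inferred
# from the extension for paths, but must be passed explicitly for file objects.
import io

from PIL import Image

im = Image.open("photo.png").convert("RGB")
im.save("photo.jpg", quality=90)     # format inferred from ".jpg"

buf = io.BytesIO()
im.save(buf, format="PNG")           # file object: format is required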
- - On Windows, the image is opened with the standard PNG display utility. - - :param title: Optional title to use for the image window, where possible. - """ - - _show(self, title=title) - - def split(self): - """ - Split this image into individual bands. This method returns a - tuple of individual image bands from an image. For example, - splitting an "RGB" image creates three new images each - containing a copy of one of the original bands (red, green, - blue). - - If you need only one band, :py:meth:`~PIL.Image.Image.getchannel` - method can be more convenient and faster. - - :returns: A tuple containing bands. - """ - - self.load() - if self.im.bands == 1: - ims = [self.copy()] - else: - ims = map(self._new, self.im.split()) - return tuple(ims) - - def getchannel(self, channel): - """ - Returns an image containing a single channel of the source image. - - :param channel: What channel to return. Could be index - (0 for "R" channel of "RGB") or channel name - ("A" for alpha channel of "RGBA"). - :returns: An image in "L" mode. - - .. versionadded:: 4.3.0 - """ - self.load() - - if isinstance(channel, str): - try: - channel = self.getbands().index(channel) - except ValueError as e: - msg = f'The image has no channel "{channel}"' - raise ValueError(msg) from e - - return self._new(self.im.getband(channel)) - - def tell(self): - """ - Returns the current frame number. See :py:meth:`~PIL.Image.Image.seek`. - - If defined, :attr:`~PIL.Image.Image.n_frames` refers to the - number of available frames. - - :returns: Frame number, starting with 0. - """ - return 0 - - def thumbnail(self, size, resample=Resampling.BICUBIC, reducing_gap=2.0): - """ - Make this image into a thumbnail. This method modifies the - image to contain a thumbnail version of itself, no larger than - the given size. This method calculates an appropriate thumbnail - size to preserve the aspect of the image, calls the - :py:meth:`~PIL.Image.Image.draft` method to configure the file reader - (where applicable), and finally resizes the image. - - Note that this function modifies the :py:class:`~PIL.Image.Image` - object in place. If you need to use the full resolution image as well, - apply this method to a :py:meth:`~PIL.Image.Image.copy` of the original - image. - - :param size: The requested size in pixels, as a 2-tuple: - (width, height). - :param resample: Optional resampling filter. This can be one - of :py:data:`Resampling.NEAREST`, :py:data:`Resampling.BOX`, - :py:data:`Resampling.BILINEAR`, :py:data:`Resampling.HAMMING`, - :py:data:`Resampling.BICUBIC` or :py:data:`Resampling.LANCZOS`. - If omitted, it defaults to :py:data:`Resampling.BICUBIC`. - (was :py:data:`Resampling.NEAREST` prior to version 2.5.0). - See: :ref:`concept-filters`. - :param reducing_gap: Apply optimization by resizing the image - in two steps. First, reducing the image by integer times - using :py:meth:`~PIL.Image.Image.reduce` or - :py:meth:`~PIL.Image.Image.draft` for JPEG images. - Second, resizing using regular resampling. The last step - changes size no less than by ``reducing_gap`` times. - ``reducing_gap`` may be None (no first step is performed) - or should be greater than 1.0. The bigger ``reducing_gap``, - the closer the result to the fair resampling. - The smaller ``reducing_gap``, the faster resizing. - With ``reducing_gap`` greater or equal to 3.0, the result is - indistinguishable from fair resampling in most cases. - The default value is 2.0 (very close to fair resampling - while still being faster in many cases). 
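# Usage sketch for thumbnail(); "photo.png" is hypothetical. thumbnail()
# modifies the image in place, so work on a copy if the original is still needed.
from PIL import Image

im = Image.open("photo.png")
thumb = im.copy()
thumb.thumbnail((128, 128))          # at most 128x128, aspect ratio preserved
thumb.save("photo_thumb.png")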
- :returns: None - """ - - provided_size = tuple(map(math.floor, size)) - - def preserve_aspect_ratio(): - def round_aspect(number, key): - return max(min(math.floor(number), math.ceil(number), key=key), 1) - - x, y = provided_size - if x >= self.width and y >= self.height: - return - - aspect = self.width / self.height - if x / y >= aspect: - x = round_aspect(y * aspect, key=lambda n: abs(aspect - n / y)) - else: - y = round_aspect( - x / aspect, key=lambda n: 0 if n == 0 else abs(aspect - x / n) - ) - return x, y - - box = None - if reducing_gap is not None: - size = preserve_aspect_ratio() - if size is None: - return - - res = self.draft(None, (size[0] * reducing_gap, size[1] * reducing_gap)) - if res is not None: - box = res[1] - if box is None: - self.load() - - # load() may have changed the size of the image - size = preserve_aspect_ratio() - if size is None: - return - - if self.size != size: - im = self.resize(size, resample, box=box, reducing_gap=reducing_gap) - - self.im = im.im - self._size = size - self.mode = self.im.mode - - self.readonly = 0 - self.pyaccess = None - - # FIXME: the different transform methods need further explanation - # instead of bloating the method docs, add a separate chapter. - def transform( - self, - size, - method, - data=None, - resample=Resampling.NEAREST, - fill=1, - fillcolor=None, - ): - """ - Transforms this image. This method creates a new image with the - given size, and the same mode as the original, and copies data - to the new image using the given transform. - - :param size: The output size in pixels, as a 2-tuple: - (width, height). - :param method: The transformation method. This is one of - :py:data:`Transform.EXTENT` (cut out a rectangular subregion), - :py:data:`Transform.AFFINE` (affine transform), - :py:data:`Transform.PERSPECTIVE` (perspective transform), - :py:data:`Transform.QUAD` (map a quadrilateral to a rectangle), or - :py:data:`Transform.MESH` (map a number of source quadrilaterals - in one operation). - - It may also be an :py:class:`~PIL.Image.ImageTransformHandler` - object:: - - class Example(Image.ImageTransformHandler): - def transform(self, size, data, resample, fill=1): - # Return result - - It may also be an object with a ``method.getdata`` method - that returns a tuple supplying new ``method`` and ``data`` values:: - - class Example: - def getdata(self): - method = Image.Transform.EXTENT - data = (0, 0, 100, 100) - return method, data - :param data: Extra data to the transformation method. - :param resample: Optional resampling filter. It can be one of - :py:data:`Resampling.NEAREST` (use nearest neighbour), - :py:data:`Resampling.BILINEAR` (linear interpolation in a 2x2 - environment), or :py:data:`Resampling.BICUBIC` (cubic spline - interpolation in a 4x4 environment). If omitted, or if the image - has mode "1" or "P", it is set to :py:data:`Resampling.NEAREST`. - See: :ref:`concept-filters`. - :param fill: If ``method`` is an - :py:class:`~PIL.Image.ImageTransformHandler` object, this is one of - the arguments passed to it. Otherwise, it is unused. - :param fillcolor: Optional fill color for the area outside the - transform in the output image. - :returns: An :py:class:`~PIL.Image.Image` object. 
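# Usage sketch for transform() with Transform.EXTENT; "photo.png" is
# hypothetical. The data 4-tuple selects a source rectangle that is stretched
# onto the requested output size.
from PIL import Image

im = Image.open("photo.png")
zoomed = im.transform((400, 300), Image.Transform.EXTENT, (50, 50, 250, 200),
                      resample=Image.Resampling.BILINEAR)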
- """ - - if self.mode in ("LA", "RGBA") and resample != Resampling.NEAREST: - return ( - self.convert({"LA": "La", "RGBA": "RGBa"}[self.mode]) - .transform(size, method, data, resample, fill, fillcolor) - .convert(self.mode) - ) - - if isinstance(method, ImageTransformHandler): - return method.transform(size, self, resample=resample, fill=fill) - - if hasattr(method, "getdata"): - # compatibility w. old-style transform objects - method, data = method.getdata() - - if data is None: - msg = "missing method data" - raise ValueError(msg) - - im = new(self.mode, size, fillcolor) - if self.mode == "P" and self.palette: - im.palette = self.palette.copy() - im.info = self.info.copy() - if method == Transform.MESH: - # list of quads - for box, quad in data: - im.__transformer( - box, self, Transform.QUAD, quad, resample, fillcolor is None - ) - else: - im.__transformer( - (0, 0) + size, self, method, data, resample, fillcolor is None - ) - - return im - - def __transformer( - self, box, image, method, data, resample=Resampling.NEAREST, fill=1 - ): - w = box[2] - box[0] - h = box[3] - box[1] - - if method == Transform.AFFINE: - data = data[:6] - - elif method == Transform.EXTENT: - # convert extent to an affine transform - x0, y0, x1, y1 = data - xs = (x1 - x0) / w - ys = (y1 - y0) / h - method = Transform.AFFINE - data = (xs, 0, x0, 0, ys, y0) - - elif method == Transform.PERSPECTIVE: - data = data[:8] - - elif method == Transform.QUAD: - # quadrilateral warp. data specifies the four corners - # given as NW, SW, SE, and NE. - nw = data[:2] - sw = data[2:4] - se = data[4:6] - ne = data[6:8] - x0, y0 = nw - As = 1.0 / w - At = 1.0 / h - data = ( - x0, - (ne[0] - x0) * As, - (sw[0] - x0) * At, - (se[0] - sw[0] - ne[0] + x0) * As * At, - y0, - (ne[1] - y0) * As, - (sw[1] - y0) * At, - (se[1] - sw[1] - ne[1] + y0) * As * At, - ) - - else: - msg = "unknown transformation method" - raise ValueError(msg) - - if resample not in ( - Resampling.NEAREST, - Resampling.BILINEAR, - Resampling.BICUBIC, - ): - if resample in (Resampling.BOX, Resampling.HAMMING, Resampling.LANCZOS): - msg = { - Resampling.BOX: "Image.Resampling.BOX", - Resampling.HAMMING: "Image.Resampling.HAMMING", - Resampling.LANCZOS: "Image.Resampling.LANCZOS", - }[resample] + f" ({resample}) cannot be used." - else: - msg = f"Unknown resampling filter ({resample})." - - filters = [ - f"{filter[1]} ({filter[0]})" - for filter in ( - (Resampling.NEAREST, "Image.Resampling.NEAREST"), - (Resampling.BILINEAR, "Image.Resampling.BILINEAR"), - (Resampling.BICUBIC, "Image.Resampling.BICUBIC"), - ) - ] - msg += " Use " + ", ".join(filters[:-1]) + " or " + filters[-1] - raise ValueError(msg) - - image.load() - - self.load() - - if image.mode in ("1", "P"): - resample = Resampling.NEAREST - - self.im.transform2(box, image.im, method, data, resample, fill) - - def transpose(self, method): - """ - Transpose image (flip or rotate in 90 degree steps) - - :param method: One of :py:data:`Transpose.FLIP_LEFT_RIGHT`, - :py:data:`Transpose.FLIP_TOP_BOTTOM`, :py:data:`Transpose.ROTATE_90`, - :py:data:`Transpose.ROTATE_180`, :py:data:`Transpose.ROTATE_270`, - :py:data:`Transpose.TRANSPOSE` or :py:data:`Transpose.TRANSVERSE`. - :returns: Returns a flipped or rotated copy of this image. - """ - - self.load() - return self._new(self.im.transpose(method)) - - def effect_spread(self, distance): - """ - Randomly spread pixels in an image. - - :param distance: Distance to spread pixels. 
- """ - self.load() - return self._new(self.im.effect_spread(distance)) - - def toqimage(self): - """Returns a QImage copy of this image""" - from . import ImageQt - - if not ImageQt.qt_is_installed: - msg = "Qt bindings are not installed" - raise ImportError(msg) - return ImageQt.toqimage(self) - - def toqpixmap(self): - """Returns a QPixmap copy of this image""" - from . import ImageQt - - if not ImageQt.qt_is_installed: - msg = "Qt bindings are not installed" - raise ImportError(msg) - return ImageQt.toqpixmap(self) - - -# -------------------------------------------------------------------- -# Abstract handlers. - - -class ImagePointHandler: - """ - Used as a mixin by point transforms - (for use with :py:meth:`~PIL.Image.Image.point`) - """ - - pass - - -class ImageTransformHandler: - """ - Used as a mixin by geometry transforms - (for use with :py:meth:`~PIL.Image.Image.transform`) - """ - - pass - - -# -------------------------------------------------------------------- -# Factories - -# -# Debugging - - -def _wedge(): - """Create greyscale wedge (for debugging only)""" - - return Image()._new(core.wedge("L")) - - -def _check_size(size): - """ - Common check to enforce type and sanity check on size tuples - - :param size: Should be a 2 tuple of (width, height) - :returns: True, or raises a ValueError - """ - - if not isinstance(size, (list, tuple)): - msg = "Size must be a tuple" - raise ValueError(msg) - if len(size) != 2: - msg = "Size must be a tuple of length 2" - raise ValueError(msg) - if size[0] < 0 or size[1] < 0: - msg = "Width and height must be >= 0" - raise ValueError(msg) - - return True - - -def new(mode, size, color=0): - """ - Creates a new image with the given mode and size. - - :param mode: The mode to use for the new image. See: - :ref:`concept-modes`. - :param size: A 2-tuple, containing (width, height) in pixels. - :param color: What color to use for the image. Default is black. - If given, this should be a single integer or floating point value - for single-band modes, and a tuple for multi-band modes (one value - per band). When creating RGB or HSV images, you can also use color - strings as supported by the ImageColor module. If the color is - None, the image is not initialised. - :returns: An :py:class:`~PIL.Image.Image` object. - """ - - _check_size(size) - - if color is None: - # don't initialize - return Image()._new(core.new(mode, size)) - - if isinstance(color, str): - # css3-style specifier - - from . import ImageColor - - color = ImageColor.getcolor(color, mode) - - im = Image() - if mode == "P" and isinstance(color, (list, tuple)) and len(color) in [3, 4]: - # RGB or RGBA value for a P image - from . import ImagePalette - - im.palette = ImagePalette.ImagePalette() - color = im.palette.getcolor(color) - return im._new(core.fill(mode, size, color)) - - -def frombytes(mode, size, data, decoder_name="raw", *args): - """ - Creates a copy of an image memory from pixel data in a buffer. - - In its simplest form, this function takes three arguments - (mode, size, and unpacked pixel data). - - You can also use any pixel decoder supported by PIL. For more - information on available decoders, see the section - :ref:`Writing Your Own File Codec `. - - Note that this function decodes pixel data only, not entire images. - If you have an entire image in a string, wrap it in a - :py:class:`~io.BytesIO` object, and use :py:func:`~PIL.Image.open` to load - it. - - :param mode: The image mode. See: :ref:`concept-modes`. - :param size: The image size. 
- :param data: A byte buffer containing raw data for the given mode. - :param decoder_name: What decoder to use. - :param args: Additional parameters for the given decoder. - :returns: An :py:class:`~PIL.Image.Image` object. - """ - - _check_size(size) - - # may pass tuple instead of argument list - if len(args) == 1 and isinstance(args[0], tuple): - args = args[0] - - if decoder_name == "raw" and args == (): - args = mode - - im = new(mode, size) - im.frombytes(data, decoder_name, args) - return im - - -def frombuffer(mode, size, data, decoder_name="raw", *args): - """ - Creates an image memory referencing pixel data in a byte buffer. - - This function is similar to :py:func:`~PIL.Image.frombytes`, but uses data - in the byte buffer, where possible. This means that changes to the - original buffer object are reflected in this image). Not all modes can - share memory; supported modes include "L", "RGBX", "RGBA", and "CMYK". - - Note that this function decodes pixel data only, not entire images. - If you have an entire image file in a string, wrap it in a - :py:class:`~io.BytesIO` object, and use :py:func:`~PIL.Image.open` to load it. - - In the current version, the default parameters used for the "raw" decoder - differs from that used for :py:func:`~PIL.Image.frombytes`. This is a - bug, and will probably be fixed in a future release. The current release - issues a warning if you do this; to disable the warning, you should provide - the full set of parameters. See below for details. - - :param mode: The image mode. See: :ref:`concept-modes`. - :param size: The image size. - :param data: A bytes or other buffer object containing raw - data for the given mode. - :param decoder_name: What decoder to use. - :param args: Additional parameters for the given decoder. For the - default encoder ("raw"), it's recommended that you provide the - full set of parameters:: - - frombuffer(mode, size, data, "raw", mode, 0, 1) - - :returns: An :py:class:`~PIL.Image.Image` object. - - .. versionadded:: 1.1.4 - """ - - _check_size(size) - - # may pass tuple instead of argument list - if len(args) == 1 and isinstance(args[0], tuple): - args = args[0] - - if decoder_name == "raw": - if args == (): - args = mode, 0, 1 - if args[0] in _MAPMODES: - im = new(mode, (1, 1)) - im = im._new(core.map_buffer(data, size, decoder_name, 0, args)) - if mode == "P": - from . import ImagePalette - - im.palette = ImagePalette.ImagePalette("RGB", im.im.getpalette("RGB")) - im.readonly = 1 - return im - - return frombytes(mode, size, data, decoder_name, args) - - -def fromarray(obj, mode=None): - """ - Creates an image memory from an object exporting the array interface - (using the buffer protocol):: - - from PIL import Image - import numpy as np - a = np.zeros((5, 5)) - im = Image.fromarray(a) - - If ``obj`` is not contiguous, then the ``tobytes`` method is called - and :py:func:`~PIL.Image.frombuffer` is used. - - In the case of NumPy, be aware that Pillow modes do not always correspond - to NumPy dtypes. Pillow modes only offer 1-bit pixels, 8-bit pixels, - 32-bit signed integer pixels, and 32-bit floating point pixels. - - Pillow images can also be converted to arrays:: - - from PIL import Image - import numpy as np - im = Image.open("hopper.jpg") - a = np.asarray(im) - - When converting Pillow images to arrays however, only pixel values are - transferred. This means that P and PA mode images will lose their palette. - - :param obj: Object with array interface - :param mode: Optional mode to use when reading ``obj``. 
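# Editor's note: a small round-trip sketch for frombytes()/frombuffer(). The
# buffer length must match mode and size exactly (4x4 "L" pixels -> 16 bytes),
# and frombuffer() is given the full "raw" decoder arguments as the docstring
# above recommends.
from PIL import Image

raw = bytes(range(16))
im = Image.frombytes("L", (4, 4), raw)
assert im.tobytes() == raw

shared = Image.frombuffer("L", (4, 4), bytearray(raw), "raw", "L", 0, 1)
print(im.getpixel((3, 3)), shared.getpixel((0, 0)))  # 15 0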
Will be determined from - type if ``None``. - - This will not be used to convert the data after reading, but will be used to - change how the data is read:: - - from PIL import Image - import numpy as np - a = np.full((1, 1), 300) - im = Image.fromarray(a, mode="L") - im.getpixel((0, 0)) # 44 - im = Image.fromarray(a, mode="RGB") - im.getpixel((0, 0)) # (44, 1, 0) - - See: :ref:`concept-modes` for general information about modes. - :returns: An image object. - - .. versionadded:: 1.1.6 - """ - arr = obj.__array_interface__ - shape = arr["shape"] - ndim = len(shape) - strides = arr.get("strides", None) - if mode is None: - try: - typekey = (1, 1) + shape[2:], arr["typestr"] - except KeyError as e: - msg = "Cannot handle this data type" - raise TypeError(msg) from e - try: - mode, rawmode = _fromarray_typemap[typekey] - except KeyError as e: - msg = "Cannot handle this data type: %s, %s" % typekey - raise TypeError(msg) from e - else: - rawmode = mode - if mode in ["1", "L", "I", "P", "F"]: - ndmax = 2 - elif mode == "RGB": - ndmax = 3 - else: - ndmax = 4 - if ndim > ndmax: - msg = f"Too many dimensions: {ndim} > {ndmax}." - raise ValueError(msg) - - size = 1 if ndim == 1 else shape[1], shape[0] - if strides is not None: - if hasattr(obj, "tobytes"): - obj = obj.tobytes() - else: - obj = obj.tostring() - - return frombuffer(mode, size, obj, "raw", rawmode, 0, 1) - - -def fromqimage(im): - """Creates an image instance from a QImage image""" - from . import ImageQt - - if not ImageQt.qt_is_installed: - msg = "Qt bindings are not installed" - raise ImportError(msg) - return ImageQt.fromqimage(im) - - -def fromqpixmap(im): - """Creates an image instance from a QPixmap image""" - from . import ImageQt - - if not ImageQt.qt_is_installed: - msg = "Qt bindings are not installed" - raise ImportError(msg) - return ImageQt.fromqpixmap(im) - - -_fromarray_typemap = { - # (shape, typestr) => mode, rawmode - # first two members of shape are set to one - ((1, 1), "|b1"): ("1", "1;8"), - ((1, 1), "|u1"): ("L", "L"), - ((1, 1), "|i1"): ("I", "I;8"), - ((1, 1), "u2"): ("I", "I;16B"), - ((1, 1), "i2"): ("I", "I;16BS"), - ((1, 1), "u4"): ("I", "I;32B"), - ((1, 1), "i4"): ("I", "I;32BS"), - ((1, 1), "f4"): ("F", "F;32BF"), - ((1, 1), "f8"): ("F", "F;64BF"), - ((1, 1, 2), "|u1"): ("LA", "LA"), - ((1, 1, 3), "|u1"): ("RGB", "RGB"), - ((1, 1, 4), "|u1"): ("RGBA", "RGBA"), - # shortcuts: - ((1, 1), _ENDIAN + "i4"): ("I", "I"), - ((1, 1), _ENDIAN + "f4"): ("F", "F"), -} - - -def _decompression_bomb_check(size): - if MAX_IMAGE_PIXELS is None: - return - - pixels = max(1, size[0]) * max(1, size[1]) - - if pixels > 2 * MAX_IMAGE_PIXELS: - msg = ( - f"Image size ({pixels} pixels) exceeds limit of {2 * MAX_IMAGE_PIXELS} " - "pixels, could be decompression bomb DOS attack." - ) - raise DecompressionBombError(msg) - - if pixels > MAX_IMAGE_PIXELS: - warnings.warn( - f"Image size ({pixels} pixels) exceeds limit of {MAX_IMAGE_PIXELS} pixels, " - "could be decompression bomb DOS attack.", - DecompressionBombWarning, - ) - - -def open(fp, mode="r", formats=None): - """ - Opens and identifies the given image file. - - This is a lazy operation; this function identifies the file, but - the file remains open and the actual image data is not read from - the file until you try to process the data (or call the - :py:meth:`~PIL.Image.Image.load` method). See - :py:func:`~PIL.Image.new`. See :ref:`file-handling`. - - :param fp: A filename (string), pathlib.Path object or a file object. 
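# Editor's note: a minimal numpy round trip for fromarray(), assuming numpy is
# installed. uint8 arrays map onto the 8-bit modes listed in _fromarray_typemap.
import numpy as np
from PIL import Image

a = np.zeros((16, 16, 3), dtype=np.uint8)
a[:, :, 0] = 255                     # pure red
im = Image.fromarray(a)              # mode inferred as "RGB"
back = np.asarray(im)                # palette-free pixel data comes back out
print(im.mode, im.size, back.shape)  # RGB (16, 16) (16, 16, 3)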
- The file object must implement ``file.read``, - ``file.seek``, and ``file.tell`` methods, - and be opened in binary mode. The file object will also seek to zero - before reading. - :param mode: The mode. If given, this argument must be "r". - :param formats: A list or tuple of formats to attempt to load the file in. - This can be used to restrict the set of formats checked. - Pass ``None`` to try all supported formats. You can print the set of - available formats by running ``python3 -m PIL`` or using - the :py:func:`PIL.features.pilinfo` function. - :returns: An :py:class:`~PIL.Image.Image` object. - :exception FileNotFoundError: If the file cannot be found. - :exception PIL.UnidentifiedImageError: If the image cannot be opened and - identified. - :exception ValueError: If the ``mode`` is not "r", or if a ``StringIO`` - instance is used for ``fp``. - :exception TypeError: If ``formats`` is not ``None``, a list or a tuple. - """ - - if mode != "r": - msg = f"bad mode {repr(mode)}" - raise ValueError(msg) - elif isinstance(fp, io.StringIO): - msg = ( - "StringIO cannot be used to open an image. " - "Binary data must be used instead." - ) - raise ValueError(msg) - - if formats is None: - formats = ID - elif not isinstance(formats, (list, tuple)): - msg = "formats must be a list or tuple" - raise TypeError(msg) - - exclusive_fp = False - filename = "" - if isinstance(fp, Path): - filename = str(fp.resolve()) - elif is_path(fp): - filename = fp - - if filename: - fp = builtins.open(filename, "rb") - exclusive_fp = True - - try: - fp.seek(0) - except (AttributeError, io.UnsupportedOperation): - fp = io.BytesIO(fp.read()) - exclusive_fp = True - - prefix = fp.read(16) - - preinit() - - accept_warnings = [] - - def _open_core(fp, filename, prefix, formats): - for i in formats: - i = i.upper() - if i not in OPEN: - init() - try: - factory, accept = OPEN[i] - result = not accept or accept(prefix) - if type(result) in [str, bytes]: - accept_warnings.append(result) - elif result: - fp.seek(0) - im = factory(fp, filename) - _decompression_bomb_check(im.size) - return im - except (SyntaxError, IndexError, TypeError, struct.error): - # Leave disabled by default, spams the logs with image - # opening failures that are entirely expected. - # logger.debug("", exc_info=True) - continue - except BaseException: - if exclusive_fp: - fp.close() - raise - return None - - im = _open_core(fp, filename, prefix, formats) - - if im is None and formats is ID: - checked_formats = formats.copy() - if init(): - im = _open_core( - fp, - filename, - prefix, - tuple(format for format in formats if format not in checked_formats), - ) - - if im: - im._exclusive_fp = exclusive_fp - return im - - if exclusive_fp: - fp.close() - for message in accept_warnings: - warnings.warn(message) - msg = "cannot identify image file %r" % (filename if filename else fp) - raise UnidentifiedImageError(msg) - - -# -# Image processing. - - -def alpha_composite(im1, im2): - """ - Alpha composite im2 over im1. - - :param im1: The first image. Must have mode RGBA. - :param im2: The second image. Must have mode RGBA, and the same size as - the first image. - :returns: An :py:class:`~PIL.Image.Image` object. - """ - - im1.load() - im2.load() - return im1._new(core.alpha_composite(im1.im, im2.im)) - - -def blend(im1, im2, alpha): - """ - Creates a new image by interpolating between two input images, using - a constant alpha:: - - out = image1 * (1.0 - alpha) + image2 * alpha - - :param im1: The first image. - :param im2: The second image. 
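# Editor's note: a sketch of lazy opening with the "formats" restriction
# described above, using an in-memory PNG so no file on disk is assumed.
import io
from PIL import Image

buf = io.BytesIO()
Image.new("RGB", (8, 8), "blue").save(buf, format="PNG")
buf.seek(0)

with Image.open(buf, formats=["PNG"]) as im:
    im.load()                     # open() only identifies; load() reads the pixels
    print(im.format, im.size)     # PNG (8, 8)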
Must have the same mode and size as - the first image. - :param alpha: The interpolation alpha factor. If alpha is 0.0, a - copy of the first image is returned. If alpha is 1.0, a copy of - the second image is returned. There are no restrictions on the - alpha value. If necessary, the result is clipped to fit into - the allowed output range. - :returns: An :py:class:`~PIL.Image.Image` object. - """ - - im1.load() - im2.load() - return im1._new(core.blend(im1.im, im2.im, alpha)) - - -def composite(image1, image2, mask): - """ - Create composite image by blending images using a transparency mask. - - :param image1: The first image. - :param image2: The second image. Must have the same mode and - size as the first image. - :param mask: A mask image. This image can have mode - "1", "L", or "RGBA", and must have the same size as the - other two images. - """ - - image = image2.copy() - image.paste(image1, None, mask) - return image - - -def eval(image, *args): - """ - Applies the function (which should take one argument) to each pixel - in the given image. If the image has more than one band, the same - function is applied to each band. Note that the function is - evaluated once for each possible pixel value, so you cannot use - random components or other generators. - - :param image: The input image. - :param function: A function object, taking one integer argument. - :returns: An :py:class:`~PIL.Image.Image` object. - """ - - return image.point(args[0]) - - -def merge(mode, bands): - """ - Merge a set of single band images into a new multiband image. - - :param mode: The mode to use for the output image. See: - :ref:`concept-modes`. - :param bands: A sequence containing one single-band image for - each band in the output image. All bands must have the - same size. - :returns: An :py:class:`~PIL.Image.Image` object. - """ - - if getmodebands(mode) != len(bands) or "*" in mode: - msg = "wrong number of bands" - raise ValueError(msg) - for band in bands[1:]: - if band.mode != getmodetype(mode): - msg = "mode mismatch" - raise ValueError(msg) - if band.size != bands[0].size: - msg = "size mismatch" - raise ValueError(msg) - for band in bands: - band.load() - return bands[0]._new(core.merge(mode, *[b.im for b in bands])) - - -# -------------------------------------------------------------------- -# Plugin registry - - -def register_open(id, factory, accept=None): - """ - Register an image file plugin. This function should not be used - in application code. - - :param id: An image format identifier. - :param factory: An image file factory method. - :param accept: An optional function that can be used to quickly - reject images having another format. - """ - id = id.upper() - if id not in ID: - ID.append(id) - OPEN[id] = factory, accept - - -def register_mime(id, mimetype): - """ - Registers an image MIME type. This function should not be used - in application code. - - :param id: An image format identifier. - :param mimetype: The image MIME type for this format. - """ - MIME[id.upper()] = mimetype - - -def register_save(id, driver): - """ - Registers an image save function. This function should not be - used in application code. - - :param id: An image format identifier. - :param driver: A function to save images in this format. - """ - SAVE[id.upper()] = driver - - -def register_save_all(id, driver): - """ - Registers an image function to save all the frames - of a multiframe format. This function should not be - used in application code. - - :param id: An image format identifier. 
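# Editor's note: a combined sketch of blend(), composite() and merge()/split()
# as documented above, built entirely from in-memory images.
from PIL import Image

red = Image.new("RGB", (16, 16), "red")
blue = Image.new("RGB", (16, 16), "blue")

half = Image.blend(red, blue, 0.5)        # out = red * 0.5 + blue * 0.5

mask = Image.new("L", (16, 16), 0)
mask.paste(255, (0, 0, 8, 16))            # left half selects image1
mixed = Image.composite(red, blue, mask)

r, g, b = mixed.split()
swapped = Image.merge("RGB", (b, g, r))   # rebuild with red and blue exchanged
print(half.getpixel((0, 0)), mixed.getpixel((0, 0)), swapped.getpixel((0, 0)))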
- :param driver: A function to save images in this format. - """ - SAVE_ALL[id.upper()] = driver - - -def register_extension(id, extension): - """ - Registers an image extension. This function should not be - used in application code. - - :param id: An image format identifier. - :param extension: An extension used for this format. - """ - EXTENSION[extension.lower()] = id.upper() - - -def register_extensions(id, extensions): - """ - Registers image extensions. This function should not be - used in application code. - - :param id: An image format identifier. - :param extensions: A list of extensions used for this format. - """ - for extension in extensions: - register_extension(id, extension) - - -def registered_extensions(): - """ - Returns a dictionary containing all file extensions belonging - to registered plugins - """ - init() - return EXTENSION - - -def register_decoder(name, decoder): - """ - Registers an image decoder. This function should not be - used in application code. - - :param name: The name of the decoder - :param decoder: A callable(mode, args) that returns an - ImageFile.PyDecoder object - - .. versionadded:: 4.1.0 - """ - DECODERS[name] = decoder - - -def register_encoder(name, encoder): - """ - Registers an image encoder. This function should not be - used in application code. - - :param name: The name of the encoder - :param encoder: A callable(mode, args) that returns an - ImageFile.PyEncoder object - - .. versionadded:: 4.1.0 - """ - ENCODERS[name] = encoder - - -# -------------------------------------------------------------------- -# Simple display support. - - -def _show(image, **options): - from . import ImageShow - - ImageShow.show(image, **options) - - -# -------------------------------------------------------------------- -# Effects - - -def effect_mandelbrot(size, extent, quality): - """ - Generate a Mandelbrot set covering the given extent. - - :param size: The requested size in pixels, as a 2-tuple: - (width, height). - :param extent: The extent to cover, as a 4-tuple: - (x0, y0, x1, y1). - :param quality: Quality. - """ - return Image()._new(core.effect_mandelbrot(size, extent, quality)) - - -def effect_noise(size, sigma): - """ - Generate Gaussian noise centered around 128. - - :param size: The requested size in pixels, as a 2-tuple: - (width, height). - :param sigma: Standard deviation of noise. - """ - return Image()._new(core.effect_noise(size, sigma)) - - -def linear_gradient(mode): - """ - Generate 256x256 linear gradient from black to white, top to bottom. - - :param mode: Input mode. - """ - return Image()._new(core.linear_gradient(mode)) - - -def radial_gradient(mode): - """ - Generate 256x256 radial gradient from black to white, centre to edge. - - :param mode: Input mode. 
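# Editor's note: the gradient/noise factories above return ready-made "L"
# images; this just shows their shape and value range.
from PIL import Image

grad = Image.linear_gradient("L")            # 256x256, black top row, white bottom row
noise = Image.effect_noise((64, 64), 12.5)   # Gaussian noise centred on 128
print(grad.size, grad.getpixel((0, 0)), grad.getpixel((0, 255)))  # (256, 256) 0 255
print(noise.mode, noise.size)                # L (64, 64)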
- """ - return Image()._new(core.radial_gradient(mode)) - - -# -------------------------------------------------------------------- -# Resources - - -def _apply_env_variables(env=None): - if env is None: - env = os.environ - - for var_name, setter in [ - ("PILLOW_ALIGNMENT", core.set_alignment), - ("PILLOW_BLOCK_SIZE", core.set_block_size), - ("PILLOW_BLOCKS_MAX", core.set_blocks_max), - ]: - if var_name not in env: - continue - - var = env[var_name].lower() - - units = 1 - for postfix, mul in [("k", 1024), ("m", 1024 * 1024)]: - if var.endswith(postfix): - units = mul - var = var[: -len(postfix)] - - try: - var = int(var) * units - except ValueError: - warnings.warn(f"{var_name} is not int") - continue - - try: - setter(var) - except ValueError as e: - warnings.warn(f"{var_name}: {e}") - - -_apply_env_variables() -atexit.register(core.clear_cache) - - -class Exif(MutableMapping): - """ - This class provides read and write access to EXIF image data:: - - from PIL import Image - im = Image.open("exif.png") - exif = im.getexif() # Returns an instance of this class - - Information can be read and written, iterated over or deleted:: - - print(exif[274]) # 1 - exif[274] = 2 - for k, v in exif.items(): - print("Tag", k, "Value", v) # Tag 274 Value 2 - del exif[274] - - To access information beyond IFD0, :py:meth:`~PIL.Image.Exif.get_ifd` - returns a dictionary:: - - from PIL import ExifTags - im = Image.open("exif_gps.jpg") - exif = im.getexif() - gps_ifd = exif.get_ifd(ExifTags.IFD.GPSInfo) - print(gps_ifd) - - Other IFDs include ``ExifTags.IFD.Exif``, ``ExifTags.IFD.Makernote``, - ``ExifTags.IFD.Interop`` and ``ExifTags.IFD.IFD1``. - - :py:mod:`~PIL.ExifTags` also has enum classes to provide names for data:: - - print(exif[ExifTags.Base.Software]) # PIL - print(gps_ifd[ExifTags.GPS.GPSDateStamp]) # 1999:99:99 99:99:99 - """ - - endian = None - bigtiff = False - - def __init__(self): - self._data = {} - self._hidden_data = {} - self._ifds = {} - self._info = None - self._loaded_exif = None - - def _fixup(self, value): - try: - if len(value) == 1 and isinstance(value, tuple): - return value[0] - except Exception: - pass - return value - - def _fixup_dict(self, src_dict): - # Helper function - # returns a dict with any single item tuples/lists as individual values - return {k: self._fixup(v) for k, v in src_dict.items()} - - def _get_ifd_dict(self, offset): - try: - # an offset pointer to the location of the nested embedded IFD. - # It should be a long, but may be corrupted. - self.fp.seek(offset) - except (KeyError, TypeError): - pass - else: - from . import TiffImagePlugin - - info = TiffImagePlugin.ImageFileDirectory_v2(self.head) - info.load(self.fp) - return self._fixup_dict(info) - - def _get_head(self): - version = b"\x2B" if self.bigtiff else b"\x2A" - if self.endian == "<": - head = b"II" + version + b"\x00" + o32le(8) - else: - head = b"MM\x00" + version + o32be(8) - if self.bigtiff: - head += o32le(8) if self.endian == "<" else o32be(8) - head += b"\x00\x00\x00\x00" - return head - - def load(self, data): - # Extract EXIF information. This is highly experimental, - # and is likely to be replaced with something better in a future - # version. - - # The EXIF record consists of a TIFF file embedded in a JPEG - # application marker (!). 
- if data == self._loaded_exif: - return - self._loaded_exif = data - self._data.clear() - self._hidden_data.clear() - self._ifds.clear() - if data and data.startswith(b"Exif\x00\x00"): - data = data[6:] - if not data: - self._info = None - return - - self.fp = io.BytesIO(data) - self.head = self.fp.read(8) - # process dictionary - from . import TiffImagePlugin - - self._info = TiffImagePlugin.ImageFileDirectory_v2(self.head) - self.endian = self._info._endian - self.fp.seek(self._info.next) - self._info.load(self.fp) - - def load_from_fp(self, fp, offset=None): - self._loaded_exif = None - self._data.clear() - self._hidden_data.clear() - self._ifds.clear() - - # process dictionary - from . import TiffImagePlugin - - self.fp = fp - if offset is not None: - self.head = self._get_head() - else: - self.head = self.fp.read(8) - self._info = TiffImagePlugin.ImageFileDirectory_v2(self.head) - if self.endian is None: - self.endian = self._info._endian - if offset is None: - offset = self._info.next - self.fp.seek(offset) - self._info.load(self.fp) - - def _get_merged_dict(self): - merged_dict = dict(self) - - # get EXIF extension - if ExifTags.IFD.Exif in self: - ifd = self._get_ifd_dict(self[ExifTags.IFD.Exif]) - if ifd: - merged_dict.update(ifd) - - # GPS - if ExifTags.IFD.GPSInfo in self: - merged_dict[ExifTags.IFD.GPSInfo] = self._get_ifd_dict( - self[ExifTags.IFD.GPSInfo] - ) - - return merged_dict - - def tobytes(self, offset=8): - from . import TiffImagePlugin - - head = self._get_head() - ifd = TiffImagePlugin.ImageFileDirectory_v2(ifh=head) - for tag, value in self.items(): - if tag in [ - ExifTags.IFD.Exif, - ExifTags.IFD.GPSInfo, - ] and not isinstance(value, dict): - value = self.get_ifd(tag) - if ( - tag == ExifTags.IFD.Exif - and ExifTags.IFD.Interop in value - and not isinstance(value[ExifTags.IFD.Interop], dict) - ): - value = value.copy() - value[ExifTags.IFD.Interop] = self.get_ifd(ExifTags.IFD.Interop) - ifd[tag] = value - return b"Exif\x00\x00" + head + ifd.tobytes(offset) - - def get_ifd(self, tag): - if tag not in self._ifds: - if tag == ExifTags.IFD.IFD1: - if self._info is not None and self._info.next != 0: - self._ifds[tag] = self._get_ifd_dict(self._info.next) - elif tag in [ExifTags.IFD.Exif, ExifTags.IFD.GPSInfo]: - offset = self._hidden_data.get(tag, self.get(tag)) - if offset is not None: - self._ifds[tag] = self._get_ifd_dict(offset) - elif tag in [ExifTags.IFD.Interop, ExifTags.IFD.Makernote]: - if ExifTags.IFD.Exif not in self._ifds: - self.get_ifd(ExifTags.IFD.Exif) - tag_data = self._ifds[ExifTags.IFD.Exif][tag] - if tag == ExifTags.IFD.Makernote: - from .TiffImagePlugin import ImageFileDirectory_v2 - - if tag_data[:8] == b"FUJIFILM": - ifd_offset = i32le(tag_data, 8) - ifd_data = tag_data[ifd_offset:] - - makernote = {} - for i in range(0, struct.unpack(" 4: - (offset,) = struct.unpack("H", tag_data[:2])[0]): - ifd_tag, typ, count, data = struct.unpack( - ">HHL4s", tag_data[i * 12 + 2 : (i + 1) * 12 + 2] - ) - if ifd_tag == 0x1101: - # CameraInfo - (offset,) = struct.unpack(">L", data) - self.fp.seek(offset) - - camerainfo = {"ModelID": self.fp.read(4)} - - self.fp.read(4) - # Seconds since 2000 - camerainfo["TimeStamp"] = i32le(self.fp.read(12)) - - self.fp.read(4) - camerainfo["InternalSerialNumber"] = self.fp.read(4) - - self.fp.read(12) - parallax = self.fp.read(4) - handler = ImageFileDirectory_v2._load_dispatch[ - TiffTags.FLOAT - ][1] - camerainfo["Parallax"] = handler( - ImageFileDirectory_v2(), parallax, False - ) - - self.fp.read(4) - 
camerainfo["Category"] = self.fp.read(2) - - makernote = {0x1101: dict(self._fixup_dict(camerainfo))} - self._ifds[tag] = makernote - else: - # Interop - self._ifds[tag] = self._get_ifd_dict(tag_data) - ifd = self._ifds.get(tag, {}) - if tag == ExifTags.IFD.Exif and self._hidden_data: - ifd = { - k: v - for (k, v) in ifd.items() - if k not in (ExifTags.IFD.Interop, ExifTags.IFD.Makernote) - } - return ifd - - def hide_offsets(self): - for tag in (ExifTags.IFD.Exif, ExifTags.IFD.GPSInfo): - if tag in self: - self._hidden_data[tag] = self[tag] - del self[tag] - - def __str__(self): - if self._info is not None: - # Load all keys into self._data - for tag in self._info: - self[tag] - - return str(self._data) - - def __len__(self): - keys = set(self._data) - if self._info is not None: - keys.update(self._info) - return len(keys) - - def __getitem__(self, tag): - if self._info is not None and tag not in self._data and tag in self._info: - self._data[tag] = self._fixup(self._info[tag]) - del self._info[tag] - return self._data[tag] - - def __contains__(self, tag): - return tag in self._data or (self._info is not None and tag in self._info) - - def __setitem__(self, tag, value): - if self._info is not None and tag in self._info: - del self._info[tag] - self._data[tag] = value - - def __delitem__(self, tag): - if self._info is not None and tag in self._info: - del self._info[tag] - else: - del self._data[tag] - - def __iter__(self): - keys = set(self._data) - if self._info is not None: - keys.update(self._info) - return iter(keys) diff --git a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/_version.py b/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/_version.py deleted file mode 100644 index 1fc7f7334aa447852807572682a757a342003312..0000000000000000000000000000000000000000 --- a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/_version.py +++ /dev/null @@ -1,2 +0,0 @@ -# Master version for Pillow -__version__ = "10.0.0" diff --git a/spaces/candlend/vits-hoshimi/vits/text/japanese.py b/spaces/candlend/vits-hoshimi/vits/text/japanese.py deleted file mode 100644 index 375e4d50872d5c68ee57ca17470a2ca425425eba..0000000000000000000000000000000000000000 --- a/spaces/candlend/vits-hoshimi/vits/text/japanese.py +++ /dev/null @@ -1,153 +0,0 @@ -import re -from unidecode import unidecode -import pyopenjtalk - - -# Regular expression matching Japanese without punctuation marks: -_japanese_characters = re.compile( - r'[A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# Regular expression matching non-Japanese characters or punctuation marks: -_japanese_marks = re.compile( - r'[^A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# List of (symbol, Japanese) pairs for marks: -_symbols_to_japanese = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('%', 'パーセント') -]] - -# List of (romaji, ipa) pairs for marks: -_romaji_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ts', 'ʦ'), - ('u', 'ɯ'), - ('j', 'ʥ'), - ('y', 'j'), - ('ni', 'n^i'), - ('nj', 'n^'), - ('hi', 'çi'), - ('hj', 'ç'), - ('f', 'ɸ'), - ('I', 'i*'), - ('U', 'ɯ*'), - ('r', 'ɾ') -]] - -# List of (romaji, ipa2) pairs for marks: -_romaji_to_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('u', 'ɯ'), - ('ʧ', 'tʃ'), - ('j', 'dʑ'), - ('y', 'j'), - ('ni', 'n^i'), - ('nj', 'n^'), - ('hi', 'çi'), - ('hj', 'ç'), - ('f', 'ɸ'), - ('I', 'i*'), - ('U', 'ɯ*'), - ('r', 'ɾ') -]] - -# List of (consonant, sokuon) 
pairs: -_real_sokuon = [(re.compile('%s' % x[0]), x[1]) for x in [ - (r'Q([↑↓]*[kg])', r'k#\1'), - (r'Q([↑↓]*[tdjʧ])', r't#\1'), - (r'Q([↑↓]*[sʃ])', r's\1'), - (r'Q([↑↓]*[pb])', r'p#\1') -]] - -# List of (consonant, hatsuon) pairs: -_real_hatsuon = [(re.compile('%s' % x[0]), x[1]) for x in [ - (r'N([↑↓]*[pbm])', r'm\1'), - (r'N([↑↓]*[ʧʥj])', r'n^\1'), - (r'N([↑↓]*[tdn])', r'n\1'), - (r'N([↑↓]*[kg])', r'ŋ\1') -]] - - -def symbols_to_japanese(text): - for regex, replacement in _symbols_to_japanese: - text = re.sub(regex, replacement, text) - return text - - -def japanese_to_romaji_with_accent(text): - '''Reference https://r9y9.github.io/ttslearn/latest/notebooks/ch10_Recipe-Tacotron.html''' - text = symbols_to_japanese(text) - sentences = re.split(_japanese_marks, text) - marks = re.findall(_japanese_marks, text) - text = '' - for i, sentence in enumerate(sentences): - if re.match(_japanese_characters, sentence): - if text != '': - text += ' ' - labels = pyopenjtalk.extract_fullcontext(sentence) - for n, label in enumerate(labels): - phoneme = re.search(r'\-([^\+]*)\+', label).group(1) - if phoneme not in ['sil', 'pau']: - text += phoneme.replace('ch', 'ʧ').replace('sh', - 'ʃ').replace('cl', 'Q') - else: - continue - # n_moras = int(re.search(r'/F:(\d+)_', label).group(1)) - a1 = int(re.search(r"/A:(\-?[0-9]+)\+", label).group(1)) - a2 = int(re.search(r"\+(\d+)\+", label).group(1)) - a3 = int(re.search(r"\+(\d+)/", label).group(1)) - if re.search(r'\-([^\+]*)\+', labels[n + 1]).group(1) in ['sil', 'pau']: - a2_next = -1 - else: - a2_next = int( - re.search(r"\+(\d+)\+", labels[n + 1]).group(1)) - # Accent phrase boundary - if a3 == 1 and a2_next == 1: - text += ' ' - # Falling - elif a1 == 0 and a2_next == a2 + 1: - text += '↓' - # Rising - elif a2 == 1 and a2_next == 2: - text += '↑' - if i < len(marks): - text += unidecode(marks[i]).replace(' ', '') - return text - - -def get_real_sokuon(text): - for regex, replacement in _real_sokuon: - text = re.sub(regex, replacement, text) - return text - - -def get_real_hatsuon(text): - for regex, replacement in _real_hatsuon: - text = re.sub(regex, replacement, text) - return text - - -def japanese_to_ipa(text): - text = japanese_to_romaji_with_accent(text).replace('...', '…') - text = re.sub( - r'([aiueo])\1+', lambda x: x.group(0)[0]+'ː'*(len(x.group(0))-1), text) - text = get_real_sokuon(text) - text = get_real_hatsuon(text) - for regex, replacement in _romaji_to_ipa: - text = re.sub(regex, replacement, text) - return text - - -def japanese_to_ipa2(text): - text = japanese_to_romaji_with_accent(text).replace('...', '…') - text = get_real_sokuon(text) - text = get_real_hatsuon(text) - for regex, replacement in _romaji_to_ipa2: - text = re.sub(regex, replacement, text) - return text - - -def japanese_to_ipa3(text): - text = japanese_to_ipa2(text).replace('n^', 'ȵ').replace( - 'ʃ', 'ɕ').replace('*', '\u0325').replace('#', '\u031a') - text = re.sub( - r'([aiɯeo])\1+', lambda x: x.group(0)[0]+'ː'*(len(x.group(0))-1), text) - text = re.sub(r'((?:^|\s)(?:ts|tɕ|[kpt]))', r'\1ʰ', text) - return text diff --git a/spaces/captchaboy/fastest-8kun-captchas-solver/README.md b/spaces/captchaboy/fastest-8kun-captchas-solver/README.md deleted file mode 100644 index b5dc092f0785f1576dea52a333ebfab702752241..0000000000000000000000000000000000000000 --- a/spaces/captchaboy/fastest-8kun-captchas-solver/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Fastest 8kun Captchas Solver -emoji: 🐠 -colorFrom: green -colorTo: red -sdk: gradio -sdk_version: 3.1.4 -app_file: 
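# Editor's note: a usage sketch for the text/japanese.py helpers above. It
# assumes pyopenjtalk and unidecode are installed and that the vits package
# directory is on sys.path; the exact phoneme strings depend on the OpenJTalk
# dictionary, so the outputs shown are illustrative rather than exact.
from text.japanese import japanese_to_romaji_with_accent, japanese_to_ipa2

print(japanese_to_romaji_with_accent("こんにちは"))  # romaji with pitch-accent marks
print(japanese_to_ipa2("こんにちは"))                # the same string mapped to IPA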
app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/modeling/predictors/chart_with_confidence.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/modeling/predictors/chart_with_confidence.py deleted file mode 100644 index 9c1cd6cc8fda56e831fbc02a8ffdd844866c0e4f..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/modeling/predictors/chart_with_confidence.py +++ /dev/null @@ -1,15 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. - -from . import DensePoseChartConfidencePredictorMixin, DensePoseChartPredictor -from .registry import DENSEPOSE_PREDICTOR_REGISTRY - - -@DENSEPOSE_PREDICTOR_REGISTRY.register() -class DensePoseChartWithConfidencePredictor( - DensePoseChartConfidencePredictorMixin, DensePoseChartPredictor -): - """ - Predictor that combines chart and chart confidence estimation - """ - - pass diff --git a/spaces/chainyo/optimum-text-classification/README.md b/spaces/chainyo/optimum-text-classification/README.md deleted file mode 100644 index b926a83fb2d5559826f6267cccc0b834a9f6d368..0000000000000000000000000000000000000000 --- a/spaces/chainyo/optimum-text-classification/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Optimum Text Classification -emoji: ⭐ -colorFrom: purple -colorTo: yellow -sdk: streamlit -sdk_version: 1.9.0 -app_file: main.py -pinned: true -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/chendl/compositional_test/multimodal/tools/make_gqa_val.py b/spaces/chendl/compositional_test/multimodal/tools/make_gqa_val.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/charset_normalizer/assets/__init__.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/charset_normalizer/assets/__init__.py deleted file mode 100644 index 9075930dc8f9a382c0bd7663e546fa2a93a4d257..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/charset_normalizer/assets/__init__.py +++ /dev/null @@ -1,1440 +0,0 @@ -# -*- coding: utf-8 -*- -from typing import Dict, List - -# Language label that contain the em dash "—" -# character are to be considered alternative seq to origin -FREQUENCIES: Dict[str, List[str]] = { - "English": [ - "e", - "a", - "t", - "i", - "o", - "n", - "s", - "r", - "h", - "l", - "d", - "c", - "u", - "m", - "f", - "p", - "g", - "w", - "y", - "b", - "v", - "k", - "x", - "j", - "z", - "q", - ], - "English—": [ - "e", - "a", - "t", - "i", - "o", - "n", - "s", - "r", - "h", - "l", - "d", - "c", - "m", - "u", - "f", - "p", - "g", - "w", - "b", - "y", - "v", - "k", - "j", - "x", - "z", - "q", - ], - "German": [ - "e", - "n", - "i", - "r", - "s", - "t", - "a", - "d", - "h", - "u", - "l", - "g", - "o", - "c", - "m", - "b", - "f", - "k", - "w", - "z", - "p", - "v", - "ü", - "ä", - "ö", - "j", - ], - "French": [ - "e", - "a", - "s", - "n", - "i", - "t", - "r", - "l", - "u", - "o", - "d", - "c", - "p", - "m", - "é", - "v", - "g", - "f", - "b", - "h", - "q", - "à", - "x", - "è", - "y", - "j", - ], - "Dutch": [ - "e", - "n", - "a", - "i", - "r", - "t", - "o", - "d", - "s", - 
"l", - "g", - "h", - "v", - "m", - "u", - "k", - "c", - "p", - "b", - "w", - "j", - "z", - "f", - "y", - "x", - "ë", - ], - "Italian": [ - "e", - "i", - "a", - "o", - "n", - "l", - "t", - "r", - "s", - "c", - "d", - "u", - "p", - "m", - "g", - "v", - "f", - "b", - "z", - "h", - "q", - "è", - "à", - "k", - "y", - "ò", - ], - "Polish": [ - "a", - "i", - "o", - "e", - "n", - "r", - "z", - "w", - "s", - "c", - "t", - "k", - "y", - "d", - "p", - "m", - "u", - "l", - "j", - "ł", - "g", - "b", - "h", - "ą", - "ę", - "ó", - ], - "Spanish": [ - "e", - "a", - "o", - "n", - "s", - "r", - "i", - "l", - "d", - "t", - "c", - "u", - "m", - "p", - "b", - "g", - "v", - "f", - "y", - "ó", - "h", - "q", - "í", - "j", - "z", - "á", - ], - "Russian": [ - "о", - "а", - "е", - "и", - "н", - "с", - "т", - "р", - "в", - "л", - "к", - "м", - "д", - "п", - "у", - "г", - "я", - "ы", - "з", - "б", - "й", - "ь", - "ч", - "х", - "ж", - "ц", - ], - # Jap-Kanji - "Japanese": [ - "人", - "一", - "大", - "亅", - "丁", - "丨", - "竹", - "笑", - "口", - "日", - "今", - "二", - "彳", - "行", - "十", - "土", - "丶", - "寸", - "寺", - "時", - "乙", - "丿", - "乂", - "气", - "気", - "冂", - "巾", - "亠", - "市", - "目", - "儿", - "見", - "八", - "小", - "凵", - "県", - "月", - "彐", - "門", - "間", - "木", - "東", - "山", - "出", - "本", - "中", - "刀", - "分", - "耳", - "又", - "取", - "最", - "言", - "田", - "心", - "思", - "刂", - "前", - "京", - "尹", - "事", - "生", - "厶", - "云", - "会", - "未", - "来", - "白", - "冫", - "楽", - "灬", - "馬", - "尸", - "尺", - "駅", - "明", - "耂", - "者", - "了", - "阝", - "都", - "高", - "卜", - "占", - "厂", - "广", - "店", - "子", - "申", - "奄", - "亻", - "俺", - "上", - "方", - "冖", - "学", - "衣", - "艮", - "食", - "自", - ], - # Jap-Katakana - "Japanese—": [ - "ー", - "ン", - "ス", - "・", - "ル", - "ト", - "リ", - "イ", - "ア", - "ラ", - "ッ", - "ク", - "ド", - "シ", - "レ", - "ジ", - "タ", - "フ", - "ロ", - "カ", - "テ", - "マ", - "ィ", - "グ", - "バ", - "ム", - "プ", - "オ", - "コ", - "デ", - "ニ", - "ウ", - "メ", - "サ", - "ビ", - "ナ", - "ブ", - "ャ", - "エ", - "ュ", - "チ", - "キ", - "ズ", - "ダ", - "パ", - "ミ", - "ェ", - "ョ", - "ハ", - "セ", - "ベ", - "ガ", - "モ", - "ツ", - "ネ", - "ボ", - "ソ", - "ノ", - "ァ", - "ヴ", - "ワ", - "ポ", - "ペ", - "ピ", - "ケ", - "ゴ", - "ギ", - "ザ", - "ホ", - "ゲ", - "ォ", - "ヤ", - "ヒ", - "ユ", - "ヨ", - "ヘ", - "ゼ", - "ヌ", - "ゥ", - "ゾ", - "ヶ", - "ヂ", - "ヲ", - "ヅ", - "ヵ", - "ヱ", - "ヰ", - "ヮ", - "ヽ", - "゠", - "ヾ", - "ヷ", - "ヿ", - "ヸ", - "ヹ", - "ヺ", - ], - # Jap-Hiragana - "Japanese——": [ - "の", - "に", - "る", - "た", - "と", - "は", - "し", - "い", - "を", - "で", - "て", - "が", - "な", - "れ", - "か", - "ら", - "さ", - "っ", - "り", - "す", - "あ", - "も", - "こ", - "ま", - "う", - "く", - "よ", - "き", - "ん", - "め", - "お", - "け", - "そ", - "つ", - "だ", - "や", - "え", - "ど", - "わ", - "ち", - "み", - "せ", - "じ", - "ば", - "へ", - "び", - "ず", - "ろ", - "ほ", - "げ", - "む", - "べ", - "ひ", - "ょ", - "ゆ", - "ぶ", - "ご", - "ゃ", - "ね", - "ふ", - "ぐ", - "ぎ", - "ぼ", - "ゅ", - "づ", - "ざ", - "ぞ", - "ぬ", - "ぜ", - "ぱ", - "ぽ", - "ぷ", - "ぴ", - "ぃ", - "ぁ", - "ぇ", - "ぺ", - "ゞ", - "ぢ", - "ぉ", - "ぅ", - "ゐ", - "ゝ", - "ゑ", - "゛", - "゜", - "ゎ", - "ゔ", - "゚", - "ゟ", - "゙", - "ゕ", - "ゖ", - ], - "Portuguese": [ - "a", - "e", - "o", - "s", - "i", - "r", - "d", - "n", - "t", - "m", - "u", - "c", - "l", - "p", - "g", - "v", - "b", - "f", - "h", - "ã", - "q", - "é", - "ç", - "á", - "z", - "í", - ], - "Swedish": [ - "e", - "a", - "n", - "r", - "t", - "s", - "i", - "l", - "d", - "o", - "m", - "k", - "g", - "v", - "h", - "f", - "u", - "p", - "ä", - "c", - "b", - "ö", - "å", - "y", - "j", - "x", - ], - "Chinese": [ - "的", - "一", - "是", - "不", - "了", - "在", - "人", - "有", - "我", - 
"他", - "这", - "个", - "们", - "中", - "来", - "上", - "大", - "为", - "和", - "国", - "地", - "到", - "以", - "说", - "时", - "要", - "就", - "出", - "会", - "可", - "也", - "你", - "对", - "生", - "能", - "而", - "子", - "那", - "得", - "于", - "着", - "下", - "自", - "之", - "年", - "过", - "发", - "后", - "作", - "里", - "用", - "道", - "行", - "所", - "然", - "家", - "种", - "事", - "成", - "方", - "多", - "经", - "么", - "去", - "法", - "学", - "如", - "都", - "同", - "现", - "当", - "没", - "动", - "面", - "起", - "看", - "定", - "天", - "分", - "还", - "进", - "好", - "小", - "部", - "其", - "些", - "主", - "样", - "理", - "心", - "她", - "本", - "前", - "开", - "但", - "因", - "只", - "从", - "想", - "实", - ], - "Ukrainian": [ - "о", - "а", - "н", - "і", - "и", - "р", - "в", - "т", - "е", - "с", - "к", - "л", - "у", - "д", - "м", - "п", - "з", - "я", - "ь", - "б", - "г", - "й", - "ч", - "х", - "ц", - "ї", - ], - "Norwegian": [ - "e", - "r", - "n", - "t", - "a", - "s", - "i", - "o", - "l", - "d", - "g", - "k", - "m", - "v", - "f", - "p", - "u", - "b", - "h", - "å", - "y", - "j", - "ø", - "c", - "æ", - "w", - ], - "Finnish": [ - "a", - "i", - "n", - "t", - "e", - "s", - "l", - "o", - "u", - "k", - "ä", - "m", - "r", - "v", - "j", - "h", - "p", - "y", - "d", - "ö", - "g", - "c", - "b", - "f", - "w", - "z", - ], - "Vietnamese": [ - "n", - "h", - "t", - "i", - "c", - "g", - "a", - "o", - "u", - "m", - "l", - "r", - "à", - "đ", - "s", - "e", - "v", - "p", - "b", - "y", - "ư", - "d", - "á", - "k", - "ộ", - "ế", - ], - "Czech": [ - "o", - "e", - "a", - "n", - "t", - "s", - "i", - "l", - "v", - "r", - "k", - "d", - "u", - "m", - "p", - "í", - "c", - "h", - "z", - "á", - "y", - "j", - "b", - "ě", - "é", - "ř", - ], - "Hungarian": [ - "e", - "a", - "t", - "l", - "s", - "n", - "k", - "r", - "i", - "o", - "z", - "á", - "é", - "g", - "m", - "b", - "y", - "v", - "d", - "h", - "u", - "p", - "j", - "ö", - "f", - "c", - ], - "Korean": [ - "이", - "다", - "에", - "의", - "는", - "로", - "하", - "을", - "가", - "고", - "지", - "서", - "한", - "은", - "기", - "으", - "년", - "대", - "사", - "시", - "를", - "리", - "도", - "인", - "스", - "일", - ], - "Indonesian": [ - "a", - "n", - "e", - "i", - "r", - "t", - "u", - "s", - "d", - "k", - "m", - "l", - "g", - "p", - "b", - "o", - "h", - "y", - "j", - "c", - "w", - "f", - "v", - "z", - "x", - "q", - ], - "Turkish": [ - "a", - "e", - "i", - "n", - "r", - "l", - "ı", - "k", - "d", - "t", - "s", - "m", - "y", - "u", - "o", - "b", - "ü", - "ş", - "v", - "g", - "z", - "h", - "c", - "p", - "ç", - "ğ", - ], - "Romanian": [ - "e", - "i", - "a", - "r", - "n", - "t", - "u", - "l", - "o", - "c", - "s", - "d", - "p", - "m", - "ă", - "f", - "v", - "î", - "g", - "b", - "ș", - "ț", - "z", - "h", - "â", - "j", - ], - "Farsi": [ - "ا", - "ی", - "ر", - "د", - "ن", - "ه", - "و", - "م", - "ت", - "ب", - "س", - "ل", - "ک", - "ش", - "ز", - "ف", - "گ", - "ع", - "خ", - "ق", - "ج", - "آ", - "پ", - "ح", - "ط", - "ص", - ], - "Arabic": [ - "ا", - "ل", - "ي", - "م", - "و", - "ن", - "ر", - "ت", - "ب", - "ة", - "ع", - "د", - "س", - "ف", - "ه", - "ك", - "ق", - "أ", - "ح", - "ج", - "ش", - "ط", - "ص", - "ى", - "خ", - "إ", - ], - "Danish": [ - "e", - "r", - "n", - "t", - "a", - "i", - "s", - "d", - "l", - "o", - "g", - "m", - "k", - "f", - "v", - "u", - "b", - "h", - "p", - "å", - "y", - "ø", - "æ", - "c", - "j", - "w", - ], - "Serbian": [ - "а", - "и", - "о", - "е", - "н", - "р", - "с", - "у", - "т", - "к", - "ј", - "в", - "д", - "м", - "п", - "л", - "г", - "з", - "б", - "a", - "i", - "e", - "o", - "n", - "ц", - "ш", - ], - "Lithuanian": [ - "i", - "a", - "s", - "o", - "r", - "e", - "t", - "n", - "u", 
- "k", - "m", - "l", - "p", - "v", - "d", - "j", - "g", - "ė", - "b", - "y", - "ų", - "š", - "ž", - "c", - "ą", - "į", - ], - "Slovene": [ - "e", - "a", - "i", - "o", - "n", - "r", - "s", - "l", - "t", - "j", - "v", - "k", - "d", - "p", - "m", - "u", - "z", - "b", - "g", - "h", - "č", - "c", - "š", - "ž", - "f", - "y", - ], - "Slovak": [ - "o", - "a", - "e", - "n", - "i", - "r", - "v", - "t", - "s", - "l", - "k", - "d", - "m", - "p", - "u", - "c", - "h", - "j", - "b", - "z", - "á", - "y", - "ý", - "í", - "č", - "é", - ], - "Hebrew": [ - "י", - "ו", - "ה", - "ל", - "ר", - "ב", - "ת", - "מ", - "א", - "ש", - "נ", - "ע", - "ם", - "ד", - "ק", - "ח", - "פ", - "ס", - "כ", - "ג", - "ט", - "צ", - "ן", - "ז", - "ך", - ], - "Bulgarian": [ - "а", - "и", - "о", - "е", - "н", - "т", - "р", - "с", - "в", - "л", - "к", - "д", - "п", - "м", - "з", - "г", - "я", - "ъ", - "у", - "б", - "ч", - "ц", - "й", - "ж", - "щ", - "х", - ], - "Croatian": [ - "a", - "i", - "o", - "e", - "n", - "r", - "j", - "s", - "t", - "u", - "k", - "l", - "v", - "d", - "m", - "p", - "g", - "z", - "b", - "c", - "č", - "h", - "š", - "ž", - "ć", - "f", - ], - "Hindi": [ - "क", - "र", - "स", - "न", - "त", - "म", - "ह", - "प", - "य", - "ल", - "व", - "ज", - "द", - "ग", - "ब", - "श", - "ट", - "अ", - "ए", - "थ", - "भ", - "ड", - "च", - "ध", - "ष", - "इ", - ], - "Estonian": [ - "a", - "i", - "e", - "s", - "t", - "l", - "u", - "n", - "o", - "k", - "r", - "d", - "m", - "v", - "g", - "p", - "j", - "h", - "ä", - "b", - "õ", - "ü", - "f", - "c", - "ö", - "y", - ], - "Thai": [ - "า", - "น", - "ร", - "อ", - "ก", - "เ", - "ง", - "ม", - "ย", - "ล", - "ว", - "ด", - "ท", - "ส", - "ต", - "ะ", - "ป", - "บ", - "ค", - "ห", - "แ", - "จ", - "พ", - "ช", - "ข", - "ใ", - ], - "Greek": [ - "α", - "τ", - "ο", - "ι", - "ε", - "ν", - "ρ", - "σ", - "κ", - "η", - "π", - "ς", - "υ", - "μ", - "λ", - "ί", - "ό", - "ά", - "γ", - "έ", - "δ", - "ή", - "ω", - "χ", - "θ", - "ύ", - ], - "Tamil": [ - "க", - "த", - "ப", - "ட", - "ர", - "ம", - "ல", - "ன", - "வ", - "ற", - "ய", - "ள", - "ச", - "ந", - "இ", - "ண", - "அ", - "ஆ", - "ழ", - "ங", - "எ", - "உ", - "ஒ", - "ஸ", - ], - "Kazakh": [ - "а", - "ы", - "е", - "н", - "т", - "р", - "л", - "і", - "д", - "с", - "м", - "қ", - "к", - "о", - "б", - "и", - "у", - "ғ", - "ж", - "ң", - "з", - "ш", - "й", - "п", - "г", - "ө", - ], -} diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/functorch/compile/__init__.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/functorch/compile/__init__.py deleted file mode 100644 index 569c1b6819bddc21b571a24844dc3fcfb8810611..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/functorch/compile/__init__.py +++ /dev/null @@ -1,31 +0,0 @@ -from torch._functorch.python_key import pythonkey_decompose -from torch._functorch.fx_minifier import minifier -from torch._functorch.aot_autograd import ( - aot_function, - aot_module, - compiled_function, - compiled_module, - aot_module_simplified, - get_graph_being_compiled, - get_aot_graph_name, - get_aot_compilation_context, - make_boxed_func, - make_boxed_compiler -) -from torch._functorch.compilers import ( - ts_compile, - draw_graph_compile, - nop, - nnc_jit, - memory_efficient_fusion, - debug_compile, - print_compile, - default_decompositions -) -from torch._functorch.partitioners import ( - min_cut_rematerialization_partition, - default_partition, - draw_graph, - draw_joint_graph, -) -from torch._functorch import config diff --git 
a/spaces/cihyFjudo/fairness-paper-search/DeepSea Obfuscator 4.0.1.16 A Review and Comparison with Other .NET Obfuscators.md b/spaces/cihyFjudo/fairness-paper-search/DeepSea Obfuscator 4.0.1.16 A Review and Comparison with Other .NET Obfuscators.md deleted file mode 100644 index 2a354f3677d23fb7c395db7dac065f8e5b7c206c..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/DeepSea Obfuscator 4.0.1.16 A Review and Comparison with Other .NET Obfuscators.md +++ /dev/null @@ -1,6 +0,0 @@ -

DeepSea Obfuscator 4.0.1.16


Download →→→ https://tinurli.com/2uwjTU



-
- aaccfb2cb3
-
-
-

diff --git a/spaces/cihyFjudo/fairness-paper-search/Download Buku Aku Sumanjaya Pdflkjhl !!INSTALL!!.md b/spaces/cihyFjudo/fairness-paper-search/Download Buku Aku Sumanjaya Pdflkjhl !!INSTALL!!.md deleted file mode 100644 index cb13ec44b04a1c3936bfa0102e8c33ab856fe8f7..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Download Buku Aku Sumanjaya Pdflkjhl !!INSTALL!!.md +++ /dev/null @@ -1,6 +0,0 @@ -

Download Buku Aku Sumanjaya Pdflkjhl


Download Zip ::: https://tinurli.com/2uwji9



-
- aaccfb2cb3
-
-
-

diff --git a/spaces/cihyFjudo/fairness-paper-search/Euro Truck Simulator 2 (v1.4.8s) (MULTi38) [Steam-Rip] RG Origin Cheat Engine Everything You Need to Know.md b/spaces/cihyFjudo/fairness-paper-search/Euro Truck Simulator 2 (v1.4.8s) (MULTi38) [Steam-Rip] RG Origin Cheat Engine Everything You Need to Know.md deleted file mode 100644 index ae4b34f0b3341653b587922695b6b89a8345ddfe..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Euro Truck Simulator 2 (v1.4.8s) (MULTi38) [Steam-Rip] RG Origin Cheat Engine Everything You Need to Know.md +++ /dev/null @@ -1,6 +0,0 @@ -

Euro Truck Simulator 2 (v1.4.8s) (MULTi38) [Steam-Rip] RG Origin Cheat Engine


Download Ziphttps://tinurli.com/2uwiCw



-
- aaccfb2cb3
-
-
-

diff --git a/spaces/cihyFjudo/fairness-paper-search/Gangs Of Wasseypur 2012 Hindi BRRip 720p x264 AAC 5.1...Hon3y The Ultimate Fan Site for the Film.md b/spaces/cihyFjudo/fairness-paper-search/Gangs Of Wasseypur 2012 Hindi BRRip 720p x264 AAC 5.1...Hon3y The Ultimate Fan Site for the Film.md deleted file mode 100644 index b9b6acf666aab414242450e48169d1bcb3f51252..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Gangs Of Wasseypur 2012 Hindi BRRip 720p x264 AAC 5.1...Hon3y The Ultimate Fan Site for the Film.md +++ /dev/null @@ -1,6 +0,0 @@ -

Gangs Of Wasseypur 2012 Hindi BRRip 720p x264 AAC 5.1...Hon3y


Download Zip ►►►►► https://tinurli.com/2uwj8C



- - aaccfb2cb3
-
-
-

diff --git a/spaces/cihyFjudo/fairness-paper-search/VSO ConvertXtoDVD 5.2.0.13 Final (crack key) - Tips and Tricks for Using this Amazing Software.md b/spaces/cihyFjudo/fairness-paper-search/VSO ConvertXtoDVD 5.2.0.13 Final (crack key) - Tips and Tricks for Using this Amazing Software.md deleted file mode 100644 index 41396d8c49759fb7c0ca0e347964ace90b02b081..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/VSO ConvertXtoDVD 5.2.0.13 Final (crack key) - Tips and Tricks for Using this Amazing Software.md +++ /dev/null @@ -1,6 +0,0 @@ -

VSO ConvertXtoDVD 5.2.0.13 Final (crack key)


Download Zip ⚙⚙⚙ https://tinurli.com/2uwhOC



- - aaccfb2cb3
-
-
-

diff --git a/spaces/cldelisle/test/README.md b/spaces/cldelisle/test/README.md deleted file mode 100644 index 983964ccfbd572b730541c2f81a0bfe3d8d6a962..0000000000000000000000000000000000000000 --- a/spaces/cldelisle/test/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Test -emoji: 🐨 -colorFrom: blue -colorTo: indigo -sdk: gradio -sdk_version: 3.24.1 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/PIL/_deprecate.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/PIL/_deprecate.py deleted file mode 100644 index 2f2a3df13e312aed847e482a067c2c10e4fd5632..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/PIL/_deprecate.py +++ /dev/null @@ -1,69 +0,0 @@ -from __future__ import annotations - -import warnings - -from . import __version__ - - -def deprecate( - deprecated: str, - when: int | None, - replacement: str | None = None, - *, - action: str | None = None, - plural: bool = False, -) -> None: - """ - Deprecations helper. - - :param deprecated: Name of thing to be deprecated. - :param when: Pillow major version to be removed in. - :param replacement: Name of replacement. - :param action: Instead of "replacement", give a custom call to action - e.g. "Upgrade to new thing". - :param plural: if the deprecated thing is plural, needing "are" instead of "is". - - Usually of the form: - - "[deprecated] is deprecated and will be removed in Pillow [when] (yyyy-mm-dd). - Use [replacement] instead." - - You can leave out the replacement sentence: - - "[deprecated] is deprecated and will be removed in Pillow [when] (yyyy-mm-dd)" - - Or with another call to action: - - "[deprecated] is deprecated and will be removed in Pillow [when] (yyyy-mm-dd). - [action]." - """ - - is_ = "are" if plural else "is" - - if when is None: - removed = "a future version" - elif when <= int(__version__.split(".")[0]): - msg = f"{deprecated} {is_} deprecated and should be removed." - raise RuntimeError(msg) - elif when == 11: - removed = "Pillow 11 (2024-10-15)" - else: - msg = f"Unknown removal version: {when}. Update {__name__}?" - raise ValueError(msg) - - if replacement and action: - msg = "Use only one of 'replacement' and 'action'" - raise ValueError(msg) - - if replacement: - action = f". Use {replacement} instead." - elif action: - action = f". {action.rstrip('.')}." - else: - action = "" - - warnings.warn( - f"{deprecated} {is_} deprecated and will be removed in {removed}{action}", - DeprecationWarning, - stacklevel=3, - ) diff --git a/spaces/colakin/video-generater/public/ffmpeg/compat/os2threads.h b/spaces/colakin/video-generater/public/ffmpeg/compat/os2threads.h deleted file mode 100644 index a061eaa63de88101787d6991f2308f7aaf880572..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/compat/os2threads.h +++ /dev/null @@ -1,229 +0,0 @@ -/* - * Copyright (c) 2011-2017 KO Myung-Hun - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. 
- * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -/** - * @file - * os2threads to pthreads wrapper - */ - -#ifndef COMPAT_OS2THREADS_H -#define COMPAT_OS2THREADS_H - -#define INCL_DOS -#define INCL_DOSERRORS -#include - -#undef __STRICT_ANSI__ /* for _beginthread() */ -#include -#include - -#include -#include - -#include "libavutil/attributes.h" -#include "libavutil/common.h" -#include "libavutil/time.h" - -typedef struct { - TID tid; - void *(*start_routine)(void *); - void *arg; - void *result; -} pthread_t; - -typedef void pthread_attr_t; - -typedef _fmutex pthread_mutex_t; -typedef void pthread_mutexattr_t; - -#define PTHREAD_MUTEX_INITIALIZER _FMUTEX_INITIALIZER - -typedef struct { - HEV event_sem; - HEV ack_sem; - volatile unsigned wait_count; -} pthread_cond_t; - -typedef void pthread_condattr_t; - -typedef struct { - volatile int done; - _fmutex mtx; -} pthread_once_t; - -#define PTHREAD_ONCE_INIT {0, _FMUTEX_INITIALIZER} - -static void thread_entry(void *arg) -{ - pthread_t *thread = arg; - - thread->result = thread->start_routine(thread->arg); -} - -static av_always_inline int pthread_create(pthread_t *thread, - const pthread_attr_t *attr, - void *(*start_routine)(void*), - void *arg) -{ - thread->start_routine = start_routine; - thread->arg = arg; - thread->result = NULL; - - thread->tid = _beginthread(thread_entry, NULL, 1024 * 1024, thread); - - return 0; -} - -static av_always_inline int pthread_join(pthread_t thread, void **value_ptr) -{ - DosWaitThread(&thread.tid, DCWW_WAIT); - - if (value_ptr) - *value_ptr = thread.result; - - return 0; -} - -static av_always_inline int pthread_mutex_init(pthread_mutex_t *mutex, - const pthread_mutexattr_t *attr) -{ - _fmutex_create(mutex, 0); - - return 0; -} - -static av_always_inline int pthread_mutex_destroy(pthread_mutex_t *mutex) -{ - _fmutex_close(mutex); - - return 0; -} - -static av_always_inline int pthread_mutex_lock(pthread_mutex_t *mutex) -{ - _fmutex_request(mutex, 0); - - return 0; -} - -static av_always_inline int pthread_mutex_unlock(pthread_mutex_t *mutex) -{ - _fmutex_release(mutex); - - return 0; -} - -static av_always_inline int pthread_cond_init(pthread_cond_t *cond, - const pthread_condattr_t *attr) -{ - DosCreateEventSem(NULL, &cond->event_sem, DCE_POSTONE, FALSE); - DosCreateEventSem(NULL, &cond->ack_sem, DCE_POSTONE, FALSE); - - cond->wait_count = 0; - - return 0; -} - -static av_always_inline int pthread_cond_destroy(pthread_cond_t *cond) -{ - DosCloseEventSem(cond->event_sem); - DosCloseEventSem(cond->ack_sem); - - return 0; -} - -static av_always_inline int pthread_cond_signal(pthread_cond_t *cond) -{ - if (!__atomic_cmpxchg32(&cond->wait_count, 0, 0)) { - DosPostEventSem(cond->event_sem); - DosWaitEventSem(cond->ack_sem, SEM_INDEFINITE_WAIT); - } - - return 0; -} - -static av_always_inline int pthread_cond_broadcast(pthread_cond_t *cond) -{ - while (!__atomic_cmpxchg32(&cond->wait_count, 0, 0)) - pthread_cond_signal(cond); - - return 0; -} - -static av_always_inline int pthread_cond_timedwait(pthread_cond_t *cond, - pthread_mutex_t *mutex, - const struct timespec *abstime) -{ - int64_t 
abs_milli = abstime->tv_sec * 1000LL + abstime->tv_nsec / 1000000; - ULONG t = av_clip64(abs_milli - av_gettime() / 1000, 0, ULONG_MAX); - - __atomic_increment(&cond->wait_count); - - pthread_mutex_unlock(mutex); - - APIRET ret = DosWaitEventSem(cond->event_sem, t); - - __atomic_decrement(&cond->wait_count); - - DosPostEventSem(cond->ack_sem); - - pthread_mutex_lock(mutex); - - return (ret == ERROR_TIMEOUT) ? ETIMEDOUT : 0; -} - -static av_always_inline int pthread_cond_wait(pthread_cond_t *cond, - pthread_mutex_t *mutex) -{ - __atomic_increment(&cond->wait_count); - - pthread_mutex_unlock(mutex); - - DosWaitEventSem(cond->event_sem, SEM_INDEFINITE_WAIT); - - __atomic_decrement(&cond->wait_count); - - DosPostEventSem(cond->ack_sem); - - pthread_mutex_lock(mutex); - - return 0; -} - -static av_always_inline int pthread_once(pthread_once_t *once_control, - void (*init_routine)(void)) -{ - if (!once_control->done) - { - _fmutex_request(&once_control->mtx, 0); - - if (!once_control->done) - { - init_routine(); - - once_control->done = 1; - } - - _fmutex_release(&once_control->mtx); - } - - return 0; -} -#endif /* COMPAT_OS2THREADS_H */ diff --git a/spaces/colakin/video-generater/public/ffmpeg/fftools/objpool.h b/spaces/colakin/video-generater/public/ffmpeg/fftools/objpool.h deleted file mode 100644 index 1b2aea6acac54c9b6f033e431c85929008384861..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/fftools/objpool.h +++ /dev/null @@ -1,37 +0,0 @@ -/* - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#ifndef FFTOOLS_OBJPOOL_H -#define FFTOOLS_OBJPOOL_H - -typedef struct ObjPool ObjPool; - -typedef void* (*ObjPoolCBAlloc)(void); -typedef void (*ObjPoolCBReset)(void *); -typedef void (*ObjPoolCBFree)(void **); - -void objpool_free(ObjPool **op); -ObjPool *objpool_alloc(ObjPoolCBAlloc cb_alloc, ObjPoolCBReset cb_reset, - ObjPoolCBFree cb_free); -ObjPool *objpool_alloc_packets(void); -ObjPool *objpool_alloc_frames(void); - -int objpool_get(ObjPool *op, void **obj); -void objpool_release(ObjPool *op, void **obj); - -#endif // FFTOOLS_OBJPOOL_H diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/arm/h264qpel_init_arm.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/arm/h264qpel_init_arm.c deleted file mode 100644 index 71237be3596eec9c7aa7801e148a6fa53a5c2654..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/arm/h264qpel_init_arm.c +++ /dev/null @@ -1,171 +0,0 @@ -/* - * ARM NEON optimised DSP functions - * Copyright (c) 2008 Mans Rullgard - * - * This file is part of FFmpeg. 
- * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include - -#include "config.h" -#include "libavutil/attributes.h" -#include "libavutil/arm/cpu.h" -#include "libavcodec/h264qpel.h" - -void ff_put_h264_qpel16_mc00_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride); -void ff_put_h264_qpel16_mc10_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride); -void ff_put_h264_qpel16_mc20_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride); -void ff_put_h264_qpel16_mc30_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride); -void ff_put_h264_qpel16_mc01_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride); -void ff_put_h264_qpel16_mc11_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride); -void ff_put_h264_qpel16_mc21_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride); -void ff_put_h264_qpel16_mc31_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride); -void ff_put_h264_qpel16_mc02_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride); -void ff_put_h264_qpel16_mc12_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride); -void ff_put_h264_qpel16_mc22_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride); -void ff_put_h264_qpel16_mc32_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride); -void ff_put_h264_qpel16_mc03_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride); -void ff_put_h264_qpel16_mc13_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride); -void ff_put_h264_qpel16_mc23_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride); -void ff_put_h264_qpel16_mc33_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride); - -void ff_put_h264_qpel8_mc00_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride); -void ff_put_h264_qpel8_mc10_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride); -void ff_put_h264_qpel8_mc20_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride); -void ff_put_h264_qpel8_mc30_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride); -void ff_put_h264_qpel8_mc01_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride); -void ff_put_h264_qpel8_mc11_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride); -void ff_put_h264_qpel8_mc21_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride); -void ff_put_h264_qpel8_mc31_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride); -void ff_put_h264_qpel8_mc02_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride); -void ff_put_h264_qpel8_mc12_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride); -void ff_put_h264_qpel8_mc22_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride); -void ff_put_h264_qpel8_mc32_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride); -void ff_put_h264_qpel8_mc03_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride); -void ff_put_h264_qpel8_mc13_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride); -void ff_put_h264_qpel8_mc23_neon(uint8_t 
*dst, const uint8_t *src, ptrdiff_t stride); -void ff_put_h264_qpel8_mc33_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride); - -void ff_avg_h264_qpel16_mc00_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride); -void ff_avg_h264_qpel16_mc10_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride); -void ff_avg_h264_qpel16_mc20_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride); -void ff_avg_h264_qpel16_mc30_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride); -void ff_avg_h264_qpel16_mc01_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride); -void ff_avg_h264_qpel16_mc11_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride); -void ff_avg_h264_qpel16_mc21_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride); -void ff_avg_h264_qpel16_mc31_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride); -void ff_avg_h264_qpel16_mc02_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride); -void ff_avg_h264_qpel16_mc12_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride); -void ff_avg_h264_qpel16_mc22_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride); -void ff_avg_h264_qpel16_mc32_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride); -void ff_avg_h264_qpel16_mc03_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride); -void ff_avg_h264_qpel16_mc13_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride); -void ff_avg_h264_qpel16_mc23_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride); -void ff_avg_h264_qpel16_mc33_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride); - -void ff_avg_h264_qpel8_mc00_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride); -void ff_avg_h264_qpel8_mc10_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride); -void ff_avg_h264_qpel8_mc20_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride); -void ff_avg_h264_qpel8_mc30_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride); -void ff_avg_h264_qpel8_mc01_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride); -void ff_avg_h264_qpel8_mc11_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride); -void ff_avg_h264_qpel8_mc21_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride); -void ff_avg_h264_qpel8_mc31_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride); -void ff_avg_h264_qpel8_mc02_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride); -void ff_avg_h264_qpel8_mc12_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride); -void ff_avg_h264_qpel8_mc22_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride); -void ff_avg_h264_qpel8_mc32_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride); -void ff_avg_h264_qpel8_mc03_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride); -void ff_avg_h264_qpel8_mc13_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride); -void ff_avg_h264_qpel8_mc23_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride); -void ff_avg_h264_qpel8_mc33_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride); - -av_cold void ff_h264qpel_init_arm(H264QpelContext *c, int bit_depth) -{ - const int high_bit_depth = bit_depth > 8; - int cpu_flags = av_get_cpu_flags(); - - if (have_neon(cpu_flags) && !high_bit_depth) { - c->put_h264_qpel_pixels_tab[0][ 0] = ff_put_h264_qpel16_mc00_neon; - c->put_h264_qpel_pixels_tab[0][ 1] = ff_put_h264_qpel16_mc10_neon; - c->put_h264_qpel_pixels_tab[0][ 2] = ff_put_h264_qpel16_mc20_neon; - c->put_h264_qpel_pixels_tab[0][ 3] = ff_put_h264_qpel16_mc30_neon; - c->put_h264_qpel_pixels_tab[0][ 4] = ff_put_h264_qpel16_mc01_neon; - c->put_h264_qpel_pixels_tab[0][ 5] = 
ff_put_h264_qpel16_mc11_neon; - c->put_h264_qpel_pixels_tab[0][ 6] = ff_put_h264_qpel16_mc21_neon; - c->put_h264_qpel_pixels_tab[0][ 7] = ff_put_h264_qpel16_mc31_neon; - c->put_h264_qpel_pixels_tab[0][ 8] = ff_put_h264_qpel16_mc02_neon; - c->put_h264_qpel_pixels_tab[0][ 9] = ff_put_h264_qpel16_mc12_neon; - c->put_h264_qpel_pixels_tab[0][10] = ff_put_h264_qpel16_mc22_neon; - c->put_h264_qpel_pixels_tab[0][11] = ff_put_h264_qpel16_mc32_neon; - c->put_h264_qpel_pixels_tab[0][12] = ff_put_h264_qpel16_mc03_neon; - c->put_h264_qpel_pixels_tab[0][13] = ff_put_h264_qpel16_mc13_neon; - c->put_h264_qpel_pixels_tab[0][14] = ff_put_h264_qpel16_mc23_neon; - c->put_h264_qpel_pixels_tab[0][15] = ff_put_h264_qpel16_mc33_neon; - - c->put_h264_qpel_pixels_tab[1][ 0] = ff_put_h264_qpel8_mc00_neon; - c->put_h264_qpel_pixels_tab[1][ 1] = ff_put_h264_qpel8_mc10_neon; - c->put_h264_qpel_pixels_tab[1][ 2] = ff_put_h264_qpel8_mc20_neon; - c->put_h264_qpel_pixels_tab[1][ 3] = ff_put_h264_qpel8_mc30_neon; - c->put_h264_qpel_pixels_tab[1][ 4] = ff_put_h264_qpel8_mc01_neon; - c->put_h264_qpel_pixels_tab[1][ 5] = ff_put_h264_qpel8_mc11_neon; - c->put_h264_qpel_pixels_tab[1][ 6] = ff_put_h264_qpel8_mc21_neon; - c->put_h264_qpel_pixels_tab[1][ 7] = ff_put_h264_qpel8_mc31_neon; - c->put_h264_qpel_pixels_tab[1][ 8] = ff_put_h264_qpel8_mc02_neon; - c->put_h264_qpel_pixels_tab[1][ 9] = ff_put_h264_qpel8_mc12_neon; - c->put_h264_qpel_pixels_tab[1][10] = ff_put_h264_qpel8_mc22_neon; - c->put_h264_qpel_pixels_tab[1][11] = ff_put_h264_qpel8_mc32_neon; - c->put_h264_qpel_pixels_tab[1][12] = ff_put_h264_qpel8_mc03_neon; - c->put_h264_qpel_pixels_tab[1][13] = ff_put_h264_qpel8_mc13_neon; - c->put_h264_qpel_pixels_tab[1][14] = ff_put_h264_qpel8_mc23_neon; - c->put_h264_qpel_pixels_tab[1][15] = ff_put_h264_qpel8_mc33_neon; - - c->avg_h264_qpel_pixels_tab[0][ 0] = ff_avg_h264_qpel16_mc00_neon; - c->avg_h264_qpel_pixels_tab[0][ 1] = ff_avg_h264_qpel16_mc10_neon; - c->avg_h264_qpel_pixels_tab[0][ 2] = ff_avg_h264_qpel16_mc20_neon; - c->avg_h264_qpel_pixels_tab[0][ 3] = ff_avg_h264_qpel16_mc30_neon; - c->avg_h264_qpel_pixels_tab[0][ 4] = ff_avg_h264_qpel16_mc01_neon; - c->avg_h264_qpel_pixels_tab[0][ 5] = ff_avg_h264_qpel16_mc11_neon; - c->avg_h264_qpel_pixels_tab[0][ 6] = ff_avg_h264_qpel16_mc21_neon; - c->avg_h264_qpel_pixels_tab[0][ 7] = ff_avg_h264_qpel16_mc31_neon; - c->avg_h264_qpel_pixels_tab[0][ 8] = ff_avg_h264_qpel16_mc02_neon; - c->avg_h264_qpel_pixels_tab[0][ 9] = ff_avg_h264_qpel16_mc12_neon; - c->avg_h264_qpel_pixels_tab[0][10] = ff_avg_h264_qpel16_mc22_neon; - c->avg_h264_qpel_pixels_tab[0][11] = ff_avg_h264_qpel16_mc32_neon; - c->avg_h264_qpel_pixels_tab[0][12] = ff_avg_h264_qpel16_mc03_neon; - c->avg_h264_qpel_pixels_tab[0][13] = ff_avg_h264_qpel16_mc13_neon; - c->avg_h264_qpel_pixels_tab[0][14] = ff_avg_h264_qpel16_mc23_neon; - c->avg_h264_qpel_pixels_tab[0][15] = ff_avg_h264_qpel16_mc33_neon; - - c->avg_h264_qpel_pixels_tab[1][ 0] = ff_avg_h264_qpel8_mc00_neon; - c->avg_h264_qpel_pixels_tab[1][ 1] = ff_avg_h264_qpel8_mc10_neon; - c->avg_h264_qpel_pixels_tab[1][ 2] = ff_avg_h264_qpel8_mc20_neon; - c->avg_h264_qpel_pixels_tab[1][ 3] = ff_avg_h264_qpel8_mc30_neon; - c->avg_h264_qpel_pixels_tab[1][ 4] = ff_avg_h264_qpel8_mc01_neon; - c->avg_h264_qpel_pixels_tab[1][ 5] = ff_avg_h264_qpel8_mc11_neon; - c->avg_h264_qpel_pixels_tab[1][ 6] = ff_avg_h264_qpel8_mc21_neon; - c->avg_h264_qpel_pixels_tab[1][ 7] = ff_avg_h264_qpel8_mc31_neon; - c->avg_h264_qpel_pixels_tab[1][ 8] = ff_avg_h264_qpel8_mc02_neon; - 
c->avg_h264_qpel_pixels_tab[1][ 9] = ff_avg_h264_qpel8_mc12_neon; - c->avg_h264_qpel_pixels_tab[1][10] = ff_avg_h264_qpel8_mc22_neon; - c->avg_h264_qpel_pixels_tab[1][11] = ff_avg_h264_qpel8_mc32_neon; - c->avg_h264_qpel_pixels_tab[1][12] = ff_avg_h264_qpel8_mc03_neon; - c->avg_h264_qpel_pixels_tab[1][13] = ff_avg_h264_qpel8_mc13_neon; - c->avg_h264_qpel_pixels_tab[1][14] = ff_avg_h264_qpel8_mc23_neon; - c->avg_h264_qpel_pixels_tab[1][15] = ff_avg_h264_qpel8_mc33_neon; - } -} diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/arm/vc1dsp_init_neon.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/arm/vc1dsp_init_neon.c deleted file mode 100644 index b5615624d5f0e48a50eb8e9f5a5a5351fb16f05f..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/arm/vc1dsp_init_neon.c +++ /dev/null @@ -1,194 +0,0 @@ -/* - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include - -#include "libavutil/attributes.h" -#include "libavutil/intreadwrite.h" -#include "libavcodec/vc1dsp.h" -#include "vc1dsp.h" - -void ff_vc1_inv_trans_8x8_neon(int16_t *block); -void ff_vc1_inv_trans_4x8_neon(uint8_t *dest, ptrdiff_t stride, int16_t *block); -void ff_vc1_inv_trans_8x4_neon(uint8_t *dest, ptrdiff_t stride, int16_t *block); -void ff_vc1_inv_trans_4x4_neon(uint8_t *dest, ptrdiff_t stride, int16_t *block); - -void ff_vc1_inv_trans_8x8_dc_neon(uint8_t *dest, ptrdiff_t stride, int16_t *block); -void ff_vc1_inv_trans_4x8_dc_neon(uint8_t *dest, ptrdiff_t stride, int16_t *block); -void ff_vc1_inv_trans_8x4_dc_neon(uint8_t *dest, ptrdiff_t stride, int16_t *block); -void ff_vc1_inv_trans_4x4_dc_neon(uint8_t *dest, ptrdiff_t stride, int16_t *block); - -void ff_vc1_v_loop_filter4_neon(uint8_t *src, ptrdiff_t stride, int pq); -void ff_vc1_h_loop_filter4_neon(uint8_t *src, ptrdiff_t stride, int pq); -void ff_vc1_v_loop_filter8_neon(uint8_t *src, ptrdiff_t stride, int pq); -void ff_vc1_h_loop_filter8_neon(uint8_t *src, ptrdiff_t stride, int pq); -void ff_vc1_v_loop_filter16_neon(uint8_t *src, ptrdiff_t stride, int pq); -void ff_vc1_h_loop_filter16_neon(uint8_t *src, ptrdiff_t stride, int pq); - -void ff_put_pixels8x8_neon(uint8_t *block, const uint8_t *pixels, - ptrdiff_t line_size, int rnd); - -#define DECL_PUT(X, Y) \ -void ff_put_vc1_mspel_mc##X##Y##_neon(uint8_t *dst, const uint8_t *src, \ - ptrdiff_t stride, int rnd); \ -static void ff_put_vc1_mspel_mc##X##Y##_16_neon(uint8_t *dst, const uint8_t *src, \ - ptrdiff_t stride, int rnd) \ -{ \ - ff_put_vc1_mspel_mc##X##Y##_neon(dst+0, src+0, stride, rnd); \ - ff_put_vc1_mspel_mc##X##Y##_neon(dst+8, src+8, stride, rnd); \ - dst += 8*stride; src += 8*stride; \ - ff_put_vc1_mspel_mc##X##Y##_neon(dst+0, src+0, stride, rnd); \ - ff_put_vc1_mspel_mc##X##Y##_neon(dst+8, src+8, stride, rnd); \ -} - 
-DECL_PUT(1, 0) -DECL_PUT(2, 0) -DECL_PUT(3, 0) - -DECL_PUT(0, 1) -DECL_PUT(0, 2) -DECL_PUT(0, 3) - -DECL_PUT(1, 1) -DECL_PUT(1, 2) -DECL_PUT(1, 3) - -DECL_PUT(2, 1) -DECL_PUT(2, 2) -DECL_PUT(2, 3) - -DECL_PUT(3, 1) -DECL_PUT(3, 2) -DECL_PUT(3, 3) - -void ff_put_vc1_chroma_mc8_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride, - int h, int x, int y); -void ff_avg_vc1_chroma_mc8_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride, - int h, int x, int y); -void ff_put_vc1_chroma_mc4_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride, - int h, int x, int y); -void ff_avg_vc1_chroma_mc4_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride, - int h, int x, int y); - -int ff_vc1_unescape_buffer_helper_neon(const uint8_t *src, int size, uint8_t *dst); - -static int vc1_unescape_buffer_neon(const uint8_t *src, int size, uint8_t *dst) -{ - /* Dealing with starting and stopping, and removing escape bytes, are - * comparatively less time-sensitive, so are more clearly expressed using - * a C wrapper around the assembly inner loop. Note that we assume a - * little-endian machine that supports unaligned loads. */ - int dsize = 0; - while (size >= 4) - { - int found = 0; - while (!found && (((uintptr_t) dst) & 7) && size >= 4) - { - found = (AV_RL32(src) &~ 0x03000000) == 0x00030000; - if (!found) - { - *dst++ = *src++; - --size; - ++dsize; - } - } - if (!found) - { - int skip = size - ff_vc1_unescape_buffer_helper_neon(src, size, dst); - dst += skip; - src += skip; - size -= skip; - dsize += skip; - while (!found && size >= 4) - { - found = (AV_RL32(src) &~ 0x03000000) == 0x00030000; - if (!found) - { - *dst++ = *src++; - --size; - ++dsize; - } - } - } - if (found) - { - *dst++ = *src++; - *dst++ = *src++; - ++src; - size -= 3; - dsize += 2; - } - } - while (size > 0) - { - *dst++ = *src++; - --size; - ++dsize; - } - return dsize; -} - -#define FN_ASSIGN(X, Y) \ - dsp->put_vc1_mspel_pixels_tab[0][X+4*Y] = ff_put_vc1_mspel_mc##X##Y##_16_neon; \ - dsp->put_vc1_mspel_pixels_tab[1][X+4*Y] = ff_put_vc1_mspel_mc##X##Y##_neon - -av_cold void ff_vc1dsp_init_neon(VC1DSPContext *dsp) -{ - dsp->vc1_inv_trans_8x8 = ff_vc1_inv_trans_8x8_neon; - dsp->vc1_inv_trans_4x8 = ff_vc1_inv_trans_4x8_neon; - dsp->vc1_inv_trans_8x4 = ff_vc1_inv_trans_8x4_neon; - dsp->vc1_inv_trans_4x4 = ff_vc1_inv_trans_4x4_neon; - dsp->vc1_inv_trans_8x8_dc = ff_vc1_inv_trans_8x8_dc_neon; - dsp->vc1_inv_trans_4x8_dc = ff_vc1_inv_trans_4x8_dc_neon; - dsp->vc1_inv_trans_8x4_dc = ff_vc1_inv_trans_8x4_dc_neon; - dsp->vc1_inv_trans_4x4_dc = ff_vc1_inv_trans_4x4_dc_neon; - - dsp->vc1_v_loop_filter4 = ff_vc1_v_loop_filter4_neon; - dsp->vc1_h_loop_filter4 = ff_vc1_h_loop_filter4_neon; - dsp->vc1_v_loop_filter8 = ff_vc1_v_loop_filter8_neon; - dsp->vc1_h_loop_filter8 = ff_vc1_h_loop_filter8_neon; - dsp->vc1_v_loop_filter16 = ff_vc1_v_loop_filter16_neon; - dsp->vc1_h_loop_filter16 = ff_vc1_h_loop_filter16_neon; - - dsp->put_vc1_mspel_pixels_tab[1][ 0] = ff_put_pixels8x8_neon; - FN_ASSIGN(1, 0); - FN_ASSIGN(2, 0); - FN_ASSIGN(3, 0); - - FN_ASSIGN(0, 1); - FN_ASSIGN(1, 1); - FN_ASSIGN(2, 1); - FN_ASSIGN(3, 1); - - FN_ASSIGN(0, 2); - FN_ASSIGN(1, 2); - FN_ASSIGN(2, 2); - FN_ASSIGN(3, 2); - - FN_ASSIGN(0, 3); - FN_ASSIGN(1, 3); - FN_ASSIGN(2, 3); - FN_ASSIGN(3, 3); - - dsp->put_no_rnd_vc1_chroma_pixels_tab[0] = ff_put_vc1_chroma_mc8_neon; - dsp->avg_no_rnd_vc1_chroma_pixels_tab[0] = ff_avg_vc1_chroma_mc8_neon; - dsp->put_no_rnd_vc1_chroma_pixels_tab[1] = ff_put_vc1_chroma_mc4_neon; - dsp->avg_no_rnd_vc1_chroma_pixels_tab[1] = 
ff_avg_vc1_chroma_mc4_neon; - - dsp->vc1_unescape_buffer = vc1_unescape_buffer_neon; -} diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dv_profile_internal.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dv_profile_internal.h deleted file mode 100644 index 0ec36978b05c69dd89f4bcf0399a480d66fb7214..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dv_profile_internal.h +++ /dev/null @@ -1,36 +0,0 @@ -/* - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#ifndef AVCODEC_DV_PROFILE_INTERNAL_H -#define AVCODEC_DV_PROFILE_INTERNAL_H - -#include "avcodec.h" -#include "dv_profile.h" - -/** - * Print all allowed DV profiles into logctx at specified logging level. - */ -void ff_dv_print_profiles(void *logctx, int loglevel); - -/** - * Get a DV profile for the provided compressed frame. - */ -const AVDVProfile* ff_dv_frame_profile(AVCodecContext* codec, const AVDVProfile *sys, - const uint8_t *frame, unsigned buf_size); - -#endif /* AVCODEC_DV_PROFILE_INTERNAL_H */ diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/fft_table.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/fft_table.h deleted file mode 100644 index 09df49f2b8ef8d5d2dc0a0216d46df6ea54389c1..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/fft_table.h +++ /dev/null @@ -1,66 +0,0 @@ -/* - * Copyright (c) 2012 - * MIPS Technologies, Inc., California. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * 1. Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * 2. Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. - * 3. Neither the name of the MIPS Technologies, Inc., nor the names of its - * contributors may be used to endorse or promote products derived from - * this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE MIPS TECHNOLOGIES, INC. ``AS IS'' AND - * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE - * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE - * ARE DISCLAIMED. IN NO EVENT SHALL THE MIPS TECHNOLOGIES, INC. 
BE LIABLE - * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL - * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS - * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) - * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT - * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY - * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF - * SUCH DAMAGE. - * - * Authors: Stanislav Ocovaj (socovaj@mips.com) - * Goran Cordasic (goran@mips.com) - * Djordje Pesut (djordje@mips.com) - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -/** - * @file - * definitions and tables for FFT - */ -#ifndef AVCODEC_FFT_TABLE_H -#define AVCODEC_FFT_TABLE_H - -#include "libavcodec/fft.h" - -#define MAX_LOG2_NFFT 17 //!< Specifies maximum allowed fft size -#define MAX_FFT_SIZE (1 << MAX_LOG2_NFFT) - -extern const int32_t ff_w_tab_sr[]; -extern uint16_t ff_fft_offsets_lut[]; -void ff_fft_lut_init(void); - -#endif /* AVCODEC_FFT_TABLE_H */ diff --git a/spaces/congsaPfin/Manga-OCR/logs/Blockman Go Beta APK How to Join the Beta Testing Program.md b/spaces/congsaPfin/Manga-OCR/logs/Blockman Go Beta APK How to Join the Beta Testing Program.md deleted file mode 100644 index a79788d69f5901e7dff5d098c95357404113e46e..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Blockman Go Beta APK How to Join the Beta Testing Program.md +++ /dev/null @@ -1,78 +0,0 @@ -
-

Blockman Go Beta APK Download Latest Version: A Guide for Android Users

-

If you are looking for a fun and social arcade game that offers a variety of mini-games, then you might want to try Blockman Go Beta APK. This is a free app developed by Blockman GO Studio that lets you test new games in advance and give feedback to the developers. In this article, we will tell you everything you need to know about Blockman Go Beta APK: what it is, how to download and install it, why you should play it, which mini-games are best to play, and how to connect with other players. Let's get started!

-

What is Blockman Go Beta APK?

-

A brief introduction to Blockman Go Beta APK and its features

-

Blockman Go Beta APK is a mobile app that provides an enjoyable and user-friendly interface where you can play mini-games, chat with existing friends, and connect with new ones. It is a test server used for trying out new versions of games before they are released in the official Blockman GO app. You can play various genres of games, such as action, adventure, puzzle, racing, shooting, and more. You can also customize your avatar with different outfits, accessories, hairstyles, and skins. And you can earn cubes and coins by playing games or completing tasks, which you can use to buy items or unlock features.

-

blockman go beta apk download latest version


Download ---> https://urlca.com/2uOfpm



-

How to download and install Blockman Go Beta APK on your Android device

-

Downloading and installing Blockman Go Beta APK on your Android device is easy and fast. Here are the steps you need to follow:

-
    -
  1. Go to [APKCombo](^1^), [Softonic](^2^), or [APKCombo Download](^3^) and search for "Blockman Go Beta APK".
  2. -
  3. Select the latest version of the app and click on "Download APK" or "Download XAPK".
  4. -
  5. Wait for the file to be downloaded on your device.
  6. -
  7. Go to your file manager and locate the downloaded file.
  8. -
  9. Tap on the file and allow unknown sources if prompted.
  10. -
  11. Follow the instructions on the screen to install the app.
  12. -
  13. Launch the app and enjoy playing!
  14. -
-

Note: Your game data in Blockman Go Beta APK is independent of any other servers or versions. You cannot use your Blockman GO account in Blockman Go Beta APK, and vice versa. Also, the official team may occasionally delete or roll back data in Blockman Go Beta APK for testing purposes. Please keep this in mind when playing on the beta server.

-

Q5: How can I give feedback or report bugs in Blockman Go Beta APK?

-

A5. You can give feedback or report bugs in Blockman Go Beta APK by using the following methods:

-
    -
  • Using the feedback button in the app or on the website
  • -
  • Contacting the customer service via email or phone
  • -
  • Joining the official Discord server or Facebook group
  • -
  • Leaving a comment or review on the app store or website
  • -
-

Your feedback or report will be appreciated and valued by the developers, who will try to improve the app and fix the bugs as soon as possible.

-

blockman go beta apk free download latest version

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Ludo King A Royal Game of Dice and Strategy.md b/spaces/congsaPfin/Manga-OCR/logs/Ludo King A Royal Game of Dice and Strategy.md deleted file mode 100644 index 4ec7f9c7282a84d81d97b08819f40e1ffe48eced..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Ludo King A Royal Game of Dice and Strategy.md +++ /dev/null @@ -1,160 +0,0 @@ -
-

Ludo King: The Modern Version of the Royal Game of Pachisi

-

Ludo King is a digital version of the old Ludo board game, classically known as Pachisi, which is well known across Southeast Asia, the Middle East, and some European countries. It is a game that was played by Indian kings and queens in ancient times. Ludo King is a cross-platform multiplayer game that supports desktop, Android, iOS, HTML5, and Windows mobile platforms at the same time. It is one of the most popular games on the internet, with one of the largest online gaming communities. It has more than 900 million downloads worldwide and has won several awards, including the India Gaming Awards 2022 prize for the most popular game of the year. In this article, we will tell you everything you need to know about Ludo King: how to play it, what its benefits are, and some tips and tricks to win it. So, let's get started!

-

How to Play Ludo King

-

Game Modes and Features

-

Ludo King follows the traditional rules and the old school look of the Ludo game. The objective of the game is simple: you have to roll the dice and move your tokens to reach the center of the board before your opponents do. You can play with up to six players in different game modes, such as:

-

ludu king


Download: https://urlca.com/2uO7DM



-
    -
  • Online Multiplayer: You can play with your friends or other players from around the world. You can also invite and challenge your Facebook friends in a private game room. You can chat with them using voice chat and emojis.
  • -
  • Local Multiplayer: You can play with your family and friends on the same device. You can choose between two to six players.
  • -
  • Computer Mode: You can play against the computer when you don't have an internet connection or want to practice your skills.
  • -
  • Tournament Mode: You can participate in exciting tournaments with eight players and win amazing rewards.
  • -
-

Ludo King also offers some other features, such as:

-
    -
  • Quick Mode: A fast game mode where you can finish a game quickly without losing the fun of the full-length multiplayer game.
  • -
  • Team Up Mode: A mode where you can team up with your mates and have interesting two vs. two matches.
  • -
  • Themes: You can choose from different themes to enjoy Ludo in different audio-visual worlds, such as disco, night, nature, Egypt, pinball, candy, Christmas, penguin, battle, Diwali, pirate, Sui Dhaaga, marble, alien, octopus, Taj Mahal, etc.
  • -
  • Inventory: You can access various items in your inventory, such as new dices, funny emojis, voice notes, rewards, etc.
  • -
  • Snake and Ladders: You can also play another classic board game called Snake and Ladders on seven different game boards.
  • -
-

Rules and Strategies

-

The rules of Ludo King are simple and easy to learn. Here are some of them:

-
    -
  • You need four tokens of the same color to start playing. You can choose from red, blue, green, or yellow.
  • -
  • You need to roll a six on the dice to move a token from your home base to the starting square.
  • Try to keep your tokens together where you can, to protect them from being captured or passed by your opponents.
  • -
  • Try to spread your tokens evenly on the board to have more options and flexibility.
  • -
  • Try to use the quick mode or the team up mode if you want to finish the game faster or have more fun with your friends.
  • -
-

Benefits of Playing Ludo King

-

Fun and Entertainment

-

Ludo King is a game that can provide you with hours of fun and entertainment. You can play it anytime, anywhere, with anyone. You can enjoy the nostalgia of the classic Ludo game with a modern twist. You can also explore the different themes and modes that Ludo King offers. You can also customize your game with your own preferences and choices. Ludo King is a game that can make you laugh, cheer, and celebrate with your friends and family.

-

Social and Cognitive Skills

-

Ludo King is not only a game that can entertain you, but also a game that can improve your social and cognitive skills. You can play Ludo King with your friends and family and strengthen your bonds and relationships. You can also make new friends and interact with other players from different countries and cultures. You can chat with them using voice chat and emojis and share your experiences and opinions. Ludo King is a game that can help you develop your communication, cooperation, and teamwork skills.

-

Ludo King can also enhance your cognitive skills, such as memory, concentration, problem-solving, decision-making, and strategy. You can play Ludo King to exercise your brain and challenge yourself. You can also learn from your mistakes and improve your performance. Ludo King is a game that can help you sharpen your mind and increase your mental agility.

-

ludu king online multiplayer

-

Tips and Tricks to Win Ludo King

-

Choose the Right Tokens

-

One of the first things you need to do in Ludo King is to choose the right tokens for your game. You can choose from four colors: red, blue, green, or yellow. Each color has its own advantages and disadvantages, depending on the position of the board and the dice rolls. Here are some tips to help you choose the right tokens:

-
    -
  • Red tokens are good for beginners, as they are easy to move and have less chances of being captured by other players.
  • -
  • Blue tokens are good for advanced players, as they are hard to move but have more chances of capturing other players' tokens.
  • -
  • Green tokens are good for intermediate players, as they are balanced in terms of movement and capture.
  • -
  • Yellow tokens are good for adventurous players, as they are risky but rewarding in terms of movement and capture.
  • -
-

Use the Dice Wisely

-

The dice is the most important element in Ludo King, as it determines how many steps you can move your token. You need to use the dice wisely and strategically to win the game. Here are some tips to help you use the dice wisely:

-
    -
  • Roll the dice gently and smoothly to get a better chance of getting a six or a high number.
  • -
  • Avoid rolling the dice too hard or too fast, as it may result in a low number or a bad bounce.
  • -
  • Try to roll the dice when it is your turn, not before or after, as it may affect the outcome of the dice.
  • -
  • Use the extra turn wisely when you get a six or capture an opponent's token. Don't waste it on moving a token that is already safe or close to home.
  • -
  • Use the quick mode if you want to roll the dice faster and save time.
  • -
-

Be Smart and Aggressive

-

The last tip to win Ludo King is to be smart and aggressive in your gameplay. You need to be smart in choosing which token to move, when to move it, and where to move it. You also need to be aggressive in capturing your opponent's tokens, blocking their paths, and reaching home before them. Here are some tips to help you be smart and aggressive:

-
    -
  • Analyze the board carefully before making a move. Look at the position of your tokens and your opponents' tokens. Choose the token that has the best chance of reaching home or capturing an opponent's token.
  • -
  • Avoid moving a token that is already safe or close to home unless you have no other option. Focus on moving the token that is farthest from home or most vulnerable to capture.
  • -
  • Capture your opponent's tokens whenever possible, especially if they are close to home or blocking your path. This will delay their progress and give you an advantage.
  • -
  • Block your opponent's path with two or more tokens of the same color on the same square. This will prevent them from capturing or passing your tokens.
  • -
  • Reach home as soon as possible with all your tokens. Don't wait for your opponents to catch up or overtake you.
  • -
-

Frequently Asked Questions about Ludo King

-

How to Download and Install Ludo King?

-

Ludo King is available for free on various platforms, such as desktop, Android, iOS, HTML5, and Windows mobile. You can download and install Ludo King by following these steps:

-
    -
  • For desktop: Go to the official website of Ludo King and click on the download button. Choose the platform you want to download for, such as Windows or Mac. Follow the instructions to install the game on your computer.
  • -
  • For Android: Go to the Google Play Store and search for Ludo King. Tap on the install button and wait for the game to download and install on your device.
  • -
  • For iOS: Go to the App Store and search for Ludo King. Tap on the get button and wait for the game to download and install on your device.
  • -
  • For HTML5: Go to any browser that supports HTML5, such as Chrome or Firefox. Go to the official website of Ludo King and click on the play button. Enjoy the game online without any installation.
  • -
  • For Windows mobile: Go to the Microsoft Store and search for Ludo King. Tap on the install button and wait for the game to download and install on your device.
  • -
-

How to Play with Friends Online?

-

Ludo King allows you to play with your friends online in various ways. You can play with them by following these steps:

-
    -
  • Online Multiplayer: Go to the online multiplayer mode and choose between two to six players. You can either join a random game room or create your own private game room. To create a private game room, tap on the create room button and set a password. To join a private game room, tap on the join room button and enter the password. You can also invite and challenge your Facebook friends by tapping on the Facebook icon.
  • -
  • Local Multiplayer: Go to the local multiplayer mode and choose between two to six players. You can play with your friends on the same device by passing it around. You can also connect your device with other devices using Wi-Fi or Bluetooth.
  • -
-

How to Use Voice Chat and Emojis?

-

Ludo King lets you communicate with your friends and other players using voice chat and emojis. You can use them by following these steps:

-
    -
  • Voice Chat: To use voice chat, you need to enable it in the settings menu. You can also mute or unmute yourself or other players by tapping on the microphone icon. To start a voice chat, tap on the voice chat button and hold it while speaking. Release it when you are done speaking.
  • -
  • Emojis: To use emojis, you need to tap on the emoji button and choose from a variety of emojis. You can also unlock new emojis by playing more games or buying them from the inventory.
  • -
-

How to Unlock New Themes and Dices?

-

Ludo King offers you many themes and dices to customize your game experience. You can unlock them by following these steps:

-
    -
  • Themes: To unlock new themes, you need to collect coins by playing more games or buying them from the inventory. You can also get free coins by watching ads or completing offers. To change your theme, go to the settings menu and tap on the theme option. Choose from different themes and apply them to your game.
  • -
  • Dices: To unlock new dices, you need to collect gems by playing more games or buying them from the inventory. You can also get free gems by watching ads or completing offers. To change your dice, go to the settings menu and tap on the dice option. Choose from different dices and apply them to your game.
  • -
-

How to Participate in Tournaments and Events?

-

Ludo King organizes various tournaments and events for its players regularly. You can participate in them by following these steps:

-
    -
  • Tournaments: To participate in tournaments, you need to go to the tournament mode and choose between two types of tournaments: classic or quick. You can also choose between different entry fees and rewards. To join a tournament, tap on the join button and wait for other players to join. The tournament will start when eight players are ready. You need to win three consecutive games to win the tournament.
  • -
  • Events: To participate in events, you need to go to the event mode and choose between different events that are available at that time. You can also choose between different entry fees and rewards. To join an event, tap on the join button and wait for the event to start. You need to complete the event objectives to win the event.
  • -
-

Conclusion

-

Ludo King is a fun and exciting game that can bring back the memories of the classic Ludo game. It is a game that can be played by anyone, anywhere, anytime. It is a game that can provide you with entertainment, socialization, and cognition. It is a game that can challenge you, reward you, and make you happy. If you are looking for a game that can offer you all these benefits and more, then Ludo King is the game for you. Download it now and enjoy the royal game of Pachisi with your friends and family!

-

Frequently Asked Questions

-

Q: Is Ludo King a game of luck or skill?

-

A: Ludo King is a game that involves both luck and skill. Luck plays a role in the dice rolls, which determine how many steps you can move your token. Skill plays a role in the strategy, which determines which token to move, when to move it, and where to move it.

-

Q: Can I play Ludo King offline?

-

A: Yes, you can play Ludo King offline in the computer mode or the local multiplayer mode. You can also play Ludo King online in the online multiplayer mode or the tournament mode.

-

Q: How can I earn more coins and gems in Ludo King?

-

A: You can earn more coins and gems in Ludo King by playing more games, winning more games, participating in tournaments and events, watching ads, completing offers, or buying them from the inventory.

-

Q: How can I report a bug or a problem in Ludo King?

-

A: You can report a bug or a problem in Ludo King by going to the settings menu and tapping on the feedback option. You can also contact the customer support team by emailing them at ludoking@ludoking.com or visiting their website at www.ludoking.com.

-

Q: How can I update Ludo King to the latest version?

-

A: You can update Ludo King to the latest version by going to the platform where you downloaded it from, such as Google Play Store, App Store, Microsoft Store, or official website. You can also enable the auto-update option in your device settings to get the latest updates automatically.

401be4b1e0
-
-
\ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Watch Pirates of the Caribbean 6 Tamil Dubbed Movie Full HD.md b/spaces/congsaPfin/Manga-OCR/logs/Watch Pirates of the Caribbean 6 Tamil Dubbed Movie Full HD.md deleted file mode 100644 index 24b6a7993044b505ce395d9f6483eaabda3539f9..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Watch Pirates of the Caribbean 6 Tamil Dubbed Movie Full HD.md +++ /dev/null @@ -1,89 +0,0 @@ -
-
- The possible release date based on past films and industry trends. | | H2: Cast: Who Will Star in Pirates of the Caribbean 6? | - The confirmed and rumored actors for the movie.
- The return of old favorites and the introduction of new characters.
- The controversy over Johnny Depp's involvement as Jack Sparrow. | | H2: Plot: What Will Pirates of the Caribbean 6 Be About? | - The potential storylines and themes for the movie.
- The connection to previous films and the ride.
- The challenges and opportunities for the franchise. | | H2: Trailer: Where Can You Watch Pirates of the Caribbean 6 Trailer? | - The expected release date and platforms for the trailer.
- The possible scenes and hints in the trailer.
- The fan reactions and expectations for the trailer. | | H1: How to Watch Pirates of the Caribbean 6 Tamil Dubbed Movie Online? | - Introduction: A brief explanation of why some fans prefer to watch the movie in Tamil language. | | H2: Tamilrockers: The Popular Piracy Website for Tamil Movies | - What is Tamilrockers and how does it work?
- The legal and ethical issues of using Tamilrockers.
- The risks and drawbacks of downloading movies from Tamilrockers. | | H2: Alternatives: Other Ways to Watch Pirates of the Caribbean 6 Tamil Dubbed Movie Online | - The official streaming platforms and websites that offer the movie in Tamil language.
- The benefits and advantages of using legal and safe sources.
- The tips and tricks to find the best deals and quality for online streaming. | | H2: Tips: How to Enjoy Pirates of the Caribbean 6 Tamil Dubbed Movie Online? | - The best devices and settings to watch the movie online.
- The recommended accessories and snacks to enhance the viewing experience.
- The ways to interact with other fans and share your opinions online. | | H1: Conclusion | - A summary of the main points and takeaways from the article.
- A call to action for the readers to watch the movie and support the franchise. | Table 2: Article with HTML formatting

Pirates of the Caribbean 6: Everything You Need to Know

-

Pirates of the Caribbean is one of the most successful and beloved movie franchises of all time, based on the popular theme park ride of the same name. Since its debut in 2003, the series has spawned five films that have grossed over $4.5 billion worldwide, making it the 14th-highest-grossing film series ever. The films follow the adventures of Captain Jack Sparrow, a witty and eccentric pirate who often finds himself in trouble with various enemies, allies, and supernatural forces.

-

pirates of the caribbean 6 tamil dubbed movie download tamilrockers


Download Zip ✔✔✔ https://urlca.com/2uO5iT



-

The sixth installment of the franchise, Pirates of the Caribbean 6, is currently in development, with many fans eagerly awaiting its release. However, there is still a lot of mystery and uncertainty surrounding this project, as Disney has not revealed much information about it yet. In this article, we will try to answer some of the most common questions that fans have about Pirates of the Caribbean 6, such as when will it be released, who will star in it, what will it be about, and where can you watch its trailer.

-

Release Date: When Will Pirates of the Caribbean 6 Hit the Theaters?

-

The release date of Pirates of the Caribbean 6 is one of the most anticipated details that fans want to know, but unfortunately, there is no official announcement from Disney yet. However, based on some reports and rumors, we can make some educated guesses.

-

First of all, we know that Disney has hired Craig Mazin (the creator of Chernobyl) and Ted Elliott (one of the writers of Pirates 1-4) to write a new script for Pirates 6, which is reportedly a reboot or a spin-off rather than a direct sequel to Pirates 5. According to producer Jerry Bruckheimer, they were working on a draft as of May 2020, but there is no update on whether they have finished it or not.

-

pirates of the caribbean 6 full movie in tamil free download

-

Secondly, we know that Disney has reserved two dates for untitled live-action films in 2023: July 28 and October 6. Many fans speculate that one of these dates could be for Pirates 6, as the previous films were also released in the summer or fall seasons. However, this is not confirmed and could change depending on the progress of the script and production.

-

Thirdly, we know that the coronavirus pandemic has affected the film industry in many ways, causing delays and cancellations for many projects. It is possible that Pirates 6 could also face some challenges due to the health and safety protocols, travel restrictions, and budget constraints that the pandemic has imposed.

-

Therefore, based on these factors, we can estimate that Pirates 6 could be released sometime in late 2023 or early 2024, if everything goes smoothly. However, this is not a guarantee and we will have to wait for an official confirmation from Disney before we can mark our calendars.

-

Cast: Who Will Star in Pirates of the Caribbean 6?

-

The cast of Pirates of the Caribbean 6 is another topic that fans are curious about, as they want to know who will reprise their roles from the previous films and who will join the franchise as new characters. However, just like the release date, there is no official announcement from Disney yet. However, based on some reports and rumors, we can make some educated guesses.

-

First of all, we know that Johnny Depp, who played the iconic role of Captain Jack Sparrow in all five films, is unlikely to return for Pirates 6. This is due to his ongoing legal battles with his ex-wife Amber Heard, which have damaged his reputation and career. Disney has reportedly dropped him from the franchise and is looking for a new lead actor to replace him. Some names that have been suggested by fans and media outlets include Tom Holland, Zac Efron, Robert Pattinson, and Harry Styles.

-

Secondly, we know that some of the other actors who played important roles in the previous films are also uncertain about their involvement in Pirates 6. For example, Orlando Bloom (Will Turner), Keira Knightley (Elizabeth Swann), Geoffrey Rush (Barbossa), and Brenton Thwaites (Henry Turner) have all expressed their doubts or disinterest in returning for another sequel. However, none of them have officially confirmed or denied their participation yet.

-

Thirdly, we know that some new actors have been rumored to join the cast of Pirates 6, either as new characters or as replacements for existing ones. For example, Margot Robbie (Harley Quinn) is reportedly set to star in a female-led spin-off of Pirates of the Caribbean, which could be connected to Pirates 6. Karen Gillan (Nebula) is also rumored to play a female pirate in Pirates 6, possibly as a new love interest for Jack Sparrow. Additionally, Emma Watson (Hermione Granger) and Daisy Ridley (Rey) have been linked to the role of Redd, a female pirate character from the ride who was also featured in Pirates 5.

-

Therefore, based on these factors, we can estimate that Pirates 6 will have a mix of old and new actors, with some familiar faces and some fresh ones. However, this is not a guarantee and we will have to wait for an official confirmation from Disney before we can get excited about the cast.

-

Plot: What Will Pirates of the Caribbean 6 Be About?

-

The plot of Pirates of the Caribbean 6 is another aspect that fans are interested in, as they want to know what kind of adventures and challenges will await the characters in the next film. However, just like the release date and cast, there is no official announcement from Disney yet. However, based on some reports and rumors, we can make some educated guesses.

-

First of all, we know that Pirates 6 will be a reboot or a spin-off rather than a direct sequel to Pirates 5. This means that it will not continue the storylines and arcs that were established in the previous films, such as the curse of Davy Jones, the fate of Jack Sparrow's compass, or the reunion of Will and Elizabeth. Instead, it will introduce new characters and scenarios that will explore different aspects of the pirate world.

-

Secondly, we know that Pirates 6 will be inspired by the original ride at Disneyland and Walt Disney World. This means that it will feature some elements and scenes from the ride that were not included in the previous films, such as the auction scene where pirates bid on women (which was recently changed to a more politically correct version), the jail scene where pirates try to escape with the help of a dog holding keys (which was briefly shown in Pirates 1), or the battle scene where pirates fire cannons at each other (which was partially shown you might be wondering how you can watch it online. There are many websites and platforms that offer Tamil dubbed versions of Hollywood movies, but not all of them are legal, safe, or reliable. In this section, we will discuss one of the most popular piracy websites for Tamil movies, Tamilrockers, and its alternatives. We will also give you some tips on how to enjoy Pirates of the Caribbean 6 Tamil dubbed movie online.

-

Tamilrockers: The Popular Piracy Website for Tamil Movies

-

Tamilrockers is a notorious website that uploads and distributes pirated copies of Tamil movies, as well as other Indian and Hollywood movies dubbed in Tamil. The website was founded in 2011 and has since become one of the most visited and searched websites in India and other countries. Tamilrockers has a huge collection of movies from various genres, years, and languages, which it offers for free download or online streaming.

-

However, using Tamilrockers is not a good idea for several reasons. First of all, it is illegal and unethical to download or watch pirated movies, as it violates the intellectual property rights of the filmmakers and the distributors. Piracy causes huge losses to the film industry and affects the livelihoods of many people involved in it. By using Tamilrockers, you are supporting a criminal activity and harming the creators of the movies you love.

-

Secondly, using Tamilrockers is not safe or reliable, as it exposes you to various risks and drawbacks. For example, the website is often blocked by the authorities or taken down by the hackers, which makes it inaccessible or unstable. The website also contains many ads and pop-ups that can annoy you or redirect you to malicious sites that can infect your device with viruses or malware. The website also does not guarantee the quality or accuracy of the movies it provides, which can ruin your viewing experience.

-

Therefore, using Tamilrockers is not a wise or responsible choice for watching Pirates of the Caribbean 6 Tamil dubbed movie online. You should avoid this website and look for other ways to watch the movie legally and safely.

-

Alternatives: Other Ways to Watch Pirates of the Caribbean 6 Tamil Dubbed Movie Online

-

If you want to watch Pirates of the Caribbean 6 Tamil dubbed movie online, there are other options that you can consider that are legal, safe, and reliable. These options include the official streaming platforms and websites that offer the movie in Tamil language. These platforms and websites have several benefits and advantages over piracy websites like Tamilrockers.

-

First of all, these platforms and websites are legal and ethical, as they have the proper licenses and permissions to stream or distribute the movie in Tamil language. By using these platforms and websites, you are respecting the intellectual property rights of the filmmakers and the distributors, and supporting their work and efforts. You are also contributing to the film industry and encouraging more quality content to be produced.

-

Secondly, these platforms and websites are safe and reliable, as they protect you from various risks and drawbacks that piracy websites pose. For example, these platforms and websites do not contain any ads or pop-ups that can annoy you or harm your device. They also guarantee the quality and accuracy of the movie they provide, which can enhance your viewing experience. They also offer various features and options that can make your online streaming more convenient and enjoyable.

-

Some of the official streaming platforms and websites that offer Pirates of the Caribbean 6 Tamil dubbed movie online are:

-
    -
  • Disney+ Hotstar: This is the official streaming service of Disney in India, which offers a wide range of content from Disney's various brands, such as Marvel, Star Wars, Pixar, National Geographic, and more. It also offers many Hollywood movies dubbed in Tamil language, including Pirates of the Caribbean 6. You can subscribe to Disney+ Hotstar for Rs. 299 per month or Rs. 1499 per year.
  • -
  • Amazon Prime Video: This is another popular streaming service that offers a large collection of movies and shows from various genres, languages, and countries. It also offers many Hollywood movies dubbed in Tamil language, including Pirates of the Caribbean 6. You can subscribe to Amazon Prime Video for Rs. 129 per month or Rs. 999 per year.
  • -
  • Netflix: This is the most popular streaming service in the world, which offers a huge library of original and licensed content from various categories, regions, and languages. It also offers some Hollywood movies dubbed in Tamil language, including Pirates of the Caribbean 6. You can subscribe to Netflix for Rs. 199 to Rs. 799 per month, depending on the plan you choose.
  • -
-

Therefore, these are some of the alternatives that you can use to watch Pirates of the Caribbean 6 Tamil dubbed movie online. You should choose the one that suits your preferences and budget, and enjoy the movie legally and safely.

-

Tips: How to Enjoy Pirates of the Caribbean 6 Tamil Dubbed Movie Online?

-

If you have decided to watch Pirates of the Caribbean 6 Tamil dubbed movie online, you might want to know how you can make the most out of your online streaming experience. There are some tips and tricks that you can follow to enjoy the movie online, such as:

-
    -
  • Choose the best device and settings: You should watch the movie on a device that has a good screen size, resolution, sound quality, and battery life. You should also adjust the brightness, volume, and subtitles according to your comfort and convenience. You should also make sure that your device has a stable internet connection and enough storage space.
  • -
  • Use the best accessories and snacks: You should use some accessories that can enhance your viewing experience, such as headphones, speakers, or a projector. You should also prepare some snacks and drinks that you like, such as popcorn, chips, soda, or coffee. You should also keep some tissues or napkins handy in case you spill something or get emotional.
  • -
  • Interact with other fans and share your opinions: You should join some online communities and forums where you can chat with other fans who are watching or have watched the movie. You can share your thoughts, feelings, questions, and feedback about the movie with them. You can also participate in some polls, quizzes, games, or contests related to the movie. You can also post some reviews, ratings, or comments on the platforms or websites where you watched the movie.
  • -
-

Therefore, these are some of the tips and tricks that you can use to enjoy Pirates of the Caribbean 6 Tamil dubbed movie online. You should follow them and have a fun and memorable time watching the movie.

-

Conclusion

-

Pirates of the Caribbean 6 is one of the most awaited movies of the year, as it promises to deliver another thrilling and entertaining adventure in the pirate world. However, there is still a lot of mystery and uncertainty surrounding this project, as Disney has not revealed much information about it yet. In this article, we tried to answer some of the most common questions that fans have about Pirates of the Caribbean 6, such as when it will be released, who will star in it, what it will be about, and where you can watch its trailer. We also discussed how you can watch Pirates of the Caribbean 6 Tamil dubbed movie online, and which platforms and websites are best for doing so. We also gave you some tips and tricks on how to enjoy Pirates of the Caribbean 6 Tamil dubbed movie online.

-

We hope that this article was helpful and informative for you, and that it answered some of your questions and doubts about Pirates of the Caribbean 6. We also hope that you are excited and ready to watch the movie when it comes out, and that you will support the franchise and the filmmakers by watching it legally and safely. Pirates of the Caribbean 6 is expected to be a great movie that will delight and entertain you, so don't miss it!

-

FAQs

-

Here are some of the frequently asked questions that fans have about Pirates of the Caribbean 6:

-
    -
  1. Is Pirates of the Caribbean 6 the last movie in the franchise?
  -There is no official answer to this question yet, but it is possible that Pirates of the Caribbean 6 could be the last movie in the franchise, or at least the last one featuring Jack Sparrow. Disney has reportedly planned to reboot the franchise or spin it off with new characters and stories, which could mean that Pirates 6 will be the end of an era. However, this is not confirmed and could change depending on the success and reception of Pirates 6.
  2. -
  3. Will there be a post-credits scene in Pirates of the Caribbean 6?
    -There is no official answer to this question yet, but it is likely that there will be a post-credits scene in Pirates of the Caribbean 6, as all the previous films had one. The post-credits scene usually teases a possible sequel or a plot twist that could affect the future of the franchise. However, this is not guaranteed and could depend on the direction and intention of the filmmakers.
  4. -
  5. How can I watch Pirates of the Caribbean 6 in 3D or IMAX?
    -There is no official answer to this question yet, but it is probable that Pirates of the Caribbean 6 will be available in 3D or IMAX formats, as all the previous films were. The 3D or IMAX formats can enhance the visual effects and immersion of the movie, making it more realistic and spectacular. However, this is not certain and could vary depending on your location and availability.
  6. -
  7. Who is the composer of Pirates of the Caribbean 6?
    -There is no official answer to this question yet, but it is possible that Hans Zimmer will be the composer of Pirates of the Caribbean 6, as he was for Pirates 2-4. Hans Zimmer is one of the most acclaimed and influential composers in Hollywood, who has created memorable and iconic scores for many movies, such as The Lion King, Gladiator, Inception, Interstellar, and more. He also co-composed the score for Pirates 1 with Klaus Badelt. However, this is not confirmed and could change depending on the choice of the filmmakers.
  8. -
  9. Where can I find more information about Pirates of the Caribbean 6?
    -There is no official source of information about Pirates of the Caribbean 6 yet, but you can find some unofficial and fan-made sources online, such as websites, blogs, podcasts, videos, or social media pages. These sources can provide you with some news, rumors, theories, speculations, or opinions about Pirates 6. However, you should be careful and critical when using these sources, as they may not be accurate or reliable.
  10. -

401be4b1e0
-
-
\ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/Agcmacroshadowbane The Best Macro Program for Shadowbane.md b/spaces/contluForse/HuggingGPT/assets/Agcmacroshadowbane The Best Macro Program for Shadowbane.md deleted file mode 100644 index 5b2b740e676897064718decf59cea7988f7a0931..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Agcmacroshadowbane The Best Macro Program for Shadowbane.md +++ /dev/null @@ -1,6 +0,0 @@ -

Artcam Express Download kaeintr


DOWNLOAD ⚹⚹⚹ https://ssurll.com/2uzw9W



- - aaccfb2cb3
-
-
-

diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/runner/hooks/logger/__init__.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/runner/hooks/logger/__init__.py deleted file mode 100644 index a0b6b345640a895368ac8a647afef6f24333d90e..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/runner/hooks/logger/__init__.py +++ /dev/null @@ -1,15 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .base import LoggerHook -from .dvclive import DvcliveLoggerHook -from .mlflow import MlflowLoggerHook -from .neptune import NeptuneLoggerHook -from .pavi import PaviLoggerHook -from .tensorboard import TensorboardLoggerHook -from .text import TextLoggerHook -from .wandb import WandbLoggerHook - -__all__ = [ - 'LoggerHook', 'MlflowLoggerHook', 'PaviLoggerHook', - 'TensorboardLoggerHook', 'TextLoggerHook', 'WandbLoggerHook', - 'NeptuneLoggerHook', 'DvcliveLoggerHook' -] diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/oneformer/modeling/pixel_decoder/ops/make.sh b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/oneformer/modeling/pixel_decoder/ops/make.sh deleted file mode 100644 index ca5c0b469da786c847ba04d437bb31ee0fc938da..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/oneformer/modeling/pixel_decoder/ops/make.sh +++ /dev/null @@ -1,13 +0,0 @@ -#!/usr/bin/env bash -# ------------------------------------------------------------------------------------------------ -# Deformable DETR -# Copyright (c) 2020 SenseTime. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------------------------------ -# Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0 -# ------------------------------------------------------------------------------------------------ - -# Copyright (c) Facebook, Inc. and its affiliates. -# Modified by Bowen Cheng from https://github.com/fundamentalvision/Deformable-DETR - -FORCE_CUDA=1 python setup.py build install diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/models/base_models/midas_repo/ros/README.md b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/models/base_models/midas_repo/ros/README.md deleted file mode 100644 index 1d43c2606767798ee46b34292e0483197424ec23..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/models/base_models/midas_repo/ros/README.md +++ /dev/null @@ -1,131 +0,0 @@ -# MiDaS for ROS1 by using LibTorch in C++ - -### Requirements - -- Ubuntu 17.10 / 18.04 / 20.04, Debian Stretch -- ROS Melodic for Ubuntu (17.10 / 18.04) / Debian Stretch, ROS Noetic for Ubuntu 20.04 -- C++11 -- LibTorch >= 1.6 - -## Quick Start with a MiDaS Example - -MiDaS is a neural network to compute depth from a single image. 
- -* input from `image_topic`: `sensor_msgs/Image` - `RGB8` image with any shape -* output to `midas_topic`: `sensor_msgs/Image` - `TYPE_32FC1` inverse relative depth maps in range [0 - 255] with original size and channels=1 - -### Install Dependecies - -* install ROS Melodic for Ubuntu 17.10 / 18.04: -```bash -wget https://raw.githubusercontent.com/isl-org/MiDaS/master/ros/additions/install_ros_melodic_ubuntu_17_18.sh -./install_ros_melodic_ubuntu_17_18.sh -``` - -or Noetic for Ubuntu 20.04: - -```bash -wget https://raw.githubusercontent.com/isl-org/MiDaS/master/ros/additions/install_ros_noetic_ubuntu_20.sh -./install_ros_noetic_ubuntu_20.sh -``` - - -* install LibTorch 1.7 with CUDA 11.0: - -On **Jetson (ARM)**: -```bash -wget https://nvidia.box.com/shared/static/wa34qwrwtk9njtyarwt5nvo6imenfy26.whl -O torch-1.7.0-cp36-cp36m-linux_aarch64.whl -sudo apt-get install python3-pip libopenblas-base libopenmpi-dev -pip3 install Cython -pip3 install numpy torch-1.7.0-cp36-cp36m-linux_aarch64.whl -``` -Or compile LibTorch from source: https://github.com/pytorch/pytorch#from-source - -On **Linux (x86_64)**: -```bash -cd ~/ -wget https://download.pytorch.org/libtorch/cu110/libtorch-cxx11-abi-shared-with-deps-1.7.0%2Bcu110.zip -unzip libtorch-cxx11-abi-shared-with-deps-1.7.0+cu110.zip -``` - -* create symlink for OpenCV: - -```bash -sudo ln -s /usr/include/opencv4 /usr/include/opencv -``` - -* download and install MiDaS: - -```bash -source ~/.bashrc -cd ~/ -mkdir catkin_ws -cd catkin_ws -git clone https://github.com/isl-org/MiDaS -mkdir src -cp -r MiDaS/ros/* src - -chmod +x src/additions/*.sh -chmod +x src/*.sh -chmod +x src/midas_cpp/scripts/*.py -cp src/additions/do_catkin_make.sh ./do_catkin_make.sh -./do_catkin_make.sh -./src/additions/downloads.sh -``` - -### Usage - -* run only `midas` node: `~/catkin_ws/src/launch_midas_cpp.sh` - -#### Test - -* Test - capture video and show result in the window: - * place any `test.mp4` video file to the directory `~/catkin_ws/src/` - * run `midas` node: `~/catkin_ws/src/launch_midas_cpp.sh` - * run test nodes in another terminal: `cd ~/catkin_ws/src && ./run_talker_listener_test.sh` and wait 30 seconds - - (to use Python 2, run command `sed -i 's/python3/python2/' ~/catkin_ws/src/midas_cpp/scripts/*.py` ) - -## Mobile version of MiDaS - Monocular Depth Estimation - -### Accuracy - -* MiDaS v2 small - ResNet50 default-decoder 384x384 -* MiDaS v2.1 small - EfficientNet-Lite3 small-decoder 256x256 - -**Zero-shot error** (the lower - the better): - -| Model | DIW WHDR | Eth3d AbsRel | Sintel AbsRel | Kitti δ>1.25 | NyuDepthV2 δ>1.25 | TUM δ>1.25 | -|---|---|---|---|---|---|---| -| MiDaS v2 small 384x384 | **0.1248** | 0.1550 | **0.3300** | **21.81** | 15.73 | 17.00 | -| MiDaS v2.1 small 256x256 | 0.1344 | **0.1344** | 0.3370 | 29.27 | **13.43** | **14.53** | -| Relative improvement, % | -8 % | **+13 %** | -2 % | -34 % | **+15 %** | **+15 %** | - -None of Train/Valid/Test subsets of datasets (DIW, Eth3d, Sintel, Kitti, NyuDepthV2, TUM) were not involved in Training or Fine Tuning. - -### Inference speed (FPS) on nVidia GPU - -Inference speed excluding pre and post processing, batch=1, **Frames Per Second** (the higher - the better): - -| Model | Jetson Nano, FPS | RTX 2080Ti, FPS | -|---|---|---| -| MiDaS v2 small 384x384 | 1.6 | 117 | -| MiDaS v2.1 small 256x256 | 8.1 | 232 | -| SpeedUp, X times | **5x** | **2x** | - -### Citation - -This repository contains code to compute depth from a single image. 
It accompanies our [paper](https://arxiv.org/abs/1907.01341v3): - ->Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot Cross-dataset Transfer -René Ranftl, Katrin Lasinger, David Hafner, Konrad Schindler, Vladlen Koltun - -Please cite our paper if you use this code or any of the models: -``` -@article{Ranftl2020, - author = {Ren\'{e} Ranftl and Katrin Lasinger and David Hafner and Konrad Schindler and Vladlen Koltun}, - title = {Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot Cross-dataset Transfer}, - journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)}, - year = {2020}, -} -``` diff --git a/spaces/crylake/img2poem/README.md b/spaces/crylake/img2poem/README.md deleted file mode 100644 index 14015ae6c860ea539581b5a0ab9c9c71254c2a33..0000000000000000000000000000000000000000 --- a/spaces/crylake/img2poem/README.md +++ /dev/null @@ -1,41 +0,0 @@ ---- -title: Img2poem -emoji: ✒️ -colorFrom: pink -colorTo: blue -sdk: streamlit -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. - -# Acknowledgement - -We thank the author of [Query2labels](https://github.com/SlongLiu/query2labels) and HuggingFace for facilitating such an opportunity for us to create this model. 
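Putting the configuration fields above together, a minimal front matter block for a Streamlit Space might look like the sketch below. It mirrors the values at the top of this README; the `sdk_version` value is only an illustrative placeholder, not a pinned requirement:

```yaml
---
title: Img2poem
emoji: ✒️
colorFrom: pink
colorTo: blue
sdk: streamlit
sdk_version: 1.10.0   # illustrative placeholder; only applicable for the streamlit SDK
app_file: app.py
pinned: false
---
```

As noted above, `sdk_version` applies only when `sdk` is `streamlit`, and `app_file` is resolved relative to the root of the repository.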
\ No newline at end of file diff --git a/spaces/cuiyuan605/chatgpt-demo/app.py b/spaces/cuiyuan605/chatgpt-demo/app.py deleted file mode 100644 index f7258c58657c90b52cd635eae4645503c206e207..0000000000000000000000000000000000000000 --- a/spaces/cuiyuan605/chatgpt-demo/app.py +++ /dev/null @@ -1,138 +0,0 @@ -import gradio as gr -import openai -import requests -import csv - - -prompt_templates = {"Default ChatGPT": ""} - -def get_empty_state(): - return {"total_tokens": 0, "messages": []} - -def download_prompt_templates(): - url = "https://raw.githubusercontent.com/f/awesome-chatgpt-prompts/main/prompts.csv" - try: - response = requests.get(url) - reader = csv.reader(response.text.splitlines()) - next(reader) # skip the header row - for row in reader: - if len(row) >= 2: - act = row[0].strip('"') - prompt = row[1].strip('"') - prompt_templates[act] = prompt - - except requests.exceptions.RequestException as e: - print(f"An error occurred while downloading prompt templates: {e}") - return - - choices = list(prompt_templates.keys()) - choices = choices[:1] + sorted(choices[1:]) - return gr.update(value=choices[0], choices=choices) - -def on_token_change(user_token): - openai.api_key = user_token - -def on_prompt_template_change(prompt_template): - if not isinstance(prompt_template, str): return - return prompt_templates[prompt_template] - -def submit_message(user_token, prompt, prompt_template, temperature, max_tokens, context_length, state): - - history = state['messages'] - - if not prompt: - return gr.update(value=''), [(history[i]['content'], history[i+1]['content']) for i in range(0, len(history)-1, 2)], f"Total tokens used: {state['total_tokens']}", state - - prompt_template = prompt_templates[prompt_template] - - system_prompt = [] - if prompt_template: - system_prompt = [{ "role": "system", "content": prompt_template }] - - prompt_msg = { "role": "user", "content": prompt } - - if not user_token: - history.append(prompt_msg) - history.append({ - "role": "system", - "content": "Error: OpenAI API Key is not set." 
- }) - return '', [(history[i]['content'], history[i+1]['content']) for i in range(0, len(history)-1, 2)], f"Total tokens used: 0", state - - try: - completion = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=system_prompt + history[-context_length*2:] + [prompt_msg], temperature=temperature, max_tokens=max_tokens) - - history.append(prompt_msg) - history.append(completion.choices[0].message.to_dict()) - - state['total_tokens'] += completion['usage']['total_tokens'] - - except Exception as e: - history.append(prompt_msg) - history.append({ - "role": "system", - "content": f"Error: {e}" - }) - - total_tokens_used_msg = f"Total tokens used: {state['total_tokens']}" - chat_messages = [(history[i]['content'], history[i+1]['content']) for i in range(0, len(history)-1, 2)] - - return '', chat_messages, total_tokens_used_msg, state - -def clear_conversation(): - return gr.update(value=None, visible=True), None, "", get_empty_state() - - -css = """ - #col-container {max-width: 80%; margin-left: auto; margin-right: auto;} - #chatbox {min-height: 400px;} - #header {text-align: center;} - #prompt_template_preview {padding: 1em; border-width: 1px; border-style: solid; border-color: #e0e0e0; border-radius: 4px;} - #total_tokens_str {text-align: right; font-size: 0.8em; color: #666;} - #label {font-size: 0.8em; padding: 0.5em; margin: 0;} - .message { font-size: 1.2em; } - """ - -with gr.Blocks(css=css) as demo: - - state = gr.State(get_empty_state()) - - - with gr.Column(elem_id="col-container"): - gr.Markdown("""## OpenAI ChatGPT Demo - Using the ofiicial API (gpt-3.5-turbo model) - Prompt templates from [awesome-chatgpt-prompts](https://github.com/f/awesome-chatgpt-prompts).""", - elem_id="header") - - with gr.Row(): - with gr.Column(): - chatbot = gr.Chatbot(elem_id="chatbox") - input_message = gr.Textbox(show_label=False, placeholder="Enter text and press enter", visible=True).style(container=False) - btn_submit = gr.Button("Submit") - total_tokens_str = gr.Markdown(elem_id="total_tokens_str") - btn_clear_conversation = gr.Button("🔃 Start New Conversation") - with gr.Column(): - gr.Markdown("Enter your OpenAI API Key. You can get one [here](https://platform.openai.com/account/api-keys).", elem_id="label") - user_token = gr.Textbox(value='', placeholder="OpenAI API Key", type="password", show_label=False) - prompt_template = gr.Dropdown(label="Set a custom insruction for the chatbot:", choices=list(prompt_templates.keys())) - prompt_template_preview = gr.Markdown(elem_id="prompt_template_preview") - with gr.Accordion("Advanced parameters", open=False): - temperature = gr.Slider(minimum=0, maximum=2.0, value=0.7, step=0.1, label="Temperature", info="Higher = more creative/chaotic") - max_tokens = gr.Slider(minimum=100, maximum=4096, value=1000, step=1, label="Max tokens per response") - context_length = gr.Slider(minimum=1, maximum=10, value=2, step=1, label="Context length", info="Number of previous messages to send to the chatbot. Be careful with high values, it can blow up the token budget quickly.") - - gr.HTML('''


You can duplicate this Space to skip the queue:Duplicate Space
-

visitors

''') - - btn_submit.click(submit_message, [user_token, input_message, prompt_template, temperature, max_tokens, context_length, state], [input_message, chatbot, total_tokens_str, state]) - input_message.submit(submit_message, [user_token, input_message, prompt_template, temperature, max_tokens, context_length, state], [input_message, chatbot, total_tokens_str, state]) - btn_clear_conversation.click(clear_conversation, [], [input_message, chatbot, total_tokens_str, state]) - prompt_template.change(on_prompt_template_change, inputs=[prompt_template], outputs=[prompt_template_preview]) - user_token.change(on_token_change, inputs=[user_token], outputs=[]) - - - demo.load(download_prompt_templates, inputs=None, outputs=[prompt_template], queur=False) - - -demo.queue(concurrency_count=10) -demo.launch(height='800px') diff --git a/spaces/dawood/Kanye-AI/inference/__init__.py b/spaces/dawood/Kanye-AI/inference/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/PIL/ImageSequence.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/PIL/ImageSequence.py deleted file mode 100644 index c4bb6334acfde7d245c5bb1722b7c2381661e4ca..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/PIL/ImageSequence.py +++ /dev/null @@ -1,76 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# sequence support classes -# -# history: -# 1997-02-20 fl Created -# -# Copyright (c) 1997 by Secret Labs AB. -# Copyright (c) 1997 by Fredrik Lundh. -# -# See the README file for information on usage and redistribution. -# - -## - - -class Iterator: - """ - This class implements an iterator object that can be used to loop - over an image sequence. - - You can use the ``[]`` operator to access elements by index. This operator - will raise an :py:exc:`IndexError` if you try to access a nonexistent - frame. - - :param im: An image object. - """ - - def __init__(self, im): - if not hasattr(im, "seek"): - msg = "im must have seek method" - raise AttributeError(msg) - self.im = im - self.position = getattr(self.im, "_min_frame", 0) - - def __getitem__(self, ix): - try: - self.im.seek(ix) - return self.im - except EOFError as e: - raise IndexError from e # end of sequence - - def __iter__(self): - return self - - def __next__(self): - try: - self.im.seek(self.position) - self.position += 1 - return self.im - except EOFError as e: - raise StopIteration from e - - -def all_frames(im, func=None): - """ - Applies a given function to all frames in an image or a list of images. - The frames are returned as a list of separate images. - - :param im: An image, or a list of images. - :param func: The function to apply to all of the image frames. - :returns: A list of images. 
- """ - if not isinstance(im, list): - im = [im] - - ims = [] - for imSequence in im: - current = imSequence.tell() - - ims += [im_frame.copy() for im_frame in Iterator(imSequence)] - - imSequence.seek(current) - return [func(im) for im in ims] if func else ims diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/attr/_config.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/attr/_config.py deleted file mode 100644 index 96d4200773d85eef9e846a4e57d63d0f2ee1b9aa..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/attr/_config.py +++ /dev/null @@ -1,31 +0,0 @@ -# SPDX-License-Identifier: MIT - - -__all__ = ["set_run_validators", "get_run_validators"] - -_run_validators = True - - -def set_run_validators(run): - """ - Set whether or not validators are run. By default, they are run. - - .. deprecated:: 21.3.0 It will not be removed, but it also will not be - moved to new ``attrs`` namespace. Use `attrs.validators.set_disabled()` - instead. - """ - if not isinstance(run, bool): - raise TypeError("'run' must be bool.") - global _run_validators - _run_validators = run - - -def get_run_validators(): - """ - Return whether or not validators are run. - - .. deprecated:: 21.3.0 It will not be removed, but it also will not be - moved to new ``attrs`` namespace. Use `attrs.validators.get_disabled()` - instead. - """ - return _run_validators diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/_h_h_e_a.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/_h_h_e_a.py deleted file mode 100644 index c293fafcb040ed5fe661c28278309bae3a419b2b..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/_h_h_e_a.py +++ /dev/null @@ -1,134 +0,0 @@ -from fontTools.misc import sstruct -from fontTools.misc.textTools import safeEval -from fontTools.misc.fixedTools import ( - ensureVersionIsLong as fi2ve, - versionToFixed as ve2fi, -) -from . 
import DefaultTable -import math - - -hheaFormat = """ - > # big endian - tableVersion: L - ascent: h - descent: h - lineGap: h - advanceWidthMax: H - minLeftSideBearing: h - minRightSideBearing: h - xMaxExtent: h - caretSlopeRise: h - caretSlopeRun: h - caretOffset: h - reserved0: h - reserved1: h - reserved2: h - reserved3: h - metricDataFormat: h - numberOfHMetrics: H -""" - - -class table__h_h_e_a(DefaultTable.DefaultTable): - - # Note: Keep in sync with table__v_h_e_a - - dependencies = ["hmtx", "glyf", "CFF ", "CFF2"] - - # OpenType spec renamed these, add aliases for compatibility - @property - def ascender(self): - return self.ascent - - @ascender.setter - def ascender(self, value): - self.ascent = value - - @property - def descender(self): - return self.descent - - @descender.setter - def descender(self, value): - self.descent = value - - def decompile(self, data, ttFont): - sstruct.unpack(hheaFormat, data, self) - - def compile(self, ttFont): - if ttFont.recalcBBoxes and ( - ttFont.isLoaded("glyf") - or ttFont.isLoaded("CFF ") - or ttFont.isLoaded("CFF2") - ): - self.recalc(ttFont) - self.tableVersion = fi2ve(self.tableVersion) - return sstruct.pack(hheaFormat, self) - - def recalc(self, ttFont): - if "hmtx" in ttFont: - hmtxTable = ttFont["hmtx"] - self.advanceWidthMax = max(adv for adv, _ in hmtxTable.metrics.values()) - - boundsWidthDict = {} - if "glyf" in ttFont: - glyfTable = ttFont["glyf"] - for name in ttFont.getGlyphOrder(): - g = glyfTable[name] - if g.numberOfContours == 0: - continue - if g.numberOfContours < 0 and not hasattr(g, "xMax"): - # Composite glyph without extents set. - # Calculate those. - g.recalcBounds(glyfTable) - boundsWidthDict[name] = g.xMax - g.xMin - elif "CFF " in ttFont or "CFF2" in ttFont: - if "CFF " in ttFont: - topDict = ttFont["CFF "].cff.topDictIndex[0] - else: - topDict = ttFont["CFF2"].cff.topDictIndex[0] - charStrings = topDict.CharStrings - for name in ttFont.getGlyphOrder(): - cs = charStrings[name] - bounds = cs.calcBounds(charStrings) - if bounds is not None: - boundsWidthDict[name] = int( - math.ceil(bounds[2]) - math.floor(bounds[0]) - ) - - if boundsWidthDict: - minLeftSideBearing = float("inf") - minRightSideBearing = float("inf") - xMaxExtent = -float("inf") - for name, boundsWidth in boundsWidthDict.items(): - advanceWidth, lsb = hmtxTable[name] - rsb = advanceWidth - lsb - boundsWidth - extent = lsb + boundsWidth - minLeftSideBearing = min(minLeftSideBearing, lsb) - minRightSideBearing = min(minRightSideBearing, rsb) - xMaxExtent = max(xMaxExtent, extent) - self.minLeftSideBearing = minLeftSideBearing - self.minRightSideBearing = minRightSideBearing - self.xMaxExtent = xMaxExtent - - else: # No glyph has outlines. 
- self.minLeftSideBearing = 0 - self.minRightSideBearing = 0 - self.xMaxExtent = 0 - - def toXML(self, writer, ttFont): - formatstring, names, fixes = sstruct.getformat(hheaFormat) - for name in names: - value = getattr(self, name) - if name == "tableVersion": - value = fi2ve(value) - value = "0x%08x" % value - writer.simpletag(name, value=value) - writer.newline() - - def fromXML(self, name, attrs, content, ttFont): - if name == "tableVersion": - setattr(self, name, ve2fi(attrs["value"])) - return - setattr(self, name, safeEval(attrs["value"])) diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/csv-b0b7514a.js b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/csv-b0b7514a.js deleted file mode 100644 index 511b34b2aed1552447a6605d45d0760eccb992ab..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/csv-b0b7514a.js +++ /dev/null @@ -1,2 +0,0 @@ -import{d as a}from"./dsv-576afacd.js";var s=a(","),v=s.parse,o=s.parseRows;export{v as a,o as c}; -//# sourceMappingURL=csv-b0b7514a.js.map diff --git a/spaces/dccif/Real-CUGAN/app.py b/spaces/dccif/Real-CUGAN/app.py deleted file mode 100644 index 2439c5cec6b61e8a517f957daf710cbb6b5c3cf6..0000000000000000000000000000000000000000 --- a/spaces/dccif/Real-CUGAN/app.py +++ /dev/null @@ -1,62 +0,0 @@ -from upcunet_v3 import RealWaifuUpScaler -import gradio as gr -import time -import logging -import os -from PIL import ImageOps -import numpy as np -import math - - -def greet(input_img, input_model_name, input_tile_mode): - # if input_img.size[0] * input_img.size[1] > 256 * 256: - # y = int(math.sqrt(256*256/input_img.size[0]*input_img.size[1])) - # x = int(input_img.size[0]/input_img.size[1]*y) - # input_img = ImageOps.fit(input_img, (x, y)) - input_img = np.array(input_img) - if input_model_name not in model_cache: - t1 = time.time() - upscaler = RealWaifuUpScaler(input_model_name[2], ModelPath + input_model_name, half=False, device="cpu") - t2 = time.time() - logger.info(f'load model time, {t2 - t1}') - model_cache[input_model_name] = upscaler - else: - upscaler = model_cache[input_model_name] - logger.info(f'load model from cache') - - start = time.time() - result = upscaler(input_img, tile_mode=input_tile_mode) - end = time.time() - logger.info(f'input_model_name, {input_model_name}') - logger.info(f'input_tile_mode, {input_tile_mode}') - logger.info(f'input shape, {input_img.shape}') - logger.info(f'output shape, {result.shape}') - logger.info(f'speed time, {end - start}') - return result - - -if __name__ == '__main__': - logging.basicConfig(level=logging.INFO, format="[%(asctime)s] [%(process)d] [%(levelname)s] %(message)s") - logger = logging.getLogger() - - ModelPath = "weights_v3/" - model_cache = {} - - input_model_name = gr.inputs.Dropdown(os.listdir(ModelPath), default="up2x-latest-denoise2x.pth", label='选择model') - input_tile_mode = gr.inputs.Dropdown([0, 1, 2, 3, 4], default=2, label='选择tile_mode') - input_img = gr.inputs.Image(label='image', type='pil') - - inputs = [input_img, input_model_name, input_tile_mode] - outputs = "image" - iface = gr.Interface(fn=greet, - inputs=inputs, - outputs=outputs, - allow_screenshot=False, - allow_flagging='never', - examples=[['test-img.jpg', "up2x-latest-denoise2x.pth", 2]], - 
article='[https://github.com/bilibili/ailab/tree/main/Real-CUGAN](https://github.com/bilibili/ailab/tree/main/Real-CUGAN)
' - '感谢b站开源的项目,图片过大会导致内存不足,所有我将图片裁剪小,想体验大图片的效果请自行前往上面的链接。
' - '修改bbb' - 'The large image will lead to memory limit exceeded. So I crop and resize image. ' - 'If you want to experience the large image, please go to the link above.') - iface.launch() diff --git a/spaces/declare-lab/tango/diffusers/src/diffusers/image_processor.py b/spaces/declare-lab/tango/diffusers/src/diffusers/image_processor.py deleted file mode 100644 index 80e3412991cfb925816eda85e38210292802ceef..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/src/diffusers/image_processor.py +++ /dev/null @@ -1,177 +0,0 @@ -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import warnings -from typing import Union - -import numpy as np -import PIL -import torch -from PIL import Image - -from .configuration_utils import ConfigMixin, register_to_config -from .utils import CONFIG_NAME, PIL_INTERPOLATION - - -class VaeImageProcessor(ConfigMixin): - """ - Image Processor for VAE - - Args: - do_resize (`bool`, *optional*, defaults to `True`): - Whether to downscale the image's (height, width) dimensions to multiples of `vae_scale_factor`. - vae_scale_factor (`int`, *optional*, defaults to `8`): - VAE scale factor. If `do_resize` is True, the image will be automatically resized to multiples of this - factor. - resample (`str`, *optional*, defaults to `lanczos`): - Resampling filter to use when resizing the image. - do_normalize (`bool`, *optional*, defaults to `True`): - Whether to normalize the image to [-1,1] - """ - - config_name = CONFIG_NAME - - @register_to_config - def __init__( - self, - do_resize: bool = True, - vae_scale_factor: int = 8, - resample: str = "lanczos", - do_normalize: bool = True, - ): - super().__init__() - - @staticmethod - def numpy_to_pil(images): - """ - Convert a numpy image or a batch of images to a PIL image. - """ - if images.ndim == 3: - images = images[None, ...] - images = (images * 255).round().astype("uint8") - if images.shape[-1] == 1: - # special case for grayscale (single channel) images - pil_images = [Image.fromarray(image.squeeze(), mode="L") for image in images] - else: - pil_images = [Image.fromarray(image) for image in images] - - return pil_images - - @staticmethod - def numpy_to_pt(images): - """ - Convert a numpy image to a pytorch tensor - """ - if images.ndim == 3: - images = images[..., None] - - images = torch.from_numpy(images.transpose(0, 3, 1, 2)) - return images - - @staticmethod - def pt_to_numpy(images): - """ - Convert a numpy image to a pytorch tensor - """ - images = images.cpu().permute(0, 2, 3, 1).float().numpy() - return images - - @staticmethod - def normalize(images): - """ - Normalize an image array to [-1,1] - """ - return 2.0 * images - 1.0 - - def resize(self, images: PIL.Image.Image) -> PIL.Image.Image: - """ - Resize a PIL image. 
Both height and width will be downscaled to the next integer multiple of `vae_scale_factor` - """ - w, h = images.size - w, h = (x - x % self.vae_scale_factor for x in (w, h)) # resize to integer multiple of vae_scale_factor - images = images.resize((w, h), resample=PIL_INTERPOLATION[self.resample]) - return images - - def preprocess( - self, - image: Union[torch.FloatTensor, PIL.Image.Image, np.ndarray], - ) -> torch.Tensor: - """ - Preprocess the image input, accepted formats are PIL images, numpy arrays or pytorch tensors" - """ - supported_formats = (PIL.Image.Image, np.ndarray, torch.Tensor) - if isinstance(image, supported_formats): - image = [image] - elif not (isinstance(image, list) and all(isinstance(i, supported_formats) for i in image)): - raise ValueError( - f"Input is in incorrect format: {[type(i) for i in image]}. Currently, we only support {', '.join(supported_formats)}" - ) - - if isinstance(image[0], PIL.Image.Image): - if self.do_resize: - image = [self.resize(i) for i in image] - image = [np.array(i).astype(np.float32) / 255.0 for i in image] - image = np.stack(image, axis=0) # to np - image = self.numpy_to_pt(image) # to pt - - elif isinstance(image[0], np.ndarray): - image = np.concatenate(image, axis=0) if image[0].ndim == 4 else np.stack(image, axis=0) - image = self.numpy_to_pt(image) - _, _, height, width = image.shape - if self.do_resize and (height % self.vae_scale_factor != 0 or width % self.vae_scale_factor != 0): - raise ValueError( - f"Currently we only support resizing for PIL image - please resize your numpy array to be divisible by {self.vae_scale_factor}" - f"currently the sizes are {height} and {width}. You can also pass a PIL image instead to use resize option in VAEImageProcessor" - ) - - elif isinstance(image[0], torch.Tensor): - image = torch.cat(image, axis=0) if image[0].ndim == 4 else torch.stack(image, axis=0) - _, _, height, width = image.shape - if self.do_resize and (height % self.vae_scale_factor != 0 or width % self.vae_scale_factor != 0): - raise ValueError( - f"Currently we only support resizing for PIL image - please resize your pytorch tensor to be divisible by {self.vae_scale_factor}" - f"currently the sizes are {height} and {width}. You can also pass a PIL image instead to use resize option in VAEImageProcessor" - ) - - # expected range [0,1], normalize to [-1,1] - do_normalize = self.do_normalize - if image.min() < 0: - warnings.warn( - "Passing `image` as torch tensor with value range in [-1,1] is deprecated. The expected value range for image tensor is [0,1] " - f"when passing as pytorch tensor or numpy Array. 
You passed `image` with value range [{image.min()},{image.max()}]", - FutureWarning, - ) - do_normalize = False - - if do_normalize: - image = self.normalize(image) - - return image - - def postprocess( - self, - image, - output_type: str = "pil", - ): - if isinstance(image, torch.Tensor) and output_type == "pt": - return image - - image = self.pt_to_numpy(image) - - if output_type == "np": - return image - elif output_type == "pil": - return self.numpy_to_pil(image) - else: - raise ValueError(f"Unsupported output_type {output_type}.") diff --git a/spaces/deepghs/auto_image_censor/app.py b/spaces/deepghs/auto_image_censor/app.py deleted file mode 100644 index fb3f843cff534b7f9b25cc98d970d0af3b59d819..0000000000000000000000000000000000000000 --- a/spaces/deepghs/auto_image_censor/app.py +++ /dev/null @@ -1,221 +0,0 @@ -import os -import time -import zipfile -from datetime import datetime - -import gradio as gr -import matplotlib -import matplotlib.pyplot as plt -from PIL import Image -from hbutils.system import remove - -from censor import censor_area -from detect import detect_areas -from visual import plot_detection - - -def _get_labels(f_pussy: bool, f_breast: bool, m_penis: bool): - labels = [] - if f_pussy: - labels.append('EXPOSED_GENITALIA_F') - if f_breast: - labels.append('EXPOSED_BREAST_F') - if m_penis: - labels.append('EXPOSED_GENITALIA_M') - - return labels - - -def detect_uncensored(image: Image.Image, threshold: float, model: str, - f_pussy: bool, f_breast: bool, m_penis: bool, - f_pussy_zoom: float, f_breast_zoom: float, m_penis_zoom: float): - plt.clf() - labels = _get_labels(f_pussy, f_breast, m_penis) - zoom = { - 'EXPOSED_BREAST_F': f_breast_zoom, - 'EXPOSED_GENITALIA_F': f_pussy_zoom, - 'EXPOSED_GENITALIA_M': m_penis_zoom, - } - - result = detect_areas(image, threshold, labels, model, zoom) - plot_detection(image, result) - return plt.gcf() - - -def censor_image(image: Image.Image, threshold: float, model: str, - f_pussy: bool, f_breast: bool, m_penis: bool, - f_pussy_zoom: float, f_breast_zoom: float, m_penis_zoom: float, - method: str, radius: float, color: str): - if method == 'blur': - from censor import _blur as method_func - elif method == 'pixelate': - from censor import _pixelate as method_func - elif method == 'color': - from censor import _color as method_func - else: - raise ValueError(f'Unknown censor method - {method!r}.') - - labels = _get_labels(f_pussy, f_breast, m_penis) - zoom = { - 'EXPOSED_BREAST_F': f_breast_zoom, - 'EXPOSED_GENITALIA_F': f_pussy_zoom, - 'EXPOSED_GENITALIA_M': m_penis_zoom, - } - - result = detect_areas(image, threshold, labels, model, zoom) - final_image = image.copy() - for item in result: - final_image = censor_area(final_image, tuple(item['box']), method_func, radius=radius, color=color) - - return final_image - - -TEMP_IMAGE_DIRECTORY = '.temp' -TEMP_IMAGE_SPAN = 20 * 60 # cleanup the directory every 20min -TEMP_IMAGE_EXPIRE = 2 * 60 * 60 # images created 2h ago will be removed -_LAST_VISIT = None - - -def _prepare_tempdir(): # auto delete the old files - os.makedirs(TEMP_IMAGE_DIRECTORY, exist_ok=True) - global _LAST_VISIT - current_time = time.time() - if _LAST_VISIT is None or _LAST_VISIT + TEMP_IMAGE_SPAN >= current_time: - _LAST_VISIT = current_time - for file in os.listdir(TEMP_IMAGE_DIRECTORY): - filename = os.path.join(TEMP_IMAGE_DIRECTORY, file) - if os.path.getmtime(filename) + TEMP_IMAGE_EXPIRE < current_time: - remove(filename) - - -def batch_censor_images(files, threshold: float, model: str, - f_pussy: bool, 
f_breast: bool, m_penis: bool, - f_pussy_zoom: float, f_breast_zoom: float, m_penis_zoom: float, - method: str, radius: float, color: str, create_package: bool, progress=gr.Progress()): - _prepare_tempdir() - output_images = [] - progress(0.0) - - for i, file in enumerate(files): - image = Image.open(file.name) - censored_image = censor_image( - image, threshold, model, f_pussy, f_breast, m_penis, - f_pussy_zoom, f_breast_zoom, m_penis_zoom, method, radius, color - ) - - filename = os.path.basename(file.name) - filebody, fileext = os.path.splitext(filename) - new_filename = f'{filebody}_censored{fileext}' - save_path = os.path.join(TEMP_IMAGE_DIRECTORY, new_filename) - censored_image.save(save_path) - output_images.append(save_path) - progress((i + 1) / len(files)) - - if create_package: - timestamp = datetime.now().strftime("%Y%m%d%H%M%S%f") - zip_filename = os.path.join(TEMP_IMAGE_DIRECTORY, f'censored_images_{timestamp}.zip') - with zipfile.ZipFile(zip_filename, 'w') as zfile: - for file in output_images: - zfile.write(file, os.path.basename(file)) - - output_images = [zip_filename, *output_images] - - return output_images - - -if __name__ == '__main__': - matplotlib.use('Agg') - with gr.Blocks() as demo: - with gr.Row(): - with gr.Column(): - sd_input_image = gr.Image(type='pil', label='Input Image') - with gr.Row(): - sd_threshold = gr.Slider(value=0.3, minimum=0.05, maximum=0.99, step=0.01, - label='Confidence Threshold') - sd_model = gr.Radio(['default', 'base'], value='default', label='Detection Model') - with gr.Row(): - sd_f_pussy = gr.Checkbox(value=True, label='Female\'s Pussy') - sd_f_pussy_zoom = gr.Slider(value=0.75, minimum=0.1, maximum=3.0, step=0.05, - label='Zooming') - with gr.Row(): - sd_f_breast = gr.Checkbox(value=False, label='Female\'s Breast') - sd_f_breast_zoom = gr.Slider(value=0.9, minimum=0.1, maximum=3.0, step=0.05, - label='Zooming') - with gr.Row(): - sd_m_penis = gr.Checkbox(value=True, label='Male\'s Penis') - sd_m_penis_zoom = gr.Slider(value=0.8, minimum=0.1, maximum=3.0, step=0.05, - label='Zooming') - - with gr.Column(): - with gr.Tabs(): - with gr.Tab('Sensitive Detection'): - with gr.Row(): - sd_d_submit = gr.Button(value='Detect', variant='primary') - with gr.Row(): - sd_output_plot = gr.Plot(label='Detection Result') - - sd_d_submit.click( - detect_uncensored, - inputs=[ - sd_input_image, - sd_threshold, sd_model, - sd_f_pussy, sd_f_breast, sd_m_penis, - sd_f_pussy_zoom, sd_f_breast_zoom, sd_m_penis_zoom, - ], - outputs=[sd_output_plot] - ) - - with gr.Tab('Auto Censor'): - with gr.Row(): - sd_c_method = gr.Radio(['blur', 'pixelate', 'color'], value='blur', label='Method') - sd_c_radius = gr.Slider(value=8, minimum=1, maximum=64, step=1, - label='Radius (blue/pixelate)') - sd_c_color = gr.ColorPicker(value='black', label='Color (color)') - with gr.Row(): - sd_c_submit = gr.Button(value='Censor This Image', variant='primary') - with gr.Row(): - sd_c_output = gr.Image(type='pil', label='Censored Image') - - sd_c_submit.click( - censor_image, - inputs=[ - sd_input_image, - sd_threshold, sd_model, - sd_f_pussy, sd_f_breast, sd_m_penis, - sd_f_pussy_zoom, sd_f_breast_zoom, sd_m_penis_zoom, - sd_c_method, sd_c_radius, sd_c_color, - ], - outputs=[sd_c_output] - ) - - with gr.Tab('Batch Censor'): - with gr.Row(): - sd_b_method = gr.Radio(['blur', 'pixelate', 'color'], value='blur', label='Method') - sd_b_radius = gr.Slider(value=8, minimum=1, maximum=64, step=1, - label='Radius (blue/pixelate)') - sd_b_color = gr.ColorPicker(value='black', label='Color 
(color)') - with gr.Row(): - with gr.Column(): - sd_b_input_files = gr.File(file_count='multiple', file_types=['image'], - label='Original Images') - gr.HTML('UPLOAD FILES HERE!!! Multiple files are supported.') - with gr.Column(): - sd_b_output = gr.File(file_count='multiple', label='Censored Images') - gr.HTML('The generated file is only kept for 2 hours.') - sd_b_package = gr.Checkbox(value=True, label='Create Zip Package') - with gr.Row(): - sd_b_submit = gr.Button(value='Censor All Images', variant='primary') - - sd_b_submit.click( - batch_censor_images, - inputs=[ - sd_b_input_files, - sd_threshold, sd_model, - sd_f_pussy, sd_f_breast, sd_m_penis, - sd_f_pussy_zoom, sd_f_breast_zoom, sd_m_penis_zoom, - sd_b_method, sd_b_radius, sd_b_color, sd_b_package, - ], - outputs=[sd_b_output] - ) - - demo.queue(os.cpu_count()).launch() diff --git a/spaces/deepwisdom/MetaGPT/metagpt/tools/metagpt_text_to_image.py b/spaces/deepwisdom/MetaGPT/metagpt/tools/metagpt_text_to_image.py deleted file mode 100644 index c5a0b872ff3e42036c8b240270cda75252ce2cf4..0000000000000000000000000000000000000000 --- a/spaces/deepwisdom/MetaGPT/metagpt/tools/metagpt_text_to_image.py +++ /dev/null @@ -1,117 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -""" -@Time : 2023/8/18 -@Author : mashenquan -@File : metagpt_text_to_image.py -@Desc : MetaGPT Text-to-Image OAS3 api, which provides text-to-image functionality. -""" -import asyncio -import base64 -import os -import sys -from pathlib import Path -from typing import List, Dict - -import aiohttp -import requests -from pydantic import BaseModel - -from metagpt.config import CONFIG, Config - -sys.path.append(str(Path(__file__).resolve().parent.parent.parent)) # fix-bug: No module named 'metagpt' -from metagpt.logs import logger - - -class MetaGPTText2Image: - def __init__(self, model_url): - """ - :param model_url: Model reset api url - """ - self.model_url = model_url if model_url else CONFIG.METAGPT_TEXT_TO_IMAGE_MODEL - - async def text_2_image(self, text, size_type="512x512"): - """Text to image - - :param text: The text used for image conversion. - :param size_type: One of ['512x512', '512x768'] - :return: The image data is returned in Base64 encoding. 
- """ - - headers = { - "Content-Type": "application/json" - } - dims = size_type.split("x") - data = { - "prompt": text, - "negative_prompt": "(easynegative:0.8),black, dark,Low resolution", - "override_settings": {"sd_model_checkpoint": "galaxytimemachinesGTM_photoV20"}, - "seed": -1, - "batch_size": 1, - "n_iter": 1, - "steps": 20, - "cfg_scale": 11, - "width": int(dims[0]), - "height": int(dims[1]), # 768, - "restore_faces": False, - "tiling": False, - "do_not_save_samples": False, - "do_not_save_grid": False, - "enable_hr": False, - "hr_scale": 2, - "hr_upscaler": "Latent", - "hr_second_pass_steps": 0, - "hr_resize_x": 0, - "hr_resize_y": 0, - "hr_upscale_to_x": 0, - "hr_upscale_to_y": 0, - "truncate_x": 0, - "truncate_y": 0, - "applied_old_hires_behavior_to": None, - "eta": None, - "sampler_index": "DPM++ SDE Karras", - "alwayson_scripts": {}, - } - - class ImageResult(BaseModel): - images: List - parameters: Dict - - try: - async with aiohttp.ClientSession() as session: - async with session.post(self.model_url, headers=headers, json=data) as response: - result = ImageResult(**await response.json()) - if len(result.images) == 0: - return "" - return result.images[0] - except requests.exceptions.RequestException as e: - logger.error(f"An error occurred:{e}") - return "" - - -# Export -async def oas3_metagpt_text_to_image(text, size_type: str = "512x512", model_url=""): - """Text to image - - :param text: The text used for image conversion. - :param model_url: Model reset api - :param size_type: One of ['512x512', '512x768'] - :return: The image data is returned in Base64 encoding. - """ - if not text: - return "" - if not model_url: - model_url = CONFIG.METAGPT_TEXT_TO_IMAGE_MODEL_URL - return await MetaGPTText2Image(model_url).text_2_image(text, size_type=size_type) - - -if __name__ == "__main__": - Config() - loop = asyncio.new_event_loop() - task = loop.create_task(oas3_metagpt_text_to_image("Panda emoji")) - v = loop.run_until_complete(task) - print(v) - data = base64.b64decode(v) - with open("tmp.png", mode="wb") as writer: - writer.write(data) - print(v) diff --git a/spaces/deepwisdom/MetaGPT/tests/metagpt/utils/test_parse_html.py b/spaces/deepwisdom/MetaGPT/tests/metagpt/utils/test_parse_html.py deleted file mode 100644 index 42be416a6a09fd47d4c984ba1f989c2263bb7d7d..0000000000000000000000000000000000000000 --- a/spaces/deepwisdom/MetaGPT/tests/metagpt/utils/test_parse_html.py +++ /dev/null @@ -1,68 +0,0 @@ -from metagpt.utils import parse_html - -PAGE = """ - - - - Random HTML Example - - -

-<h1>This is a Heading</h1>
-<p>This is a paragraph with <a href="http://example.com/test">a link</a> and some <em>emphasized</em> text.</p>
-<ul>
-    <li>Item 1</li>
-    <li>Item 2</li>
-    <li>Item 3</li>
-</ul>
-<ol>
-    <li>Numbered Item 1</li>
-    <li>Numbered Item 2</li>
-    <li>Numbered Item 3</li>
-</ol>
-<table>
-    <tr><th>Header 1</th><th>Header 2</th></tr>
-    <tr><td>Row 1, Cell 1</td><td>Row 1, Cell 2</td></tr>
-    <tr><td>Row 2, Cell 1</td><td>Row 2, Cell 2</td></tr>
-</table>
-<img src="..." alt="Sample Image">
-<form>
-    <label>Name:</label><input>
-    <label>Email:</label><input>
-    <button>Submit</button>
-</form>
-<div class="box">This is a div with a class "box".</div>
-<a href="https://metagpt.com">a link</a>
- - -""" - -CONTENT = 'This is a HeadingThis is a paragraph witha linkand someemphasizedtext.Item 1Item 2Item 3Numbered Item 1Numbered '\ -'Item 2Numbered Item 3Header 1Header 2Row 1, Cell 1Row 1, Cell 2Row 2, Cell 1Row 2, Cell 2Name:Email:SubmitThis is a div '\ -'with a class "box".a link' - - -def test_web_page(): - page = parse_html.WebPage(inner_text=CONTENT, html=PAGE, url="http://example.com") - assert page.title == "Random HTML Example" - assert list(page.get_links()) == ["http://example.com/test", "https://metagpt.com"] - - -def test_get_page_content(): - ret = parse_html.get_html_content(PAGE, "http://example.com") - assert ret == CONTENT diff --git a/spaces/deesea/safe_or_not/app.py b/spaces/deesea/safe_or_not/app.py deleted file mode 100644 index 9f58bf1a5ea334055ba7b378c5889fdd91ae010f..0000000000000000000000000000000000000000 --- a/spaces/deesea/safe_or_not/app.py +++ /dev/null @@ -1,23 +0,0 @@ -# Cell -from fastai.vision.all import * -import gradio as gr -import timm - -# Cell -learn = load_learner('model.pkl') - -# Cell -categories = learn.dls.vocab - -def classify_image(img): - pred,idx,probs = learn.predict(img) - return dict(zip(categories, map(float,probs))) - -# Cell -image = gr.inputs.Image(shape=(192, 192)) -label = gr.outputs.Label() -examples = ['image.jpg'] - -# Cell -intf = gr.Interface(fn=classify_image, inputs=image, outputs=label, examples=examples) -intf.launch() diff --git a/spaces/diacanFperku/AutoGPT/Download [BETTER] Ccboot 2.1 Full Crack.md b/spaces/diacanFperku/AutoGPT/Download [BETTER] Ccboot 2.1 Full Crack.md deleted file mode 100644 index 7232fd0525b3c140a8107e3140549943ed31c5ed..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Download [BETTER] Ccboot 2.1 Full Crack.md +++ /dev/null @@ -1,6 +0,0 @@ -

Download Ccboot 2.1 Full Crack


Download File ✵✵✵ https://gohhs.com/2uFTIA



-
- 3cee63e6c2
-
-
-

diff --git a/spaces/diagaiwei/ir_chinese_medqa/baleen/utils/annotate.py b/spaces/diagaiwei/ir_chinese_medqa/baleen/utils/annotate.py deleted file mode 100644 index 435f984a714322ed04eac70c906677788544fd1f..0000000000000000000000000000000000000000 --- a/spaces/diagaiwei/ir_chinese_medqa/baleen/utils/annotate.py +++ /dev/null @@ -1,37 +0,0 @@ -import os -import ujson - -from colbert.utils.utils import print_message, file_tqdm - - -def annotate_to_file(qas_path, ranking_path): - output_path = f'{ranking_path}.annotated' - assert not os.path.exists(output_path), output_path - - QID2pids = {} - - with open(qas_path) as f: - print_message(f"#> Reading QAs from {f.name} ..") - - for line in file_tqdm(f): - example = ujson.loads(line) - QID2pids[example['qid']] = example['support_pids'] - - with open(ranking_path) as f: - print_message(f"#> Reading ranked lists from {f.name} ..") - - with open(output_path, 'w') as g: - for line in file_tqdm(f): - qid, pid, *other = line.strip().split('\t') - qid, pid = map(int, [qid, pid]) - - label = int(pid in QID2pids[qid]) - - line_ = [qid, pid, *other, label] - line_ = '\t'.join(map(str, line_)) + '\n' - g.write(line_) - - print_message(g.name) - print_message("#> Done!") - - return g.name diff --git a/spaces/digitalxingtong/Luzao-Bert-Vits2/data_utils.py b/spaces/digitalxingtong/Luzao-Bert-Vits2/data_utils.py deleted file mode 100644 index 2c98d3dc8b9572bd05859033a74d155425a2a2ab..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Luzao-Bert-Vits2/data_utils.py +++ /dev/null @@ -1,332 +0,0 @@ -import time -import os -import random -import numpy as np -import torch -import torch.utils.data -import torchaudio -import commons -from mel_processing import spectrogram_torch, mel_spectrogram_torch, spec_to_mel_torch -from utils import load_wav_to_torch, load_filepaths_and_text -from text import cleaned_text_to_sequence, get_bert - -"""Multi speaker version""" - - -class TextAudioSpeakerLoader(torch.utils.data.Dataset): - """ - 1) loads audio, speaker_id, text pairs - 2) normalizes text and converts them to sequences of integers - 3) computes spectrograms from audio files. 
- """ - - def __init__(self, audiopaths_sid_text, hparams): - self.audiopaths_sid_text = load_filepaths_and_text(audiopaths_sid_text) - self.max_wav_value = hparams.max_wav_value - self.sampling_rate = hparams.sampling_rate - self.filter_length = hparams.filter_length - self.hop_length = hparams.hop_length - self.win_length = hparams.win_length - self.sampling_rate = hparams.sampling_rate - self.spk_map = hparams.spk2id - self.hparams = hparams - - self.use_mel_spec_posterior = getattr(hparams, "use_mel_posterior_encoder", False) - if self.use_mel_spec_posterior: - self.n_mel_channels = getattr(hparams, "n_mel_channels", 80) - - self.cleaned_text = getattr(hparams, "cleaned_text", False) - - self.add_blank = hparams.add_blank - self.min_text_len = getattr(hparams, "min_text_len", 1) - self.max_text_len = getattr(hparams, "max_text_len", 300) - - random.seed(1234) - random.shuffle(self.audiopaths_sid_text) - self._filter() - - def _filter(self): - """ - Filter text & store spec lengths - """ - # Store spectrogram lengths for Bucketing - # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2) - # spec_length = wav_length // hop_length - - audiopaths_sid_text_new = [] - lengths = [] - skipped = 0 - for _id, spk, language, text, phones, tone, word2ph in self.audiopaths_sid_text: - audiopath = f'{_id}' - if self.min_text_len <= len(phones) and len(phones) <= self.max_text_len: - phones = phones.split(" ") - tone = [int(i) for i in tone.split(" ")] - word2ph = [int(i) for i in word2ph.split(" ")] - audiopaths_sid_text_new.append([audiopath, spk, language, text, phones, tone, word2ph]) - lengths.append(os.path.getsize(audiopath) // (2 * self.hop_length)) - else: - skipped += 1 - print("skipped: ", skipped, ", total: ", len(self.audiopaths_sid_text)) - self.audiopaths_sid_text = audiopaths_sid_text_new - self.lengths = lengths - - def get_audio_text_speaker_pair(self, audiopath_sid_text): - # separate filename, speaker_id and text - audiopath, sid, language, text, phones, tone, word2ph = audiopath_sid_text - - bert, phones, tone, language = self.get_text(text, word2ph, phones, tone, language, audiopath) - - spec, wav = self.get_audio(audiopath) - sid = torch.LongTensor([int(self.spk_map[sid])]) - return (phones, spec, wav, sid, tone, language, bert) - - def get_audio(self, filename): - audio_norm, sampling_rate = torchaudio.load(filename, frame_offset=0, num_frames=-1, normalize=True, channels_first=True) - ''' - audio, sampling_rate = load_wav_to_torch(filename) - if sampling_rate != self.sampling_rate: - raise ValueError("{} {} SR doesn't match target {} SR".format( - sampling_rate, self.sampling_rate)) - audio_norm = audio / self.max_wav_value - audio_norm = audio_norm.unsqueeze(0) - ''' - spec_filename = filename.replace(".wav", ".spec.pt") - if self.use_mel_spec_posterior: - spec_filename = spec_filename.replace(".spec.pt", ".mel.pt") - if os.path.exists(spec_filename): - spec = torch.load(spec_filename) - else: - if self.use_mel_spec_posterior: - # if os.path.exists(filename.replace(".wav", ".spec.pt")): - # # spec, n_fft, num_mels, sampling_rate, fmin, fmax - # spec = spec_to_mel_torch( - # torch.load(filename.replace(".wav", ".spec.pt")), - # self.filter_length, self.n_mel_channels, self.sampling_rate, - # self.hparams.mel_fmin, self.hparams.mel_fmax) - spec = mel_spectrogram_torch(audio_norm, self.filter_length, - self.n_mel_channels, self.sampling_rate, self.hop_length, - self.win_length, self.hparams.mel_fmin, self.hparams.mel_fmax, center=False) - else: - spec 
= spectrogram_torch(audio_norm, self.filter_length, - self.sampling_rate, self.hop_length, self.win_length, - center=False) - spec = torch.squeeze(spec, 0) - torch.save(spec, spec_filename) - return spec, audio_norm - - def get_text(self, text, word2ph, phone, tone, language_str, wav_path): - # print(text, word2ph,phone, tone, language_str) - pold = phone - w2pho = [i for i in word2ph] - word2ph = [i for i in word2ph] - phone, tone, language = cleaned_text_to_sequence(phone, tone, language_str) - pold2 = phone - - if self.add_blank: - p1 = len(phone) - phone = commons.intersperse(phone, 0) - p2 = len(phone) - t1 = len(tone) - tone = commons.intersperse(tone, 0) - t2 = len(tone) - language = commons.intersperse(language, 0) - for i in range(len(word2ph)): - word2ph[i] = word2ph[i] * 2 - word2ph[0] += 1 - bert_path = wav_path.replace(".wav", ".bert.pt") - try: - bert = torch.load(bert_path) - assert bert.shape[-1] == len(phone) - except: - bert = get_bert(text, word2ph, language_str) - torch.save(bert, bert_path) - #print(bert.shape[-1], bert_path, text, pold) - assert bert.shape[-1] == len(phone) - - assert bert.shape[-1] == len(phone), ( - bert.shape, len(phone), sum(word2ph), p1, p2, t1, t2, pold, pold2, word2ph, text, w2pho) - phone = torch.LongTensor(phone) - tone = torch.LongTensor(tone) - language = torch.LongTensor(language) - return bert, phone, tone, language - - def get_sid(self, sid): - sid = torch.LongTensor([int(sid)]) - return sid - - def __getitem__(self, index): - return self.get_audio_text_speaker_pair(self.audiopaths_sid_text[index]) - - def __len__(self): - return len(self.audiopaths_sid_text) - - -class TextAudioSpeakerCollate(): - """ Zero-pads model inputs and targets - """ - - def __init__(self, return_ids=False): - self.return_ids = return_ids - - def __call__(self, batch): - """Collate's training batch from normalized text, audio and speaker identities - PARAMS - ------ - batch: [text_normalized, spec_normalized, wav_normalized, sid] - """ - # Right zero-pad all one-hot text sequences to max input length - _, ids_sorted_decreasing = torch.sort( - torch.LongTensor([x[1].size(1) for x in batch]), - dim=0, descending=True) - - max_text_len = max([len(x[0]) for x in batch]) - max_spec_len = max([x[1].size(1) for x in batch]) - max_wav_len = max([x[2].size(1) for x in batch]) - - text_lengths = torch.LongTensor(len(batch)) - spec_lengths = torch.LongTensor(len(batch)) - wav_lengths = torch.LongTensor(len(batch)) - sid = torch.LongTensor(len(batch)) - - text_padded = torch.LongTensor(len(batch), max_text_len) - tone_padded = torch.LongTensor(len(batch), max_text_len) - language_padded = torch.LongTensor(len(batch), max_text_len) - bert_padded = torch.FloatTensor(len(batch), 1024, max_text_len) - - spec_padded = torch.FloatTensor(len(batch), batch[0][1].size(0), max_spec_len) - wav_padded = torch.FloatTensor(len(batch), 1, max_wav_len) - text_padded.zero_() - tone_padded.zero_() - language_padded.zero_() - spec_padded.zero_() - wav_padded.zero_() - bert_padded.zero_() - for i in range(len(ids_sorted_decreasing)): - row = batch[ids_sorted_decreasing[i]] - - text = row[0] - text_padded[i, :text.size(0)] = text - text_lengths[i] = text.size(0) - - spec = row[1] - spec_padded[i, :, :spec.size(1)] = spec - spec_lengths[i] = spec.size(1) - - wav = row[2] - wav_padded[i, :, :wav.size(1)] = wav - wav_lengths[i] = wav.size(1) - - sid[i] = row[3] - - tone = row[4] - tone_padded[i, :tone.size(0)] = tone - - language = row[5] - language_padded[i, :language.size(0)] = language - - bert 
= row[6] - bert_padded[i, :, :bert.size(1)] = bert - - return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, sid, tone_padded, language_padded, bert_padded - - -class DistributedBucketSampler(torch.utils.data.distributed.DistributedSampler): - """ - Maintain similar input lengths in a batch. - Length groups are specified by boundaries. - Ex) boundaries = [b1, b2, b3] -> any batch is included either {x | b1 < length(x) <=b2} or {x | b2 < length(x) <= b3}. - - It removes samples which are not included in the boundaries. - Ex) boundaries = [b1, b2, b3] -> any x s.t. length(x) <= b1 or length(x) > b3 are discarded. - """ - - def __init__(self, dataset, batch_size, boundaries, num_replicas=None, rank=None, shuffle=True): - super().__init__(dataset, num_replicas=num_replicas, rank=rank, shuffle=shuffle) - self.lengths = dataset.lengths - self.batch_size = batch_size - self.boundaries = boundaries - - self.buckets, self.num_samples_per_bucket = self._create_buckets() - self.total_size = sum(self.num_samples_per_bucket) - self.num_samples = self.total_size // self.num_replicas - - def _create_buckets(self): - buckets = [[] for _ in range(len(self.boundaries) - 1)] - for i in range(len(self.lengths)): - length = self.lengths[i] - idx_bucket = self._bisect(length) - if idx_bucket != -1: - buckets[idx_bucket].append(i) - - for i in range(len(buckets) - 1, 0, -1): - if len(buckets[i]) == 0: - buckets.pop(i) - self.boundaries.pop(i + 1) - - num_samples_per_bucket = [] - for i in range(len(buckets)): - len_bucket = len(buckets[i]) - total_batch_size = self.num_replicas * self.batch_size - rem = (total_batch_size - (len_bucket % total_batch_size)) % total_batch_size - num_samples_per_bucket.append(len_bucket + rem) - return buckets, num_samples_per_bucket - - def __iter__(self): - # deterministically shuffle based on epoch - g = torch.Generator() - g.manual_seed(self.epoch) - - indices = [] - if self.shuffle: - for bucket in self.buckets: - indices.append(torch.randperm(len(bucket), generator=g).tolist()) - else: - for bucket in self.buckets: - indices.append(list(range(len(bucket)))) - - batches = [] - for i in range(len(self.buckets)): - bucket = self.buckets[i] - len_bucket = len(bucket) - if (len_bucket == 0): - continue - ids_bucket = indices[i] - num_samples_bucket = self.num_samples_per_bucket[i] - - # add extra samples to make it evenly divisible - rem = num_samples_bucket - len_bucket - ids_bucket = ids_bucket + ids_bucket * (rem // len_bucket) + ids_bucket[:(rem % len_bucket)] - - # subsample - ids_bucket = ids_bucket[self.rank::self.num_replicas] - - # batching - for j in range(len(ids_bucket) // self.batch_size): - batch = [bucket[idx] for idx in ids_bucket[j * self.batch_size:(j + 1) * self.batch_size]] - batches.append(batch) - - if self.shuffle: - batch_ids = torch.randperm(len(batches), generator=g).tolist() - batches = [batches[i] for i in batch_ids] - self.batches = batches - - assert len(self.batches) * self.batch_size == self.num_samples - return iter(self.batches) - - def _bisect(self, x, lo=0, hi=None): - if hi is None: - hi = len(self.boundaries) - 1 - - if hi > lo: - mid = (hi + lo) // 2 - if self.boundaries[mid] < x and x <= self.boundaries[mid + 1]: - return mid - elif x <= self.boundaries[mid]: - return self._bisect(x, lo, mid) - else: - return self._bisect(x, mid + 1, hi) - else: - return -1 - - def __len__(self): - return self.num_samples // self.batch_size diff --git 
a/spaces/dinhminh20521597/OCR_DEMO/configs/_base_/schedules/schedule_adam_step_5e.py b/spaces/dinhminh20521597/OCR_DEMO/configs/_base_/schedules/schedule_adam_step_5e.py deleted file mode 100644 index 371a3781bfe51ab0b9d841a3911bfe00c4e85197..0000000000000000000000000000000000000000 --- a/spaces/dinhminh20521597/OCR_DEMO/configs/_base_/schedules/schedule_adam_step_5e.py +++ /dev/null @@ -1,8 +0,0 @@ -# optimizer -optimizer = dict(type='Adam', lr=1e-3) -optimizer_config = dict(grad_clip=None) -# learning policy -lr_config = dict(policy='step', step=[3, 4]) -# running settings -runner = dict(type='EpochBasedRunner', max_epochs=5) -checkpoint_config = dict(interval=1) diff --git a/spaces/elonmuskceo/shiny-orbit-simulation/README.md b/spaces/elonmuskceo/shiny-orbit-simulation/README.md deleted file mode 100644 index 05e0228971eac221a4c8a86253d35bc985951cc1..0000000000000000000000000000000000000000 --- a/spaces/elonmuskceo/shiny-orbit-simulation/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Shiny Orbit Simulation -emoji: 🪐 -colorFrom: green -colorTo: green -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/emc348/faces-through-time/utils/data_utils.py b/spaces/emc348/faces-through-time/utils/data_utils.py deleted file mode 100644 index 0909e3614fc158ec668cd2033100766a7c989f38..0000000000000000000000000000000000000000 --- a/spaces/emc348/faces-through-time/utils/data_utils.py +++ /dev/null @@ -1,44 +0,0 @@ -import os - -from PIL import Image - -IMG_EXTENSIONS = [ - ".jpg", - ".JPG", - ".jpeg", - ".JPEG", - ".png", - ".PNG", - ".ppm", - ".PPM", - ".bmp", - ".BMP", - ".tiff", - ".tif" -] - - -def is_image_file(filename): - return any(filename.endswith(extension) for extension in IMG_EXTENSIONS) - - -def tensor2im(var): - # var shape: (3, H, W) - var = var.cpu().detach().transpose(0, 2).transpose(0, 1).numpy() - var = (var + 1) / 2 - var[var < 0] = 0 - var[var > 1] = 1 - var = var * 255 - return Image.fromarray(var.astype("uint8")) - - -def make_dataset(dir): - images = [] - assert os.path.isdir(dir), "%s is not a valid directory" % dir - for root, _, fnames in sorted(os.walk(dir)): - for fname in fnames: - if is_image_file(fname): - path = os.path.join(root, fname) - fname = fname.split(".")[0] - images.append((fname, path)) - return images diff --git a/spaces/emilycrinaldi/AirBNB/README.md b/spaces/emilycrinaldi/AirBNB/README.md deleted file mode 100644 index c4f663bad91d0712e8b7ea654d7ec9ac5b1a76b9..0000000000000000000000000000000000000000 --- a/spaces/emilycrinaldi/AirBNB/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: AirBNB -emoji: 👁 -colorFrom: yellow -colorTo: purple -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/eruuin/something/style.css b/spaces/eruuin/something/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/eruuin/something/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 
16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/eson/tokenizer-arena/vocab/bert_base_chinese/README.md b/spaces/eson/tokenizer-arena/vocab/bert_base_chinese/README.md deleted file mode 100644 index 51fcc75fb1b124d2f538450498f040b84c510e11..0000000000000000000000000000000000000000 --- a/spaces/eson/tokenizer-arena/vocab/bert_base_chinese/README.md +++ /dev/null @@ -1,2 +0,0 @@ - -vocab_size: 21128 diff --git a/spaces/eson/tokenizer-arena/vocab/gpt_neox_japanese/test.py b/spaces/eson/tokenizer-arena/vocab/gpt_neox_japanese/test.py deleted file mode 100644 index 2cbb326bbc54b3448ee4ea57ec7ad1dc47cae178..0000000000000000000000000000000000000000 --- a/spaces/eson/tokenizer-arena/vocab/gpt_neox_japanese/test.py +++ /dev/null @@ -1,19 +0,0 @@ - - -from transformers import AutoTokenizer, GPTNeoXJapaneseTokenizer - -tokenizer = GPTNeoXJapaneseTokenizer.from_pretrained("tokenizer") - -# tokenizer = AutoTokenizer.from_pretrained("abeja/gpt-neox-japanese-2.7b") - -tokens = tokenizer.encode("人とAIが協調するためには http://baidu.com 🤣") - -for token in tokens: - print(token, tokenizer.decode([token])) - - -tokens = tokenizer.tokenize("人とAIが協調するためには http://baidu.com 🤣", clean=True) -print(tokens) -# for token in tokens: -# print(token, tokenizer.decode([token])) - diff --git a/spaces/facebook/StyleNeRF/torch_utils/ops/upfirdn2d.cpp b/spaces/facebook/StyleNeRF/torch_utils/ops/upfirdn2d.cpp deleted file mode 100644 index 89f04901040a552b75bb62b08b16661312b3edaf..0000000000000000000000000000000000000000 --- a/spaces/facebook/StyleNeRF/torch_utils/ops/upfirdn2d.cpp +++ /dev/null @@ -1,107 +0,0 @@ -// Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -// -// NVIDIA CORPORATION and its licensors retain all intellectual property -// and proprietary rights in and to this software, related documentation -// and any modifications thereto. Any use, reproduction, disclosure or -// distribution of this software and related documentation without an express -// license agreement from NVIDIA CORPORATION is strictly prohibited. - -#include -#include -#include -#include "upfirdn2d.h" - -//------------------------------------------------------------------------ - -static torch::Tensor upfirdn2d(torch::Tensor x, torch::Tensor f, int upx, int upy, int downx, int downy, int padx0, int padx1, int pady0, int pady1, bool flip, float gain) -{ - // Validate arguments. - TORCH_CHECK(x.is_cuda(), "x must reside on CUDA device"); - TORCH_CHECK(f.device() == x.device(), "f must reside on the same device as x"); - TORCH_CHECK(f.dtype() == torch::kFloat, "f must be float32"); - // TORCH_CHECK(x.numel() <= INT_MAX, "x is too large"); - TORCH_CHECK(f.numel() <= INT_MAX, "f is too large"); - TORCH_CHECK(x.numel() > 0, "x has zero size"); - TORCH_CHECK(f.numel() > 0, "f has zero size"); - TORCH_CHECK(x.dim() == 4, "x must be rank 4"); - TORCH_CHECK(f.dim() == 2, "f must be rank 2"); - // TORCH_CHECK((x.size(0)-1)*x.stride(0) + (x.size(1)-1)*x.stride(1) + (x.size(2)-1)*x.stride(2) + (x.size(3)-1)*x.stride(3) <= INT_MAX, "x memory footprint is too large"); - TORCH_CHECK(f.size(0) >= 1 && f.size(1) >= 1, "f must be at least 1x1"); - TORCH_CHECK(upx >= 1 && upy >= 1, "upsampling factor must be at least 1"); - TORCH_CHECK(downx >= 1 && downy >= 1, "downsampling factor must be at least 1"); - - // Create output tensor. 
- const at::cuda::OptionalCUDAGuard device_guard(device_of(x)); - int outW = ((int)x.size(3) * upx + padx0 + padx1 - (int)f.size(1) + downx) / downx; - int outH = ((int)x.size(2) * upy + pady0 + pady1 - (int)f.size(0) + downy) / downy; - TORCH_CHECK(outW >= 1 && outH >= 1, "output must be at least 1x1"); - torch::Tensor y = torch::empty({x.size(0), x.size(1), outH, outW}, x.options(), x.suggest_memory_format()); - // TORCH_CHECK(y.numel() <= INT_MAX, "output is too large"); - // TORCH_CHECK((y.size(0)-1)*y.stride(0) + (y.size(1)-1)*y.stride(1) + (y.size(2)-1)*y.stride(2) + (y.size(3)-1)*y.stride(3) <= INT_MAX, "output memory footprint is too large"); - - // Initialize CUDA kernel parameters. - upfirdn2d_kernel_params p; - p.x = x.data_ptr(); - p.f = f.data_ptr(); - p.y = y.data_ptr(); - p.up = make_int2(upx, upy); - p.down = make_int2(downx, downy); - p.pad0 = make_int2(padx0, pady0); - p.flip = (flip) ? 1 : 0; - p.gain = gain; - p.inSize = make_int4((int)x.size(3), (int)x.size(2), (int)x.size(1), (int)x.size(0)); - p.inStride = make_int4((int)x.stride(3), (int)x.stride(2), (int)x.stride(1), (int)x.stride(0)); - p.filterSize = make_int2((int)f.size(1), (int)f.size(0)); - p.filterStride = make_int2((int)f.stride(1), (int)f.stride(0)); - p.outSize = make_int4((int)y.size(3), (int)y.size(2), (int)y.size(1), (int)y.size(0)); - p.outStride = make_int4((int)y.stride(3), (int)y.stride(2), (int)y.stride(1), (int)y.stride(0)); - p.sizeMajor = (p.inStride.z == 1) ? p.inSize.w : p.inSize.w * p.inSize.z; - p.sizeMinor = (p.inStride.z == 1) ? p.inSize.z : 1; - - // Choose CUDA kernel. - upfirdn2d_kernel_spec spec; - AT_DISPATCH_FLOATING_TYPES_AND_HALF(x.scalar_type(), "upfirdn2d_cuda", [&] - { - spec = choose_upfirdn2d_kernel(p); - }); - - // Set looping options. - p.loopMajor = (p.sizeMajor - 1) / 16384 + 1; - p.loopMinor = spec.loopMinor; - p.loopX = spec.loopX; - p.launchMinor = (p.sizeMinor - 1) / p.loopMinor + 1; - p.launchMajor = (p.sizeMajor - 1) / p.loopMajor + 1; - - // Compute grid size. - dim3 blockSize, gridSize; - if (spec.tileOutW < 0) // large - { - blockSize = dim3(4, 32, 1); - gridSize = dim3( - ((p.outSize.y - 1) / blockSize.x + 1) * p.launchMinor, - (p.outSize.x - 1) / (blockSize.y * p.loopX) + 1, - p.launchMajor); - } - else // small - { - blockSize = dim3(256, 1, 1); - gridSize = dim3( - ((p.outSize.y - 1) / spec.tileOutH + 1) * p.launchMinor, - (p.outSize.x - 1) / (spec.tileOutW * p.loopX) + 1, - p.launchMajor); - } - - // Launch CUDA kernel. - void* args[] = {&p}; - AT_CUDA_CHECK(cudaLaunchKernel(spec.kernel, gridSize, blockSize, args, 0, at::cuda::getCurrentCUDAStream())); - return y; -} - -//------------------------------------------------------------------------ - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) -{ - m.def("upfirdn2d", &upfirdn2d); -} - -//------------------------------------------------------------------------ diff --git a/spaces/falterWliame/Face_Mask_Detection/Casmate Pro 652 Windows 7.md b/spaces/falterWliame/Face_Mask_Detection/Casmate Pro 652 Windows 7.md deleted file mode 100644 index 862b0baa9bfacbc28b9585f55c5f606be73b4bf8..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Casmate Pro 652 Windows 7.md +++ /dev/null @@ -1,193 +0,0 @@ -
-

Casmate Pro 652 Windows 7: A Complete Review

-

If you are looking for a software program that can help you create and edit graphics for sign making, engraving, and cutting, you might want to consider Casmate Pro 652. This program is designed to work with various devices such as plotters, vinyl cutters, engravers, and routers. It also supports legacy operating systems such as Windows Vista and Windows XP. In this article, we will review the features, benefits, and drawbacks of Casmate Pro 652 Windows 7.

-

Features of Casmate Pro 652 Windows 7

-

Casmate Pro 652 Windows 7 has many features that make it a powerful and versatile software program for graphic design. Some of these features are:

-

Casmate Pro 652 Windows 7


DOWNLOADhttps://urlca.com/2uDc0w



-
    -
  • It allows you to import and export files in various formats such as EPS, AI, DXF, PLT, JPG, BMP, TIF, and more.
  • -
  • It has a user-friendly interface that lets you access all the tools and functions easily.
  • -
  • It has a built-in vectorizer that can convert bitmap images into vector graphics.
  • -
  • It has a text tool that can create and edit text in any font, size, style, and color.
  • -
  • It has a shape tool that can draw and edit basic shapes such as circles, rectangles, polygons, stars, and more.
  • -
  • It has a node editing tool that can modify the curves and angles of any vector object.
  • -
  • It has a fill tool that can apply solid colors, gradients, patterns, textures, and transparencies to any object.
  • -
  • It has an outline tool that can add strokes, shadows, bevels, and contours to any object.
  • -
  • It has a transform tool that can resize, rotate, skew, flip, align, and distribute any object.
  • -
  • It has a layer tool that can organize and manage multiple objects on different layers.
  • -
  • It has a group tool that can group and ungroup multiple objects for easier editing.
  • -
  • It has an undo/redo tool that can undo or redo any action up to 99 times.
  • -
  • It has a preview tool that can show you how your design will look on the device before printing or cutting.
  • -
  • It has a print/cut tool that can send your design to the device with the appropriate settings and options.
  • -
-

Benefits of Casmate Pro 652 Windows 7

-

Casmate Pro 652 Windows 7 has many benefits that make it a reliable and efficient software program for graphic design. Some of these benefits are:

-
    -
  • It is compatible with various devices such as plotters, vinyl cutters, engravers, and routers from different brands and models.
  • -
  • It is compatible with legacy operating systems such as Windows Vista and Windows XP which are still widely used by many users.
  • -
  • It is easy to use and learn even for beginners who have no prior experience in graphic design.
  • -
  • It is fast and accurate in creating and editing graphics for sign making, engraving, and cutting.
  • -
  • It is affordable and cost-effective compared to other software programs in the market.
  • -
-

Drawbacks of Casmate Pro 652 Windows 7

-

Casmate Pro 652 Windows 7 also has some drawbacks that you should be aware of before buying or using it. Some of these drawbacks are:

-
    -
  • It is not compatible with newer operating systems such as Windows 11, 10, 8, or 7 which are more advanced and secure than older ones.
  • -
  • It is not updated regularly or supported by the developer which means it may have bugs or errors that are not fixed or resolved.
  • -
  • It may not have some features or functions that are available in other software programs such as advanced effects, filters, plugins, or templates.
  • -
-

Conclusion

-

Casmate Pro 652 Windows 7 is a software program that can help you create and edit graphics for sign making, engraving, and cutting. It has many features and benefits that make it a powerful and versatile software program for graphic design. However, it also has some drawbacks that you should consider before buying or using it. If you are looking for a software program that is compatible with various devices and legacy operating systems such as Windows Vista and Windows XP, Casmate Pro 652 Windows 7 may be a good option for you. However, if you are looking for a software program that is compatible with newer operating systems such as Windows 11, 10, 8, or 7 and has more features and functions than Casmate Pro 652 Windows 7, you may want to look for other alternatives in the market.

- -
  • Use the print/cut tool to send your design to the device with the appropriate settings and options. This will help you print or cut your design accurately and efficiently. To use the print/cut tool, go to File > Print/Cut Output Device and choose the device you want to print or cut. You can adjust the settings such as scale, orientation, position, speed, pressure, blade offset, etc. Then click on Print/Cut and your design will be sent to the device.
  • - -

    Alternatives to Casmate Pro 652 Windows 7

    -

    Casmate Pro 652 Windows 7 is a software program that can help you create and edit graphics for sign making, engraving, and cutting. However, it is not the only software program that can do this. There are other alternatives that you can try if you are looking for a software program that is compatible with newer operating systems such as Windows 11, 10, 8, or 7 and has more features and functions than Casmate Pro 652 Windows 7. Here are some alternatives to Casmate Pro 652 Windows 7:

    -
      -
    • FlexiSIGN-PRO: This is a software program that can help you design and produce signs, banners, decals, stickers, vehicle wraps, and more. It has many features such as text effects, vector graphics, image editing, color management, contour cutting, nesting, tiling, etc. It is compatible with Windows 11, 10, 8, and 7 and supports various devices such as plotters, vinyl cutters, printers, scanners, etc.
    • -
• CorelDRAW Graphics Suite: This is a software program that can help you create and edit graphics for any project. It has many features such as vector illustration, photo editing, layout design, typography, web graphics, etc. It is compatible with Windows 11, 10, 8, and 7 and supports various devices such as plotters, vinyl cutters, engravers, routers, etc.
    • Adobe Illustrator: This is a software program that can help you create and edit vector graphics for any project. It has many features such as drawing tools, brushes, gradients, patterns, symbols, effects, filters, etc. It is compatible with Windows 11, 10, 8, and 7 and supports various devices such as plotters, vinyl cutters, engravers, routers, etc.
    • -
    • Inkscape: This is a software program that can help you create and edit vector graphics for any project. It has many features such as drawing tools, node editing, text editing, fill and stroke, path operations, filters, extensions, etc. It is compatible with Windows 11, 10, 8, and 7 and supports various devices such as plotters, vinyl cutters, engravers, routers, etc.
    • -
    -

    Conclusion

    -

Casmate Pro 652 Windows 7 is a software program that can help you create and edit graphics for sign making, engraving, and cutting. It has many features and benefits that make it a powerful and versatile software program for graphic design. However, it also has some drawbacks that you should consider before buying or using it. If you are looking for a software program that is compatible with various devices and legacy operating systems such as Windows Vista and Windows XP, Casmate Pro 652 Windows 7 may be a good option for you. However, if you are looking for a software program that is compatible with newer operating systems such as Windows 11, 10, 8, or 7 and has more features and functions than Casmate Pro 652 Windows 7, you may want to look for other alternatives in the market. We hope this article has helped you learn more about Casmate Pro 652 Windows 7 and its alternatives. If you have any questions or comments, please feel free to share them below.

    -
  • SignLab: This is a software program that can help you design and produce signs, banners, decals, stickers, vehicle wraps, and more. It has many features such as text effects, vector graphics, image editing, color management, contour cutting, nesting, tiling, etc. It is compatible with Windows 11, 10, 8, and 7 and supports various devices such as plotters, vinyl cutters, printers, scanners, etc.
  • - -

    How to Choose the Best Software Program for Your Needs

    -

    As you can see, there are many software programs that can help you create and edit graphics for sign making, engraving, and cutting. However, not all software programs are suitable for your needs. You need to consider some factors before choosing the best software program for your needs. Here are some factors to consider:

    -
      -
    • Your device: You need to choose a software program that is compatible with your device such as plotter, vinyl cutter, engraver, or router. You also need to check the driver availability and compatibility for your device and operating system.
    • -
• Your operating system: You need to choose a software program that is compatible with your operating system, such as Windows 11, 10, 8, or 7. You also need to check the system requirements and performance of the software program on your operating system.
    • Your budget: You need to choose a software program that fits your budget and offers the best value for your money. You also need to consider the cost of updates, upgrades, support, and maintenance of the software program.
    • -
    • Your skill level: You need to choose a software program that matches your skill level and learning curve. You also need to consider the availability of tutorials, manuals, guides, and customer service of the software program.
    • -
    • Your purpose: You need to choose a software program that meets your purpose and goals. You also need to consider the quality, functionality, and versatility of the software program.
    • -
    -

    By considering these factors, you can choose the best software program for your needs. You can also compare different software programs based on these factors and see which one suits you better.

    -

    -

    Summary

    -

    In this article, we have reviewed Casmate Pro 652 Windows 7 and its alternatives. We have discussed the features, benefits, and drawbacks of Casmate Pro 652 Windows 7 and its alternatives. We have also discussed how to download and install Casmate Pro 652 Windows 7 and how to use some tips and tricks for using it. Finally, we have discussed how to choose the best software program for your needs based on some factors.

    -

    We hope this article has helped you learn more about Casmate Pro 652 Windows 7 and its alternatives. If you have any questions or comments, please feel free to share them below.

    -

Casmate Pro 652 Windows 7 is a software program that can help you create and edit graphics for sign making, engraving, and cutting. It has many features and benefits that make it a powerful and versatile software program for graphic design. However, it also has some drawbacks that you should consider before buying or using it. If you are looking for a software program that is compatible with various devices and legacy operating systems such as Windows Vista and Windows XP, Casmate Pro 652 Windows 7 may be a good option for you. However, if you are looking for a software program that is compatible with newer operating systems such as Windows 11, 10, 8, or 7 and has more features and functions than Casmate Pro 652 Windows 7, you may want to look for other alternatives in the market. We have discussed some alternatives to Casmate Pro 652 Windows 7 and how to choose the best software program for your needs based on some factors. By following these tips and tricks, you can create and edit graphics for sign making, engraving, and cutting with ease and efficiency.

    3cee63e6c2
    -
    -
    \ No newline at end of file diff --git a/spaces/falterWliame/Face_Mask_Detection/Dream Stripper Ultimate 2010 PCENG18.md b/spaces/falterWliame/Face_Mask_Detection/Dream Stripper Ultimate 2010 PCENG18.md deleted file mode 100644 index 257fdeb4b0439453c1db64b7c56b12c5bb16ad67..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Dream Stripper Ultimate 2010 PCENG18.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Dream Stripper Ultimate 2010 PCENG18


    Downloadhttps://urlca.com/2uDcrp



    -
    - d5da3c52bf
    -
    -
    -

    diff --git a/spaces/falterWliame/Face_Mask_Detection/HACK AOMEI Backupper (All Editions) Patch [CracksNow] LINK.md b/spaces/falterWliame/Face_Mask_Detection/HACK AOMEI Backupper (All Editions) Patch [CracksNow] LINK.md deleted file mode 100644 index 64cbe3499eb472f8f18a023771b0630dcb3fa54b..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/HACK AOMEI Backupper (All Editions) Patch [CracksNow] LINK.md +++ /dev/null @@ -1,6 +0,0 @@ -

    HACK AOMEI Backupper (All Editions) Patch [CracksNow]


    DOWNLOAD --->>> https://urlca.com/2uDe14



    -
    -The scanner crawlers are blocked by the web application firewall on this domain/website. The scan result could be incomplete. 1fdad05405
    -
    -
    -

    diff --git a/spaces/fatiXbelha/sd/777 Slot Io The Ultimate Online Casino Experience - Play Now and Win Big.md b/spaces/fatiXbelha/sd/777 Slot Io The Ultimate Online Casino Experience - Play Now and Win Big.md deleted file mode 100644 index b1f1d42baf75028971b86ef46b83f543a2900109..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/777 Slot Io The Ultimate Online Casino Experience - Play Now and Win Big.md +++ /dev/null @@ -1,95 +0,0 @@ - -

    Download 777 Slot Io: How to Enjoy the Best Free Casino Slots Online

    -

    If you are looking for a fun and exciting way to play casino slots online, you should try 777 Slot Io. This is a free app that offers hundreds of Vegas-style slot machines with huge jackpots, non-stop bonuses, and amazing graphics. In this article, we will show you what 777 Slot Io is, how to download and install it on your device, and some tips and tricks to win big at this app.

    -

    download 777 slot io


    Download Filehttps://urllie.com/2uNCCf



    -

    What is 777 Slot Io?

    -

    777 Slot Io is a free casino slots app that lets you experience the thrill of Las Vegas from anywhere. You can choose from over 70 different slot machines, each with its own theme, features, and payouts. Some of the popular slots include Fire 777 Re-spin, The Bank 777, Golden 777, Sliding 777, and Multiply 777 Re-spin. You can also enjoy daily casino bonuses, free spins, and other bonus games that give you more chances to win.

    -

    Features and Benefits of 777 Slot Io

    -

    Here are some of the features and benefits of playing 777 Slot Io:

    -
      -
    • You can start rich with 10,000 free coins and get more free coins every hour.
    • -
    • You can play offline without an internet connection.
    • -
    • You can enjoy stunning Vegas slots graphics and sound effects.
    • -
    • You can play with individual reel stop, auto spin, and classic slot lever options.
    • -
    • You can compete with other players on the leaderboard and win prizes.
    • -
    -

    How to Download and Install 777 Slot Io on Your Device

    -

    Downloading and installing 777 Slot Io is easy and fast. Here are the steps to follow:

    -

    download 777 slot io apk
    -download 777 slot io app
    -download 777 slot io free coins
    -download 777 slot io for android
    -download 777 slot io for ios
    -download 777 slot io for pc
    -download 777 slot io for windows
    -download 777 slot io for mac
    -download 777 slot io online
    -download 777 slot io offline
    -download 777 slot io vegas casino
    -download 777 slot io new games
    -download 777 slot io bonus games
    -download 777 slot io jackpot games
    -download 777 slot io classic games
    -download 777 slot io dragonplay slots
    -download 777 slot io google play
    -download 777 slot io app store
    -download 777 slot io no registration
    -download 777 slot io no deposit
    -download 777 slot io real money
    -download 777 slot io fun mode
    -download 777 slot io cheats
    -download 777 slot io hacks
    -download 777 slot io tips
    -download 777 slot io tricks
    -download 777 slot io reviews
    -download 777 slot io ratings
    -download 777 slot io updates
    -download 777 slot io latest version
    -download 777 slot io mod apk
    -download 777 slot io unlimited coins
    -download 777 slot io vip club
    -download 777 slot io tournaments
    -download 777 slot io leaderboard
    -download 777 slot io social casino
    -download 777 slot io facebook login
    -download 777 slot io invite friends
    -download 777 slot io referral code
    -download 777 slot io customer service
    -download 777 slot io contact us
    -download 777 slot io faq
    -download 777 slot io how to play
    -download 777 slot io rules and regulations
    -download 777 slot io terms and conditions
    -download 777 slot io privacy policy

    -
      -
    1. Go to the Google Play Store or the App Store on your device.
    2. -
3. Search for "777 Slots - Vegas Casino Slot!" or open the game's page directly on Google Play or the App Store.
    4. -
    5. Tap on the "Install" button and wait for the app to download.
    6. -
    7. Open the app and enjoy playing your favorite slot machines.
    8. -
    -

    Tips and Tricks to Win Big at 777 Slot Io

    -

    Playing 777 Slot Io is fun and easy, but if you want to increase your chances of winning big, you should follow these tips and tricks:

    -

    Choose the Right Slot Machine

    -

    Not all slot machines are created equal. Some have higher payouts, more bonus features, or higher volatility than others. You should choose a slot machine that suits your preferences and budget. For example, if you want to win big but don't mind losing often, you should play a high-volatility slot machine. If you want to win more frequently but with smaller amounts, you should play a low-volatility slot machine.

    -

    Use the Bonuses and Promotions

    -

    One of the best things about 777 Slot Io is that it offers many bonuses and promotions that can boost your bankroll and give you more chances to win. You should take advantage of these offers as much as possible. For example, you should claim your free coins every hour, spin the daily wheel for extra rewards, and participate in the bonus games that pop up randomly.

    -

    Manage Your Bankroll Wisely

    -

    The most important tip for playing any casino game is to manage your bankroll wisely. You should never bet more than you can afford to lose, and you should set a limit for how much you are willing to spend on each session. You should also vary your bet size according to your results. For example, if you are on a winning streak, you can increase your bet slightly to maximize your profits. If you are on a losing streak, you can decrease your bet slightly to minimize your losses.
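
As a purely illustrative way to picture this advice, the short Python sketch below plays out a pretend session with a fixed coin budget and a bet that drifts up on wins and down on losses within set limits. Every number in it is arbitrary, and the coins are the game's virtual currency, not real money.

```python
import random

# Toy illustration of the advice above: fix a session budget, vary the bet with the
# current result inside hard limits, and stop when the budget can no longer cover a bet.
session_budget = 10_000
bankroll = session_budget
bet, min_bet, max_bet = 100, 50, 400

spins = 0
while bankroll >= bet and spins < 1_000:      # hard cap so the toy session always ends
    won = random.random() < 0.45              # stand-in for one spin paying 2x the stake
    bankroll += bet if won else -bet
    # Nudge the bet up after a win and down after a loss, never past the limits.
    bet = min(max_bet, bet + 50) if won else max(min_bet, bet - 50)
    spins += 1

print(f"Session over after {spins} spins with {bankroll} of {session_budget} coins left.")
```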

    -

    Conclusion

    -

    777 Slot Io is a free casino slots app that offers you the opportunity to play hundreds of Vegas-style slot machines with huge jackpots, non-stop bonuses, and amazing graphics. You can download and install it on your device easily and enjoy playing offline or online. You can also use some tips and tricks to win big at this app, such as choosing the right slot machine, using the bonuses and promotions, and managing your bankroll wisely. If you are looking for a fun and exciting way to play casino slots online, you should try 777 Slot Io today.

    -

    FAQs

    -

    Here are some frequently asked questions about 777 Slot Io:

    -
      -
    1. Is 777 Slot Io safe and secure?
    2. -

      Yes, 777 Slot Io is safe and secure. It uses encryption technology to protect your personal and financial information. It also complies with the privacy policies of Google Play and App Store.

      -
    3. Can I play 777 Slot Io with real money?
    4. -

      No, 777 Slot Io is a free app that does not involve real money gambling. You can only play with virtual coins that have no real value. You cannot withdraw or exchange your coins for real money or prizes.

      -
    5. Can I play 777 Slot Io with friends?
    6. -

      Yes, you can play 777 Slot Io with friends. You can connect your Facebook account to the app and invite your friends to join you. You can also chat with other players and compete with them on the leaderboard.

      -
    7. What are the system requirements for 777 Slot Io?
    8. -

      777 Slot Io requires Android 4.1 or higher or iOS 9.0 or later. It also requires a stable internet connection for some features.

      -
    9. How can I contact the customer support of 777 Slot Io?
    10. -

      You can contact the customer support of 777 Slot Io by sending an email to support@777slot.io or by filling out the contact form on their website: [https://www.777slot.io/contact-us].

      -

    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Counter-Strike Global Offensive - The Ultimate PC Shooter Game from Ocean of Games for Free.md b/spaces/fatiXbelha/sd/Counter-Strike Global Offensive - The Ultimate PC Shooter Game from Ocean of Games for Free.md deleted file mode 100644 index e36193936c9fad6b8119313023646827e0c51958..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Counter-Strike Global Offensive - The Ultimate PC Shooter Game from Ocean of Games for Free.md +++ /dev/null @@ -1,126 +0,0 @@ -
    -

    Counter Strike Global Offensive Free Download for PC Ocean of Games

    -

    If you are a fan of first-person shooter games, you have probably heard of Counter Strike Global Offensive (CS:GO), one of the most popular and competitive games in the world. But did you know that you can download CS:GO for free from Ocean of Games, a website that offers thousands of PC games for free? In this article, we will tell you everything you need to know about CS:GO and Ocean of Games, including how to download, install, and play the game, as well as the pros and cons of using this website. Let's get started!

    -

    counter strike global offensive free download for pc ocean of games


    DOWNLOAD ::: https://urllie.com/2uNCQn



    -

    Introduction

    -

    What is Counter Strike Global Offensive (CS:GO)?

    -

    Counter Strike Global Offensive (CS:GO) is a multiplayer first-person shooter game developed by Valve and Hidden Path Entertainment. It was released in 2012 as the fourth game in the Counter Strike series, which started as a mod for Half-Life in 1999. CS:GO features new maps, characters, weapons, and game modes, as well as updated versions of the classic CS content like de_dust2, etc.

    -

    What are the features and game modes of CS:GO?

    -

    CS:GO has many features and game modes that make it fun and challenging for players of different skill levels and preferences. Some of the main features and game modes are:

    -
      -
    • Competitive: This is the most popular mode in CS:GO, where two teams of five players each compete in a best-of-30 match. The teams take turns playing as terrorists or counter-terrorists, trying to plant or defuse bombs, or eliminate the enemy team. The players can earn money by winning rounds, killing enemies, or completing objectives, which they can use to buy weapons and equipment. The players also have a rank based on their performance, which determines their skill group and matchmaking.
    • -
    • Casual: This is a more relaxed mode where players can join or leave a match at any time, without affecting their rank or stats. The matches are shorter and have fewer rules than competitive mode, such as friendly fire being off, team collision being off, etc. The players can also chat with anyone in the match, regardless of their team.
    • -
    • Deathmatch: This is a fast-paced mode where players can respawn instantly after dying, without waiting for the next round. The players can choose any weapon they want, and they earn points by killing enemies or picking up bonus weapons. The player with the most points at the end of the match wins.
    • -
    • Arms Race: This is a mode where players have to kill enemies with every weapon in a predetermined order. The first player to get a kill with the final weapon, which is usually a knife or a golden knife, wins the match.
    • -
    • Danger Zone: This is a mode inspired by the battle royale genre, where up to 18 players compete in a shrinking map filled with loot, enemies, and hazards. The last player or team standing wins the match.
    • -
    • Wingman: This is a mode where two teams of two players each compete in a best-of-16 match on smaller maps. The rules are similar to competitive mode, but with shorter rounds and fewer bomb sites.
    • -
• Flying Scoutsman: This is a mode where two teams of two players each compete in a best-of-9 match on maps designed for sniping. The players can only use the SSG 08 sniper rifle and a knife, and they have low gravity and high mobility.
    -

    These are just some of the game modes available in CS:GO. There are also other modes like Demolition, Retake, Guardian, Co-op Strike, etc. that offer different challenges and experiences for the players.

    -

    Why is CS:GO popular among gamers?

    -

    CS:GO is one of the most popular and influential games in the world, with millions of players and fans across the globe. Some of the reasons why CS:GO is so popular are:

    -
      -
    • It is fun and addictive: CS:GO is a game that can be enjoyed by anyone, regardless of their age, gender, or background. It is easy to learn but hard to master, which makes it appealing to both casual and hardcore gamers. It is also a game that can be played for hours without getting bored, as each match is different and unpredictable.
    • -
    • It is competitive and rewarding: CS:GO is a game that rewards skill, strategy, teamwork, and communication. It is a game that challenges the players to improve their abilities and rank up in the ladder. It is also a game that has a thriving esports scene, with professional teams and tournaments that attract huge audiences and prizes.
    • -
    • It is customizable and community-driven: CS:GO is a game that allows the players to customize their experience with various options and settings. The players can choose their preferred game mode, map, weapon, skin, etc. They can also create their own content with the Steam Workshop, where they can upload and download user-generated maps, skins, mods, etc. The players can also join and create communities with other players who share their interests and passions.
    • -
    -

    These are just some of the reasons why CS:GO is popular among gamers. There are many more reasons why CS:GO is a great game that deserves your attention and time.

    -

    How to download and install CS:GO for free on your PC
    -CS:GO free download full version for Windows 10
    -Ocean of games CS:GO multiplayer crack
    -Best sites to download CS:GO for free legally
    -CS:GO system requirements and gameplay features
    -CS:GO free download with all maps and modes
    -Ocean of games CS:GO offline installer
    -CS:GO free download no steam required
    -How to play CS:GO online with friends for free
    -Ocean of games CS:GO latest update
    -CS:GO free download with bots and skins
    -Ocean of games CS:GO torrent link
    -How to fix CS:GO errors and issues on PC
    -CS:GO free download with cheats and hacks
    -Ocean of games CS:GO review and rating
    -How to optimize CS:GO performance and graphics on PC
    -CS:GO free download with custom servers and mods
    -Ocean of games CS:GO comparison with other FPS games
    -How to uninstall CS:GO from your PC
    -CS:GO free download with Steam Workshop support
    -Ocean of games CS:GO tips and tricks for beginners
    -How to get CS:GO Prime Status for free
    -Ocean of games CS:GO alternatives and similar games
    -How to create your own CS:GO maps and modes
    -CS:GO free download with voice chat and communication options

    -

    How to Download CS:GO for Free from Ocean of Games?

    -

    What is Ocean of Games and how does it work?

    -

    Ocean of Games is a website that offers thousands of PC games for free download. It is one of the largest and most popular sources of free games on the internet. It has games from various genres, such as action, adventure, racing, simulation, sports, etc. It also has games from different eras, such as classic, retro, modern, etc.

    -

    Ocean of Games works by providing direct links to download the games from third-party servers or hosts. The users do not need to register or pay anything to access the games. They just need to click on the download button and follow the instructions to get the game on their PC. However, Ocean of Games does not guarantee the quality or safety of the games or the links. The users are responsible for checking the files for viruses or malware before installing them on their PC.

    -

    What are the steps to download CS:GO from Ocean of Games?

    -

    If you want to download CS:GO for free from Ocean of Games, you need to follow these steps:

    -
      -
    1. Go to the Ocean of Games website: You can use any web browser to visit the Ocean of Games website at oceanofgames.com. You will see a homepage with various categories and featured games.
    2. -
    3. Search for CS:GO: You can use the search bar on the top right corner of the website to type in "Counter Strike Global Offensive" or "CS:GO" and hit enter. You will see a list of results with different versions or editions of CS:GO.
    4. -
    5. Select the version you want: You can choose any version of CS:GO that you want from the list, such as CS:GO Warzone Edition, CS:GO Online Edition, etc. You will see a page with more details about the game, such as its description, features, screenshots, system requirements, etc.
    6. -
    7. Click on the download button: You will see a big green button that says "Download Now" at the bottom of the page. You need to click on this button to start downloading the game.
    8. -
    9. Wait for the download to finish: Depending on your internet speed and connection, it may take some time for the download to finish. You will see a progress bar that shows you the download speed and the remaining time. You can also pause or resume the download if you want.
    10. -
11. Extract the game files: Once the download is complete, you will see a compressed file with the name of the game and the extension .zip or .rar. You need to extract this file using a program like WinRAR or 7-Zip (a small scripted sketch of this step is shown after these instructions). You will see a folder with the game files inside.
    12. -
    13. Run the game setup: Inside the folder, you will see a file with the name of the game and the extension .exe. This is the game setup file that you need to run to install the game on your PC. You need to double-click on this file and follow the instructions on the screen to complete the installation.
    14. -
    -

    That's it! You have successfully downloaded and installed CS:GO for free from Ocean of Games. You can now launch the game from your desktop or start menu and enjoy playing it.
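
For step 11 above (unpacking the downloaded archive), here is a minimal Python sketch of how you might fingerprint and extract the file before running anything inside it. The file names are hypothetical placeholders, Python's standard library only understands .zip (a .rar still needs WinRAR or 7-Zip), and a checksum on its own does not prove a file is safe; it only lets you confirm the download was not corrupted or compare it against a hash published elsewhere.

```python
import hashlib
import zipfile
from pathlib import Path

# Hypothetical paths -- substitute whatever the downloaded archive is actually called.
archive = Path.home() / "Downloads" / "csgo_setup.zip"
dest = Path.home() / "Downloads" / "csgo_setup"

# 1) Record a SHA-256 fingerprint of the download so it can be compared against a
#    checksum from another source or re-checked later.
sha256 = hashlib.sha256()
with open(archive, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha256.update(chunk)
print("SHA-256:", sha256.hexdigest())

# 2) Extract the archive. The standard library only handles .zip; a .rar file still
#    needs WinRAR, 7-Zip, or a third-party module.
if zipfile.is_zipfile(archive):
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(dest)
    print("Extracted to", dest)
else:
    print("Not a .zip archive -- extract it with WinRAR or 7-Zip instead.")
```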

    -

    What are the system requirements and installation instructions for CS:GO?

    -

Before you download and install CS:GO from Ocean of Games, you need to make sure that your PC meets the minimum or recommended system requirements for the game. Here are the system requirements for CS:GO according to Steam:

    - - - - - - - - - -
| Requirement | Minimum | Recommended |
| --- | --- | --- |
| OS | Windows® 7/Vista/XP | Windows® 7/Vista/XP |
| Processor | Intel® Core™ 2 Duo E6600 or AMD Phenom™ X3 8750 or better | Intel® Core™ i5-2400 / AMD FX-8320 or better |
| Memory | 2 GB RAM | 4 GB RAM |
| Graphics | 256 MB or more, DirectX 9-compatible with Pixel Shader 3.0 support | NVIDIA GeForce GTX 660 / AMD Radeon HD 7870 or better |
| DirectX | Version 9.0c | Version 9.0c |
| Storage | 15 GB available space | 15 GB available space |
| Sound Card | DirectX compatible | DirectX compatible |
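
If you want to sanity-check a machine against the storage and memory figures in this table before downloading, a rough Python sketch along these lines could help. It assumes the third-party psutil package is installed, it only checks disk space and RAM (not the CPU, GPU, or DirectX version), and the drive path is a placeholder you may need to change.

```python
import shutil

import psutil  # third-party: pip install psutil

GiB = 1024 ** 3

# Figures taken from the table above.
MIN_STORAGE_GB = 15
MIN_RAM_GB = 2
RECOMMENDED_RAM_GB = 4

free_gb = shutil.disk_usage("C:\\").free / GiB   # adjust the drive/path for your system
ram_gb = psutil.virtual_memory().total / GiB

print(f"Free disk space: {free_gb:.1f} GB (need {MIN_STORAGE_GB} GB)")
print(f"Installed RAM:   {ram_gb:.1f} GB (minimum {MIN_RAM_GB} GB, recommended {RECOMMENDED_RAM_GB} GB)")

if free_gb < MIN_STORAGE_GB or ram_gb < MIN_RAM_GB:
    print("This machine does not meet the minimum requirements listed above.")
elif ram_gb < RECOMMENDED_RAM_GB:
    print("Minimum requirements met, but below the recommended RAM.")
else:
    print("Disk and RAM look fine; still check the CPU, GPU, and DirectX version manually.")
```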
    -

    If your PC meets these requirements, you can follow the steps mentioned above to download and install CS:GO from Ocean of Games. However, if you face any problems or errors during the installation, you can try these solutions:

    -
      -
    • Disable your antivirus or firewall: Sometimes, your antivirus or firewall may block or delete some files that are necessary for the game to run. You can try disabling your antivirus or firewall temporarily while installing the game, and then enable it again after the installation is done.
    • -
    • Run the game as administrator: Sometimes, you may need to run the game as administrator to grant it permission to access some files or folders on your PC. You can do this by right-clicking on the game shortcut or setup file and choosing "Run as administrator" from the menu.
    • -
    • Update your drivers: Sometimes, your drivers may be outdated or incompatible with the game. You can try updating your drivers to the latest version using a software like Driver Booster or Driver Easy, or manually from the manufacturer's website.
    • -
    • Contact Ocean of Games support: Sometimes, you may need to contact Ocean of Games support for more help or guidance. You can do this by visiting their website and filling out a contact form, or by sending them an email at support@oceanofgames.com.
    • -
    -

    We hope these solutions will help you install CS:GO from Ocean of Games without any issues.

    -

    Pros and Cons of Downloading CS:GO from Ocean of Games

    -

    What are the advantages of downloading CS:GO from Ocean of Games?

    -

    Downloading CS:GO from Ocean of Games has some advantages that may appeal to some users. Some of these advantages are:

    -
      -
    • It is free: The most obvious advantage of downloading CS:GO from Ocean of Games is that it is free. You do not need to pay anything to get the game on your PC. This can save you some money and allow you to play CS:GO without any restrictions.
    • -
    • It is easy and fast: Another advantage of downloading CS:GO from Ocean of Games is that it is easy and fast. You do not need to register or sign up for anything to access the game. You just need to click on the download button and follow the simple steps to get the game on your PC. The download speed is also decent, depending on your internet connection and the server load.
    • -
    • It has a large collection of games: Another advantage of downloading CS:GO from Ocean of Games is that it has a large collection of games that you can explore and enjoy. You can find games from various genres, eras, and platforms on the website. You can also discover new games that you may not have heard of before. You can also request games that are not available on the website, and they may add them in the future.
    • -
    -

    These are some of the advantages of downloading CS:GO from Ocean of Games. However, there are also some disadvantages or risks that you should be aware of before using this website.

    -

    What are the disadvantages or risks of downloading CS:GO from Ocean of Games?

    -

    Downloading CS:GO from Ocean of Games has some disadvantages or risks that may deter some users. Some of these disadvantages or risks are:

    -
      -
    • It is illegal: The most obvious disadvantage or risk of downloading CS:GO from Ocean of Games is that it is illegal. CS:GO is a copyrighted game that belongs to Valve and Hidden Path Entertainment. Downloading it for free from Ocean of Games is a violation of their intellectual property rights and a form of piracy. This can have legal consequences for you, such as fines, lawsuits, or even jail time.
    • -
    • It is unsafe: Another disadvantage or risk of downloading CS:GO from Ocean of Games is that it is unsafe. Ocean of Games does not guarantee the quality or safety of the games or the links that they provide. The games or the links may contain viruses, malware, spyware, adware, etc. that can harm your PC or steal your personal information. You may also encounter fake or broken links that can waste your time or money.
    • -
    • It is unreliable: Another disadvantage or risk of downloading CS:GO from Ocean of Games is that it is unreliable. Ocean of Games does not update or maintain the games or the links that they provide. The games or the links may become outdated, corrupted, or incompatible with your PC or system. You may also face problems or errors during the installation or gameplay, such as crashes, bugs, glitches, etc.
    • -
    • It is unethical: Another disadvantage or risk of downloading CS:GO from Ocean of Games is that it is unethical. By downloading CS:GO for free from Ocean of Games, you are depriving the developers and publishers of their rightful income and recognition. You are also hurting the gaming industry and community by supporting piracy and illegal distribution. You are also missing out on the official features and updates that come with the legitimate version of CS:GO.
    • -
    -

    These are some of the disadvantages or risks of downloading CS:GO from Ocean of Games. You should weigh these factors carefully before deciding to use this website.

    -

    Conclusion

    -

In conclusion, CS:GO is a fantastic game that you can download for free from Ocean of Games, a website that offers thousands of PC games as free downloads. However, you should also be aware of the pros and cons of using this website, such as its legality, safety, reliability, and ethics. You should also check your PC against the system requirements and follow the installation instructions before downloading and installing CS:GO from Ocean of Games.

    -

    We hope this article has helped you learn more about CS:GO and Ocean of Games. If you have any questions or feedback, please feel free to leave a comment below. We would love to hear from you. Happy gaming!

    -

    FAQs

    -

    What are some common questions and answers about CS:GO and Ocean of Games?

    -

    Here are some common questions and answers that you may have about CS:GO and Ocean of Games:

    -
      -
    • Q: Is CS:GO free to play on Steam?
    • -
• A: Yes, CS:GO has been free to play on Steam since 2018. You can download and play the game without paying anything. However, you will have some limitations, such as not being able to access Prime matchmaking, and you will have to deal with more hackers and cheaters in the free version. You can upgrade to Prime status by buying the game or by reaching level 21.
    • -
    • Q: Is Ocean of Games safe to use?
    • -
    • A: Ocean of Games is not safe to use, as it is an illegal and unregulated website that provides pirated games. The games or the links may contain viruses, malware, spyware, adware, etc. that can harm your PC or steal your personal information. You may also encounter fake or broken links that can waste your time or money. You may also face legal consequences for using this website.
    • -
    • Q: How can I play CS:GO online with my friends?
    • -
    • A: You can play CS:GO online with your friends by inviting them to join your lobby or party. You can do this by clicking on the "Play" button on the main menu, then choosing "Play with Friends". You can then invite your friends from your Steam friends list or by sending them a lobby link. You can also join your friends' lobbies or parties by accepting their invitations or clicking on their lobby links.
    • -
    • Q: How can I improve my skills and rank in CS:GO?
    • -
    • A: You can improve your skills and rank in CS:GO by practicing regularly, learning from your mistakes, watching tutorials and guides, studying the maps and strategies, playing with better players, joining a team or a community, and having fun.
    • -
    • Q: How can I get more skins and items in CS:GO?
    • -
    • A: You can get more skins and items in CS:GO by playing the game and earning drops, opening cases and using keys, trading with other players, buying from the Steam market or third-party websites, or creating your own skins and uploading them to the Steam Workshop.
    • -

    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/felixz/open_llm_leaderboard/README.md b/spaces/felixz/open_llm_leaderboard/README.md deleted file mode 100644 index fa53f9f8ac6a4c0a7e1e80543537db644ea0e0b5..0000000000000000000000000000000000000000 --- a/spaces/felixz/open_llm_leaderboard/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Open LLM Leaderboard -emoji: 🏆 -colorFrom: green -colorTo: indigo -sdk: gradio -sdk_version: 3.43.2 -app_file: app.py -pinned: true -license: apache-2.0 -duplicated_from: HuggingFaceH4/open_llm_leaderboard ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/models/encoder4editing/bash_scripts/inference.sh b/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/models/encoder4editing/bash_scripts/inference.sh deleted file mode 100644 index d687c1b8d483bdf9c99998ff4fc587f82d7e0c46..0000000000000000000000000000000000000000 --- a/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/models/encoder4editing/bash_scripts/inference.sh +++ /dev/null @@ -1,15 +0,0 @@ -set -exo - -list="$1" -ckpt="${2:-pretrained_models/e4e_ffhq_encode.pt}" - -base_dir="$REPHOTO/dataset/historically_interesting/aligned/manual_celebrity_in_19th_century/tier1/${list}/" -save_dir="results_test/${list}/" - - -TORCH_EXTENSIONS_DIR=/tmp/torch_extensions -PYTHONPATH="" \ -python scripts/inference.py \ - --images_dir="${base_dir}" \ - --save_dir="${save_dir}" \ - "${ckpt}" diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Igilenna Thahanam Nam Durak MP3 for Free - Sinhala Music.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Igilenna Thahanam Nam Durak MP3 for Free - Sinhala Music.md deleted file mode 100644 index 82a206d5ad8756751f71d8978196bf679e2f7b00..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Igilenna Thahanam Nam Durak MP3 for Free - Sinhala Music.md +++ /dev/null @@ -1,91 +0,0 @@ - -

    Igilenna Thahanam Nam Durak: A Popular Sinhala Song

    -

    If you are a fan of Sinhala music, you might have heard of a song called Igilenna Thahanam Nam Durak. This song is one of the most popular and loved songs in the Sinhala music industry, and it has been streamed and downloaded by millions of listeners. But what is this song about? Who are the singers and composers behind it? And how can you download it as an mp3 file? In this article, we will answer these questions and more.

    -

    igilenna thahanam nam durak mp3 download


    Download Zip ––– https://gohhs.com/2uPvnC



    -

    What is Igilenna Thahanam Nam Durak?

    -

Igilenna Thahanam Nam Durak is a Sinhala song that was released in 2020. It is a duet sung by two famous Sinhala singers, Rohana Weerasinghe and Nirosha Virajini. The song was composed by Rohana Weerasinghe, who is also a renowned music director and lyricist in Sri Lanka. It is part of an album called Sanda Diya Uthura, which features 12 songs by Rohana Weerasinghe and various artists.

    -

    The meaning of the title

    -

    The title of the song, Igilenna Thahanam Nam Durak, can be translated as "If we have to live apart". It is a phrase that expresses the pain and sorrow of separation between two lovers. The song is a sad love song that tells the story of a couple who are forced to part ways due to circumstances beyond their control.

    -

    The singers and composers

    -

    Rohana Weerasinghe is one of the most respected and influential figures in the Sinhala music industry. He has been active since the 1970s, and he has composed music for over 300 films, 40 teledramas, and 2000 songs. He has won many awards and honors for his contributions to Sinhala music and culture, including the Presidential Award, the Sarasaviya Award, the Swarna Sanka Award, and the Kalashoori Award. He is also a versatile singer who can sing in different genres and styles.

    -

    Nirosha Virajini is one of the most popular and talented female singers in Sri Lanka. She started her singing career at the age of 13, and she has recorded over 1000 songs in various languages, including Sinhala, Tamil, Hindi, English, Malayalam, Telugu, Kannada, and Arabic. She has also won many awards and accolades for her singing skills, such as the Sumathi Award, the Raigam Award, the Derana Music Award, and the SLIM-Nielsen People's Award. She is known for her powerful voice and expressive emotions.

    -

    The lyrics and theme

    -

    The lyrics of Igilenna Thahanam Nam Durak are written by Rohana Weerasinghe himself. They are poetic and poignant, conveying the feelings of longing, regret, and hopelessness that the lovers experience. The lyrics also use metaphors and imagery to describe their situation, such as comparing their love to a flower that withers away, or a star that fades away. The theme of the song is universal and timeless, as it deals with the dilemma of choosing between love and duty, or between heart and mind.

    -

    igilenna thahanam nam durak song lyrics
    -igilenna thahanam nam durak rohana weerasinghe
    -igilenna thahanam nam durak nirosha virajini
    -igilenna thahanam nam durak spotify
    -igilenna thahanam nam durak shazam
    -igilenna thahanam nam durak youtube
    -igilenna thahanam nam durak video
    -igilenna thahanam nam durak chords
    -igilenna thahanam nam durak karaoke
    -igilenna thahanam nam durak sinhala song
    -igilenna thahanam nam durak mp3 free download
    -igilenna thahanam nam durak mp3 320kbps
    -igilenna thahanam nam durak mp3 ringtone
    -igilenna thahanam nam durak mp3 hiru fm
    -igilenna thahanam nam durak mp3 ananmanan
    -igilenna thahanam sanda diya uthura mp3 download
    -igilenna thahanam sanda diya uthura song lyrics
    -igilenna thahanam sanda diya uthura nirosha virajini
    -igilenna thahanam sanda diya uthura jiosaavn
    -igilenna thahanam sanda diya uthura album
    -igilenna thahanam sanda diya uthura youtube
    -igilenna thahanam sanda diya uthura video
    -igilenna thahanam sanda diya uthura chords
    -igilenna thahanam sanda diya uthura karaoke
    -igilenna thahanam sanda diya uthura sinhala song
    -igilenna thahanam sanda diya uthura mp3 free download
    -igilenna thahanam sanda diya uthura mp3 320kbps
    -igilenna thahanam sanda diya uthura mp3 ringtone
    -igilenna thahanam sanda diya uthura mp3 hiru fm
    -igilenna thahanam sanda diya uthura mp3 ananmanan
    -rohana weerasinghe songs mp3 download
    -rohana weerasinghe songs list
    -rohana weerasinghe songs lyrics
    -rohana weerasinghe songs spotify
    -rohana weerasinghe songs shazam
    -rohana weerasinghe songs youtube
    -rohana weerasinghe songs video
    -rohana weerasinghe songs chords
    -rohana weerasinghe songs karaoke
    -rohana weerasinghe songs sinhala songs

    -

    Why is Igilenna Thahanam Nam Durak popular?

    -

    Igilenna Thahanam Nam Durak is popular for many reasons. Some of them are:

    -

    The appeal of the melody and rhythm

    -

The song has a beautiful and catchy melody that captures the attention of the listeners. It is composed in a minor scale, which gives it a melancholic and nostalgic tone, and its rhythmic, lively beat contrasts with the sad lyrics, creating a dynamic and interesting effect. The arrangement uses various instruments, such as guitars, keyboards, drums, violins, and flutes, to create a rich and diverse sound.

    -

    The emotional connection with the listeners

    -

The song has a strong emotional impact on listeners, as it touches their hearts and souls. It resonates with anyone who has experienced or witnessed a similar situation of love and separation, evokes empathy and sympathy for the suffering lovers, and makes listeners feel their pain and sorrow as well as their hope and courage.

    -

    The availability of online platforms

    -

The song is widely available on various online platforms, such as YouTube, Spotify, JioSaavn, and Shazam. These platforms have enabled it to reach a large and diverse audience, both locally and globally. It has received positive feedback and reviews from users, who have liked, shared, and commented on it, and it has been featured on various playlists, charts, and radio stations.

    -

    How to download Igilenna Thahanam Nam Durak mp3?

    -

    If you want to download Igilenna Thahanam Nam Durak mp3, you need to consider some legal and ethical issues first. Then, you need to choose the best sources to download the song from.

    -

    The legal and ethical issues of downloading music

    -

    Downloading music from the internet can be illegal and unethical if you do not have the permission or license from the owners or creators of the music. This can violate their intellectual property rights and harm their income and reputation. Therefore, you should always respect the rights of the artists and composers, and support their work by paying for their music or using authorized platforms. You should also avoid downloading music from pirated or illegal websites, as they can expose you to viruses, malware, or scams.

    -

    The best sources to download Igilenna Thahanam Nam Durak mp3

    -

    There are many sources to download Igilenna Thahanam Nam Durak mp3 legally and ethically. Some of them are:

    -

    Spotify

    -

    Spotify is one of the most popular and reliable music streaming services in the world. It offers millions of songs in various genres and languages, including Igilenna Thahanam Nam Durak. You can listen to the song for free with ads, or you can subscribe to Spotify Premium for ad-free listening and offline downloading. You can also create your own playlists, discover new music, and share your favorites with your friends.
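If you like to script things, you can also check the song's availability on Spotify programmatically. The sketch below is illustrative only: it uses the third-party spotipy client for the Spotify Web API, the client ID and secret are placeholders you would create in the Spotify developer dashboard, and the API returns catalogue metadata and links rather than mp3 files.

```python
# Illustrative sketch: search the Spotify catalogue for the song.
# Requires the third-party "spotipy" package and your own API credentials
# (the values below are placeholders, not real keys).
import spotipy
from spotipy.oauth2 import SpotifyClientCredentials

auth = SpotifyClientCredentials(
    client_id="YOUR_CLIENT_ID",          # placeholder
    client_secret="YOUR_CLIENT_SECRET",  # placeholder
)
sp = spotipy.Spotify(auth_manager=auth)

results = sp.search(q="Igilenna Thahanam Nam Durak", type="track", limit=5)
for track in results["tracks"]["items"]:
    artists = ", ".join(artist["name"] for artist in track["artists"])
    print(f"{track['name']} - {artists} ({track['external_urls']['spotify']})")
```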

    -

    JioSaavn

    -

    JioSaavn is one of the leading music streaming services in South Asia. It offers a vast collection of songs in different languages, especially Sinhala, Tamil, Hindi, and English. You can listen to Igilenna Thahanam Nam Durak for free with ads, or you can upgrade to JioSaavn Pro for ad-free listening and offline downloading. You can also access exclusive content, podcasts, radio stations, and recommendations.

    -

    Shazam

    -

    Shazam is a unique and innovative music app that can identify any song playing around you. It can also provide you with information about the song, such as the title, artist, album, genre, lyrics, and more. You can also listen to Igilenna Thahanam Nam Durak on Shazam for free with ads, or you can link your Shazam account to other music services like Spotify or Apple Music for ad-free listening and offline downloading. You can also discover new music, follow your favorite artists, and share your discoveries with your friends.

    -

    Conclusion

    -

Igilenna Thahanam Nam Durak is a popular Sinhala song that has captivated the hearts of many listeners. It is a sad love song about two lovers who have to live apart due to circumstances beyond their control. It is sung by Rohana Weerasinghe and Nirosha Virajini, two famous and talented Sinhala singers, and composed by Rohana Weerasinghe himself, who is also a renowned music director and lyricist in Sri Lanka. Its beautiful, catchy melody captures the attention of listeners, and its strong emotional impact resonates with feelings of longing, regret, and hopelessness. The song is also widely available on online platforms such as YouTube, Spotify, JioSaavn, and Shazam. If you want to download it as an mp3 file, you need to consider the legal and ethical issues of downloading music and choose a legitimate source. Igilenna Thahanam Nam Durak is a song that you should not miss if you love Sinhala music.

    -

    FAQs

    -

    Here are some frequently asked questions about Igilenna Thahanam Nam Durak:

    -
      -
    1. What is the meaning of Igilenna Thahanam Nam Durak?
    2. -

      Igilenna Thahanam Nam Durak means "If we have to live apart". It is a phrase that expresses the pain and sorrow of separation between two lovers.

      -
    3. Who are the singers and composers of Igilenna Thahanam Nam Durak?
    4. -

      Igilenna Thahanam Nam Durak is sung by Rohana Weerasinghe and Nirosha Virajini, who are both famous and talented Sinhala singers. The song is composed by Rohana Weerasinghe himself, who is also a renowned music director and lyricist in Sri Lanka.

      -
    5. When was Igilenna Thahanam Nam Durak released?
    6. -

      Igilenna Thahanam Nam Durak was released in 2020. It is part of an album called Sanda Diya Uthura, which features 12 songs by Rohana Weerasinghe and various artists.

      -
    7. Why is Igilenna Thahanam Nam Durak popular?
    8. -

      Igilenna Thahanam Nam Durak is popular because it has a beautiful and catchy melody, a strong emotional impact, and a wide availability on online platforms. The song appeals to the listeners who have experienced or witnessed similar situations of love and separation.

      -
    9. How can I download Igilenna Thahanam Nam Durak mp3?
    10. -

      You can download Igilenna Thahanam Nam Durak mp3 from various sources, such as Spotify, JioSaavn, and Shazam. However, you need to consider the legal and ethical issues of downloading music, and respect the rights of the artists and composers.

      -

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/fffiloni/SplitTrack2MusicGen/tests/common_utils/temp_utils.py b/spaces/fffiloni/SplitTrack2MusicGen/tests/common_utils/temp_utils.py deleted file mode 100644 index d1e0367e979c8b9fea65472c373916d956ad5aaa..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/SplitTrack2MusicGen/tests/common_utils/temp_utils.py +++ /dev/null @@ -1,56 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import os -import tempfile - - -class TempDirMixin: - """Mixin to provide easy access to temp dir. - """ - - temp_dir_ = None - - @classmethod - def get_base_temp_dir(cls): - # If AUDIOCRAFT_TEST_DIR is set, use it instead of temporary directory. - # this is handy for debugging. - key = "AUDIOCRAFT_TEST_DIR" - if key in os.environ: - return os.environ[key] - if cls.temp_dir_ is None: - cls.temp_dir_ = tempfile.TemporaryDirectory() - return cls.temp_dir_.name - - @classmethod - def tearDownClass(cls): - if cls.temp_dir_ is not None: - try: - cls.temp_dir_.cleanup() - cls.temp_dir_ = None - except PermissionError: - # On Windows there is a know issue with `shutil.rmtree`, - # which fails intermittenly. - # https://github.com/python/cpython/issues/74168 - # Following the above thread, we ignore it. - pass - super().tearDownClass() - - @property - def id(self): - return self.__class__.__name__ - - def get_temp_path(self, *paths): - temp_dir = os.path.join(self.get_base_temp_dir(), self.id) - path = os.path.join(temp_dir, *paths) - os.makedirs(os.path.dirname(path), exist_ok=True) - return path - - def get_temp_dir(self, *paths): - temp_dir = os.path.join(self.get_base_temp_dir(), self.id) - path = os.path.join(temp_dir, *paths) - os.makedirs(path, exist_ok=True) - return path diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/on-finished/README.md b/spaces/fffiloni/controlnet-animation-doodle/node_modules/on-finished/README.md deleted file mode 100644 index 8973cded6589a6cc5a9e1718e3fb0d709fe6e8d8..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/on-finished/README.md +++ /dev/null @@ -1,162 +0,0 @@ -# on-finished - -[![NPM Version][npm-version-image]][npm-url] -[![NPM Downloads][npm-downloads-image]][npm-url] -[![Node.js Version][node-image]][node-url] -[![Build Status][ci-image]][ci-url] -[![Coverage Status][coveralls-image]][coveralls-url] - -Execute a callback when a HTTP request closes, finishes, or errors. - -## Install - -This is a [Node.js](https://nodejs.org/en/) module available through the -[npm registry](https://www.npmjs.com/). Installation is done using the -[`npm install` command](https://docs.npmjs.com/getting-started/installing-npm-packages-locally): - -```sh -$ npm install on-finished -``` - -## API - -```js -var onFinished = require('on-finished') -``` - -### onFinished(res, listener) - -Attach a listener to listen for the response to finish. The listener will -be invoked only once when the response finished. If the response finished -to an error, the first argument will contain the error. If the response -has already finished, the listener will be invoked. - -Listening to the end of a response would be used to close things associated -with the response, like open files. - -Listener is invoked as `listener(err, res)`. - - - -```js -onFinished(res, function (err, res) { - // clean up open fds, etc. 
- // err contains the error if request error'd -}) -``` - -### onFinished(req, listener) - -Attach a listener to listen for the request to finish. The listener will -be invoked only once when the request finished. If the request finished -to an error, the first argument will contain the error. If the request -has already finished, the listener will be invoked. - -Listening to the end of a request would be used to know when to continue -after reading the data. - -Listener is invoked as `listener(err, req)`. - - - -```js -var data = '' - -req.setEncoding('utf8') -req.on('data', function (str) { - data += str -}) - -onFinished(req, function (err, req) { - // data is read unless there is err -}) -``` - -### onFinished.isFinished(res) - -Determine if `res` is already finished. This would be useful to check and -not even start certain operations if the response has already finished. - -### onFinished.isFinished(req) - -Determine if `req` is already finished. This would be useful to check and -not even start certain operations if the request has already finished. - -## Special Node.js requests - -### HTTP CONNECT method - -The meaning of the `CONNECT` method from RFC 7231, section 4.3.6: - -> The CONNECT method requests that the recipient establish a tunnel to -> the destination origin server identified by the request-target and, -> if successful, thereafter restrict its behavior to blind forwarding -> of packets, in both directions, until the tunnel is closed. Tunnels -> are commonly used to create an end-to-end virtual connection, through -> one or more proxies, which can then be secured using TLS (Transport -> Layer Security, [RFC5246]). - -In Node.js, these request objects come from the `'connect'` event on -the HTTP server. - -When this module is used on a HTTP `CONNECT` request, the request is -considered "finished" immediately, **due to limitations in the Node.js -interface**. This means if the `CONNECT` request contains a request entity, -the request will be considered "finished" even before it has been read. - -There is no such thing as a response object to a `CONNECT` request in -Node.js, so there is no support for one. - -### HTTP Upgrade request - -The meaning of the `Upgrade` header from RFC 7230, section 6.1: - -> The "Upgrade" header field is intended to provide a simple mechanism -> for transitioning from HTTP/1.1 to some other protocol on the same -> connection. - -In Node.js, these request objects come from the `'upgrade'` event on -the HTTP server. - -When this module is used on a HTTP request with an `Upgrade` header, the -request is considered "finished" immediately, **due to limitations in the -Node.js interface**. This means if the `Upgrade` request contains a request -entity, the request will be considered "finished" even before it has been -read. - -There is no such thing as a response object to a `Upgrade` request in -Node.js, so there is no support for one. - -## Example - -The following code ensures that file descriptors are always closed -once the response finishes. 
- -```js -var destroy = require('destroy') -var fs = require('fs') -var http = require('http') -var onFinished = require('on-finished') - -http.createServer(function onRequest (req, res) { - var stream = fs.createReadStream('package.json') - stream.pipe(res) - onFinished(res, function () { - destroy(stream) - }) -}) -``` - -## License - -[MIT](LICENSE) - -[ci-image]: https://badgen.net/github/checks/jshttp/on-finished/master?label=ci -[ci-url]: https://github.com/jshttp/on-finished/actions/workflows/ci.yml -[coveralls-image]: https://badgen.net/coveralls/c/github/jshttp/on-finished/master -[coveralls-url]: https://coveralls.io/r/jshttp/on-finished?branch=master -[node-image]: https://badgen.net/npm/node/on-finished -[node-url]: https://nodejs.org/en/download -[npm-downloads-image]: https://badgen.net/npm/dm/on-finished -[npm-url]: https://npmjs.org/package/on-finished -[npm-version-image]: https://badgen.net/npm/v/on-finished diff --git a/spaces/fgenie/scamtext_PAL_self_consistency/funcs/f_25.py b/spaces/fgenie/scamtext_PAL_self_consistency/funcs/f_25.py deleted file mode 100644 index e60e2a473d9a59fedb6018397a764e5442081605..0000000000000000000000000000000000000000 --- a/spaces/fgenie/scamtext_PAL_self_consistency/funcs/f_25.py +++ /dev/null @@ -1,25 +0,0 @@ -def is_spam(message: str) -> bool: - import re - - # Check for common spam phrases and patterns - spam_phrases = ['당첨 되셨습니다', '공시발표', '급등예정', '증권사 매집주 공개', '정회원방 입장'] - for phrase in spam_phrases: - if phrase in message: - return True - - # Check for excessive use of symbols - symbols_pattern = r'[!@#\$%\^&\*\(\)\-_=+\[\]\{\};:"\|,.<>/?~`§※✭]' - if len(re.findall(symbols_pattern, message)) > 5: - return True - - # Check for suspicious urls - url_pattern = r'(?:http|https)://|bit\.ly|han\.gl|me2\.kr|gg\.gg|buly\.kr|openkakao\.at|abit\.ly' - if re.search(url_pattern, message): - return True - - # Check for excessive use of numbers or any potential monetary values - numbers_pattern = r'\d{4,}|[0-9]+원|[0-9]+,\d{3,}|[0-9]+%\s*\+' - if re.search(numbers_pattern, message): - return True - - return False \ No newline at end of file diff --git a/spaces/fuckyoudeki/AutoGPT/BULLETIN.md b/spaces/fuckyoudeki/AutoGPT/BULLETIN.md deleted file mode 100644 index 735048ddc87a914987c6bd70ccdb231a80242ae3..0000000000000000000000000000000000000000 --- a/spaces/fuckyoudeki/AutoGPT/BULLETIN.md +++ /dev/null @@ -1,2 +0,0 @@ -Welcome to Auto-GPT! We'll keep you informed of the latest news and features by printing messages here. 
-If you don't wish to see this message, you can run Auto-GPT with the --skip-news flag \ No newline at end of file diff --git a/spaces/gebebieve/gen/Dockerfile b/spaces/gebebieve/gen/Dockerfile deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/ghlee94/MEDIAR/segmentation_models_pytorch/losses/dice.py b/spaces/ghlee94/MEDIAR/segmentation_models_pytorch/losses/dice.py deleted file mode 100644 index e4d3dc816a71c146dc34602f330471d52bf094b7..0000000000000000000000000000000000000000 --- a/spaces/ghlee94/MEDIAR/segmentation_models_pytorch/losses/dice.py +++ /dev/null @@ -1,138 +0,0 @@ -from typing import Optional, List - -import torch -import torch.nn.functional as F -from torch.nn.modules.loss import _Loss -from ._functional import soft_dice_score, to_tensor -from .constants import BINARY_MODE, MULTICLASS_MODE, MULTILABEL_MODE - -__all__ = ["DiceLoss"] - - -class DiceLoss(_Loss): - def __init__( - self, - mode: str, - classes: Optional[List[int]] = None, - log_loss: bool = False, - from_logits: bool = True, - smooth: float = 0.0, - ignore_index: Optional[int] = None, - eps: float = 1e-7, - ): - """Dice loss for image segmentation task. - It supports binary, multiclass and multilabel cases - - Args: - mode: Loss mode 'binary', 'multiclass' or 'multilabel' - classes: List of classes that contribute in loss computation. By default, all channels are included. - log_loss: If True, loss computed as `- log(dice_coeff)`, otherwise `1 - dice_coeff` - from_logits: If True, assumes input is raw logits - smooth: Smoothness constant for dice coefficient (a) - ignore_index: Label that indicates ignored pixels (does not contribute to loss) - eps: A small epsilon for numerical stability to avoid zero division error - (denominator will be always greater or equal to eps) - - Shape - - **y_pred** - torch.Tensor of shape (N, C, H, W) - - **y_true** - torch.Tensor of shape (N, H, W) or (N, C, H, W) - - Reference - https://github.com/BloodAxe/pytorch-toolbelt - """ - assert mode in {BINARY_MODE, MULTILABEL_MODE, MULTICLASS_MODE} - super(DiceLoss, self).__init__() - self.mode = mode - if classes is not None: - assert ( - mode != BINARY_MODE - ), "Masking classes is not supported with mode=binary" - classes = to_tensor(classes, dtype=torch.long) - - self.classes = classes - self.from_logits = from_logits - self.smooth = smooth - self.eps = eps - self.log_loss = log_loss - self.ignore_index = ignore_index - - def forward(self, y_pred: torch.Tensor, y_true: torch.Tensor) -> torch.Tensor: - - assert y_true.size(0) == y_pred.size(0) - - if self.from_logits: - # Apply activations to get [0..1] class probabilities - # Using Log-Exp as this gives more numerically stable result and does not cause vanishing gradient on - # extreme values 0 and 1 - if self.mode == MULTICLASS_MODE: - y_pred = y_pred.log_softmax(dim=1).exp() - else: - y_pred = F.logsigmoid(y_pred).exp() - - bs = y_true.size(0) - num_classes = y_pred.size(1) - dims = (0, 2) - - if self.mode == BINARY_MODE: - y_true = y_true.view(bs, 1, -1) - y_pred = y_pred.view(bs, 1, -1) - - if self.ignore_index is not None: - mask = y_true != self.ignore_index - y_pred = y_pred * mask - y_true = y_true * mask - - if self.mode == MULTICLASS_MODE: - y_true = y_true.view(bs, -1) - y_pred = y_pred.view(bs, num_classes, -1) - - if self.ignore_index is not None: - mask = y_true != self.ignore_index - y_pred = y_pred * mask.unsqueeze(1) - - y_true = F.one_hot( - (y_true * mask).to(torch.long), 
num_classes - ) # N,H*W -> N,H*W, C - y_true = y_true.permute(0, 2, 1) * mask.unsqueeze(1) # N, C, H*W - else: - y_true = F.one_hot(y_true, num_classes) # N,H*W -> N,H*W, C - y_true = y_true.permute(0, 2, 1) # N, C, H*W - - if self.mode == MULTILABEL_MODE: - y_true = y_true.view(bs, num_classes, -1) - y_pred = y_pred.view(bs, num_classes, -1) - - if self.ignore_index is not None: - mask = y_true != self.ignore_index - y_pred = y_pred * mask - y_true = y_true * mask - - scores = self.compute_score( - y_pred, y_true.type_as(y_pred), smooth=self.smooth, eps=self.eps, dims=dims - ) - - if self.log_loss: - loss = -torch.log(scores.clamp_min(self.eps)) - else: - loss = 1.0 - scores - - # Dice loss is undefined for non-empty classes - # So we zero contribution of channel that does not have true pixels - # NOTE: A better workaround would be to use loss term `mean(y_pred)` - # for this case, however it will be a modified jaccard loss - - mask = y_true.sum(dims) > 0 - loss *= mask.to(loss.dtype) - - if self.classes is not None: - loss = loss[self.classes] - - return self.aggregate_loss(loss) - - def aggregate_loss(self, loss): - return loss.mean() - - def compute_score( - self, output, target, smooth=0.0, eps=1e-7, dims=None - ) -> torch.Tensor: - return soft_dice_score(output, target, smooth, eps, dims) diff --git a/spaces/gligen/demo/gligen/distributed.py b/spaces/gligen/demo/gligen/distributed.py deleted file mode 100644 index b39bc6e92f74fc46c6ec316e1e41859744a91b7a..0000000000000000000000000000000000000000 --- a/spaces/gligen/demo/gligen/distributed.py +++ /dev/null @@ -1,122 +0,0 @@ -import math -import pickle - -import torch -from torch import distributed as dist -from torch.utils.data.sampler import Sampler - - -def get_rank(): - if not dist.is_available(): - return 0 - - if not dist.is_initialized(): - return 0 - - return dist.get_rank() - - -def synchronize(): - if not dist.is_available(): - return - if not dist.is_initialized(): - return - - world_size = dist.get_world_size() - if world_size == 1: - return - - dist.barrier() - - -def get_world_size(): - if not dist.is_available(): - return 1 - if not dist.is_initialized(): - return 1 - return dist.get_world_size() - - -def reduce_sum(tensor): - if not dist.is_available(): - return tensor - - if not dist.is_initialized(): - return tensor - - tensor = tensor.clone() - dist.all_reduce(tensor, op=dist.ReduceOp.SUM) - - return tensor - - -def gather_grad(params): - world_size = get_world_size() - - if world_size == 1: - return - - for param in params: - if param.grad is not None: - dist.all_reduce(param.grad.data, op=dist.ReduceOp.SUM) - param.grad.data.div_(world_size) - - -def all_gather(data): - world_size = get_world_size() - - if world_size == 1: - return [data] - - buffer = pickle.dumps(data) - storage = torch.ByteStorage.from_buffer(buffer) - tensor = torch.ByteTensor(storage).to('cuda') - - local_size = torch.IntTensor([tensor.numel()]).to('cuda') - size_list = [torch.IntTensor([0]).to('cuda') for _ in range(world_size)] - dist.all_gather(size_list, local_size) - size_list = [int(size.item()) for size in size_list] - max_size = max(size_list) - - tensor_list = [] - for _ in size_list: - tensor_list.append(torch.ByteTensor(size=(max_size,)).to('cuda')) - - if local_size != max_size: - padding = torch.ByteTensor(size=(max_size - local_size,)).to('cuda') - tensor = torch.cat((tensor, padding), 0) - - dist.all_gather(tensor_list, tensor) - - data_list = [] - - for size, tensor in zip(size_list, tensor_list): - buffer = 
tensor.cpu().numpy().tobytes()[:size] - data_list.append(pickle.loads(buffer)) - - return data_list - - -def reduce_loss_dict(loss_dict): - world_size = get_world_size() - - if world_size < 2: - return loss_dict - - with torch.no_grad(): - keys = [] - losses = [] - - for k in sorted(loss_dict.keys()): - keys.append(k) - losses.append(loss_dict[k]) - - losses = torch.stack(losses, 0) - dist.reduce(losses, dst=0) - - if dist.get_rank() == 0: - losses /= world_size - - reduced_losses = {k: v for k, v in zip(keys, losses)} - - return reduced_losses diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Computer Networking A Top Down Approach 6th Edition Solutions Pdf __HOT__.md b/spaces/gotiQspiryo/whisper-ui/examples/Computer Networking A Top Down Approach 6th Edition Solutions Pdf __HOT__.md deleted file mode 100644 index 6d6b9a6eb8d707aea535387b4785fb5bdea393a4..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Computer Networking A Top Down Approach 6th Edition Solutions Pdf __HOT__.md +++ /dev/null @@ -1,34 +0,0 @@ - -

    How to Find Computer Networking: A Top-Down Approach 6th Edition Solutions PDF

    - -

If you are looking for a comprehensive and easy-to-understand guide to computer networking, you may have come across the textbook Computer Networking: A Top-Down Approach by James F. Kurose and Keith W. Ross. This book covers the essential topics of computer networks, such as the application layer, transport layer, network layer, and link layer, as well as wireless and mobile networks, multimedia networking, security, and network management.

    -

    computer networking a top down approach 6th edition solutions pdf


    DOWNLOAD ✺✺✺ https://urlgoal.com/2uyMyM



    - -

    However, reading the textbook alone may not be enough to master the concepts and skills of computer networking. You may also need to practice solving problems and answering questions that test your knowledge and understanding of the material. That's why you may be interested in finding the solutions manual for this textbook, which contains the answers to all the review questions and problems at the end of each chapter.

    - -

    But where can you find the computer networking a top down approach 6th edition solutions pdf? In this article, we will show you some of the best sources to download or access this valuable resource online.

    - -

    Quizlet

    - -

    One of the most popular websites for studying and learning is Quizlet[^1^]. Quizlet allows you to create flashcards, quizzes, games and other study tools for any subject or topic. You can also browse millions of flashcards and study sets created by other users and teachers.

    - -

    Quizlet has a study set for Computer Networking: A Top-Down Approach 6th Edition that contains the solutions and answers to all the review questions and problems in the textbook[^1^]. You can view the solutions online or download them as PDF files. You can also use Quizlet's features to test yourself on the solutions, such as matching, multiple choice, true/false and written questions.
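If a site gives you a direct link to the PDF, saving it locally takes only a few lines of Python with the requests library. The sketch below is a generic example with a hypothetical URL; it assumes you already have a direct download link and the right to download the file.

```python
# Generic sketch: save a PDF from a direct download link.
# The URL is a hypothetical placeholder; substitute the link the site gives you.
import requests

url = "https://example.com/computer-networking-6e-solutions.pdf"  # hypothetical
response = requests.get(url, timeout=30)
response.raise_for_status()  # fail early on 4xx/5xx errors

with open("computer-networking-6e-solutions.pdf", "wb") as f:
    f.write(response.content)

print(f"Saved {len(response.content)} bytes")
```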

    -

    - -

    Studocu

    - -

    Another website that offers study materials and resources for students is Studocu[^2^]. Studocu allows you to upload and share your notes, summaries, exams, essays and other documents with other students. You can also download and access documents uploaded by other students from various universities and courses.

    - -

    Studocu has a document for Computer Networking: A Top-Down Approach 6th Edition that contains the solutions to all the review questions and problems in the textbook[^2^]. You can view the solutions online or download them as PDF files. You can also use Studocu's features to highlight, annotate and bookmark the solutions.

    - -

    Academia.edu

    - -

    A third website that provides academic papers and publications for researchers and scholars is Academia.edu[^3^]. Academia.edu allows you to upload and share your research papers, books, chapters, dissertations and other works with other academics. You can also follow and connect with other researchers who share your interests and fields of study.

    - -

    Academia.edu has a paper for Computer Networking: A Top-Down Approach 6th Edition that contains the solutions to all the review questions and problems in the textbook[^3^]. You can view the solutions online or download them as PDF files. You can also use Academia.edu's features to cite, comment and recommend the paper.

    - -

    Conclusion

    - -

    In conclusion, finding the computer networking a top down approach 6th edition solutions pdf is not difficult if you know where to look. We have shown you three of the best websites that offer this resource online: Quizlet[^1^], Studocu[^2^] and Academia.edu[^3^]. You can use these websites to view or download the solutions manual for this textbook and enhance your learning experience.

    d5da3c52bf
    -
    -
    \ No newline at end of file diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Free Fixed Adult Videos 89.md b/spaces/gotiQspiryo/whisper-ui/examples/Free Fixed Adult Videos 89.md deleted file mode 100644 index 4c3904483683cd1d441ed476be1fbad5f1d7aa28..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Free Fixed Adult Videos 89.md +++ /dev/null @@ -1,6 +0,0 @@ -

    free adult videos 89


    Download Zip ===== https://urlgoal.com/2uyMYZ



    - -You can also see more videos like Safadas sex 89 we have more videos of Lesbian sex. The best free porn videos XXX. 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Kollimalai Singam Tamil Dubbed Movie 49.md b/spaces/gotiQspiryo/whisper-ui/examples/Kollimalai Singam Tamil Dubbed Movie 49.md deleted file mode 100644 index b315fcdfb2e4e00e15287ae40b48c8cf87ed3053..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Kollimalai Singam Tamil Dubbed Movie 49.md +++ /dev/null @@ -1,11 +0,0 @@ - -

    Kollimalai Singam: A Tamil Dubbed Version of Anji, a Telugu Fantasy Film

    -

Kollimalai Singam is the Tamil-dubbed version of Anji, a Telugu fantasy film released in 2004. The film stars Chiranjeevi, Namrata Shirodkar, and Tinnu Anand in the lead roles. It is based on the legend of the Aatmalingam, a powerful divine stone that grants immortality to anyone who possesses it.

    -

    The film follows the adventures of Anji, a forest ranger who accidentally discovers the Aatmalingam hidden in the Himalayas. He is pursued by a greedy and evil businessman, Bhatia, who wants to use the stone for his own benefit. Anji also meets Swapna, a journalist who helps him in his quest to protect the Aatmalingam from falling into the wrong hands.

    -

    Kollimalai Singam Tamil Dubbed Movie 49


    Download 🌟 https://urlgoal.com/2uyM1y



    -

Kollimalai Singam was dubbed in Tamil and released in 2013. The film received positive reviews from critics and audiences for its visual effects, action sequences, and Chiranjeevi's performance. It was also dubbed in Hindi as Diler-The Daring and in Malayalam as Chekavan.

    - -

    Anji and Swapna join forces to protect the Aatmalingam from Bhatia and his henchmen. They also learn that Anji is the chosen one who can touch the Aatmalingam without any harm. Anji also has a personal vendetta against Bhatia, who killed his parents when he was a child. Anji and Swapna face many obstacles and challenges as they try to reach the Himalayas before the Akasaganga flows.

    -

    The film culminates in a climactic showdown between Anji and Bhatia at the cave where the Aatmalingam is located. Anji manages to defeat Bhatia and his men with the help of Sivanna and the forest dwellers. He then drinks the holy water of the Akasaganga and becomes immortal. He decides to leave the Aatmalingam in its place as it belongs to Lord Shiva. He also marries Swapna and adopts the four orphans as his children.

    d5da3c52bf
    -
    -
    \ No newline at end of file diff --git a/spaces/gradio/HuBERT/examples/simultaneous_translation/modules/fixed_pre_decision.py b/spaces/gradio/HuBERT/examples/simultaneous_translation/modules/fixed_pre_decision.py deleted file mode 100644 index dd29c031b3b23401dcf61bbe48991934099429a8..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/examples/simultaneous_translation/modules/fixed_pre_decision.py +++ /dev/null @@ -1,254 +0,0 @@ -from functools import partial - -import torch -from torch import Tensor -import math -import torch.nn.functional as F - -from . import register_monotonic_attention -from .monotonic_multihead_attention import ( - MonotonicMultiheadAttentionWaitK, - MonotonicMultiheadAttentionHardAligned, - MonotonicMultiheadAttentionInfiniteLookback, -) -from typing import Dict, Optional -from examples.simultaneous_translation.utils import p_choose_strategy - -def fixed_pooling_monotonic_attention(monotonic_attention): - def create_model(monotonic_attention, klass): - class FixedStrideMonotonicAttention(monotonic_attention): - def __init__(self, args): - self.waitk_lagging = 0 - self.num_heads = 0 - self.noise_mean = 0.0 - self.noise_var = 0.0 - super().__init__(args) - self.pre_decision_type = args.fixed_pre_decision_type - self.pre_decision_ratio = args.fixed_pre_decision_ratio - self.pre_decision_pad_threshold = args.fixed_pre_decision_pad_threshold - if self.pre_decision_ratio == 1: - return - - self.strategy = args.simul_type - - if args.fixed_pre_decision_type == "average": - self.pooling_layer = torch.nn.AvgPool1d( - kernel_size=self.pre_decision_ratio, - stride=self.pre_decision_ratio, - ceil_mode=True, - ) - elif args.fixed_pre_decision_type == "last": - - def last(key): - if key.size(2) < self.pre_decision_ratio: - return key - else: - k = key[ - :, - :, - self.pre_decision_ratio - 1 :: self.pre_decision_ratio, - ].contiguous() - if key.size(-1) % self.pre_decision_ratio != 0: - k = torch.cat([k, key[:, :, -1:]], dim=-1).contiguous() - return k - - self.pooling_layer = last - else: - raise NotImplementedError - - @staticmethod - def add_args(parser): - super( - FixedStrideMonotonicAttention, FixedStrideMonotonicAttention - ).add_args(parser) - parser.add_argument( - "--fixed-pre-decision-ratio", - type=int, - required=True, - help=( - "Ratio for the fixed pre-decision," - "indicating how many encoder steps will start" - "simultaneous decision making process." 
- ), - ) - parser.add_argument( - "--fixed-pre-decision-type", - default="average", - choices=["average", "last"], - help="Pooling type", - ) - parser.add_argument( - "--fixed-pre-decision-pad-threshold", - type=float, - default=0.3, - help="If a part of the sequence has pad" - ",the threshold the pooled part is a pad.", - ) - - def insert_zeros(self, x): - bsz_num_heads, tgt_len, src_len = x.size() - stride = self.pre_decision_ratio - weight = F.pad(torch.ones(1, 1, 1).to(x), (stride - 1, 0)) - x_upsample = F.conv_transpose1d( - x.view(-1, src_len).unsqueeze(1), - weight, - stride=stride, - padding=0, - ) - return x_upsample.squeeze(1).view(bsz_num_heads, tgt_len, -1) - - def p_choose_waitk( - self, query, key, key_padding_mask: Optional[Tensor] = None, - incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None - ): - """ - query: bsz, tgt_len - key: bsz, src_len - key_padding_mask: bsz, src_len - """ - if incremental_state is not None: - # Retrieve target length from incremental states - # For inference the length of query is always 1 - tgt = incremental_state["steps"]["tgt"] - assert tgt is not None - tgt_len = int(tgt) - else: - tgt_len, bsz, _ = query.size() - - src_len, bsz, _ = key.size() - - p_choose = torch.ones(bsz, tgt_len, src_len).to(query) - p_choose = torch.tril(p_choose, diagonal=self.waitk_lagging - 1) - p_choose = torch.triu(p_choose, diagonal=self.waitk_lagging - 1) - - if incremental_state is not None: - p_choose = p_choose[:, -1:] - tgt_len = 1 - - # Extend to each head - p_choose = ( - p_choose.contiguous() - .unsqueeze(1) - .expand(-1, self.num_heads, -1, -1) - .contiguous() - .view(-1, tgt_len, src_len) - ) - - return p_choose - - def p_choose( - self, - query: Optional[Tensor], - key: Optional[Tensor], - key_padding_mask: Optional[Tensor] = None, - incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None, - ): - assert key is not None - assert query is not None - src_len = key.size(0) - tgt_len = query.size(0) - batch_size = query.size(1) - - if self.pre_decision_ratio == 1: - if self.strategy == "waitk": - return p_choose_strategy.waitk( - query, - key, - self.waitk_lagging, - self.num_heads, - key_padding_mask, - incremental_state=incremental_state, - ) - else: # hard_aligned or infinite_lookback - q_proj, k_proj, _ = self.input_projections(query, key, None, "monotonic") - attn_energy = self.attn_energy(q_proj, k_proj, key_padding_mask) - return p_choose_strategy.hard_aligned( - q_proj, - k_proj, - attn_energy, - self.noise_mean, - self.noise_var, - self.training - ) - - key_pool = self.pooling_layer(key.transpose(0, 2)).transpose(0, 2) - - if key_padding_mask is not None: - key_padding_mask_pool = ( - self.pooling_layer(key_padding_mask.unsqueeze(0).float()) - .squeeze(0) - .gt(self.pre_decision_pad_threshold) - ) - # Make sure at least one element is not pad - key_padding_mask_pool[:, 0] = 0 - else: - key_padding_mask_pool = None - - if incremental_state is not None: - # The floor instead of ceil is used for inference - # But make sure the length key_pool at least 1 - if ( - max(1, math.floor(key.size(0) / self.pre_decision_ratio)) - ) < key_pool.size(0): - key_pool = key_pool[:-1] - if key_padding_mask_pool is not None: - key_padding_mask_pool = key_padding_mask_pool[:-1] - - p_choose_pooled = self.p_choose_waitk( - query, - key_pool, - key_padding_mask_pool, - incremental_state=incremental_state, - ) - - # Upsample, interpolate zeros - p_choose = self.insert_zeros(p_choose_pooled) - - if p_choose.size(-1) < src_len: - # 
Append zeros if the upsampled p_choose is shorter than src_len - p_choose = torch.cat( - [ - p_choose, - torch.zeros( - p_choose.size(0), - tgt_len, - src_len - p_choose.size(-1) - ).to(p_choose) - ], - dim=2 - ) - else: - # can be larger than src_len because we used ceil before - p_choose = p_choose[:, :, :src_len] - p_choose[:, :, -1] = p_choose_pooled[:, :, -1] - - assert list(p_choose.size()) == [ - batch_size * self.num_heads, - tgt_len, - src_len, - ] - - return p_choose - - FixedStrideMonotonicAttention.__name__ = klass.__name__ - return FixedStrideMonotonicAttention - - return partial(create_model, monotonic_attention) - - -@register_monotonic_attention("waitk_fixed_pre_decision") -@fixed_pooling_monotonic_attention(MonotonicMultiheadAttentionWaitK) -class MonotonicMultiheadAttentionWaitkFixedStride: - pass - - -@register_monotonic_attention("hard_aligned_fixed_pre_decision") -@fixed_pooling_monotonic_attention(MonotonicMultiheadAttentionHardAligned) -class MonotonicMultiheadAttentionHardFixedStride: - pass - - -@register_monotonic_attention("infinite_lookback_fixed_pre_decision") -@fixed_pooling_monotonic_attention(MonotonicMultiheadAttentionInfiniteLookback) -class MonotonicMultiheadAttentionInfiniteLookbackFixedStride: - pass diff --git a/spaces/gradio/HuBERT/examples/simultaneous_translation/modules/monotonic_multihead_attention.py b/spaces/gradio/HuBERT/examples/simultaneous_translation/modules/monotonic_multihead_attention.py deleted file mode 100644 index f49b1daa2fbe920c290055b44a09bfe404fc4f89..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/examples/simultaneous_translation/modules/monotonic_multihead_attention.py +++ /dev/null @@ -1,910 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math - -import torch -from torch import Tensor -import torch.nn as nn - -from examples.simultaneous_translation.utils.functions import ( - exclusive_cumprod, - lengths_to_mask, -) -from fairseq.incremental_decoding_utils import with_incremental_state -from fairseq.modules import MultiheadAttention - -from . 
import register_monotonic_attention -from typing import Dict, Optional - -from examples.simultaneous_translation.utils import p_choose_strategy - -@with_incremental_state -class MonotonicAttention(nn.Module): - """ - Abstract class of monotonic attentions - """ - - def __init__(self, args): - self.eps = args.attention_eps - self.mass_preservation = args.mass_preservation - - self.noise_type = args.noise_type - self.noise_mean = args.noise_mean - self.noise_var = args.noise_var - - self.energy_bias_init = args.energy_bias_init - self.energy_bias = ( - nn.Parameter(self.energy_bias_init * torch.ones([1])) - if args.energy_bias is True - else 0 - ) - - @staticmethod - def add_args(parser): - # fmt: off - parser.add_argument('--no-mass-preservation', action="store_false", - dest="mass_preservation", - help='Do not stay on the last token when decoding') - parser.add_argument('--mass-preservation', action="store_true", - dest="mass_preservation", - help='Stay on the last token when decoding') - parser.set_defaults(mass_preservation=True) - parser.add_argument('--noise-var', type=float, default=1.0, - help='Variance of discretness noise') - parser.add_argument('--noise-mean', type=float, default=0.0, - help='Mean of discretness noise') - parser.add_argument('--noise-type', type=str, default="flat", - help='Type of discretness noise') - parser.add_argument('--energy-bias', action="store_true", - default=False, - help='Bias for energy') - parser.add_argument('--energy-bias-init', type=float, default=-2.0, - help='Initial value of the bias for energy') - parser.add_argument('--attention-eps', type=float, default=1e-6, - help='Epsilon when calculating expected attention') - - def p_choose(self, *args): - raise NotImplementedError - - def input_projections(self, *args): - raise NotImplementedError - - def attn_energy( - self, q_proj, k_proj, key_padding_mask=None, attn_mask=None - ): - """ - Calculating monotonic energies - - ============================================================ - Expected input size - q_proj: bsz * num_heads, tgt_len, self.head_dim - k_proj: bsz * num_heads, src_len, self.head_dim - key_padding_mask: bsz, src_len - attn_mask: tgt_len, src_len - """ - bsz, tgt_len, embed_dim = q_proj.size() - bsz = bsz // self.num_heads - src_len = k_proj.size(1) - - attn_energy = ( - torch.bmm(q_proj, k_proj.transpose(1, 2)) + self.energy_bias - ) - - if attn_mask is not None: - attn_mask = attn_mask.unsqueeze(0) - attn_energy += attn_mask - - attn_energy = attn_energy.view(bsz, self.num_heads, tgt_len, src_len) - - if key_padding_mask is not None: - attn_energy = attn_energy.masked_fill( - key_padding_mask.unsqueeze(1).unsqueeze(2).to(torch.bool), - float("-inf"), - ) - - return attn_energy - - def expected_alignment_train(self, p_choose, key_padding_mask: Optional[Tensor]): - """ - Calculating expected alignment for MMA - Mask is not need because p_choose will be 0 if masked - - q_ij = (1 − p_{ij−1})q_{ij−1} + a+{i−1j} - a_ij = p_ij q_ij - - Parallel solution: - ai = p_i * cumprod(1 − pi) * cumsum(a_i / cumprod(1 − pi)) - - ============================================================ - Expected input size - p_choose: bsz * num_heads, tgt_len, src_len - """ - - # p_choose: bsz * num_heads, tgt_len, src_len - bsz_num_heads, tgt_len, src_len = p_choose.size() - - # cumprod_1mp : bsz * num_heads, tgt_len, src_len - cumprod_1mp = exclusive_cumprod(1 - p_choose, dim=2, eps=self.eps) - cumprod_1mp_clamp = torch.clamp(cumprod_1mp, self.eps, 1.0) - - init_attention = 
p_choose.new_zeros([bsz_num_heads, 1, src_len]) - init_attention[:, :, 0] = 1.0 - - previous_attn = [init_attention] - - for i in range(tgt_len): - # p_choose: bsz * num_heads, tgt_len, src_len - # cumprod_1mp_clamp : bsz * num_heads, tgt_len, src_len - # previous_attn[i]: bsz * num_heads, 1, src_len - # alpha_i: bsz * num_heads, src_len - alpha_i = ( - p_choose[:, i] - * cumprod_1mp[:, i] - * torch.cumsum(previous_attn[i][:, 0] / cumprod_1mp_clamp[:, i], dim=1) - ).clamp(0, 1.0) - previous_attn.append(alpha_i.unsqueeze(1)) - - # alpha: bsz * num_heads, tgt_len, src_len - alpha = torch.cat(previous_attn[1:], dim=1) - - if self.mass_preservation: - # Last token has the residual probabilities - if key_padding_mask is not None and key_padding_mask[:, -1].any(): - # right padding - batch_size = key_padding_mask.size(0) - residuals = 1 - alpha.sum(dim=-1, keepdim=True).clamp(0.0, 1.0) - src_lens = src_len - key_padding_mask.sum(dim=1, keepdim=True) - src_lens = src_lens.expand( - batch_size, self.num_heads - ).contiguous().view(-1, 1) - src_lens = src_lens.expand(-1, tgt_len).contiguous() - # add back the last value - residuals += alpha.gather(2, src_lens.unsqueeze(-1) - 1) - alpha = alpha.scatter(2, src_lens.unsqueeze(-1) - 1, residuals) - else: - residuals = 1 - alpha[:, :, :-1].sum(dim=-1).clamp(0.0, 1.0) - alpha[:, :, -1] = residuals - - if torch.isnan(alpha).any(): - # Something is wrong - raise RuntimeError("NaN in alpha.") - - return alpha - - def expected_alignment_infer( - self, p_choose, encoder_padding_mask: Optional[Tensor], incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] - ): - # TODO modify this function - """ - Calculating mo alignment for MMA during inference time - - ============================================================ - Expected input size - p_choose: bsz * num_heads, tgt_len, src_len - incremental_state: dict - encodencoder_padding_mask: bsz * src_len - """ - # p_choose: bsz * self.num_heads, src_len - bsz_num_heads, tgt_len, src_len = p_choose.size() - # One token at a time - assert tgt_len == 1 - p_choose = p_choose[:, 0, :] - - monotonic_cache = self._get_monotonic_buffer(incremental_state) - - # prev_monotonic_step: bsz, num_heads - bsz = bsz_num_heads // self.num_heads - prev_monotonic_step = monotonic_cache.get( - "head_step", - p_choose.new_zeros([bsz, self.num_heads]).long() - ) - assert prev_monotonic_step is not None - bsz, num_heads = prev_monotonic_step.size() - assert num_heads == self.num_heads - assert bsz * num_heads == bsz_num_heads - - # p_choose: bsz, num_heads, src_len - p_choose = p_choose.view(bsz, num_heads, src_len) - - if encoder_padding_mask is not None: - src_lengths = src_len - \ - encoder_padding_mask.sum(dim=1, keepdim=True).long() - else: - src_lengths = prev_monotonic_step.new_ones(bsz, 1) * src_len - - # src_lengths: bsz, num_heads - src_lengths = src_lengths.expand_as(prev_monotonic_step) - # new_monotonic_step: bsz, num_heads - new_monotonic_step = prev_monotonic_step - - step_offset = 0 - if encoder_padding_mask is not None: - if encoder_padding_mask[:, 0].any(): - # left_pad_source = True: - step_offset = encoder_padding_mask.sum(dim=-1, keepdim=True) - - max_steps = src_lengths - 1 if self.mass_preservation else src_lengths - - # finish_read: bsz, num_heads - finish_read = new_monotonic_step.eq(max_steps) - p_choose_i = 1 - while finish_read.sum().item() < bsz * self.num_heads: - # p_choose: bsz * self.num_heads, src_len - # only choose the p at monotonic steps - # p_choose_i: bsz , self.num_heads - p_choose_i 
= ( - p_choose.gather( - 2, - (step_offset + new_monotonic_step) - .unsqueeze(2) - .clamp(0, src_len - 1), - ) - ).squeeze(2) - - action = ( - (p_choose_i < 0.5) - .type_as(prev_monotonic_step) - .masked_fill(finish_read, 0) - ) - # 1 x bsz - # sample actions on unfinished seq - # 1 means stay, finish reading - # 0 means leave, continue reading - # dist = torch.distributions.bernoulli.Bernoulli(p_choose) - # action = dist.sample().type_as(finish_read) * (1 - finish_read) - - new_monotonic_step += action - - finish_read = new_monotonic_step.eq(max_steps) | (action == 0) - - monotonic_cache["head_step"] = new_monotonic_step - # Whether a head is looking for new input - monotonic_cache["head_read"] = ( - new_monotonic_step.eq(max_steps) & (p_choose_i < 0.5) - ) - - # alpha: bsz * num_heads, 1, src_len - # new_monotonic_step: bsz, num_heads - alpha = ( - p_choose - .new_zeros([bsz * self.num_heads, src_len]) - .scatter( - 1, - (step_offset + new_monotonic_step) - .view(bsz * self.num_heads, 1).clamp(0, src_len - 1), - 1 - ) - ) - - if not self.mass_preservation: - alpha = alpha.masked_fill( - (new_monotonic_step == max_steps) - .view(bsz * self.num_heads, 1), - 0 - ) - - alpha = alpha.unsqueeze(1) - - self._set_monotonic_buffer(incremental_state, monotonic_cache) - - return alpha - - def _get_monotonic_buffer(self, incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]]): - return self.get_incremental_state( - incremental_state, - 'monotonic', - ) or {} - - def _set_monotonic_buffer(self, incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]], buffer: Dict[str, Optional[Tensor]]): - self.set_incremental_state( - incremental_state, - 'monotonic', - buffer, - ) - - def v_proj_output(self, value): - raise NotImplementedError - - def forward( - self, query, key, value, - key_padding_mask=None, attn_mask=None, incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None, - need_weights=True, static_kv=False - ): - - tgt_len, bsz, embed_dim = query.size() - src_len = value.size(0) - - # stepwise prob - # p_choose: bsz * self.num_heads, tgt_len, src_len - p_choose = self.p_choose( - query, key, key_padding_mask, incremental_state, - ) - - # expected alignment alpha - # bsz * self.num_heads, tgt_len, src_len - if incremental_state is not None: - alpha = self.expected_alignment_infer( - p_choose, key_padding_mask, incremental_state) - else: - alpha = self.expected_alignment_train( - p_choose, key_padding_mask) - - # expected attention beta - # bsz * self.num_heads, tgt_len, src_len - beta = self.expected_attention( - alpha, query, key, value, - key_padding_mask, attn_mask, - incremental_state - ) - - attn_weights = beta - - v_proj = self.v_proj_output(value) - - attn = torch.bmm(attn_weights.type_as(v_proj), v_proj) - - attn = attn.transpose(0, 1).contiguous().view(tgt_len, bsz, embed_dim) - - attn = self.out_proj(attn) - - beta = beta.view(bsz, self.num_heads, tgt_len, src_len) - alpha = alpha.view(bsz, self.num_heads, tgt_len, src_len) - p_choose = p_choose.view(bsz, self.num_heads, tgt_len, src_len) - - return attn, { - "alpha": alpha, - "beta": beta, - "p_choose": p_choose, - } - - -@register_monotonic_attention("hard_aligned") -class MonotonicMultiheadAttentionHardAligned( - MonotonicAttention, MultiheadAttention -): - def __init__(self, args): - MultiheadAttention.__init__( - self, - embed_dim=args.decoder_embed_dim, - num_heads=args.decoder_attention_heads, - kdim=getattr(args, "encoder_embed_dim", None), - vdim=getattr(args, "encoder_embed_dim", None), - 
dropout=args.attention_dropout, - encoder_decoder_attention=True, - ) - - MonotonicAttention.__init__(self, args) - - self.k_in_proj = {"monotonic": self.k_proj} - self.q_in_proj = {"monotonic": self.q_proj} - self.v_in_proj = {"output": self.v_proj} - - @staticmethod - def add_args(parser): - # fmt: off - parser.add_argument('--no-mass-preservation', action="store_false", - dest="mass_preservation", - help='Do not stay on the last token when decoding') - parser.add_argument('--mass-preservation', action="store_true", - dest="mass_preservation", - help='Stay on the last token when decoding') - parser.set_defaults(mass_preservation=True) - parser.add_argument('--noise-var', type=float, default=1.0, - help='Variance of discretness noise') - parser.add_argument('--noise-mean', type=float, default=0.0, - help='Mean of discretness noise') - parser.add_argument('--noise-type', type=str, default="flat", - help='Type of discretness noise') - parser.add_argument('--energy-bias', action="store_true", - default=False, - help='Bias for energy') - parser.add_argument('--energy-bias-init', type=float, default=-2.0, - help='Initial value of the bias for energy') - parser.add_argument('--attention-eps', type=float, default=1e-6, - help='Epsilon when calculating expected attention') - - def attn_energy( - self, q_proj: Optional[Tensor], k_proj: Optional[Tensor], key_padding_mask: Optional[Tensor] = None, attn_mask: Optional[Tensor] = None - ): - """ - Calculating monotonic energies - - ============================================================ - Expected input size - q_proj: bsz * num_heads, tgt_len, self.head_dim - k_proj: bsz * num_heads, src_len, self.head_dim - key_padding_mask: bsz, src_len - attn_mask: tgt_len, src_len - """ - assert q_proj is not None # Optional[Tensor] annotations in the signature above are to make the JIT compiler happy - assert k_proj is not None - bsz, tgt_len, embed_dim = q_proj.size() - bsz = bsz // self.num_heads - src_len = k_proj.size(1) - - attn_energy = ( - torch.bmm(q_proj, k_proj.transpose(1, 2)) + self.energy_bias - ) - - if attn_mask is not None: - attn_mask = attn_mask.unsqueeze(0) - attn_energy += attn_mask - - attn_energy = attn_energy.view(bsz, self.num_heads, tgt_len, src_len) - - if key_padding_mask is not None: - attn_energy = attn_energy.masked_fill( - key_padding_mask.unsqueeze(1).unsqueeze(2).to(torch.bool), - float("-inf"), - ) - - return attn_energy - - def expected_alignment_train(self, p_choose, key_padding_mask: Optional[Tensor]): - """ - Calculating expected alignment for MMA - Mask is not need because p_choose will be 0 if masked - - q_ij = (1 − p_{ij−1})q_{ij−1} + a+{i−1j} - a_ij = p_ij q_ij - - Parallel solution: - ai = p_i * cumprod(1 − pi) * cumsum(a_i / cumprod(1 − pi)) - - ============================================================ - Expected input size - p_choose: bsz * num_heads, tgt_len, src_len - """ - - # p_choose: bsz * num_heads, tgt_len, src_len - bsz_num_heads, tgt_len, src_len = p_choose.size() - - # cumprod_1mp : bsz * num_heads, tgt_len, src_len - cumprod_1mp = exclusive_cumprod(1 - p_choose, dim=2, eps=self.eps) - cumprod_1mp_clamp = torch.clamp(cumprod_1mp, self.eps, 1.0) - - init_attention = p_choose.new_zeros([bsz_num_heads, 1, src_len]) - init_attention[:, :, 0] = 1.0 - - previous_attn = [init_attention] - - for i in range(tgt_len): - # p_choose: bsz * num_heads, tgt_len, src_len - # cumprod_1mp_clamp : bsz * num_heads, tgt_len, src_len - # previous_attn[i]: bsz * num_heads, 1, src_len - # alpha_i: bsz * num_heads, src_len - 
alpha_i = ( - p_choose[:, i] - * cumprod_1mp[:, i] - * torch.cumsum(previous_attn[i][:, 0] / cumprod_1mp_clamp[:, i], dim=1) - ).clamp(0, 1.0) - previous_attn.append(alpha_i.unsqueeze(1)) - - # alpha: bsz * num_heads, tgt_len, src_len - alpha = torch.cat(previous_attn[1:], dim=1) - - if self.mass_preservation: - # Last token has the residual probabilities - if key_padding_mask is not None and key_padding_mask[:, -1].any(): - # right padding - batch_size = key_padding_mask.size(0) - residuals = 1 - alpha.sum(dim=-1, keepdim=True).clamp(0.0, 1.0) - src_lens = src_len - key_padding_mask.sum(dim=1, keepdim=True) - src_lens = src_lens.expand( - batch_size, self.num_heads - ).contiguous().view(-1, 1) - src_lens = src_lens.expand(-1, tgt_len).contiguous() - # add back the last value - residuals += alpha.gather(2, src_lens.unsqueeze(-1) - 1) - alpha = alpha.scatter(2, src_lens.unsqueeze(-1) - 1, residuals) - else: - residuals = 1 - alpha[:, :, :-1].sum(dim=-1).clamp(0.0, 1.0) - alpha[:, :, -1] = residuals - - if torch.isnan(alpha).any(): - # Something is wrong - raise RuntimeError("NaN in alpha.") - - return alpha - - def expected_alignment_infer( - self, p_choose, encoder_padding_mask: Optional[Tensor], incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] - ): - # TODO modify this function - """ - Calculating mo alignment for MMA during inference time - - ============================================================ - Expected input size - p_choose: bsz * num_heads, tgt_len, src_len - incremental_state: dict - encodencoder_padding_mask: bsz * src_len - """ - # p_choose: bsz * self.num_heads, src_len - bsz_num_heads, tgt_len, src_len = p_choose.size() - # One token at a time - assert tgt_len == 1 - p_choose = p_choose[:, 0, :] - - monotonic_cache = self._get_monotonic_buffer(incremental_state) - - # prev_monotonic_step: bsz, num_heads - bsz = bsz_num_heads // self.num_heads - prev_monotonic_step = monotonic_cache.get( - "head_step", - p_choose.new_zeros([bsz, self.num_heads]).long() - ) - assert prev_monotonic_step is not None - bsz, num_heads = prev_monotonic_step.size() - assert num_heads == self.num_heads - assert bsz * num_heads == bsz_num_heads - - # p_choose: bsz, num_heads, src_len - p_choose = p_choose.view(bsz, num_heads, src_len) - - if encoder_padding_mask is not None: - src_lengths = src_len - \ - encoder_padding_mask.sum(dim=1, keepdim=True).long() - else: - src_lengths = torch.ones(bsz, 1).to(prev_monotonic_step) * src_len - - # src_lengths: bsz, num_heads - src_lengths = src_lengths.expand_as(prev_monotonic_step) - # new_monotonic_step: bsz, num_heads - new_monotonic_step = prev_monotonic_step - - step_offset = torch.tensor(0) - if encoder_padding_mask is not None: - if encoder_padding_mask[:, 0].any(): - # left_pad_source = True: - step_offset = encoder_padding_mask.sum(dim=-1, keepdim=True) - - max_steps = src_lengths - 1 if self.mass_preservation else src_lengths - - # finish_read: bsz, num_heads - finish_read = new_monotonic_step.eq(max_steps) - p_choose_i = torch.tensor(1) - while finish_read.sum().item() < bsz * self.num_heads: - # p_choose: bsz * self.num_heads, src_len - # only choose the p at monotonic steps - # p_choose_i: bsz , self.num_heads - p_choose_i = ( - p_choose.gather( - 2, - (step_offset + new_monotonic_step) - .unsqueeze(2) - .clamp(0, src_len - 1), - ) - ).squeeze(2) - - action = ( - (p_choose_i < 0.5) - .type_as(prev_monotonic_step) - .masked_fill(finish_read, 0) - ) - # 1 x bsz - # sample actions on unfinished seq - # 1 means stay, finish 
reading - # 0 means leave, continue reading - # dist = torch.distributions.bernoulli.Bernoulli(p_choose) - # action = dist.sample().type_as(finish_read) * (1 - finish_read) - - new_monotonic_step += action - - finish_read = new_monotonic_step.eq(max_steps) | (action == 0) - - monotonic_cache["head_step"] = new_monotonic_step - # Whether a head is looking for new input - monotonic_cache["head_read"] = ( - new_monotonic_step.eq(max_steps) & (p_choose_i < 0.5) - ) - - # alpha: bsz * num_heads, 1, src_len - # new_monotonic_step: bsz, num_heads - alpha = ( - p_choose - .new_zeros([bsz * self.num_heads, src_len]) - .scatter( - 1, - (step_offset + new_monotonic_step) - .view(bsz * self.num_heads, 1).clamp(0, src_len - 1), - 1 - ) - ) - - if not self.mass_preservation: - alpha = alpha.masked_fill( - (new_monotonic_step == max_steps) - .view(bsz * self.num_heads, 1), - 0 - ) - - alpha = alpha.unsqueeze(1) - - self._set_monotonic_buffer(incremental_state, monotonic_cache) - - return alpha - - def _get_monotonic_buffer(self, incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]]): - maybe_incremental_state = self.get_incremental_state( - incremental_state, - 'monotonic', - ) - if maybe_incremental_state is None: - typed_empty_dict: Dict[str, Optional[Tensor]] = {} - return typed_empty_dict - else: - return maybe_incremental_state - - def _set_monotonic_buffer(self, incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]], buffer: Dict[str, Optional[Tensor]]): - self.set_incremental_state( - incremental_state, - 'monotonic', - buffer, - ) - - def forward( - self, query: Optional[Tensor], key: Optional[Tensor], value: Optional[Tensor], - key_padding_mask: Optional[Tensor] = None, attn_mask: Optional[Tensor] = None, incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None, - need_weights: bool = True, static_kv: bool = False, need_head_weights: bool = False, - ): - assert query is not None - assert value is not None - tgt_len, bsz, embed_dim = query.size() - src_len = value.size(0) - - # stepwise prob - # p_choose: bsz * self.num_heads, tgt_len, src_len - p_choose = self.p_choose( - query, key, key_padding_mask, incremental_state, - ) - - # expected alignment alpha - # bsz * self.num_heads, tgt_len, src_len - if incremental_state is not None: - alpha = self.expected_alignment_infer( - p_choose, key_padding_mask, incremental_state) - else: - alpha = self.expected_alignment_train( - p_choose, key_padding_mask) - - # expected attention beta - # bsz * self.num_heads, tgt_len, src_len - beta = self.expected_attention( - alpha, query, key, value, - key_padding_mask, attn_mask, - incremental_state - ) - - attn_weights = beta - - v_proj = self.v_proj_output(value) - assert v_proj is not None - - attn = torch.bmm(attn_weights.type_as(v_proj), v_proj) - - attn = attn.transpose(0, 1).contiguous().view(tgt_len, bsz, embed_dim) - - attn = self.out_proj(attn) - - beta = beta.view(bsz, self.num_heads, tgt_len, src_len) - alpha = alpha.view(bsz, self.num_heads, tgt_len, src_len) - p_choose = p_choose.view(bsz, self.num_heads, tgt_len, src_len) - - return attn, { - "alpha": alpha, - "beta": beta, - "p_choose": p_choose, - } - - def input_projections(self, query: Optional[Tensor], key: Optional[Tensor], value: Optional[Tensor], name: str): - """ - Prepare inputs for multihead attention - - ============================================================ - Expected input size - query: tgt_len, bsz, embed_dim - key: src_len, bsz, embed_dim - value: src_len, bsz, embed_dim - name: 
monotonic or soft - """ - - if query is not None: - bsz = query.size(1) - q = self.q_proj(query) - q *= self.scaling - q = q.contiguous().view( - -1, bsz * self.num_heads, self.head_dim - ).transpose(0, 1) - else: - q = None - - if key is not None: - bsz = key.size(1) - k = self.k_proj(key) - k = k.contiguous().view( - -1, bsz * self.num_heads, self.head_dim - ).transpose(0, 1) - else: - k = None - - if value is not None: - bsz = value.size(1) - v = self.v_proj(value) - v = v.contiguous().view( - -1, bsz * self.num_heads, self.head_dim - ).transpose(0, 1) - else: - v = None - - return q, k, v - - def p_choose( - self, query: Optional[Tensor], key: Optional[Tensor], key_padding_mask: Optional[Tensor] = None, - incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None - ): - """ - Calculating step wise prob for reading and writing - 1 to read, 0 to write - - ============================================================ - Expected input size - query: bsz, tgt_len, embed_dim - key: bsz, src_len, embed_dim - value: bsz, src_len, embed_dim - key_padding_mask: bsz, src_len - attn_mask: bsz, src_len - query: bsz, tgt_len, embed_dim - """ - - # prepare inputs - q_proj, k_proj, _ = self.input_projections( - query, key, None, "monotonic" - ) - - # attention energy - attn_energy = self.attn_energy(q_proj, k_proj, key_padding_mask) - - return p_choose_strategy.hard_aligned(q_proj, k_proj, attn_energy, self.noise_mean, self.noise_var, self.training) - - def expected_attention(self, alpha, *args): - """ - For MMA-H, beta = alpha - """ - return alpha - - def v_proj_output(self, value): - _, _, v_proj = self.input_projections(None, None, value, "output") - return v_proj - - -@register_monotonic_attention("infinite_lookback") -class MonotonicMultiheadAttentionInfiniteLookback( - MonotonicMultiheadAttentionHardAligned -): - def __init__(self, args): - super().__init__(args) - self.init_soft_attention() - - def init_soft_attention(self): - self.k_proj_soft = nn.Linear(self.kdim, self.embed_dim, bias=True) - self.q_proj_soft = nn.Linear(self.embed_dim, self.embed_dim, bias=True) - self.k_in_proj["soft"] = self.k_proj_soft - self.q_in_proj["soft"] = self.q_proj_soft - - if self.qkv_same_dim: - # Empirically observed the convergence to be much better with - # the scaled initialization - nn.init.xavier_uniform_( - self.k_in_proj["soft"].weight, gain=1 / math.sqrt(2) - ) - nn.init.xavier_uniform_( - self.q_in_proj["soft"].weight, gain=1 / math.sqrt(2) - ) - else: - nn.init.xavier_uniform_(self.k_in_proj["soft"].weight) - nn.init.xavier_uniform_(self.q_in_proj["soft"].weight) - - def expected_attention( - self, alpha, query: Optional[Tensor], key: Optional[Tensor], value: Optional[Tensor], - key_padding_mask: Optional[Tensor], attn_mask: Optional[Tensor], incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] - ): - # monotonic attention, we will calculate milk here - bsz_x_num_heads, tgt_len, src_len = alpha.size() - bsz = int(bsz_x_num_heads / self.num_heads) - - q, k, _ = self.input_projections(query, key, None, "soft") - soft_energy = self.attn_energy(q, k, key_padding_mask, attn_mask) - - assert list(soft_energy.size()) == \ - [bsz, self.num_heads, tgt_len, src_len] - - soft_energy = soft_energy.view(bsz * self.num_heads, tgt_len, src_len) - - if incremental_state is not None: - monotonic_cache = self._get_monotonic_buffer(incremental_state) - head_step = monotonic_cache["head_step"] - assert head_step is not None - monotonic_length = head_step + 1 - step_offset = 0 - if 
key_padding_mask is not None: - if key_padding_mask[:, 0].any(): - # left_pad_source = True: - step_offset = key_padding_mask.sum(dim=-1, keepdim=True) - monotonic_length += step_offset - mask = lengths_to_mask( - monotonic_length.view(-1), - soft_energy.size(2), 1 - ).unsqueeze(1) - - soft_energy = soft_energy.masked_fill(~mask.to(torch.bool), float("-inf")) - soft_energy = soft_energy - soft_energy.max(dim=2, keepdim=True)[0] - exp_soft_energy = torch.exp(soft_energy) - exp_soft_energy_sum = exp_soft_energy.sum(dim=2) - beta = exp_soft_energy / exp_soft_energy_sum.unsqueeze(2) - - else: - soft_energy = soft_energy - soft_energy.max(dim=2, keepdim=True)[0] - exp_soft_energy = torch.exp(soft_energy) + self.eps - inner_items = alpha / (torch.cumsum(exp_soft_energy, dim=2)) - - beta = ( - exp_soft_energy - * torch.cumsum(inner_items.flip(dims=[2]), dim=2) - .flip(dims=[2]) - ) - - beta = beta.view(bsz, self.num_heads, tgt_len, src_len) - - if key_padding_mask is not None: - beta = beta.masked_fill( - key_padding_mask.unsqueeze(1).unsqueeze(2).to(torch.bool), 0) - - beta = beta / beta.sum(dim=3, keepdim=True) - beta = beta.view(bsz * self.num_heads, tgt_len, src_len) - beta = self.dropout_module(beta) - - if torch.isnan(beta).any(): - # Something is wrong - raise RuntimeError("NaN in beta.") - - return beta - - -@register_monotonic_attention("waitk") -class MonotonicMultiheadAttentionWaitK( - MonotonicMultiheadAttentionInfiniteLookback -): - def __init__(self, args): - super().__init__(args) - self.q_in_proj["soft"] = self.q_in_proj["monotonic"] - self.k_in_proj["soft"] = self.k_in_proj["monotonic"] - self.waitk_lagging = args.waitk_lagging - assert self.waitk_lagging > 0, ( - f"Lagging has to been larger than 0, get {self.waitk_lagging}." - ) - - @staticmethod - def add_args(parser): - super( - MonotonicMultiheadAttentionWaitK, - MonotonicMultiheadAttentionWaitK, - ).add_args(parser) - - parser.add_argument( - "--waitk-lagging", type=int, required=True, help="Wait K lagging" - ) - - def p_choose( - self, query: Optional[Tensor], key: Optional[Tensor], key_padding_mask: Optional[Tensor] = None, - incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None, - ): - """ - query: bsz, tgt_len - key: bsz, src_len - key_padding_mask: bsz, src_len - """ - return p_choose_strategy.waitk(query, key, self.waitk_lagging, self.num_heads, key_padding_mask, incremental_state) diff --git a/spaces/gradio/HuBERT/examples/wav2vec/unsupervised/kaldi_self_train/st/steps_gan/train_deltas.sh b/spaces/gradio/HuBERT/examples/wav2vec/unsupervised/kaldi_self_train/st/steps_gan/train_deltas.sh deleted file mode 100644 index af68715ab0d87ae40666596d9d877d593684f8e2..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/examples/wav2vec/unsupervised/kaldi_self_train/st/steps_gan/train_deltas.sh +++ /dev/null @@ -1,175 +0,0 @@ -#!/usr/bin/env bash - -# Copyright 2012 Johns Hopkins University (Author: Daniel Povey) -# Apache 2.0 - -# Begin configuration. -stage=-4 # This allows restarting after partway, when something when wrong. -config= -cmd=run.pl -scale_opts="--transition-scale=1.0 --acoustic-scale=0.1 --self-loop-scale=0.1" -realign_iters="10 20 30"; -num_iters=35 # Number of iterations of training -max_iter_inc=25 # Last iter to increase #Gauss on. 
-beam=10 -careful=false -retry_beam=40 -boost_silence=1.0 # Factor by which to boost silence likelihoods in alignment -power=0.25 # Exponent for number of gaussians according to occurrence counts -cluster_thresh=-1 # for build-tree control final bottom-up clustering of leaves -norm_vars=false # deprecated. Prefer --cmvn-opts "--norm-vars=true" - # use the option --cmvn-opts "--norm-means=false" -cmvn_opts= -delta_opts= -context_opts= # use"--context-width=5 --central-position=2" for quinphone -num_nonsil_states=3 -# End configuration. - -echo "$0 $@" # Print the command line for logging - -[ -f path.sh ] && . ./path.sh; -. parse_options.sh || exit 1; - -if [ $# != 6 ]; then - echo "Usage: steps/train_deltas.sh " - echo "e.g.: steps/train_deltas.sh 2000 10000 data/train_si84_half data/lang exp/mono_ali exp/tri1" - echo "main options (for others, see top of script file)" - echo " --cmd (utils/run.pl|utils/queue.pl ) # how to run jobs." - echo " --config # config containing options" - echo " --stage # stage to do partial re-run from." - exit 1; -fi - -numleaves=$1 -totgauss=$2 -data=$3 -lang=$4 -alidir=$5 -dir=$6 - -for f in $alidir/final.mdl $alidir/ali.1.gz $data/feats.scp $lang/phones.txt; do - [ ! -f $f ] && echo "train_deltas.sh: no such file $f" && exit 1; -done - -numgauss=$numleaves -incgauss=$[($totgauss-$numgauss)/$max_iter_inc] # per-iter increment for #Gauss -oov=`cat $lang/oov.int` || exit 1; -ciphonelist=`cat $lang/phones/context_indep.csl` || exit 1; -nj=`cat $alidir/num_jobs` || exit 1; -mkdir -p $dir/log -echo $nj > $dir/num_jobs - -utils/lang/check_phones_compatible.sh $lang/phones.txt $alidir/phones.txt || exit 1; -cp $lang/phones.txt $dir || exit 1; - -sdata=$data/split$nj; -split_data.sh $data $nj || exit 1; - - -[ $(cat $alidir/cmvn_opts 2>/dev/null | wc -c) -gt 1 ] && [ -z "$cmvn_opts" ] && \ - echo "$0: warning: ignoring CMVN options from source directory $alidir" -$norm_vars && cmvn_opts="--norm-vars=true $cmvn_opts" -echo $cmvn_opts > $dir/cmvn_opts # keep track of options to CMVN. -[ ! -z $delta_opts ] && echo $delta_opts > $dir/delta_opts - -feats="ark,s,cs:apply-cmvn $cmvn_opts --utt2spk=ark:$sdata/JOB/utt2spk scp:$sdata/JOB/cmvn.scp scp:$sdata/JOB/feats.scp ark:- | add-deltas $delta_opts ark:- ark:- |" - -rm $dir/.error 2>/dev/null - -if [ $stage -le -3 ]; then - echo "$0: accumulating tree stats" - $cmd JOB=1:$nj $dir/log/acc_tree.JOB.log \ - acc-tree-stats $context_opts \ - --ci-phones=$ciphonelist $alidir/final.mdl "$feats" \ - "ark:gunzip -c $alidir/ali.JOB.gz|" $dir/JOB.treeacc || exit 1; - sum-tree-stats $dir/treeacc $dir/*.treeacc 2>$dir/log/sum_tree_acc.log || exit 1; - rm $dir/*.treeacc -fi - -if [ $stage -le -2 ]; then - echo "$0: getting questions for tree-building, via clustering" - # preparing questions, roots file... 
- cluster-phones --pdf-class-list=$(($num_nonsil_states / 2)) $context_opts \ - $dir/treeacc $lang/phones/sets.int \ - $dir/questions.int 2> $dir/log/questions.log || exit 1; - cat $lang/phones/extra_questions.int >> $dir/questions.int - compile-questions $context_opts $lang/topo $dir/questions.int \ - $dir/questions.qst 2>$dir/log/compile_questions.log || exit 1; - - echo "$0: building the tree" - $cmd $dir/log/build_tree.log \ - build-tree $context_opts --verbose=1 --max-leaves=$numleaves \ - --cluster-thresh=$cluster_thresh $dir/treeacc $lang/phones/roots.int \ - $dir/questions.qst $lang/topo $dir/tree || exit 1; - - $cmd $dir/log/init_model.log \ - gmm-init-model --write-occs=$dir/1.occs \ - $dir/tree $dir/treeacc $lang/topo $dir/1.mdl || exit 1; - if grep 'no stats' $dir/log/init_model.log; then - echo "** The warnings above about 'no stats' generally mean you have phones **" - echo "** (or groups of phones) in your phone set that had no corresponding data. **" - echo "** You should probably figure out whether something went wrong, **" - echo "** or whether your data just doesn't happen to have examples of those **" - echo "** phones. **" - fi - - gmm-mixup --mix-up=$numgauss $dir/1.mdl $dir/1.occs $dir/1.mdl 2>$dir/log/mixup.log || exit 1; - rm $dir/treeacc -fi - -if [ $stage -le -1 ]; then - # Convert the alignments. - echo "$0: converting alignments from $alidir to use current tree" - $cmd JOB=1:$nj $dir/log/convert.JOB.log \ - convert-ali $alidir/final.mdl $dir/1.mdl $dir/tree \ - "ark:gunzip -c $alidir/ali.JOB.gz|" "ark:|gzip -c >$dir/ali.JOB.gz" || exit 1; -fi - -if [ $stage -le 0 ]; then - echo "$0: compiling graphs of transcripts" - $cmd JOB=1:$nj $dir/log/compile_graphs.JOB.log \ - compile-train-graphs --read-disambig-syms=$lang/phones/disambig.int $dir/tree $dir/1.mdl $lang/L.fst \ - "ark:utils/sym2int.pl --map-oov $oov -f 2- $lang/words.txt < $sdata/JOB/text |" \ - "ark:|gzip -c >$dir/fsts.JOB.gz" || exit 1; -fi - -x=1 -while [ $x -lt $num_iters ]; do - echo "$0: training pass $x" - if [ $stage -le $x ]; then - if echo $realign_iters | grep -w $x >/dev/null; then - echo "$0: aligning data" - mdl="gmm-boost-silence --boost=$boost_silence `cat $lang/phones/optional_silence.csl` $dir/$x.mdl - |" - $cmd JOB=1:$nj $dir/log/align.$x.JOB.log \ - gmm-align-compiled $scale_opts --beam=$beam --retry-beam=$retry_beam --careful=$careful "$mdl" \ - "ark:gunzip -c $dir/fsts.JOB.gz|" "$feats" \ - "ark:|gzip -c >$dir/ali.JOB.gz" || exit 1; - fi - $cmd JOB=1:$nj $dir/log/acc.$x.JOB.log \ - gmm-acc-stats-ali $dir/$x.mdl "$feats" \ - "ark,s,cs:gunzip -c $dir/ali.JOB.gz|" $dir/$x.JOB.acc || exit 1; - $cmd $dir/log/update.$x.log \ - gmm-est --mix-up=$numgauss --power=$power \ - --write-occs=$dir/$[$x+1].occs $dir/$x.mdl \ - "gmm-sum-accs - $dir/$x.*.acc |" $dir/$[$x+1].mdl || exit 1; - rm $dir/$x.mdl $dir/$x.*.acc - rm $dir/$x.occs - fi - [ $x -le $max_iter_inc ] && numgauss=$[$numgauss+$incgauss]; - x=$[$x+1]; -done - -rm $dir/final.mdl $dir/final.occs 2>/dev/null -ln -s $x.mdl $dir/final.mdl -ln -s $x.occs $dir/final.occs - -steps/diagnostic/analyze_alignments.sh --cmd "$cmd" $lang $dir - -# Summarize warning messages... 
-utils/summarize_warnings.pl $dir/log - -steps/info/gmm_dir_info.pl $dir - -echo "$0: Done training system with delta+delta-delta features in $dir" - -exit 0 diff --git a/spaces/gradio/HuBERT/fairseq_cli/__init__.py b/spaces/gradio/HuBERT/fairseq_cli/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/gsaivinay/Llama-2-13B-GGML-UI/components/Chatbar/components/PluginKeys.tsx b/spaces/gsaivinay/Llama-2-13B-GGML-UI/components/Chatbar/components/PluginKeys.tsx deleted file mode 100644 index 1dcfe17d90a1e3eb72c55ca876acc7617e788983..0000000000000000000000000000000000000000 --- a/spaces/gsaivinay/Llama-2-13B-GGML-UI/components/Chatbar/components/PluginKeys.tsx +++ /dev/null @@ -1,235 +0,0 @@ -import { IconKey } from '@tabler/icons-react'; -import { KeyboardEvent, useContext, useEffect, useRef, useState } from 'react'; -import { useTranslation } from 'react-i18next'; - -import { PluginID, PluginKey } from '@/types/plugin'; - -import HomeContext from '@/pages/api/home/home.context'; - -import { SidebarButton } from '@/components/Sidebar/SidebarButton'; - -import ChatbarContext from '../Chatbar.context'; - -export const PluginKeys = () => { - const { t } = useTranslation('sidebar'); - - const { - state: { pluginKeys }, - } = useContext(HomeContext); - - const { handlePluginKeyChange, handleClearPluginKey } = - useContext(ChatbarContext); - - const [isChanging, setIsChanging] = useState(false); - - const modalRef = useRef(null); - - const handleEnter = (e: KeyboardEvent) => { - if (e.key === 'Enter' && !e.shiftKey) { - e.preventDefault(); - setIsChanging(false); - } - }; - - useEffect(() => { - const handleMouseDown = (e: MouseEvent) => { - if (modalRef.current && !modalRef.current.contains(e.target as Node)) { - window.addEventListener('mouseup', handleMouseUp); - } - }; - - const handleMouseUp = (e: MouseEvent) => { - window.removeEventListener('mouseup', handleMouseUp); - setIsChanging(false); - }; - - window.addEventListener('mousedown', handleMouseDown); - - return () => { - window.removeEventListener('mousedown', handleMouseDown); - }; - }, []); - - return ( - <> - } - onClick={() => setIsChanging(true)} - /> - - {isChanging && ( -
    - )} - - ); -}; diff --git a/spaces/gstaff/articulator/app.py b/spaces/gstaff/articulator/app.py deleted file mode 100644 index c2fe87f7157e96cc18f094db9104297b59a0e6bd..0000000000000000000000000000000000000000 --- a/spaces/gstaff/articulator/app.py +++ /dev/null @@ -1,176 +0,0 @@ -import base64 -import os -import pathlib -import shutil -import uuid -import requests -import gradio as gr - -# Setup variables for calling Stable Diffusion API. -engine_id = "stable-diffusion-v1-5" -api_host = os.getenv('API_HOST', 'https://api.stability.ai') -api_key = os.getenv("STABILITY_API_KEY") - -# if api_key is None: -# raise Exception("Missing Stability API key.") - - -def query_stable_diffusion(prompt): - response = requests.post( - f"{api_host}/v1/generation/{engine_id}/text-to-image", - headers={ - "Content-Type": "application/json", - "Accept": "application/json", - "Authorization": f"Bearer {api_key}" - }, - json={ - "text_prompts": [ - { - "text": f"{prompt}" - } - ], - "cfg_scale": 7, - "clip_guidance_preset": "FAST_BLUE", - "height": 512, - "width": 512, - "samples": 1, - "steps": 30, - }, - ) - - if response.status_code != 200: - raise Exception("Non-200 response: " + str(response.text)) - - data = response.json() - - path = None - for i, image in enumerate(data["artifacts"]): - path = pathlib.Path(f"./out/v1_txt2img_{str(uuid.uuid4())[:8]}.png") - with open(path, "wb") as f: - f.write(base64.b64decode(image["base64"])) - return path - - -def query_placekitten(prompt): - response = requests.get("http://placekitten.com/512/512", stream=True) - if response.status_code != 200: - raise Exception("Non-200 response: " + str(response.text)) - path = pathlib.Path(f"./out/v1_txt2img_{str(uuid.uuid4())[:8]}.png") - with open(path, "wb") as f: - shutil.copyfileobj(response.raw, f) - del response - return path - - -def generate(prompt): - if api_key is None: - return query_placekitten(prompt) - else: - try: - return query_stable_diffusion(prompt) - except: - return query_placekitten(prompt) - - -def gradio_reset(chat_state, img_list): - if chat_state is not None: - chat_state.messages = [] - if img_list is not None: - img_list = [] - return None, gr.update(value=None, interactive=True), gr.update(placeholder='Please upload your image first', interactive=False), gr.update(value="Upload & Start Chat", interactive=True), chat_state, img_list - - -def upload_img(gr_img, text_input, chat_state): - if gr_img is None: - return None, None, gr.update(interactive=True), chat_state, None - # chat_state = CONV_VISION.copy() - img_list = [] - # llm_message = chat.upload_img(gr_img, chat_state, img_list) - return gr.update(interactive=False), gr.update(interactive=True, placeholder='Type and press Enter'), gr.update(value="Start Chatting", interactive=False), chat_state, img_list - - -def gradio_ask(user_message, chatbot, chat_state): - if len(user_message) == 0: - return gr.update(interactive=True, placeholder='Input should not be empty!'), chatbot, chat_state - # chat.ask(user_message, chat_state) - chatbot = chatbot + [[user_message, None]] - return '', chatbot, chat_state - - -def gradio_answer(chatbot, chat_state, img_list, num_beams, temperature): - llm_message = "Let me see..." # chat.answer(conv=chat_state, img_list=img_list, max_new_tokens=300, num_beams=1, temperature=temperature, max_length=2000)[0] - chatbot[-1][1] = llm_message - return chatbot, chat_state, img_list - - -title = """

    Articulator
    🖼️🤖💬🤔
    """ -description = """
    Generation + Curation = Elation. -Generate images and start chatting!
    """ - -query = "Write me a story about this image." -story = """ -The image shows a lush green landscape with tall grass and trees. The sky is a bright blue with fluffy clouds floating in the distance. In the foreground, there is a small path leading through the grass towards a rocky outcropping. The outcropping has a small waterfall flowing down from the top. On the other side of the path, there is a small village with houses and buildings. The village is surrounded by more grass and trees. -The story of this image is about a young adventurer who sets out on a journey to explore the world. As he walks through the grassy landscape, he sees the small village in the distance. He decides to explore the village and meets the villagers who tell him about the history of the place. They tell him about the waterfall and how it is a sacred place for the villagers. -The adventurer is fascinated by the beauty of the place and decides to spend some time there. He explores the village and the surrounding areas, taking in the sights and sounds of nature. He spends some time talking to the villagers and learning about their way of life. -As the day comes to an end, the adventurer reflects on his journey and the beauty of the place he has visited. He decides to continue his journey, but knows that he will always remember the peace and tranquility of this place.""" - -with gr.Blocks(theme="gstaff/sketch", css="footer {visibility: hidden}") as demo: - gr.Markdown(title) - gr.Markdown(description) - - with gr.Row(): - with gr.Column(scale=0.5): - gr.Markdown('## First enter a prompt and click generate.') - prompt = gr.Textbox(label="Prompt", placeholder="A clear blue sky over a grassy hillside in the style of Studio Ghibli") - generate_button = gr.Button("Generate", variant="primary") - image2 = gr.Image(type="pil", value="sample_image.png", show_label=False, interactive=False) - with gr.Column(scale=0.5): - gr.Markdown("## Once you have an image you like:") - gr.Markdown("### Drag and drop to the image box below") - gr.Markdown("### Click 'Upload and Start Chat'") - gr.Markdown("### Discuss the image with the chatbot!") - - # with gr.Row(): - # with gr.Column(scale=0.5): - # image = gr.Image(type="pil", value="sample_image.png") - # upload_button = gr.Button(value="Upload & Start Chat", interactive=True, variant="primary") - # clear = gr.Button("Restart") - # - # num_beams = gr.Slider( - # minimum=1, - # maximum=5, - # value=1, - # step=1, - # interactive=True, - # label="beam search numbers)", - # visible=False, - # ) - # - # temperature = gr.Slider( - # minimum=0.1, - # maximum=2.0, - # value=1.0, - # step=0.1, - # interactive=True, - # label="Temperature", - # visible=False, - # ) - # - # with gr.Column(): - # chat_state = gr.State() - # img_list = gr.State() - # chatbot = gr.Chatbot(label='Articulator', value=[[query, story]]) - # text_input = gr.Textbox(label='User', placeholder='Please upload your image first', interactive=False) - - generate_button.click(fn=generate, inputs=[prompt], outputs=[image2]) - - # upload_button.click(upload_img, [image, text_input, chat_state], - # [image, text_input, upload_button, chat_state, img_list]) - # - # text_input.submit(gradio_ask, [text_input, chatbot, chat_state], [text_input, chatbot, chat_state]).then( - # gradio_answer, [chatbot, chat_state, img_list, num_beams, temperature], [chatbot, chat_state, img_list] - # ) - # clear.click(gradio_reset, [chat_state, img_list], [chatbot, image, text_input, upload_button, chat_state, img_list], - # queue=False) - 
-demo.launch(enable_queue=True) diff --git a/spaces/gwang-kim/DATID-3D/datid3d_train.py b/spaces/gwang-kim/DATID-3D/datid3d_train.py deleted file mode 100644 index 83fe0ad88730821258cf4f6905a2de352c95b844..0000000000000000000000000000000000000000 --- a/spaces/gwang-kim/DATID-3D/datid3d_train.py +++ /dev/null @@ -1,105 +0,0 @@ -import os -import argparse - -### Parameters -parser = argparse.ArgumentParser() - -# For all -parser.add_argument('--mode', type=str, required=True, choices=['pdg', 'ft', 'both'], - help="pdg: Pose-aware dataset generation, ft: Fine-tuning 3D generative models, both: Doing both") -parser.add_argument('--down_src_eg3d_from_nvidia', default=True) -# Pose-aware dataset generation -parser.add_argument('--pdg_prompt', type=str, required=True) -parser.add_argument('--pdg_generator_type', default='ffhq', type=str, choices=['ffhq', 'cat']) # ffhq, cat -parser.add_argument('--pdg_strength', default=0.7, type=float) -parser.add_argument('--pdg_guidance_scale', default=8, type=float) -parser.add_argument('--pdg_num_images', default=1000, type=int) -parser.add_argument('--pdg_sd_model_id', default='stabilityai/stable-diffusion-2-1-base', type=str) -parser.add_argument('--pdg_num_inference_steps', default=50, type=int) -parser.add_argument('--pdg_name_tag', default='', type=str) -parser.add_argument('--down_src_eg3d_from_nvidia', default=True) -# Fine-tuning 3D generative models -parser.add_argument('--ft_generator_type', default='same', help="None: The same type as pdg_generator_type", type=str, choices=['ffhq', 'cat', 'same']) -parser.add_argument('--ft_kimg', default=200, type=int) -parser.add_argument('--ft_batch', default=20, type=int) -parser.add_argument('--ft_tick', default=1, type=int) -parser.add_argument('--ft_snap', default=50, type=int) -parser.add_argument('--ft_outdir', default='../training_runs', type=str) # -parser.add_argument('--ft_gpus', default=1, type=str) # -parser.add_argument('--ft_workers', default=8, type=int) # -parser.add_argument('--ft_data_max_size', default=500000000, type=int) # -parser.add_argument('--ft_freeze_dec_sr', default=True, type=bool) # - -args = parser.parse_args() - - -### Pose-aware target generation -if args.mode in ['pdg', 'both']: - os.chdir('eg3d') - if args.pdg_generator_type == 'cat': - pdg_generator_id = 'afhqcats512-128.pkl' - else: - pdg_generator_id = 'ffhqrebalanced512-128.pkl' - - pdg_generator_path = f'pretrained/{pdg_generator_id}' - if not os.path.exists(pdg_generator_path): - os.makedirs(f'pretrained', exist_ok=True) - print("Pretrained EG3D model cannot be found. 
Downloading the pretrained EG3D models.") - if args.down_src_eg3d_from_nvidia == True: - os.system(f'wget -c https://api.ngc.nvidia.com/v2/models/nvidia/research/eg3d/versions/1/files/{pdg_generator_id} -O {pdg_generator_path}') - else: - os.system(f'wget https://huggingface.co/gwang-kim/datid3d-finetuned-eg3d-models/resolve/main/finetuned_models/nvidia_{pdg_generator_id} -O {pdg_generator_path}') - command = f"""python datid3d_data_gen.py \ - --prompt="{args.pdg_prompt}" \ - --data_type={args.pdg_generator_type} \ - --strength={args.pdg_strength} \ - --guidance_scale={args.pdg_guidance_scale} \ - --num_images={args.pdg_num_images} \ - --sd_model_id="{args.pdg_sd_model_id}" \ - --num_inference_steps={args.pdg_num_inference_steps} \ - --name_tag={args.pdg_name_tag} - """ - print(f"{command} \n") - os.system(command) - os.chdir('..') - -### Filtering process -# TODO - - -### Fine-tuning 3D generative models -if args.mode in ['ft', 'both']: - os.chdir('eg3d') - if args.ft_generator_type == 'same': - args.ft_generator_type = args.pdg_generator_type - - if args.ft_generator_type == 'cat': - ft_generator_id = 'afhqcats512-128.pkl' - else: - ft_generator_id = 'ffhqrebalanced512-128.pkl' - - ft_generator_path = f'pretrained/{ft_generator_id}' - if not os.path.exists(ft_generator_path): - os.makedirs(f'pretrained', exist_ok=True) - print("Pretrained EG3D model cannot be found. Downloading the pretrained EG3D models.") - if args.down_src_eg3d_from_nvidia == True: - os.system(f'wget -c https://api.ngc.nvidia.com/v2/models/nvidia/research/eg3d/versions/1/files/{ft_generator_id} -O {ft_generator_path}') - else: - os.system(f'wget https://huggingface.co/gwang-kim/datid3d-finetuned-eg3d-models/resolve/main/finetuned_models/nvidia_{ft_generator_id} -O {ft_generator_path}') - - dataset_id = f'data_{args.pdg_generator_type}_{args.pdg_prompt.replace(" ", "_")}{args.pdg_name_tag}' - dataset_path = f'./exp_data/{dataset_id}/{dataset_id}.zip' - - - command = f"""python train.py \ - --outdir={args.ft_outdir} \ - --cfg={args.ft_generator_type} \ - --data="{dataset_path}" \ - --resume={ft_generator_path} --freeze_dec_sr={args.ft_freeze_dec_sr} \ - --batch={args.ft_batch} --workers={args.ft_workers} --gpus={args.ft_gpus} \ - --tick={args.ft_tick} --snap={args.ft_snap} --data_max_size={args.ft_data_max_size} --kimg={args.ft_kimg} \ - --gamma=5 --aug=ada --neural_rendering_resolution_final=128 --gen_pose_cond=True --gpc_reg_prob=0.8 --metrics=None - """ - print(f"{command} \n") - os.system(command) - os.chdir('..') diff --git a/spaces/gyugnsu/DragGan-Inversion/stylegan_human/alignment.py b/spaces/gyugnsu/DragGan-Inversion/stylegan_human/alignment.py deleted file mode 100644 index 46f58c79061ed8030562300f131f97f04e5ea42f..0000000000000000000000000000000000000000 --- a/spaces/gyugnsu/DragGan-Inversion/stylegan_human/alignment.py +++ /dev/null @@ -1,233 +0,0 @@ -# Copyright (c) SenseTime Research. All rights reserved. 
- - -import os -import argparse -import numpy as np -import torch -from torch.utils.data import DataLoader -from torchvision.transforms import transforms -from utils.ImagesDataset import ImagesDataset - -import cv2 -import time -import copy -import imutils - -# for openpose body keypoint detector : # (src:https://github.com/Hzzone/pytorch-openpose) -from openpose.src import util -from openpose.src.body import Body - -# for paddlepaddle human segmentation : #(src: https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.5/contrib/PP-HumanSeg/) -from PP_HumanSeg.deploy.infer import Predictor as PP_HumenSeg_Predictor - -import math - - -def angle_between_points(p0, p1, p2): - if p0[1] == -1 or p1[1] == -1 or p2[1] == -1: - return -1 - a = (p1[0]-p0[0])**2 + (p1[1]-p0[1])**2 - b = (p1[0]-p2[0])**2 + (p1[1]-p2[1])**2 - c = (p2[0]-p0[0])**2 + (p2[1]-p0[1])**2 - if a * b == 0: - return -1 - return math.acos((a+b-c) / math.sqrt(4*a*b)) * 180 / math.pi - - -def crop_img_with_padding(img, keypoints, rect): - person_xmin, person_xmax, ymin, ymax = rect - img_h, img_w, _ = img.shape # find body center using keypoints - middle_shoulder_x = keypoints[1][0] - middle_hip_x = (keypoints[8][0] + keypoints[11][0]) // 2 - mid_x = (middle_hip_x + middle_shoulder_x) // 2 - mid_y = (ymin + ymax) // 2 - # find which side (l or r) is further than center x, use the further side - if abs(mid_x-person_xmin) > abs(person_xmax-mid_x): # left further - xmin = person_xmin - xmax = mid_x + (mid_x-person_xmin) - else: - # may be negtive - # in this case, the script won't output any image, leave the case like this - # since we don't want to pad human body - xmin = mid_x - (person_xmax-mid_x) - xmax = person_xmax - - w = xmax - xmin - h = ymax - ymin - # pad rectangle to w:h = 1:2 ## calculate desired border length - if h / w >= 2: # pad horizontally - target_w = h // 2 - xmin_prime = int(mid_x - target_w / 2) - xmax_prime = int(mid_x + target_w / 2) - if xmin_prime < 0: - pad_left = abs(xmin_prime) # - xmin - xmin = 0 - else: - pad_left = 0 - xmin = xmin_prime - if xmax_prime > img_w: - pad_right = xmax_prime - img_w - xmax = img_w - else: - pad_right = 0 - xmax = xmax_prime - - cropped_img = img[int(ymin):int(ymax), int(xmin):int(xmax)] - im_pad = cv2.copyMakeBorder(cropped_img, 0, 0, int( - pad_left), int(pad_right), cv2.BORDER_REPLICATE) - else: # pad vertically - target_h = w * 2 - ymin_prime = mid_y - (target_h / 2) - ymax_prime = mid_y + (target_h / 2) - if ymin_prime < 0: - pad_up = abs(ymin_prime) # - ymin - ymin = 0 - else: - pad_up = 0 - ymin = ymin_prime - if ymax_prime > img_h: - pad_down = ymax_prime - img_h - ymax = img_h - else: - pad_down = 0 - ymax = ymax_prime - print(ymin, ymax, xmin, xmax, img.shape) - - cropped_img = img[int(ymin):int(ymax), int(xmin):int(xmax)] - im_pad = cv2.copyMakeBorder(cropped_img, int(pad_up), int(pad_down), 0, - 0, cv2.BORDER_REPLICATE) - result = cv2.resize(im_pad, (512, 1024), interpolation=cv2.INTER_AREA) - return result - - -def run(args): - os.makedirs(args.output_folder, exist_ok=True) - dataset = ImagesDataset( - args.image_folder, transforms.Compose([transforms.ToTensor()])) - dataloader = DataLoader(dataset, batch_size=1, shuffle=False) - - body_estimation = Body('openpose/model/body_pose_model.pth') - - total = len(dataloader) - print('Num of dataloader : ', total) - os.makedirs(f'{args.output_folder}', exist_ok=True) - # os.makedirs(f'{args.output_folder}/middle_result', exist_ok=True) - - # initialzide HumenSeg - human_seg_args = {} - human_seg_args['cfg'] = 
'PP_HumanSeg/export_model/deeplabv3p_resnet50_os8_humanseg_512x512_100k_with_softmax/deploy.yaml' - human_seg_args['input_shape'] = [1024, 512] - human_seg_args['save_dir'] = args.output_folder - human_seg_args['soft_predict'] = False - human_seg_args['use_gpu'] = True - human_seg_args['test_speed'] = False - human_seg_args['use_optic_flow'] = False - human_seg_args['add_argmax'] = True - human_seg_args = argparse.Namespace(**human_seg_args) - human_seg = PP_HumenSeg_Predictor(human_seg_args) - - from tqdm import tqdm - for fname, image in tqdm(dataloader): - # try: - # tensor to numpy image - fname = fname[0] - print(f'Processing \'{fname}\'.') - - image = (image.permute(0, 2, 3, 1) * 255).clamp(0, 255) - image = image.squeeze(0).numpy() # --> tensor to numpy, (H,W,C) - # avoid super high res img - if image.shape[0] >= 2000: # height ### for shein image - ratio = image.shape[0]/1200 # height - dim = (int(image.shape[1]/ratio), 1200) # (width, height) - image = cv2.resize(image, dim, interpolation=cv2.INTER_AREA) - image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR) - - # create segmentation - # mybg = cv2.imread('mybg.png') - comb, segmentation, bg, ori_img = human_seg.run(image, None) # mybg) - # cv2.imwrite('comb.png',comb) # [0,255] - # cv2.imwrite('alpha.png',segmentation*255) # segmentation [0,1] --> [0.255] - # cv2.imwrite('bg.png',bg) #[0,255] - # cv2.imwrite('ori_img.png',ori_img) # [0,255] - - masks_np = (segmentation * 255) # .byte().cpu().numpy() #1024,512,1 - mask0_np = masks_np[:, :, 0].astype(np.uint8) # [0, :, :] - contours = cv2.findContours( - mask0_np, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) - cnts = imutils.grab_contours(contours) - c = max(cnts, key=cv2.contourArea) - extTop = tuple(c[c[:, :, 1].argmin()][0]) - extBot = tuple(c[c[:, :, 1].argmax()][0]) - extBot = list(extBot) - extTop = list(extTop) - pad_range = int((extBot[1]-extTop[1])*0.05) - # seg mask already reaches to the edge - if (int(extTop[1]) <= 5 and int(extTop[1]) > 0) and (comb.shape[0] > int(extBot[1]) and int(extBot[1]) >= comb.shape[0]-5): - # pad with pure white, top 100 px, bottom 100 px - comb = cv2.copyMakeBorder( - comb, pad_range+5, pad_range+5, 0, 0, cv2.BORDER_CONSTANT, value=[255, 255, 255]) - elif int(extTop[1]) <= 0 or int(extBot[1]) >= comb.shape[0]: - print('PAD: body out of boundary', fname) # should not happened - return {} - else: - # 105 instead of 100: give some extra space - comb = cv2.copyMakeBorder( - comb, pad_range+5, pad_range+5, 0, 0, cv2.BORDER_REPLICATE) - extBot[1] = extBot[1] + pad_range+5 - extTop[1] = extTop[1] + pad_range+5 - - extLeft = tuple(c[c[:, :, 0].argmin()][0]) - extRight = tuple(c[c[:, :, 0].argmax()][0]) - extLeft = list(extLeft) - extRight = list(extRight) - person_ymin = int(extTop[1])-pad_range # 100 - person_ymax = int(extBot[1])+pad_range # 100 #height - if person_ymin < 0 or person_ymax > comb.shape[0]: # out of range - return {} - person_xmin = int(extLeft[0]) - person_xmax = int(extRight[0]) - rect = [person_xmin, person_xmax, person_ymin, person_ymax] - # recimg = copy.deepcopy(comb) - # cv2.rectangle(recimg,(person_xmin,person_ymin),(person_xmax,person_ymax),(0,255,0),2) - # cv2.imwrite(f'{args.output_folder}/middle_result/{fname}_rec.png',recimg) - - # detect keypoints - keypoints, subset = body_estimation(comb) - # print(keypoints, subset, len(subset)) - if len(subset) != 1 or (len(subset) == 1 and subset[0][-1] < 15): - print( - f'Processing \'{fname}\'. Please import image contains one person only. Also can check segmentation mask. 
') - continue - - # canvas = copy.deepcopy(comb) - # canvas = util.draw_bodypose(canvas, keypoints, subset, show_number=True) - # cv2.imwrite(f'{args.output_folder}/middle_result/{fname}_keypoints.png',canvas) - - comb = crop_img_with_padding(comb, keypoints, rect) - - cv2.imwrite(f'{args.output_folder}/{fname}.png', comb) - print(f' -- Finished processing \'{fname}\'. --') - # except: - # print(f'Processing \'{fname}\'. Not satisfied the alignment strategy.') - - -if __name__ == '__main__': - torch.backends.cudnn.benchmark = True - torch.backends.cudnn.deterministic = False - - t1 = time.time() - arg_formatter = argparse.ArgumentDefaultsHelpFormatter - description = 'StyleGAN-Human data process' - parser = argparse.ArgumentParser(formatter_class=arg_formatter, - description=description) - parser.add_argument('--image-folder', type=str, dest='image_folder') - parser.add_argument('--output-folder', - dest='output_folder', default='results', type=str) - # parser.add_argument('--cfg', dest='cfg for segmentation', default='PP_HumanSeg/export_model/ppseg_lite_portrait_398x224_with_softmax/deploy.yaml', type=str) - - print('parsing arguments') - cmd_args = parser.parse_args() - run(cmd_args) - - print('total time elapsed: ', str(time.time() - t1)) diff --git a/spaces/hamacojr/CAT-Seg/plain_train_net.py b/spaces/hamacojr/CAT-Seg/plain_train_net.py deleted file mode 100644 index 5412d18d729b6ec37d01d5232c62d5116e9992b0..0000000000000000000000000000000000000000 --- a/spaces/hamacojr/CAT-Seg/plain_train_net.py +++ /dev/null @@ -1,536 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -""" -MaskFormer Training Script. - -This script is a simplified version of the training script in detectron2/tools. -""" -import copy -import itertools -import logging -import os -from collections import OrderedDict -from typing import Any, Dict, List, Set - -import torch - -import detectron2.utils.comm as comm -from detectron2.checkpoint import DetectionCheckpointer -from detectron2.config import get_cfg -from detectron2.data import MetadataCatalog, build_detection_train_loader -from detectron2.engine import DefaultTrainer, default_argument_parser, default_setup, launch -from detectron2.evaluation import CityscapesInstanceEvaluator, CityscapesSemSegEvaluator, \ - COCOEvaluator, COCOPanopticEvaluator, DatasetEvaluators, SemSegEvaluator, verify_results, \ - DatasetEvaluator - -from detectron2.projects.deeplab import add_deeplab_config, build_lr_scheduler -from detectron2.solver.build import maybe_add_gradient_clipping -from detectron2.utils.logger import setup_logger - -from detectron2.utils.file_io import PathManager -import numpy as np -from PIL import Image -import glob - -import pycocotools.mask as mask_util - -from detectron2.data import DatasetCatalog, MetadataCatalog -from detectron2.utils.comm import all_gather, is_main_process, synchronize -import json -from torch.nn.parallel import DistributedDataParallel -from detectron2.engine.train_loop import AMPTrainer, SimpleTrainer, TrainerBase, HookBase -import weakref -from detectron2.utils.events import EventStorage -from detectron2.utils.logger import _log_api_usage - -# from detectron2.evaluation import SemSegGzeroEvaluator -# from mask_former.evaluation.sem_seg_evaluation_gzero import SemSegGzeroEvaluator - -class SemSegGzeroEvaluator(DatasetEvaluator): - """ - Evaluate semantic segmentation metrics. 
- """ - - def __init__( - self, dataset_name, distributed, output_dir=None, *, num_classes=None, ignore_label=None - ): - """ - Args: - dataset_name (str): name of the dataset to be evaluated. - distributed (True): if True, will collect results from all ranks for evaluation. - Otherwise, will evaluate the results in the current process. - output_dir (str): an output directory to dump results. - num_classes, ignore_label: deprecated argument - """ - self._logger = logging.getLogger(__name__) - if num_classes is not None: - self._logger.warn( - "SemSegEvaluator(num_classes) is deprecated! It should be obtained from metadata." - ) - if ignore_label is not None: - self._logger.warn( - "SemSegEvaluator(ignore_label) is deprecated! It should be obtained from metadata." - ) - self._dataset_name = dataset_name - self._distributed = distributed - self._output_dir = output_dir - - self._cpu_device = torch.device("cpu") - - self.input_file_to_gt_file = { - dataset_record["file_name"]: dataset_record["sem_seg_file_name"] - for dataset_record in DatasetCatalog.get(dataset_name) - } - - meta = MetadataCatalog.get(dataset_name) - # Dict that maps contiguous training ids to COCO category ids - try: - c2d = meta.stuff_dataset_id_to_contiguous_id - self._contiguous_id_to_dataset_id = {v: k for k, v in c2d.items()} - except AttributeError: - self._contiguous_id_to_dataset_id = None - self._class_names = meta.stuff_classes - self._val_extra_classes = meta.val_extra_classes - self._num_classes = len(meta.stuff_classes) - if num_classes is not None: - assert self._num_classes == num_classes, f"{self._num_classes} != {num_classes}" - self._ignore_label = ignore_label if ignore_label is not None else meta.ignore_label - - def reset(self): - self._conf_matrix = np.zeros((self._num_classes + 1, self._num_classes + 1), dtype=np.int64) - self._predictions = [] - - def process(self, inputs, outputs): - """ - Args: - inputs: the inputs to a model. - It is a list of dicts. Each dict corresponds to an image and - contains keys like "height", "width", "file_name". - outputs: the outputs of a model. It is either list of semantic segmentation predictions - (Tensor [H, W]) or list of dicts with key "sem_seg" that contains semantic - segmentation prediction in the same format. 
- """ - for input, output in zip(inputs, outputs): - output = output["sem_seg"].argmax(dim=0).to(self._cpu_device) - pred = np.array(output, dtype=np.int) - with PathManager.open(self.input_file_to_gt_file[input["file_name"]], "rb") as f: - gt = np.array(Image.open(f), dtype=np.int) - - gt[gt == self._ignore_label] = self._num_classes - - self._conf_matrix += np.bincount( - (self._num_classes + 1) * pred.reshape(-1) + gt.reshape(-1), - minlength=self._conf_matrix.size, - ).reshape(self._conf_matrix.shape) - - self._predictions.extend(self.encode_json_sem_seg(pred, input["file_name"])) - - def evaluate(self): - """ - Evaluates standard semantic segmentation metrics (http://cocodataset.org/#stuff-eval): - - * Mean intersection-over-union averaged across classes (mIoU) - * Frequency Weighted IoU (fwIoU) - * Mean pixel accuracy averaged across classes (mACC) - * Pixel Accuracy (pACC) - """ - if self._distributed: - synchronize() - conf_matrix_list = all_gather(self._conf_matrix) - self._predictions = all_gather(self._predictions) - self._predictions = list(itertools.chain(*self._predictions)) - if not is_main_process(): - return - - self._conf_matrix = np.zeros_like(self._conf_matrix) - for conf_matrix in conf_matrix_list: - self._conf_matrix += conf_matrix - - if self._output_dir: - PathManager.mkdirs(self._output_dir) - file_path = os.path.join(self._output_dir, "sem_seg_predictions.json") - with PathManager.open(file_path, "w") as f: - f.write(json.dumps(self._predictions)) - - acc = np.full(self._num_classes, np.nan, dtype=np.float) - iou = np.full(self._num_classes, np.nan, dtype=np.float) - tp = self._conf_matrix.diagonal()[:-1].astype(np.float) - pos_gt = np.sum(self._conf_matrix[:-1, :-1], axis=0).astype(np.float) - class_weights = pos_gt / np.sum(pos_gt) - pos_pred = np.sum(self._conf_matrix[:-1, :-1], axis=1).astype(np.float) - acc_valid = pos_gt > 0 - acc[acc_valid] = tp[acc_valid] / pos_gt[acc_valid] - iou_valid = (pos_gt + pos_pred) > 0 - union = pos_gt + pos_pred - tp - iou[acc_valid] = tp[acc_valid] / union[acc_valid] - macc = np.sum(acc[acc_valid]) / np.sum(acc_valid) - miou = np.sum(iou[acc_valid]) / np.sum(iou_valid) - fiou = np.sum(iou[acc_valid] * class_weights[acc_valid]) - pacc = np.sum(tp) / np.sum(pos_gt) - seen_IoU = 0 - unseen_IoU = 0 - seen_acc = 0 - unseen_acc = 0 - res = {} - res["mIoU"] = 100 * miou - res["fwIoU"] = 100 * fiou - for i, name in enumerate(self._class_names): - res["IoU-{}".format(name)] = 100 * iou[i] - if name in self._val_extra_classes: - unseen_IoU = unseen_IoU + 100 * iou[i] - else: - seen_IoU = seen_IoU + 100 * iou[i] - unseen_IoU = unseen_IoU / len(self._val_extra_classes) - seen_IoU = seen_IoU / (self._num_classes - len(self._val_extra_classes)) - res["mACC"] = 100 * macc - res["pACC"] = 100 * pacc - for i, name in enumerate(self._class_names): - res["ACC-{}".format(name)] = 100 * acc[i] - if name in self._val_extra_classes: - unseen_acc = unseen_acc + 100 * iou[i] - else: - seen_acc = seen_acc + 100 * iou[i] - unseen_acc = unseen_acc / len(self._val_extra_classes) - seen_acc = seen_acc / (self._num_classes - len(self._val_extra_classes)) - res["seen_IoU"] = seen_IoU - res["unseen_IoU"] = unseen_IoU - res["harmonic mean"] = 2 * seen_IoU * unseen_IoU / (seen_IoU + unseen_IoU) - # res["unseen_acc"] = unseen_acc - # res["seen_acc"] = seen_acc - if self._output_dir: - file_path = os.path.join(self._output_dir, "sem_seg_evaluation.pth") - with PathManager.open(file_path, "wb") as f: - torch.save(res, f) - results = OrderedDict({"sem_seg": res}) - 
self._logger.info(results) - return results - - def encode_json_sem_seg(self, sem_seg, input_file_name): - """ - Convert semantic segmentation to COCO stuff format with segments encoded as RLEs. - See http://cocodataset.org/#format-results - """ - json_list = [] - for label in np.unique(sem_seg): - if self._contiguous_id_to_dataset_id is not None: - # import ipdb; ipdb.set_trace() - assert ( - label in self._contiguous_id_to_dataset_id - ), "Label {} is not in the metadata info for {}".format(label, self._dataset_name) - dataset_id = self._contiguous_id_to_dataset_id[label] - else: - dataset_id = int(label) - mask = (sem_seg == label).astype(np.uint8) - mask_rle = mask_util.encode(np.array(mask[:, :, None], order="F"))[0] - mask_rle["counts"] = mask_rle["counts"].decode("utf-8") - json_list.append( - {"file_name": input_file_name, "category_id": dataset_id, "segmentation": mask_rle} - ) - return json_list - - -# MaskFormer -from cat_seg import ( - DETRPanopticDatasetMapper, - MaskFormerPanopticDatasetMapper, - MaskFormerSemanticDatasetMapper, - SemanticSegmentorWithTTA, - add_mask_former_config, -) - - -def create_ddp_model(model, *, fp16_compression=False, **kwargs): - """ - Create a DistributedDataParallel model if there are >1 processes. - - Args: - model: a torch.nn.Module - fp16_compression: add fp16 compression hooks to the ddp object. - See more at https://pytorch.org/docs/stable/ddp_comm_hooks.html#torch.distributed.algorithms.ddp_comm_hooks.default_hooks.fp16_compress_hook - kwargs: other arguments of :module:`torch.nn.parallel.DistributedDataParallel`. - """ # noqa - if comm.get_world_size() == 1: - return model - if "device_ids" not in kwargs: - kwargs["device_ids"] = [comm.get_local_rank()] - ddp = DistributedDataParallel(model, **kwargs) - if fp16_compression: - from torch.distributed.algorithms.ddp_comm_hooks import default as comm_hooks - - ddp.register_comm_hook(state=None, hook=comm_hooks.fp16_compress_hook) - return ddp - -class Trainer(DefaultTrainer): - """ - Extension of the Trainer class adapted to DETR. - """ - - def __init__(self, cfg): - # super().__init__(cfg) - self._hooks: List[HookBase] = [] - self.iter: int = 0 - self.start_iter: int = 0 - self.max_iter: int - self.storage: EventStorage - _log_api_usage("trainer." + self.__class__.__name__) - - logger = logging.getLogger("detectron2") - if not logger.isEnabledFor(logging.INFO): # setup_logger is not called for d2 - setup_logger() - cfg = DefaultTrainer.auto_scale_workers(cfg, comm.get_world_size()) - - # Assume these objects must be constructed in this order. - model = self.build_model(cfg) - optimizer = self.build_optimizer(cfg, model) - data_loader = self.build_train_loader(cfg) - - model = create_ddp_model(model, broadcast_buffers=False, find_unused_parameters=True) - self._trainer = (AMPTrainer if cfg.SOLVER.AMP.ENABLED else SimpleTrainer)( - model, data_loader, optimizer - ) - - self.scheduler = self.build_lr_scheduler(cfg, optimizer) - self.checkpointer = DetectionCheckpointer( - # Assume you want to save checkpoints together with logs/statistics - model, - cfg.OUTPUT_DIR, - trainer=weakref.proxy(self), - ) - self.start_iter = 0 - self.max_iter = cfg.SOLVER.MAX_ITER - self.cfg = cfg - - self.register_hooks(self.build_hooks()) - - @classmethod - def build_evaluator(cls, cfg, dataset_name, output_folder=None): - """ - Create evaluator(s) for a given dataset. - This uses the special metadata "evaluator_type" associated with each - builtin dataset. 
For your own dataset, you can simply create an - evaluator manually in your script and do not have to worry about the - hacky if-else logic here. - """ - if output_folder is None: - output_folder = os.path.join(cfg.OUTPUT_DIR, "inference") - evaluator_list = [] - evaluator_type = MetadataCatalog.get(dataset_name).evaluator_type - if evaluator_type in ["sem_seg", "ade20k_panoptic_seg"]: - evaluator_list.append( - SemSegEvaluator( - dataset_name, - distributed=True, - output_dir=output_folder, - ) - ) - # import pdb; pdb.set_trace() - if evaluator_type == "sem_seg_gzero": - - evaluator_list.append( - SemSegGzeroEvaluator( - dataset_name, - distributed=True, - output_dir=output_folder, - ) - ) - if evaluator_type == "coco": - evaluator_list.append(COCOEvaluator(dataset_name, output_dir=output_folder)) - if evaluator_type in [ - "coco_panoptic_seg", - "ade20k_panoptic_seg", - "cityscapes_panoptic_seg", - ]: - evaluator_list.append(COCOPanopticEvaluator(dataset_name, output_folder)) - if evaluator_type == "cityscapes_instance": - assert ( - torch.cuda.device_count() >= comm.get_rank() - ), "CityscapesEvaluator currently do not work with multiple machines." - return CityscapesInstanceEvaluator(dataset_name) - if evaluator_type == "cityscapes_sem_seg": - assert ( - torch.cuda.device_count() >= comm.get_rank() - ), "CityscapesEvaluator currently do not work with multiple machines." - return CityscapesSemSegEvaluator(dataset_name) - if evaluator_type == "cityscapes_panoptic_seg": - assert ( - torch.cuda.device_count() >= comm.get_rank() - ), "CityscapesEvaluator currently do not work with multiple machines." - evaluator_list.append(CityscapesSemSegEvaluator(dataset_name)) - if len(evaluator_list) == 0: - raise NotImplementedError( - "no Evaluator for the dataset {} with the type {}".format( - dataset_name, evaluator_type - ) - ) - elif len(evaluator_list) == 1: - return evaluator_list[0] - return DatasetEvaluators(evaluator_list) - - @classmethod - def build_train_loader(cls, cfg): - # Semantic segmentation dataset mapper - if cfg.INPUT.DATASET_MAPPER_NAME == "mask_former_semantic": - mapper = MaskFormerSemanticDatasetMapper(cfg, True) - # Panoptic segmentation dataset mapper - elif cfg.INPUT.DATASET_MAPPER_NAME == "mask_former_panoptic": - mapper = MaskFormerPanopticDatasetMapper(cfg, True) - # DETR-style dataset mapper for COCO panoptic segmentation - elif cfg.INPUT.DATASET_MAPPER_NAME == "detr_panoptic": - mapper = DETRPanopticDatasetMapper(cfg, True) - else: - mapper = None - return build_detection_train_loader(cfg, mapper=mapper) - - @classmethod - def build_lr_scheduler(cls, cfg, optimizer): - """ - It now calls :func:`detectron2.solver.build_lr_scheduler`. - Overwrite it if you'd like a different scheduler. 
- """ - return build_lr_scheduler(cfg, optimizer) - - @classmethod - def build_optimizer(cls, cfg, model): - weight_decay_norm = cfg.SOLVER.WEIGHT_DECAY_NORM - weight_decay_embed = cfg.SOLVER.WEIGHT_DECAY_EMBED - - defaults = {} - defaults["lr"] = cfg.SOLVER.BASE_LR - defaults["weight_decay"] = cfg.SOLVER.WEIGHT_DECAY - - norm_module_types = ( - torch.nn.BatchNorm1d, - torch.nn.BatchNorm2d, - torch.nn.BatchNorm3d, - torch.nn.SyncBatchNorm, - # NaiveSyncBatchNorm inherits from BatchNorm2d - torch.nn.GroupNorm, - torch.nn.InstanceNorm1d, - torch.nn.InstanceNorm2d, - torch.nn.InstanceNorm3d, - torch.nn.LayerNorm, - torch.nn.LocalResponseNorm, - ) - - params: List[Dict[str, Any]] = [] - memo: Set[torch.nn.parameter.Parameter] = set() - for module_name, module in model.named_modules(): - for module_param_name, value in module.named_parameters(recurse=False): - if not value.requires_grad: - continue - # Avoid duplicating parameters - if value in memo: - continue - memo.add(value) - - hyperparams = copy.copy(defaults) - if "backbone" in module_name: - hyperparams["lr"] = hyperparams["lr"] * cfg.SOLVER.BACKBONE_MULTIPLIER - if ( - "relative_position_bias_table" in module_param_name - or "absolute_pos_embed" in module_param_name - ): - print(module_param_name) - hyperparams["weight_decay"] = 0.0 - if isinstance(module, norm_module_types): - hyperparams["weight_decay"] = weight_decay_norm - if isinstance(module, torch.nn.Embedding): - hyperparams["weight_decay"] = weight_decay_embed - params.append({"params": [value], **hyperparams}) - - def maybe_add_full_model_gradient_clipping(optim): - # detectron2 doesn't have full model gradient clipping now - clip_norm_val = cfg.SOLVER.CLIP_GRADIENTS.CLIP_VALUE - enable = ( - cfg.SOLVER.CLIP_GRADIENTS.ENABLED - and cfg.SOLVER.CLIP_GRADIENTS.CLIP_TYPE == "full_model" - and clip_norm_val > 0.0 - ) - - class FullModelGradientClippingOptimizer(optim): - def step(self, closure=None): - all_params = itertools.chain(*[x["params"] for x in self.param_groups]) - torch.nn.utils.clip_grad_norm_(all_params, clip_norm_val) - super().step(closure=closure) - - return FullModelGradientClippingOptimizer if enable else optim - - optimizer_type = cfg.SOLVER.OPTIMIZER - if optimizer_type == "SGD": - optimizer = maybe_add_full_model_gradient_clipping(torch.optim.SGD)( - params, cfg.SOLVER.BASE_LR, momentum=cfg.SOLVER.MOMENTUM - ) - elif optimizer_type == "ADAMW": - optimizer = maybe_add_full_model_gradient_clipping(torch.optim.AdamW)( - params, cfg.SOLVER.BASE_LR - ) - else: - raise NotImplementedError(f"no optimizer type {optimizer_type}") - if not cfg.SOLVER.CLIP_GRADIENTS.CLIP_TYPE == "full_model": - optimizer = maybe_add_gradient_clipping(cfg, optimizer) - return optimizer - - @classmethod - def test_with_TTA(cls, cfg, model): - logger = logging.getLogger("detectron2.trainer") - # In the end of training, run an evaluation with TTA. - logger.info("Running inference with test-time augmentation ...") - model = SemanticSegmentorWithTTA(cfg, model) - evaluators = [ - cls.build_evaluator( - cfg, name, output_folder=os.path.join(cfg.OUTPUT_DIR, "inference_TTA") - ) - for name in cfg.DATASETS.TEST - ] - res = cls.test(cfg, model, evaluators) - res = OrderedDict({k + "_TTA": v for k, v in res.items()}) - return res - - -def setup(args): - """ - Create configs and perform basic setups. 
- """ - cfg = get_cfg() - # for poly lr schedule - add_deeplab_config(cfg) - add_mask_former_config(cfg) - cfg.merge_from_file(args.config_file) - cfg.merge_from_list(args.opts) - cfg.freeze() - default_setup(cfg, args) - # Setup logger for "mask_former" module - setup_logger(output=cfg.OUTPUT_DIR, distributed_rank=comm.get_rank(), name="mask_former") - return cfg - - -def main(args): - cfg = setup(args) - - if args.eval_only: - model = Trainer.build_model(cfg) - DetectionCheckpointer(model, save_dir=cfg.OUTPUT_DIR).resume_or_load( - cfg.MODEL.WEIGHTS, resume=args.resume - ) - res = Trainer.test(cfg, model) - if cfg.TEST.AUG.ENABLED: - res.update(Trainer.test_with_TTA(cfg, model)) - if comm.is_main_process(): - verify_results(cfg, res) - return res - - trainer = Trainer(cfg) - trainer.resume_or_load(resume=args.resume) - return trainer.train() - - -if __name__ == "__main__": - args = default_argument_parser().parse_args() - print("Command Line Args:", args) - launch( - main, - args.num_gpus, - num_machines=args.num_machines, - machine_rank=args.machine_rank, - dist_url=args.dist_url, - args=(args,), - ) diff --git a/spaces/harisansarkhan/CatFaceLandmarks/1_image.py b/spaces/harisansarkhan/CatFaceLandmarks/1_image.py deleted file mode 100644 index 8810951091112fae356fe83691742b8ca58e192f..0000000000000000000000000000000000000000 --- a/spaces/harisansarkhan/CatFaceLandmarks/1_image.py +++ /dev/null @@ -1,62 +0,0 @@ -import cv2 -import tensorflow as tf -import numpy as np -import gradio as gr - -# Load the trained model from the saved file -loaded_model = tf.keras.models.load_model('CatFaceFeatures_Resnet50_2.h5') - -# Function to predict facial landmarks on new images -def predict_landmarks(image_input): - # Convert Gradio image object to numpy array - image = image_input.astype('uint8') - - # Define the image size for resizing - image_size = (224, 224) - - # Convert to RGB before resizing - image_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) - resized_image = cv2.resize(image_rgb, image_size) - input_image = np.expand_dims(resized_image, axis=0) - - # Make predictions using the trained model - predictions = loaded_model.predict(input_image) - - # Rescale the predictions to the original image size - scale_y = image.shape[0] / image_size[0] - scale_x = image.shape[1] / image_size[1] - resized_predictions = [int(value * scale_x) if i % 2 == 0 else int( - value * scale_y) for i, value in enumerate(predictions[0])] - - # Calculate the radius of the circles based on image dimensions - image_height, image_width, _ = image.shape - max_dim = max(image_height, image_width) - radius_scale = max_dim / 1500 # Adjust this scale factor as needed - - # Draw circles (dots) on the original image at the predicted landmark locations - for i in range(0, len(resized_predictions), 2): - x, y = resized_predictions[i], resized_predictions[i + 1] - color = (255, 0, 0) - radius = int(8 * radius_scale) # Adjust the base radius value as needed - thickness = -1 - cv2.circle(image, (x, y), radius, color, thickness) - - return image - -# Create the Gradio interface -demo = gr.Interface( - predict_landmarks, - inputs = "image", - outputs = "image", - title = "Cat Facial Landmark Predictor", - description="Upload an image of a cat's face to predict its facial landmarks.", - cache_examples=True, - theme="default", - allow_flagging="manual", - flagging_options=["incorrect", "inaccurate"], - analytics_enabled=True, - batch=False, - max_batch_size=4, - allow_duplication=False -) -demo.launch() diff --git 
a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/demo/predictor.py b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/demo/predictor.py deleted file mode 100644 index 689fa85436d928858e652df665f5e7460a1f3154..0000000000000000000000000000000000000000 --- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/demo/predictor.py +++ /dev/null @@ -1,220 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -import atexit -import bisect -import multiprocessing as mp -from collections import deque -import cv2 -import torch - -from detectron2.data import MetadataCatalog -from detectron2.engine.defaults import DefaultPredictor -from detectron2.utils.video_visualizer import VideoVisualizer -from detectron2.utils.visualizer import ColorMode, Visualizer - - -class VisualizationDemo(object): - def __init__(self, cfg, instance_mode=ColorMode.IMAGE, parallel=False): - """ - Args: - cfg (CfgNode): - instance_mode (ColorMode): - parallel (bool): whether to run the model in different processes from visualization. - Useful since the visualization logic can be slow. - """ - self.metadata = MetadataCatalog.get( - cfg.DATASETS.TEST[0] if len(cfg.DATASETS.TEST) else "__unused" - ) - self.cpu_device = torch.device("cpu") - self.instance_mode = instance_mode - - self.parallel = parallel - if parallel: - num_gpu = torch.cuda.device_count() - self.predictor = AsyncPredictor(cfg, num_gpus=num_gpu) - else: - self.predictor = DefaultPredictor(cfg) - - def run_on_image(self, image): - """ - Args: - image (np.ndarray): an image of shape (H, W, C) (in BGR order). - This is the format used by OpenCV. - - Returns: - predictions (dict): the output of the model. - vis_output (VisImage): the visualized image output. - """ - vis_output = None - predictions = self.predictor(image) - # Convert image from OpenCV BGR format to Matplotlib RGB format. - image = image[:, :, ::-1] - visualizer = Visualizer(image, self.metadata, instance_mode=self.instance_mode) - if "panoptic_seg" in predictions: - panoptic_seg, segments_info = predictions["panoptic_seg"] - vis_output = visualizer.draw_panoptic_seg_predictions( - panoptic_seg.to(self.cpu_device), segments_info - ) - else: - if "sem_seg" in predictions: - vis_output = visualizer.draw_sem_seg( - predictions["sem_seg"].argmax(dim=0).to(self.cpu_device) - ) - if "instances" in predictions: - instances = predictions["instances"].to(self.cpu_device) - vis_output = visualizer.draw_instance_predictions(predictions=instances) - - return predictions, vis_output - - def _frame_from_video(self, video): - while video.isOpened(): - success, frame = video.read() - if success: - yield frame - else: - break - - def run_on_video(self, video): - """ - Visualizes predictions on frames of the input video. - - Args: - video (cv2.VideoCapture): a :class:`VideoCapture` object, whose source can be - either a webcam or a video file. - - Yields: - ndarray: BGR visualizations of each video frame. 
- """ - video_visualizer = VideoVisualizer(self.metadata, self.instance_mode) - - def process_predictions(frame, predictions): - frame = cv2.cvtColor(frame, cv2.COLOR_RGB2BGR) - if "panoptic_seg" in predictions: - panoptic_seg, segments_info = predictions["panoptic_seg"] - vis_frame = video_visualizer.draw_panoptic_seg_predictions( - frame, panoptic_seg.to(self.cpu_device), segments_info - ) - elif "instances" in predictions: - predictions = predictions["instances"].to(self.cpu_device) - vis_frame = video_visualizer.draw_instance_predictions(frame, predictions) - elif "sem_seg" in predictions: - vis_frame = video_visualizer.draw_sem_seg( - frame, predictions["sem_seg"].argmax(dim=0).to(self.cpu_device) - ) - - # Converts Matplotlib RGB format to OpenCV BGR format - vis_frame = cv2.cvtColor(vis_frame.get_image(), cv2.COLOR_RGB2BGR) - return vis_frame - - frame_gen = self._frame_from_video(video) - if self.parallel: - buffer_size = self.predictor.default_buffer_size - - frame_data = deque() - - for cnt, frame in enumerate(frame_gen): - frame_data.append(frame) - self.predictor.put(frame) - - if cnt >= buffer_size: - frame = frame_data.popleft() - predictions = self.predictor.get() - yield process_predictions(frame, predictions) - - while len(frame_data): - frame = frame_data.popleft() - predictions = self.predictor.get() - yield process_predictions(frame, predictions) - else: - for frame in frame_gen: - yield process_predictions(frame, self.predictor(frame)) - - -class AsyncPredictor: - """ - A predictor that runs the model asynchronously, possibly on >1 GPUs. - Because rendering the visualization takes considerably amount of time, - this helps improve throughput when rendering videos. - """ - - class _StopToken: - pass - - class _PredictWorker(mp.Process): - def __init__(self, cfg, task_queue, result_queue): - self.cfg = cfg - self.task_queue = task_queue - self.result_queue = result_queue - super().__init__() - - def run(self): - predictor = DefaultPredictor(self.cfg) - - while True: - task = self.task_queue.get() - if isinstance(task, AsyncPredictor._StopToken): - break - idx, data = task - result = predictor(data) - self.result_queue.put((idx, result)) - - def __init__(self, cfg, num_gpus: int = 1): - """ - Args: - cfg (CfgNode): - num_gpus (int): if 0, will run on CPU - """ - num_workers = max(num_gpus, 1) - self.task_queue = mp.Queue(maxsize=num_workers * 3) - self.result_queue = mp.Queue(maxsize=num_workers * 3) - self.procs = [] - for gpuid in range(max(num_gpus, 1)): - cfg = cfg.clone() - cfg.defrost() - cfg.MODEL.DEVICE = "cuda:{}".format(gpuid) if num_gpus > 0 else "cpu" - self.procs.append( - AsyncPredictor._PredictWorker(cfg, self.task_queue, self.result_queue) - ) - - self.put_idx = 0 - self.get_idx = 0 - self.result_rank = [] - self.result_data = [] - - for p in self.procs: - p.start() - atexit.register(self.shutdown) - - def put(self, image): - self.put_idx += 1 - self.task_queue.put((self.put_idx, image)) - - def get(self): - self.get_idx += 1 # the index needed for this request - if len(self.result_rank) and self.result_rank[0] == self.get_idx: - res = self.result_data[0] - del self.result_data[0], self.result_rank[0] - return res - - while True: - # make sure the results are returned in the correct order - idx, res = self.result_queue.get() - if idx == self.get_idx: - return res - insert = bisect.bisect(self.result_rank, idx) - self.result_rank.insert(insert, idx) - self.result_data.insert(insert, res) - - def __len__(self): - return self.put_idx - self.get_idx - - def 
__call__(self, image): - self.put(image) - return self.get() - - def shutdown(self): - for _ in self.procs: - self.task_queue.put(AsyncPredictor._StopToken()) - - @property - def default_buffer_size(self): - return len(self.procs) * 5 diff --git a/spaces/hstrejoluna/dreambooth-training/train_dreambooth.py b/spaces/hstrejoluna/dreambooth-training/train_dreambooth.py deleted file mode 100644 index f4ff135e549f0d6c72f733092f3df817cb178e01..0000000000000000000000000000000000000000 --- a/spaces/hstrejoluna/dreambooth-training/train_dreambooth.py +++ /dev/null @@ -1,889 +0,0 @@ -import argparse -import itertools -import math -import os -from pathlib import Path -from typing import Optional -import subprocess -import sys -import gc -import random - -import torch -import torch.nn.functional as F -import torch.utils.checkpoint -from torch.utils.data import Dataset - -from accelerate import Accelerator -from accelerate.logging import get_logger -from accelerate.utils import set_seed -from diffusers import AutoencoderKL, DDPMScheduler, StableDiffusionPipeline, UNet2DConditionModel -from diffusers.utils.import_utils import is_xformers_available -from diffusers.optimization import get_scheduler -from huggingface_hub import HfFolder, Repository, whoami -from PIL import Image -from torchvision import transforms -from tqdm.auto import tqdm -from transformers import CLIPTextModel, CLIPTokenizer - - -logger = get_logger(__name__) - - -def parse_args(): - parser = argparse.ArgumentParser(description="Simple example of a training script.") - parser.add_argument( - "--pretrained_model_name_or_path", - type=str, - default=None, - #required=True, - help="Path to pretrained model or model identifier from huggingface.co/models.", - ) - parser.add_argument( - "--tokenizer_name", - type=str, - default=None, - help="Pretrained tokenizer name or path if not the same as model_name", - ) - parser.add_argument( - "--instance_data_dir", - type=str, - default=None, - #required=True, - help="A folder containing the training data of instance images.", - ) - parser.add_argument( - "--class_data_dir", - type=str, - default=None, - #required=False, - help="A folder containing the training data of class images.", - ) - parser.add_argument( - "--instance_prompt", - type=str, - default=None, - help="The prompt with identifier specifying the instance", - ) - parser.add_argument( - "--class_prompt", - type=str, - default="", - help="The prompt to specify images in the same class as provided instance images.", - ) - parser.add_argument( - "--with_prior_preservation", - default=False, - action="store_true", - help="Flag to add prior preservation loss.", - ) - parser.add_argument("--prior_loss_weight", type=float, default=1.0, help="The weight of prior preservation loss.") - parser.add_argument( - "--num_class_images", - type=int, - default=100, - help=( - "Minimal class images for prior preservation loss. If not have enough images, additional images will be" - " sampled with class_prompt." 
- ), - ) - parser.add_argument( - "--output_dir", - type=str, - default="", - help="The output directory where the model predictions and checkpoints will be written.", - ) - parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.") - parser.add_argument( - "--resolution", - type=int, - default=512, - help=( - "The resolution for input images, all the images in the train/validation dataset will be resized to this" - " resolution" - ), - ) - parser.add_argument( - "--center_crop", action="store_true", help="Whether to center crop images before resizing to resolution" - ) - parser.add_argument("--train_text_encoder", action="store_true", help="Whether to train the text encoder") - parser.add_argument( - "--train_batch_size", type=int, default=4, help="Batch size (per device) for the training dataloader." - ) - parser.add_argument( - "--sample_batch_size", type=int, default=4, help="Batch size (per device) for sampling images." - ) - parser.add_argument("--num_train_epochs", type=int, default=1) - parser.add_argument( - "--max_train_steps", - type=int, - default=None, - help="Total number of training steps to perform. If provided, overrides num_train_epochs.", - ) - parser.add_argument( - "--gradient_accumulation_steps", - type=int, - default=1, - help="Number of updates steps to accumulate before performing a backward/update pass.", - ) - parser.add_argument( - "--gradient_checkpointing", - action="store_true", - help="Whether or not to use gradient checkpointing to save memory at the expense of slower backward pass.", - ) - parser.add_argument( - "--learning_rate", - type=float, - default=5e-6, - help="Initial learning rate (after the potential warmup period) to use.", - ) - parser.add_argument( - "--scale_lr", - action="store_true", - default=False, - help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.", - ) - parser.add_argument( - "--lr_scheduler", - type=str, - default="constant", - help=( - 'The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",' - ' "constant", "constant_with_warmup"]' - ), - ) - parser.add_argument( - "--lr_warmup_steps", type=int, default=500, help="Number of steps for the warmup in the lr scheduler." - ) - parser.add_argument( - "--use_8bit_adam", action="store_true", help="Whether or not to use 8-bit Adam from bitsandbytes." - ) - parser.add_argument("--adam_beta1", type=float, default=0.9, help="The beta1 parameter for the Adam optimizer.") - parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.") - parser.add_argument("--adam_weight_decay", type=float, default=1e-2, help="Weight decay to use.") - parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer") - parser.add_argument("--max_grad_norm", default=1.0, type=float, help="Max gradient norm.") - parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.") - parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.") - parser.add_argument( - "--hub_model_id", - type=str, - default=None, - help="The name of the repository to keep in sync with the local `output_dir`.", - ) - parser.add_argument( - "--logging_dir", - type=str, - default="logs", - help=( - "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. 
Will default to" - " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***." - ), - ) - parser.add_argument( - "--mixed_precision", - type=str, - default="no", - choices=["no", "fp16", "bf16"], - help=( - "Whether to use mixed precision. Choose" - "between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10." - "and an Nvidia Ampere GPU." - ), - ) - - parser.add_argument( - "--save_n_steps", - type=int, - default=1, - help=("Save the model every n global_steps"), - ) - - - parser.add_argument( - "--save_starting_step", - type=int, - default=1, - help=("The step from which it starts saving intermediary checkpoints"), - ) - - parser.add_argument( - "--stop_text_encoder_training", - type=int, - default=1000000, - help=("The step at which the text_encoder is no longer trained"), - ) - - - parser.add_argument( - "--image_captions_filename", - action="store_true", - help="Get captions from filename", - ) - - - parser.add_argument( - "--dump_only_text_encoder", - action="store_true", - default=False, - help="Dump only text encoder", - ) - - parser.add_argument( - "--train_only_unet", - action="store_true", - default=False, - help="Train only the unet", - ) - - parser.add_argument( - "--cache_latents", - action="store_true", - default=False, - help="Train only the unet", - ) - - parser.add_argument( - "--Session_dir", - type=str, - default="", - help="Current session directory", - ) - - - - - parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank") - - args = parser.parse_args() - env_local_rank = int(os.environ.get("LOCAL_RANK", -1)) - if env_local_rank != -1 and env_local_rank != args.local_rank: - args.local_rank = env_local_rank - - #if args.instance_data_dir is None: - # raise ValueError("You must specify a train data directory.") - - #if args.with_prior_preservation: - # if args.class_data_dir is None: - # raise ValueError("You must specify a data directory for class images.") - # if args.class_prompt is None: - # raise ValueError("You must specify prompt for class images.") - - return args - - -class DreamBoothDataset(Dataset): - """ - A dataset to prepare the instance and class images with the prompts for fine-tuning the model. - It pre-processes the images and the tokenizes prompts. 
- """ - - def __init__( - self, - instance_data_root, - instance_prompt, - tokenizer, - args, - class_data_root=None, - class_prompt=None, - size=512, - center_crop=False, - ): - self.size = size - self.center_crop = center_crop - self.tokenizer = tokenizer - self.image_captions_filename = None - - self.instance_data_root = Path(instance_data_root) - if not self.instance_data_root.exists(): - raise ValueError("Instance images root doesn't exists.") - - self.instance_images_path = list(Path(instance_data_root).iterdir()) - self.num_instance_images = len(self.instance_images_path) - self.instance_prompt = instance_prompt - self._length = self.num_instance_images - - if args.image_captions_filename: - self.image_captions_filename = True - - if class_data_root is not None: - self.class_data_root = Path(class_data_root) - self.class_data_root.mkdir(parents=True, exist_ok=True) - self.class_images_path = list(self.class_data_root.iterdir()) - random.shuffle(self.class_images_path) - self.num_class_images = len(self.class_images_path) - self._length = max(self.num_class_images, self.num_instance_images) - self.class_prompt = class_prompt - else: - self.class_data_root = None - - self.image_transforms = transforms.Compose( - [ - transforms.Resize(size, interpolation=transforms.InterpolationMode.BILINEAR), - transforms.CenterCrop(size) if center_crop else transforms.RandomCrop(size), - transforms.ToTensor(), - transforms.Normalize([0.5], [0.5]), - ] - ) - - def __len__(self): - return self._length - - def __getitem__(self, index): - example = {} - path = self.instance_images_path[index % self.num_instance_images] - instance_image = Image.open(path) - if not instance_image.mode == "RGB": - instance_image = instance_image.convert("RGB") - - instance_prompt = self.instance_prompt - - if self.image_captions_filename: - filename = Path(path).stem - pt=''.join([i for i in filename if not i.isdigit()]) - pt=pt.replace("_"," ") - pt=pt.replace("(","") - pt=pt.replace(")","") - pt=pt.replace("-","") - instance_prompt = pt - sys.stdout.write(" " +instance_prompt+" ") - sys.stdout.flush() - - - example["instance_images"] = self.image_transforms(instance_image) - example["instance_prompt_ids"] = self.tokenizer( - instance_prompt, - padding="do_not_pad", - truncation=True, - max_length=self.tokenizer.model_max_length, - ).input_ids - - if self.class_data_root: - class_image = Image.open(self.class_images_path[index % self.num_class_images]) - if not class_image.mode == "RGB": - class_image = class_image.convert("RGB") - example["class_images"] = self.image_transforms(class_image) - example["class_prompt_ids"] = self.tokenizer( - self.class_prompt, - padding="do_not_pad", - truncation=True, - max_length=self.tokenizer.model_max_length, - ).input_ids - - return example - - - -class PromptDataset(Dataset): - "A simple dataset to prepare the prompts to generate class images on multiple GPUs." 
- - def __init__(self, prompt, num_samples): - self.prompt = prompt - self.num_samples = num_samples - - def __len__(self): - return self.num_samples - - def __getitem__(self, index): - example = {} - example["prompt"] = self.prompt - example["index"] = index - return example - -class LatentsDataset(Dataset): - def __init__(self, latents_cache, text_encoder_cache): - self.latents_cache = latents_cache - self.text_encoder_cache = text_encoder_cache - - def __len__(self): - return len(self.latents_cache) - - def __getitem__(self, index): - return self.latents_cache[index], self.text_encoder_cache[index] - -def get_full_repo_name(model_id: str, organization: Optional[str] = None, token: Optional[str] = None): - if token is None: - token = HfFolder.get_token() - if organization is None: - username = whoami(token)["name"] - return f"{username}/{model_id}" - else: - return f"{organization}/{model_id}" - -def merge_two_dicts(starting_dict: dict, updater_dict: dict) -> dict: - """ - Starts from base starting dict and then adds the remaining key values from updater replacing the values from - the first starting/base dict with the second updater dict. - - For later: how does d = {**d1, **d2} replace collision? - - :param starting_dict: - :param updater_dict: - :return: - """ - new_dict: dict = starting_dict.copy() # start with keys and values of starting_dict - new_dict.update(updater_dict) # modifies starting_dict with keys and values of updater_dict - return new_dict - -def merge_args(args1: argparse.Namespace, args2: argparse.Namespace) -> argparse.Namespace: - """ - - ref: https://stackoverflow.com/questions/56136549/how-can-i-merge-two-argparse-namespaces-in-python-2-x - :param args1: - :param args2: - :return: - """ - # - the merged args - # The vars() function returns the __dict__ attribute to values of the given object e.g {field:value}. - merged_key_values_for_namespace: dict = merge_two_dicts(vars(args1), vars(args2)) - args = argparse.Namespace(**merged_key_values_for_namespace) - return args - -def run_training(args_imported): - args_default = parse_args() - args = merge_args(args_default, args_imported) - print(args) - logging_dir = Path(args.output_dir, args.logging_dir) - i=args.save_starting_step - accelerator = Accelerator( - gradient_accumulation_steps=args.gradient_accumulation_steps, - mixed_precision=args.mixed_precision, - log_with="tensorboard", - logging_dir=logging_dir, - ) - - # Currently, it's not possible to do gradient accumulation when training two models with accelerate.accumulate - # This will be enabled soon in accelerate. For now, we don't allow gradient accumulation when training two models. - # TODO (patil-suraj): Remove this check when gradient accumulation with two models is enabled in accelerate. - if args.train_text_encoder and args.gradient_accumulation_steps > 1 and accelerator.num_processes > 1: - raise ValueError( - "Gradient accumulation is not supported when training the text encoder in distributed training. " - "Please set gradient_accumulation_steps to 1. This feature will be supported in the future." 
- ) - - if args.seed is not None: - set_seed(args.seed) - - if args.with_prior_preservation: - class_images_dir = Path(args.class_data_dir) - if not class_images_dir.exists(): - class_images_dir.mkdir(parents=True) - cur_class_images = len(list(class_images_dir.iterdir())) - - if cur_class_images < args.num_class_images: - torch_dtype = torch.float16 if accelerator.device.type == "cuda" else torch.float32 - pipeline = StableDiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, torch_dtype=torch_dtype - ) - pipeline.set_progress_bar_config(disable=True) - - num_new_images = args.num_class_images - cur_class_images - logger.info(f"Number of class images to sample: {num_new_images}.") - - sample_dataset = PromptDataset(args.class_prompt, num_new_images) - sample_dataloader = torch.utils.data.DataLoader(sample_dataset, batch_size=args.sample_batch_size) - - sample_dataloader = accelerator.prepare(sample_dataloader) - pipeline.to(accelerator.device) - - for example in tqdm( - sample_dataloader, desc="Generating class images", disable=not accelerator.is_local_main_process - ): - with torch.autocast("cuda"): - images = pipeline(example["prompt"]).images - - for i, image in enumerate(images): - image.save(class_images_dir / f"{example['index'][i] + cur_class_images}.jpg") - - del pipeline - if torch.cuda.is_available(): - torch.cuda.empty_cache() - - # Handle the repository creation - if accelerator.is_main_process: - if args.push_to_hub: - if args.hub_model_id is None: - repo_name = get_full_repo_name(Path(args.output_dir).name, token=args.hub_token) - else: - repo_name = args.hub_model_id - repo = Repository(args.output_dir, clone_from=repo_name) - - with open(os.path.join(args.output_dir, ".gitignore"), "w+") as gitignore: - if "step_*" not in gitignore: - gitignore.write("step_*\n") - if "epoch_*" not in gitignore: - gitignore.write("epoch_*\n") - elif args.output_dir is not None: - os.makedirs(args.output_dir, exist_ok=True) - - # Load the tokenizer - if args.tokenizer_name: - tokenizer = CLIPTokenizer.from_pretrained(args.tokenizer_name) - elif args.pretrained_model_name_or_path: - tokenizer = CLIPTokenizer.from_pretrained(args.pretrained_model_name_or_path, subfolder="tokenizer") - - # Load models and create wrapper for stable diffusion - if args.train_only_unet: - if os.path.exists(str(args.output_dir+"/text_encoder_trained")): - text_encoder = CLIPTextModel.from_pretrained(args.output_dir, subfolder="text_encoder_trained") - elif os.path.exists(str(args.output_dir+"/text_encoder")): - text_encoder = CLIPTextModel.from_pretrained(args.output_dir, subfolder="text_encoder") - else: - text_encoder = CLIPTextModel.from_pretrained(args.pretrained_model_name_or_path, subfolder="text_encoder") - else: - text_encoder = CLIPTextModel.from_pretrained(args.pretrained_model_name_or_path, subfolder="text_encoder") - vae = AutoencoderKL.from_pretrained(args.pretrained_model_name_or_path, subfolder="vae") - unet = UNet2DConditionModel.from_pretrained(args.pretrained_model_name_or_path, subfolder="unet") - if is_xformers_available(): - try: - print("Enabling memory efficient attention with xformers...") - unet.enable_xformers_memory_efficient_attention() - except Exception as e: - logger.warning( - f"Could not enable memory efficient attention. 
Make sure xformers is installed correctly and a GPU is available: {e}" - ) - vae.requires_grad_(False) - if not args.train_text_encoder: - text_encoder.requires_grad_(False) - - if args.gradient_checkpointing: - unet.enable_gradient_checkpointing() - if args.train_text_encoder: - text_encoder.gradient_checkpointing_enable() - - if args.scale_lr: - args.learning_rate = ( - args.learning_rate * args.gradient_accumulation_steps * args.train_batch_size * accelerator.num_processes - ) - - # Use 8-bit Adam for lower memory usage or to fine-tune the model in 16GB GPUs - if args.use_8bit_adam: - try: - import bitsandbytes as bnb - except ImportError: - raise ImportError( - "To use 8-bit Adam, please install the bitsandbytes library: `pip install bitsandbytes`." - ) - - optimizer_class = bnb.optim.AdamW8bit - else: - optimizer_class = torch.optim.AdamW - - params_to_optimize = ( - itertools.chain(unet.parameters(), text_encoder.parameters()) if args.train_text_encoder else unet.parameters() - ) - optimizer = optimizer_class( - params_to_optimize, - lr=args.learning_rate, - betas=(args.adam_beta1, args.adam_beta2), - weight_decay=args.adam_weight_decay, - eps=args.adam_epsilon, - ) - - noise_scheduler = DDPMScheduler.from_config(args.pretrained_model_name_or_path, subfolder="scheduler") - - train_dataset = DreamBoothDataset( - instance_data_root=args.instance_data_dir, - instance_prompt=args.instance_prompt, - class_data_root=args.class_data_dir if args.with_prior_preservation else None, - class_prompt=args.class_prompt, - tokenizer=tokenizer, - size=args.resolution, - center_crop=args.center_crop, - args=args, - ) - - def collate_fn(examples): - input_ids = [example["instance_prompt_ids"] for example in examples] - pixel_values = [example["instance_images"] for example in examples] - - # Concat class and instance examples for prior preservation. - # We do this to avoid doing two forward passes. - if args.with_prior_preservation: - input_ids += [example["class_prompt_ids"] for example in examples] - pixel_values += [example["class_images"] for example in examples] - - pixel_values = torch.stack(pixel_values) - pixel_values = pixel_values.to(memory_format=torch.contiguous_format).float() - - input_ids = tokenizer.pad({"input_ids": input_ids}, padding=True, return_tensors="pt").input_ids - - batch = { - "input_ids": input_ids, - "pixel_values": pixel_values, - } - return batch - - train_dataloader = torch.utils.data.DataLoader( - train_dataset, batch_size=args.train_batch_size, shuffle=True, collate_fn=collate_fn - ) - - # Scheduler and math around the number of training steps. 
- overrode_max_train_steps = False - num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) - if args.max_train_steps is None: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - overrode_max_train_steps = True - - lr_scheduler = get_scheduler( - args.lr_scheduler, - optimizer=optimizer, - num_warmup_steps=args.lr_warmup_steps * args.gradient_accumulation_steps, - num_training_steps=args.max_train_steps * args.gradient_accumulation_steps, - ) - - if args.train_text_encoder: - unet, text_encoder, optimizer, train_dataloader, lr_scheduler = accelerator.prepare( - unet, text_encoder, optimizer, train_dataloader, lr_scheduler - ) - else: - unet, optimizer, train_dataloader, lr_scheduler = accelerator.prepare( - unet, optimizer, train_dataloader, lr_scheduler - ) - - weight_dtype = torch.float32 - if args.mixed_precision == "fp16": - weight_dtype = torch.float16 - elif args.mixed_precision == "bf16": - weight_dtype = torch.bfloat16 - - # Move text_encode and vae to gpu. - # For mixed precision training we cast the text_encoder and vae weights to half-precision - # as these models are only used for inference, keeping weights in full precision is not required. - vae.to(accelerator.device, dtype=weight_dtype) - if not args.train_text_encoder: - text_encoder.to(accelerator.device, dtype=weight_dtype) - - - if args.cache_latents: - latents_cache = [] - text_encoder_cache = [] - for batch in tqdm(train_dataloader, desc="Caching latents"): - with torch.no_grad(): - batch["pixel_values"] = batch["pixel_values"].to(accelerator.device, non_blocking=True, dtype=weight_dtype) - batch["input_ids"] = batch["input_ids"].to(accelerator.device, non_blocking=True) - latents_cache.append(vae.encode(batch["pixel_values"]).latent_dist) - if args.train_text_encoder: - text_encoder_cache.append(batch["input_ids"]) - else: - text_encoder_cache.append(text_encoder(batch["input_ids"])[0]) - train_dataset = LatentsDataset(latents_cache, text_encoder_cache) - train_dataloader = torch.utils.data.DataLoader(train_dataset, batch_size=1, collate_fn=lambda x: x, shuffle=True) - - del vae - #if not args.train_text_encoder: - # del text_encoder - if torch.cuda.is_available(): - torch.cuda.empty_cache() - - # We need to recalculate our total training steps as the size of the training dataloader may have changed. - num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) - if overrode_max_train_steps: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - # Afterwards we recalculate our number of training epochs - args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch) - - # We need to initialize the trackers we use, and also store our configuration. - # The trackers initializes automatically on the main process. - if accelerator.is_main_process: - accelerator.init_trackers("dreambooth", config=vars(args)) - - def bar(prg): - br='|'+'█' * prg + ' ' * (25-prg)+'|' - return br - - # Train! - total_batch_size = args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps - - logger.info("***** Running training *****") - logger.info(f" Num examples = {len(train_dataset)}") - logger.info(f" Num batches each epoch = {len(train_dataloader)}") - logger.info(f" Num Epochs = {args.num_train_epochs}") - logger.info(f" Instantaneous batch size per device = {args.train_batch_size}") - logger.info(f" Total train batch size (w. 
parallel, distributed & accumulation) = {total_batch_size}") - logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}") - logger.info(f" Total optimization steps = {args.max_train_steps}") - # Only show the progress bar once on each machine. - progress_bar = tqdm(range(args.max_train_steps), disable=not accelerator.is_local_main_process) - global_step = 0 - - for epoch in range(args.num_train_epochs): - unet.train() - if args.train_text_encoder: - text_encoder.train() - for step, batch in enumerate(train_dataloader): - with accelerator.accumulate(unet): - # Convert images to latent space - with torch.no_grad(): - if args.cache_latents: - latents_dist = batch[0][0] - else: - latents_dist = vae.encode(batch["pixel_values"].to(dtype=weight_dtype)).latent_dist - latents = latents_dist.sample() * 0.18215 - - # Sample noise that we'll add to the latents - noise = torch.randn_like(latents) - bsz = latents.shape[0] - # Sample a random timestep for each image - timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps, (bsz,), device=latents.device) - timesteps = timesteps.long() - - # Add noise to the latents according to the noise magnitude at each timestep - # (this is the forward diffusion process) - noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps) - - # Get the text embedding for conditioning - if(args.cache_latents): - if args.train_text_encoder: - encoder_hidden_states = text_encoder(batch[0][1])[0] - else: - encoder_hidden_states = batch[0][1] - else: - encoder_hidden_states = text_encoder(batch["input_ids"])[0] - - # Predict the noise residual - model_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample - - # Get the target for loss depending on the prediction type - if noise_scheduler.config.prediction_type == "epsilon": - target = noise - elif noise_scheduler.config.prediction_type == "v_prediction": - target = noise_scheduler.get_velocity(latents, noise, timesteps) - else: - raise ValueError(f"Unknown prediction type {noise_scheduler.config.prediction_type}") - - if args.with_prior_preservation: - # Chunk the noise and model_pred into two parts and compute the loss on each part separately. - model_pred, model_pred_prior = torch.chunk(model_pred, 2, dim=0) - target, target_prior = torch.chunk(target, 2, dim=0) - - # Compute instance loss - loss = F.mse_loss(model_pred.float(), target.float(), reduction="none").mean([1, 2, 3]).mean() - - # Compute prior loss - prior_loss = F.mse_loss(model_pred_prior.float(), target_prior.float(), reduction="mean") - - # Add the prior loss to the instance loss. 
- loss = loss + args.prior_loss_weight * prior_loss - else: - loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean") - - accelerator.backward(loss) - if accelerator.sync_gradients: - params_to_clip = ( - itertools.chain(unet.parameters(), text_encoder.parameters()) - if args.train_text_encoder - else unet.parameters() - ) - accelerator.clip_grad_norm_(params_to_clip, args.max_grad_norm) - optimizer.step() - lr_scheduler.step() - optimizer.zero_grad() - - # Checks if the accelerator has performed an optimization step behind the scenes - if accelerator.sync_gradients: - progress_bar.update(1) - global_step += 1 - - fll=round((global_step*100)/args.max_train_steps) - fll=round(fll/4) - pr=bar(fll) - - logs = {"loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0]} - progress_bar.set_postfix(**logs) - progress_bar.set_description_str("Progress:"+pr) - accelerator.log(logs, step=global_step) - - if global_step >= args.max_train_steps: - break - - if args.train_text_encoder and global_step == args.stop_text_encoder_training and global_step >= 30: - if accelerator.is_main_process: - print(" " +" Freezing the text_encoder ..."+" ") - frz_dir=args.output_dir + "/text_encoder_frozen" - if os.path.exists(frz_dir): - subprocess.call('rm -r '+ frz_dir, shell=True) - os.mkdir(frz_dir) - pipeline = StableDiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, - unet=accelerator.unwrap_model(unet), - text_encoder=accelerator.unwrap_model(text_encoder), - ) - pipeline.text_encoder.save_pretrained(frz_dir) - - if args.save_n_steps >= 200: - if global_step < args.max_train_steps and global_step+1==i: - ckpt_name = "_step_" + str(global_step+1) - save_dir = Path(args.output_dir+ckpt_name) - save_dir=str(save_dir) - save_dir=save_dir.replace(" ", "_") - if not os.path.exists(save_dir): - os.mkdir(save_dir) - inst=save_dir[16:] - inst=inst.replace(" ", "_") - print(" SAVING CHECKPOINT: "+args.Session_dir+"/"+inst+".ckpt") - # Create the pipeline using the trained modules and save it. - if accelerator.is_main_process: - pipeline = StableDiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, - unet=accelerator.unwrap_model(unet), - text_encoder=accelerator.unwrap_model(text_encoder), - ) - pipeline.save_pretrained(save_dir) - frz_dir=args.output_dir + "/text_encoder_frozen" - if args.train_text_encoder and os.path.exists(frz_dir): - subprocess.call('rm -r '+save_dir+'/text_encoder/*.*', shell=True) - subprocess.call('cp -f '+frz_dir +'/*.* '+ save_dir+'/text_encoder', shell=True) - chkpth=args.Session_dir+"/"+inst+".ckpt" - subprocess.call('python /content/diffusers/scripts/convert_diffusers_to_original_stable_diffusion.py --model_path ' + save_dir + ' --checkpoint_path ' + chkpth + ' --half', shell=True) - subprocess.call('rm -r '+ save_dir, shell=True) - i=i+args.save_n_steps - - accelerator.wait_for_everyone() - - # Create the pipeline using using the trained modules and save it. 
- if accelerator.is_main_process: - if args.dump_only_text_encoder: - txt_dir=args.output_dir + "/text_encoder_trained" - if not os.path.exists(txt_dir): - os.mkdir(txt_dir) - pipeline = StableDiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, - unet=accelerator.unwrap_model(unet), - text_encoder=accelerator.unwrap_model(text_encoder), - ) - pipeline.text_encoder.save_pretrained(txt_dir) - - elif args.train_only_unet: - pipeline = StableDiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, - unet=accelerator.unwrap_model(unet), - text_encoder=accelerator.unwrap_model(text_encoder), - ) - pipeline.save_pretrained(args.output_dir) - txt_dir=args.output_dir + "/text_encoder_trained" - subprocess.call('rm -r '+txt_dir, shell=True) - - else: - pipeline = StableDiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, - unet=accelerator.unwrap_model(unet), - text_encoder=accelerator.unwrap_model(text_encoder), - ) - frz_dir=args.output_dir + "/text_encoder_frozen" - pipeline.save_pretrained(args.output_dir) - if args.train_text_encoder and os.path.exists(frz_dir): - subprocess.call('mv -f '+frz_dir +'/*.* '+ args.output_dir+'/text_encoder', shell=True) - subprocess.call('rm -r '+ frz_dir, shell=True) - - if args.push_to_hub: - repo.push_to_hub(commit_message="End of training", blocking=False, auto_lfs_prune=True) - - accelerator.end_training() - del pipeline - torch.cuda.empty_cache() - gc.collect() -if __name__ == "__main__": - pass - #main() - diff --git a/spaces/huggan/sefa/models/__init__.py b/spaces/huggan/sefa/models/__init__.py deleted file mode 100644 index 537286b4969c2464e9cbdbaff3cb272332223875..0000000000000000000000000000000000000000 --- a/spaces/huggan/sefa/models/__init__.py +++ /dev/null @@ -1,114 +0,0 @@ -# python3.7 -"""Collects all available models together.""" - -from .model_zoo import MODEL_ZOO -from .pggan_generator import PGGANGenerator -from .pggan_discriminator import PGGANDiscriminator -from .stylegan_generator import StyleGANGenerator -from .stylegan_discriminator import StyleGANDiscriminator -from .stylegan2_generator import StyleGAN2Generator -from .stylegan2_discriminator import StyleGAN2Discriminator - -__all__ = [ - 'MODEL_ZOO', 'PGGANGenerator', 'PGGANDiscriminator', 'StyleGANGenerator', - 'StyleGANDiscriminator', 'StyleGAN2Generator', 'StyleGAN2Discriminator', - 'build_generator', 'build_discriminator', 'build_model' -] - -_GAN_TYPES_ALLOWED = ['pggan', 'stylegan', 'stylegan2'] -_MODULES_ALLOWED = ['generator', 'discriminator'] - - -def build_generator(gan_type, resolution, **kwargs): - """Builds generator by GAN type. - - Args: - gan_type: GAN type to which the generator belong. - resolution: Synthesis resolution. - **kwargs: Additional arguments to build the generator. - - Raises: - ValueError: If the `gan_type` is not supported. - NotImplementedError: If the `gan_type` is not implemented. - """ - if gan_type not in _GAN_TYPES_ALLOWED: - raise ValueError(f'Invalid GAN type: `{gan_type}`!\n' - f'Types allowed: {_GAN_TYPES_ALLOWED}.') - - if gan_type == 'pggan': - return PGGANGenerator(resolution, **kwargs) - if gan_type == 'stylegan': - return StyleGANGenerator(resolution, **kwargs) - if gan_type == 'stylegan2': - return StyleGAN2Generator(resolution, **kwargs) - raise NotImplementedError(f'Unsupported GAN type `{gan_type}`!') - - -def build_discriminator(gan_type, resolution, **kwargs): - """Builds discriminator by GAN type. - - Args: - gan_type: GAN type to which the discriminator belong. 
- resolution: Synthesis resolution. - **kwargs: Additional arguments to build the discriminator. - - Raises: - ValueError: If the `gan_type` is not supported. - NotImplementedError: If the `gan_type` is not implemented. - """ - if gan_type not in _GAN_TYPES_ALLOWED: - raise ValueError(f'Invalid GAN type: `{gan_type}`!\n' - f'Types allowed: {_GAN_TYPES_ALLOWED}.') - - if gan_type == 'pggan': - return PGGANDiscriminator(resolution, **kwargs) - if gan_type == 'stylegan': - return StyleGANDiscriminator(resolution, **kwargs) - if gan_type == 'stylegan2': - return StyleGAN2Discriminator(resolution, **kwargs) - raise NotImplementedError(f'Unsupported GAN type `{gan_type}`!') - - -def build_model(gan_type, module, resolution, **kwargs): - """Builds a GAN module (generator/discriminator/etc). - - Args: - gan_type: GAN type to which the model belong. - module: GAN module to build, such as generator or discrimiantor. - resolution: Synthesis resolution. - **kwargs: Additional arguments to build the discriminator. - - Raises: - ValueError: If the `module` is not supported. - NotImplementedError: If the `module` is not implemented. - """ - if module not in _MODULES_ALLOWED: - raise ValueError(f'Invalid module: `{module}`!\n' - f'Modules allowed: {_MODULES_ALLOWED}.') - - if module == 'generator': - return build_generator(gan_type, resolution, **kwargs) - if module == 'discriminator': - return build_discriminator(gan_type, resolution, **kwargs) - raise NotImplementedError(f'Unsupported module `{module}`!') - - -def parse_gan_type(module): - """Parses GAN type of a given module. - - Args: - module: The module to parse GAN type from. - - Returns: - A string, indicating the GAN type. - - Raises: - ValueError: If the GAN type is unknown. - """ - if isinstance(module, (PGGANGenerator, PGGANDiscriminator)): - return 'pggan' - if isinstance(module, (StyleGANGenerator, StyleGANDiscriminator)): - return 'stylegan' - if isinstance(module, (StyleGAN2Generator, StyleGAN2Discriminator)): - return 'stylegan2' - raise ValueError(f'Unable to parse GAN type from type `{type(module)}`!') diff --git a/spaces/humeur/Swedish-Whisper-from-Youtube/app.py b/spaces/humeur/Swedish-Whisper-from-Youtube/app.py deleted file mode 100644 index 4c8b34cb86d58765b8c115bea5aa47b8b02a3b04..0000000000000000000000000000000000000000 --- a/spaces/humeur/Swedish-Whisper-from-Youtube/app.py +++ /dev/null @@ -1,52 +0,0 @@ -import gradio as gr -from pytube import YouTube -from transformers import pipeline - -class GradioInference(): - def __init__(self): - self.transcribe_model = pipeline(model='humeur/lab2_id2223') - self.translate_model = pipeline("translation_SV_to_EN", model="Helsinki-NLP/opus-mt-sv-en") - self.yt = None - - def __call__(self, link): - if self.yt is None: - self.yt = YouTube(link) - path = self.yt.streams.filter(only_audio=True)[0].download(filename="tmp.mp4") - results = self.transcribe_model(path) - results = self.translate_model(results["text"]) - return results[0]['translation_text'] - - def populate_metadata(self, link): - self.yt = YouTube(link) - return self.yt.thumbnail_url, self.yt.title - -gio = GradioInference() -title="SWED->EN Youtube Transcriber (Whisper)" -description="Speech to text transcription of Youtube videos using OpenAI's Whisper finetunned for Swedish to English translation" - -block = gr.Blocks() -with block: - gr.HTML( - f""" -
-     {title}
-     {description}
    - """ - ) - with gr.Group(): - with gr.Box(): - link = gr.Textbox(label="YouTube Link") - title = gr.Label(label="Video Title") - with gr.Row().style(equal_height=True): - img = gr.Image(label="Thumbnail") - text = gr.Textbox(label="Transcription", placeholder="Transcription Output", lines=10) - with gr.Row().style(equal_height=True): - btn = gr.Button("Transcribe") - btn.click(gio, inputs=[link], outputs=[text]) - link.change(gio.populate_metadata, inputs=[link], outputs=[img, title]) -block.launch() diff --git a/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/arcface_torch/configs/wf42m_pfc03_32gpu_r50.py b/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/arcface_torch/configs/wf42m_pfc03_32gpu_r50.py deleted file mode 100644 index a44a5d771e17ecbeffe3437f3500e9d0c9dcc105..0000000000000000000000000000000000000000 --- a/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/arcface_torch/configs/wf42m_pfc03_32gpu_r50.py +++ /dev/null @@ -1,27 +0,0 @@ -from easydict import EasyDict as edict - -# make training faster -# our RAM is 256G -# mount -t tmpfs -o size=140G tmpfs /train_tmp - -config = edict() -config.margin_list = (1.0, 0.0, 0.4) -config.network = "r50" -config.resume = False -config.output = None -config.embedding_size = 512 -config.sample_rate = 0.3 -config.fp16 = True -config.momentum = 0.9 -config.weight_decay = 5e-4 -config.batch_size = 128 -config.lr = 0.4 -config.verbose = 2000 -config.dali = False - -config.rec = "/train_tmp/WebFace42M" -config.num_classes = 2059906 -config.num_image = 42474557 -config.num_epoch = 20 -config.warmup_epoch = config.num_epoch // 10 -config.val_targets = ["lfw", "cfp_fp", "agedb_30"] diff --git a/spaces/iamstolas/STOLAS/src/components/providers.tsx b/spaces/iamstolas/STOLAS/src/components/providers.tsx deleted file mode 100644 index 892226412d80fe0b05211911b9e245cd22876460..0000000000000000000000000000000000000000 --- a/spaces/iamstolas/STOLAS/src/components/providers.tsx +++ /dev/null @@ -1,15 +0,0 @@ -'use client' - -import * as React from 'react' -import { ThemeProvider as NextThemesProvider } from 'next-themes' -import { ThemeProviderProps } from 'next-themes/dist/types' - -import { TooltipProvider } from '@/components/ui/tooltip' - -export function Providers({ children, ...props }: ThemeProviderProps) { - return ( - - {children} - - ) -} diff --git a/spaces/imseldrith/BotX/Uploader/commands.py b/spaces/imseldrith/BotX/Uploader/commands.py deleted file mode 100644 index 72bcac187ca81081d25fe79d31f39206c049f2e9..0000000000000000000000000000000000000000 --- a/spaces/imseldrith/BotX/Uploader/commands.py +++ /dev/null @@ -1,66 +0,0 @@ -# MIT License - -# Copyright (c) 2022 Hash Minner - -# Permission is hereby granted, free of charge, to any person obtaining a copy -# of this software and associated documentation files (the "Software"), to deal -# in the Software without restriction, including without limitation the rights -# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -# copies of the Software, and to permit persons to whom the Software is -# furnished to do so, subject to the following conditions: - -# The above copyright notice and this permission notice shall be included in all -# copies or substantial portions of the Software. - -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE -# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -# SOFTWARE - -import os - -from pyrogram import Client, filters -from pyrogram.types import Message -from Uploader.script import Translation - -if bool(os.environ.get("WEBHOOK")): - from Uploader.config import Config -else: - from sample_config import Config - - -@Client.on_message( - filters.command("start") & filters.private, -) -async def start_bot(_, m: Message): - return await m.reply_text( - Translation.START_TEXT.format(m.from_user.first_name), - reply_markup=Translation.START_BUTTONS, - disable_web_page_preview=True, - quote=True, - ) - - -@Client.on_message( - filters.command("help") & filters.private, -) -async def help_bot(_, m: Message): - return await m.reply_text( - Translation.HELP_TEXT, - reply_markup=Translation.HELP_BUTTONS, - disable_web_page_preview=True, - ) - - -@Client.on_message( - filters.command("about") & filters.private, -) -async def aboutme(_, m: Message): - return await m.reply_text( - Translation.ABOUT_TEXT, - reply_markup=Translation.ABOUT_BUTTONS, - disable_web_page_preview=True, - ) diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/3D-Album Commercial Suite 3 Incl License Key __LINK__.md b/spaces/inplisQlawa/anything-midjourney-v4-1/3D-Album Commercial Suite 3 Incl License Key __LINK__.md deleted file mode 100644 index a27c2851aebf3435cabb23147889f03dbd1f0f45..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/3D-Album Commercial Suite 3 Incl License Key __LINK__.md +++ /dev/null @@ -1,19 +0,0 @@ -
    -

    3D-Album Commercial Suite 3: A Creative and Profitable Way to Make Photo Albums

    -

If you are looking for software that can help you create amazing digital albums with 3D effects, look no further than 3D-Album Commercial Suite 3. This software lets you import your photos and videos, edit them, add backgrounds, music, text, and transitions, and render them into stunning 3D presentations that can be viewed on any computer, TV, or DVD player. You can also use this software to make slide shows, screen savers, and web galleries.

    -

    3D-Album Commercial Suite 3 is not just a fun and easy tool for personal use. It is also a powerful and profitable tool for professional use. You can use this software to create photo albums for your clients, such as portrait or wedding photographers, realtors, or corporate executives. You can also sell your albums online or offline, as you can lock and encrypt them with your own key and password. You can also add your own logo and contact information to promote your business.

    -

    3D-Album Commercial Suite 3 Incl License Key


    Download Zip === https://urlin.us/2uExBU



    -

    3D-Album Commercial Suite 3 comes with a variety of templates and styles that you can use as a starting point for your albums. You can choose from different themes, such as wedding, travel, sports, art, or fantasy. You can also customize your albums with your own creativity and imagination. You can change the camera angles, lighting effects, animations, and interactions. You can also add voiceovers, sound effects, and scrolling text headlines to make your albums more engaging and informative.

    -

    3D-Album Commercial Suite 3 requires a Windows system with a video card that supports 3D graphics. It is compatible with both 32-bit and 64-bit operating systems. It costs $299 and you can download it from the official website: www.3d-album.com. You can also watch some sample albums and tutorials on YouTube to see how this software works.

    -

3D-Album Commercial Suite 3 is a program that will let you create photo albums that are not only beautiful but also profitable. It will let you unleash your creativity and impress your clients and viewers. It will let you enjoy the art of making photo albums in a whole new dimension.

    If you want to learn more about 3D-Album Commercial Suite 3, here are some tips and tricks that you can use to make your albums more attractive and professional.

    -
      -
    • Use high-quality photos and videos. The better the quality of your source material, the better the result of your 3D presentation. Avoid using blurry, noisy, or distorted images and videos. You can also use the image editor in the software to crop, rotate, resize, adjust brightness and contrast, and apply filters to your photos.
-
    • Choose a suitable template and style. The software offers a wide range of templates and styles that you can use for different purposes and occasions. You can browse through them and preview them before applying them to your album. You can also mix and match different styles within the same album to create a more dynamic and diverse presentation.
-
    • Customize your album settings. You can change various settings of your album, such as the resolution, frame rate, quality, duration, background music, and transition speed. You can also add your own logo and contact information to the album. You can access these settings by clicking on the Album tab in the software.
-
    • Add interactivity and narration. You can make your album more interactive and informative by adding voiceovers, sound effects, text bubbles, and scrolling headlines. You can also enable the viewer to pause, play, skip, or rewind the presentation. You can access these features by clicking on the Edit tab in the software.
-
    • Render and share your album. Once you are satisfied with your album, you can render it into a video file that can be played on any computer, TV, or DVD player. You can also upload it to YouTube or other online platforms. You can also burn it to a CD or DVD disc. You can access these options by clicking on the Output tab in the software.
-
    -

    These are some of the basic steps that you can follow to create a 3D album with 3D-Album Commercial Suite 3. Of course, you can also experiment with different features and options to create your own unique and original albums. The only limit is your imagination.

    -

    d5da3c52bf
    -
    -
    \ No newline at end of file diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Battle Of Empires 1914-1918 Honor Of The Empire RePack.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Battle Of Empires 1914-1918 Honor Of The Empire RePack.md deleted file mode 100644 index 62dd99c9b8a16f15fb91cf9629769c5ea3ae1674..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Battle Of Empires 1914-1918 Honor Of The Empire RePack.md +++ /dev/null @@ -1,94 +0,0 @@ - ----> ServiceClient failure for DeepLeo[/ERROR]

    -

    Battle Of Empires 1914-1918 Honor Of The Empire RePack


    Download 🔗 https://urlin.us/2uEyDc



    - - -If you are a fan of historical strategy games, you might want to check out Battle Of Empires 1914-1918 Honor Of The Empire RePack. This is a downloadable content (DLC) for the base game Battle Of Empires 1914-1918, which is a story-driven real-time tactics game that re-lives some of the bloodiest battles of the First World War. - -## What is Battle Of Empires 1914-1918 Honor Of The Empire RePack? - -Battle Of Empires 1914-1918 Honor Of The Empire RePack is a DLC that adds a new campaign focused on the Austro-Hungarian Empire, one of the central powers that fought against the Entente countries in WWI. The DLC features 10 new missions that cover different aspects of the war, such as the siege of Przemyśl, the battles on the Isonzo river, the Tyrolean front, and the Romanian campaign. You will command various units of the Austro-Hungarian army, including infantry, cavalry, artillery, and armored cars, and face off against enemies like Russia, Serbia, Italy, and Romania. - -## What are the features of Battle Of Empires 1914-1918 Honor Of The Empire RePack? - -Battle Of Empires 1914-1918 Honor Of The Empire RePack offers a realistic and immersive WWI experience, with detailed graphics, authentic sounds, and historical accuracy. The game features: - -- A dynamic and challenging gameplay that requires you to use different tactics and strategies depending on the terrain, weather, and enemy behavior. -- A variety of weapons and equipment that were used in WWI, such as rifles, machine guns, grenades, gas masks, barbed wire, mines, and flamethrowers. -- A realistic damage system that affects both your units and the environment. You can destroy buildings, bridges, trees, and vehicles with your firepower or explosives. -- A multiplayer mode that allows you to play with or against other players online or via LAN. You can choose from different modes and maps, and customize your army with different skins and flags. - -## How to download and install Battle Of Empires 1914-1918 Honor Of The Empire RePack? - -To download and install Battle Of Empires 1914-1918 Honor Of The Empire RePack, you need to have the base game Battle Of Empires 1914-1918 installed on your PC. You can buy the base game from Steam for $8.99 or from other online platforms. You can also buy the Deluxe Bundle that includes the base game and all the DLCs for $44.95. - -To install Battle Of Empires 1914-1918 Honor Of The Empire RePack, you need to follow these steps: - -1. Download the RePack file from a trusted source. You can find it on Repacklab.com or other websites that offer free pre-installed video games. -2. Extract the RePack file using WinRAR or 7-Zip. -3. Run the setup.exe file and follow the instructions. -4. Enjoy playing Battle Of Empires 1914-1918 Honor Of The Empire RePack! - -## Conclusion - -Battle Of Empires 1914-1918 Honor Of The Empire RePack is a DLC that adds a new campaign focused on the Austro-Hungarian Empire in WWI. It offers a realistic and immersive WWI experience, with dynamic and challenging gameplay, a variety of weapons and equipment, a realistic damage system, and a multiplayer mode. If you are a fan of historical strategy games, you might want to check out Battle Of Empires 1914-1918 Honor Of The Empire RePack. - - -Battle Of Empires 1914-1918 Honor Of The Empire RePack is a DLC that has many positive aspects, but also some drawbacks. 
Here are some of the pros and cons of the game: - -### Pros - -- The game offers a unique perspective on WWI, focusing on the Austro-Hungarian Empire and its role in the war. -- The game has a high level of historical accuracy and realism, depicting the weapons, equipment, uniforms, and tactics of the period. -- The game has a diverse and challenging gameplay, with different scenarios, objectives, and enemies to face. -- The game has a multiplayer mode that allows you to play with or against other players online or via LAN. -- The game has a low price and a high replay value, with many missions, modes, and expansions to choose from. - -### Cons - -- The game has some technical issues and bugs that may affect the performance and stability of the game. -- The game has some outdated graphics and animations that may not appeal to some players. -- The game has a steep learning curve and a high difficulty level that may frustrate some players. -- The game has a limited voice acting and sound effects that may reduce the immersion and atmosphere of the game. -- The game has a niche appeal and may not interest players who are not fans of WWI or strategy games. - - -Battle Of Empires 1914-1918 Honor Of The Empire RePack is a DLC that requires the base game Battle Of Empires 1914-1918 to play. You can launch the game from Steam or from the desktop shortcut. Once you start the game, you can choose from different modes and options: - -- Singleplayer: This mode allows you to play the campaign missions, the skirmish missions, or the editor mode. The campaign missions follow the historical events of WWI from the perspective of the Austro-Hungarian Empire. The skirmish missions are standalone scenarios that you can customize with different settings and objectives. The editor mode allows you to create your own maps and missions using the game's tools and assets. -- Multiplayer: This mode allows you to play with or against other players online or via LAN. You can join or host a server, and choose from different modes and maps. The modes include deathmatch, team deathmatch, capture the flag, and cooperative. You can also customize your army with different skins and flags. -- Options: This menu allows you to adjust the game's settings, such as graphics, sound, controls, and language. - -To play the game, you need to use your mouse and keyboard to control your units and camera. You can select one or more units by clicking on them or dragging a box around them. You can also use hotkeys to select specific types of units, such as infantry, cavalry, artillery, etc. You can issue orders to your units by right-clicking on a location or an enemy. You can also use the command panel at the bottom of the screen to access more options, such as formations, stances, abilities, etc. - -The game's interface shows you various information and indicators, such as: - -- Minimap: This shows you a small overview of the map and your units' positions. You can click on it to move your camera to a specific location. -- Unit panel: This shows you the selected unit's name, health, morale, ammo, and status effects. You can also see the unit's inventory and equipment by clicking on the backpack icon. -- Resource panel: This shows you your available resources, such as manpower, fuel, ammo, and grenades. You can use these resources to reinforce your units or call in support. -- Message panel: This shows you important messages and objectives during the game. You can also access the pause menu by clicking on the gear icon. 
- -The game's mechanics are based on realism and historical accuracy. You need to consider various factors when playing the game, such as: - -- Terrain: The terrain affects your units' movement speed, visibility, cover, and accuracy. You need to use different tactics depending on the terrain type, such as hills, forests, rivers, bridges, etc. -- Weather: The weather affects your units' performance and visibility. You need to adapt to different weather conditions, such as rain, snow, fog, etc. -- Damage: The damage system is realistic and dynamic. Your units can suffer from different types of injuries and wounds that affect their health and abilities. You can heal your units with medics or bandages. Your units can also die from bleeding out or critical hits. -- Morale: The morale system affects your units' behavior and effectiveness. Your units' morale depends on various factors, such as casualties, enemy fire, friendly fire, etc. Your units can panic or rout if their morale is too low. You can boost your units' morale with officers or flags. -- Stealth: The stealth system allows you to use different tactics to surprise or avoid your enemies. You can use camouflage or cover to hide your units from enemy sight. You can also use sabotage or diversion to distract or disable your enemies. - -Battle Of Empires 1914-1918 Honor Of The Empire RePack is a DLC that offers a realistic and immersive WWI experience. You need to use different tactics and strategies depending on the situation and scenario. You need to manage your resources and units carefully and efficiently. You need to overcome various challenges and obstacles that the war presents. You need to fight for the glory of Austria-Hungary in WWI. - - -Battle Of Empires 1914-1918 Honor Of The Empire RePack is a DLC that requires skill and strategy to play. You need to master the game's mechanics and features to succeed in the missions and scenarios. Here are some of the best tips and tricks for the game: - -- Use cover and concealment: The game's realism and damage system make your units vulnerable to enemy fire. You need to use cover and concealment to protect your units from harm. Cover reduces the damage you take from bullets and explosions, while concealment reduces the chance of being spotted by enemies. You can use buildings, trees, trenches, craters, and other terrain features as cover and concealment. You can also use camouflage or smoke grenades to hide your units from enemy sight. -- Use different formations and stances: The game's gameplay and tactics depend on the terrain, weather, and enemy behavior. You need to use different formations and stances to adapt to different situations. Formations affect your units' movement speed, visibility, and accuracy. You can choose from line, column, wedge, or square formations. Stances affect your units' posture, visibility, and accuracy. You can choose from stand, crouch, or prone stances. You can also use the command panel to access more options, such as advance, retreat, hold fire, etc. -- Use different weapons and equipment: The game's variety of weapons and equipment allow you to use different tactics and strategies depending on the situation and scenario. You can use rifles, machine guns, grenades, gas masks, barbed wire, mines, and flamethrowers. You can also use artillery and armored cars for support. You need to manage your ammo and grenades carefully and efficiently. You can resupply your units with ammo crates or trucks. 
You can also loot enemy corpses or vehicles for weapons and equipment. -- Use sabotage and stealth: The game's stealth system allows you to use different tactics to surprise or avoid your enemies. You can use sabotage or diversion to distract or disable your enemies. You can use explosives or fire to destroy enemy buildings, bridges, trees, or vehicles. You can also use wire cutters or shovels to cut through enemy barbed wire or dig under enemy trenches. You can also use snipers or scouts to spot or eliminate enemy targets from a distance. -- Use officers and flags: The game's morale system affects your units' behavior and effectiveness. Your units' morale depends on various factors, such as casualties, enemy fire, friendly fire, etc. Your units can panic or rout if their morale is too low. You can boost your units' morale with officers or flags. Officers can inspire your units with speeches or orders. Flags can increase your units' loyalty and pride. You can also use officers or flags to rally your units if they are fleeing or scattered. - -Battle Of Empires 1914-1918 Honor Of The Empire RePack is a DLC that offers a realistic and immersive WWI experience. You need to use different tactics and strategies depending on the situation and scenario. You need to manage your resources and units carefully and efficiently. You need to overcome various challenges and obstacles that the war presents. You need to fight for the glory of Austria-Hungary in WWI. - - ---> ServiceClient failure for DeepLeo[/ERROR]

    3cee63e6c2
    -
    -
    \ No newline at end of file diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Bonespro4crackkeygenfulldownload.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Bonespro4crackkeygenfulldownload.md deleted file mode 100644 index 40ed1f081c92772eb245ea4f033e159e9a4ec44d..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Bonespro4crackkeygenfulldownload.md +++ /dev/null @@ -1,6 +0,0 @@ -

    bonespro4crackkeygenfulldownload


    Download Zip ->>> https://urlin.us/2uExPh



    - -bonespro4crackkeygenfulldownload · TalkEnglishOfflineVersionFullDownloadFree · busou shinki battle masters 2 dlc event data 1 7z · Adroit Photo Forensics ... 1fdad05405
    -
    -
    -

    diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/CTGP Revolution V1020003 RMCEG2 WBFS NTSC USA ISO.md b/spaces/inplisQlawa/anything-midjourney-v4-1/CTGP Revolution V1020003 RMCEG2 WBFS NTSC USA ISO.md deleted file mode 100644 index 21ce5c405bd009b8feba416590b44e597c1b41d6..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/CTGP Revolution V1020003 RMCEG2 WBFS NTSC USA ISO.md +++ /dev/null @@ -1,6 +0,0 @@ -

    CTGP Revolution V1020003 RMCEG2 WBFS NTSC USA ISO


    Download File ———>>> https://urlin.us/2uEwX3



    -
-Movie info: A Flying Jatt is an upcoming Bollywood Superhero film directed by Remo D'Souza starring Tiger Shroff, ... Magnet Download; Torrent Download. 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Download Do Filme Speed Racer Dublado Via Torrent [REPACK].md b/spaces/inplisQlawa/anything-midjourney-v4-1/Download Do Filme Speed Racer Dublado Via Torrent [REPACK].md deleted file mode 100644 index 92e5ba6f617a309d0d4743358ef571d387d738f6..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Download Do Filme Speed Racer Dublado Via Torrent [REPACK].md +++ /dev/null @@ -1,12 +0,0 @@ -

    Download Do Filme Speed Racer Dublado Via Torrent


    Download File ---> https://urlin.us/2uEyrB



    -
-irc - - can someone help me - - jhonny please stop flooding - -
    -
    -

    diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Download Minecraft Windows 10 Edition Crack --bfdcm.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Download Minecraft Windows 10 Edition Crack --bfdcm.md deleted file mode 100644 index df037706b09a4a41fce17f17a36162bb57c4854e..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Download Minecraft Windows 10 Edition Crack --bfdcm.md +++ /dev/null @@ -1,13 +0,0 @@ -

    Download Minecraft Windows 10 Edition Crack --bfdcm


    Download ::: https://urlin.us/2uEylC



    - -. . . ..7 -Everyone says - the soul is not eternal, -And I'll tell you something like this: -She is eternal. That's for sure. -Eternal, like life. -And life is not endless. -But if you live -Eternal is the soul. 8a78ff9644
    -
    -
    -

    diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Kiriku E Gli Animali Selvaggi TntvillageIta Xvid Mp3 77.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Kiriku E Gli Animali Selvaggi TntvillageIta Xvid Mp3 77.md deleted file mode 100644 index 1fd3962f59645dd9602adf5c4be8dcfd77732d01..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Kiriku E Gli Animali Selvaggi TntvillageIta Xvid Mp3 77.md +++ /dev/null @@ -1,8 +0,0 @@ - -

    TomTom Central and Eastern Europe 910.4892.zip. Kiriku E Gli Animali Selvaggi TntvillageIta Xvid Mp3 77 release date 2016. https://cleanwebsoftware.org/ati-9.2-linux-32bit-download-file.html. Kiriku E Gli Animali Selvaggi TntvillageIta Xvid Mp3 77 release date 2016. 3ds max 2014 keygen full version. 3ds max 2014 keygen cracked full premium serial key with crack direct download from links provided by search engines.

    -

    https://cleanwebsoftware.org/ati-9.2-linux-32bit-download-file.html. Kiriku E Gli Animali Selvaggi TntvillageIta Xvid Mp3 77 release date 2016. 3ds max 2014 keygen full version. 3ds max 2014 keygen cracked full premium serial key with crack direct download from links provided by search engines.

    -

    Kiriku E Gli Animali Selvaggi TntvillageIta Xvid Mp3 77


    Download Zip ✶✶✶ https://urlin.us/2uEy05



    -

    reghei 538a28228e https://coub.com/stories/4239728-rar-kiriku-e-gli-animali-selvaggi-tntvillageita-xvid-mp3-77-free-keygen-pro-64bit. download lagu mp3 aku bukan jodohnya tri suaka. http://gestkenddis.yolasite.com/resources/Kiriku-E-Gli-Animali-Selvaggi-TntvillageIta-Xvid-Mp3-77.pdf.

    -

    Iron Mage Casino Slots is the latest casino slot from the reputable Aristocrat casino software company. It will make the player feel as if they have discovered an old-fashioned slot machine. It is designed to make the player feel like this is the slot machine itself. The main goal of the game is to connect three or more scatter symbols on the reels and the player will be rewarded with free spins, a cash prize and an exciting bonus game. Kiriku E Gli Animali Selvaggi TntvillageIta Xvid Mp3 77

    Download free xxx - rar-kiriku-e-gli-animali-selvaggi-tntvillageita-xvid-mp3-77-free-keygen-pro-64bit.rar.

    899543212b
    -
    -
    \ No newline at end of file diff --git a/spaces/ivotai/VITS-Umamusume-voice-synthesizer/ONNXVITS_transforms.py b/spaces/ivotai/VITS-Umamusume-voice-synthesizer/ONNXVITS_transforms.py deleted file mode 100644 index 69b6d1c4b5724a3ef61f8bc3d64fc45c5e51e270..0000000000000000000000000000000000000000 --- a/spaces/ivotai/VITS-Umamusume-voice-synthesizer/ONNXVITS_transforms.py +++ /dev/null @@ -1,196 +0,0 @@ -import torch -from torch.nn import functional as F - -import numpy as np - - -DEFAULT_MIN_BIN_WIDTH = 1e-3 -DEFAULT_MIN_BIN_HEIGHT = 1e-3 -DEFAULT_MIN_DERIVATIVE = 1e-3 - - -def piecewise_rational_quadratic_transform(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails=None, - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - - if tails is None: - spline_fn = rational_quadratic_spline - spline_kwargs = {} - else: - spline_fn = unconstrained_rational_quadratic_spline - spline_kwargs = { - 'tails': tails, - 'tail_bound': tail_bound - } - - outputs, logabsdet = spline_fn( - inputs=inputs, - unnormalized_widths=unnormalized_widths, - unnormalized_heights=unnormalized_heights, - unnormalized_derivatives=unnormalized_derivatives, - inverse=inverse, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - **spline_kwargs - ) - return outputs, logabsdet - - -def searchsorted(bin_locations, inputs, eps=1e-6): - bin_locations[..., -1] += eps - return torch.sum( - inputs[..., None] >= bin_locations, - dim=-1 - ) - 1 - - -def unconstrained_rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails='linear', - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound) - outside_interval_mask = ~inside_interval_mask - - outputs = torch.zeros_like(inputs) - logabsdet = torch.zeros_like(inputs) - - if tails == 'linear': - #unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1)) - unnormalized_derivatives_ = torch.zeros((1, 1, unnormalized_derivatives.size(2), unnormalized_derivatives.size(3)+2)) - unnormalized_derivatives_[...,1:-1] = unnormalized_derivatives - unnormalized_derivatives = unnormalized_derivatives_ - constant = np.log(np.exp(1 - min_derivative) - 1) - unnormalized_derivatives[..., 0] = constant - unnormalized_derivatives[..., -1] = constant - - outputs[outside_interval_mask] = inputs[outside_interval_mask] - logabsdet[outside_interval_mask] = 0 - else: - raise RuntimeError('{} tails are not implemented.'.format(tails)) - - outputs[inside_interval_mask], logabsdet[inside_interval_mask] = rational_quadratic_spline( - inputs=inputs[inside_interval_mask], - unnormalized_widths=unnormalized_widths[inside_interval_mask, :], - unnormalized_heights=unnormalized_heights[inside_interval_mask, :], - unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :], - inverse=inverse, - left=-tail_bound, right=tail_bound, bottom=-tail_bound, top=tail_bound, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative - ) - - return outputs, logabsdet - -def rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - left=0., right=1., bottom=0., top=1., - 
min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - if torch.min(inputs) < left or torch.max(inputs) > right: - raise ValueError('Input to a transform is not within its domain') - - num_bins = unnormalized_widths.shape[-1] - - if min_bin_width * num_bins > 1.0: - raise ValueError('Minimal bin width too large for the number of bins') - if min_bin_height * num_bins > 1.0: - raise ValueError('Minimal bin height too large for the number of bins') - - widths = F.softmax(unnormalized_widths, dim=-1) - widths = min_bin_width + (1 - min_bin_width * num_bins) * widths - cumwidths = torch.cumsum(widths, dim=-1) - cumwidths = F.pad(cumwidths, pad=(1, 0), mode='constant', value=0.0) - cumwidths = (right - left) * cumwidths + left - cumwidths[..., 0] = left - cumwidths[..., -1] = right - widths = cumwidths[..., 1:] - cumwidths[..., :-1] - - derivatives = min_derivative + F.softplus(unnormalized_derivatives) - - heights = F.softmax(unnormalized_heights, dim=-1) - heights = min_bin_height + (1 - min_bin_height * num_bins) * heights - cumheights = torch.cumsum(heights, dim=-1) - cumheights = F.pad(cumheights, pad=(1, 0), mode='constant', value=0.0) - cumheights = (top - bottom) * cumheights + bottom - cumheights[..., 0] = bottom - cumheights[..., -1] = top - heights = cumheights[..., 1:] - cumheights[..., :-1] - - if inverse: - bin_idx = searchsorted(cumheights, inputs)[..., None] - else: - bin_idx = searchsorted(cumwidths, inputs)[..., None] - - input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0] - input_bin_widths = widths.gather(-1, bin_idx)[..., 0] - - input_cumheights = cumheights.gather(-1, bin_idx)[..., 0] - delta = heights / widths - input_delta = delta.gather(-1, bin_idx)[..., 0] - - input_derivatives = derivatives.gather(-1, bin_idx)[..., 0] - input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0] - - input_heights = heights.gather(-1, bin_idx)[..., 0] - - if inverse: - a = (((inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta) - + input_heights * (input_delta - input_derivatives))) - b = (input_heights * input_derivatives - - (inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta)) - c = - input_delta * (inputs - input_cumheights) - - discriminant = b.pow(2) - 4 * a * c - assert (discriminant >= 0).all() - - root = (2 * c) / (-b - torch.sqrt(discriminant)) - outputs = root * input_bin_widths + input_cumwidths - - theta_one_minus_theta = root * (1 - root) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * root.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - root).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, -logabsdet - else: - theta = (inputs - input_cumwidths) / input_bin_widths - theta_one_minus_theta = theta * (1 - theta) - - numerator = input_heights * (input_delta * theta.pow(2) - + input_derivatives * theta_one_minus_theta) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - outputs = input_cumheights + numerator / denominator - - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * theta.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - 
theta).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, logabsdet diff --git a/spaces/jackyliang42/code-as-policies/md_logger.py b/spaces/jackyliang42/code-as-policies/md_logger.py deleted file mode 100644 index 5e496c5d3200381a7bacfd4fc1456bbc9541b0a7..0000000000000000000000000000000000000000 --- a/spaces/jackyliang42/code-as-policies/md_logger.py +++ /dev/null @@ -1,16 +0,0 @@ -class MarkdownLogger: - - def __init__(self): - self._log = '' - - def log_text(self, text): - self._log += '\n' + text + '\n' - - def log_code(self, code): - self._log += f'\n```python\n{code}\n```\n' - - def clear(self): - self._log = '' - - def get_log(self): - return self._log \ No newline at end of file diff --git a/spaces/james-oldfield/PandA/app.py b/spaces/james-oldfield/PandA/app.py deleted file mode 100644 index 2d8107a4d99fd8224aba5e916e4ca092a2904adb..0000000000000000000000000000000000000000 --- a/spaces/james-oldfield/PandA/app.py +++ /dev/null @@ -1,44 +0,0 @@ -import numpy as np -import gradio as gr -import torch -from PIL import Image -from model import Model as Model -from annotated_directions import annotated_directions -device = torch.device('cpu') - -torch.set_grad_enabled(False) -model_name = "stylegan2_ffhq1024" - -directions = list(annotated_directions[model_name].keys()) - - -def inference(seed, direction): - layer = annotated_directions[model_name][direction]['layer'] - M = Model(model_name, trunc_psi=1.0, device=device, layer=layer) - M.ranks = annotated_directions[model_name][direction]['ranks'] - - # load the checkpoint - try: - M.Us = torch.Tensor(np.load(annotated_directions[model_name][direction]['checkpoints_path'][0])).to(device) - M.Uc = torch.Tensor(np.load(annotated_directions[model_name][direction]['checkpoints_path'][1])).to(device) - except KeyError: - raise KeyError('ERROR: No directions specified in ./annotated_directions.py for this model') - - part, appearance, lam = annotated_directions[model_name][direction]['parameters'] - - Z, image, image2, part_img = M.edit_at_layer([[part]], [appearance], [lam], t=seed, Uc=M.Uc, Us=M.Us, noise=None) - - dif = np.tile(((np.mean((image - image2)**2, -1)))[:,:,None], [1,1,3]).astype(np.uint8) - - return Image.fromarray(np.concatenate([image, image2, dif], 1)) - - -demo = gr.Interface( - fn=inference, - inputs=[gr.Slider(0, 1000, value=64), gr.Dropdown(directions, value='no_eyebrows')], - outputs=[gr.Image(type="pil", value='./default.png', label="original | edited | mean-squared difference")], - title="PandA (ICLR'23) - FFHQ edit zoo", - description="Provides a quick interface to manipulate pre-annotated directions with pre-trained global parts and appearances factors. 
Note that we use the free CPU tier, so synthesis takes about 10 seconds.", - article="Check out the full demo and paper at: https://github.com/james-oldfield/PandA" -) -demo.launch() \ No newline at end of file diff --git a/spaces/jbetker/tortoise/tortoise/models/cvvp.py b/spaces/jbetker/tortoise/tortoise/models/cvvp.py deleted file mode 100644 index d094649f3fb3386ec7c78da3d9ead34eebea4968..0000000000000000000000000000000000000000 --- a/spaces/jbetker/tortoise/tortoise/models/cvvp.py +++ /dev/null @@ -1,133 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch import einsum -from torch.utils.checkpoint import checkpoint - -from tortoise.models.arch_util import AttentionBlock -from tortoise.models.xtransformers import ContinuousTransformerWrapper, Encoder - - -def exists(val): - return val is not None - - -def masked_mean(t, mask): - t = t.masked_fill(~mask, 0.) - return t.sum(dim = 1) / mask.sum(dim = 1) - - -class CollapsingTransformer(nn.Module): - def __init__(self, model_dim, output_dims, heads, dropout, depth, mask_percentage=0, **encoder_kwargs): - super().__init__() - self.transformer = ContinuousTransformerWrapper( - max_seq_len=-1, - use_pos_emb=False, - attn_layers=Encoder( - dim=model_dim, - depth=depth, - heads=heads, - ff_dropout=dropout, - ff_mult=1, - attn_dropout=dropout, - use_rmsnorm=True, - ff_glu=True, - rotary_pos_emb=True, - **encoder_kwargs, - )) - self.pre_combiner = nn.Sequential(nn.Conv1d(model_dim, output_dims, 1), - AttentionBlock(output_dims, num_heads=heads, do_checkpoint=False), - nn.Conv1d(output_dims, output_dims, 1)) - self.mask_percentage = mask_percentage - - def forward(self, x, **transformer_kwargs): - h = self.transformer(x, **transformer_kwargs) - h = h.permute(0,2,1) - h = checkpoint(self.pre_combiner, h).permute(0,2,1) - if self.training: - mask = torch.rand_like(h.float()) > self.mask_percentage - else: - mask = torch.ones_like(h.float()).bool() - return masked_mean(h, mask) - - -class ConvFormatEmbedding(nn.Module): - def __init__(self, *args, **kwargs): - super().__init__() - self.emb = nn.Embedding(*args, **kwargs) - - def forward(self, x): - y = self.emb(x) - return y.permute(0,2,1) - - -class CVVP(nn.Module): - def __init__( - self, - model_dim=512, - transformer_heads=8, - dropout=.1, - conditioning_enc_depth=8, - cond_mask_percentage=0, - mel_channels=80, - mel_codes=None, - speech_enc_depth=8, - speech_mask_percentage=0, - latent_multiplier=1, - ): - super().__init__() - latent_dim = latent_multiplier*model_dim - self.temperature = nn.Parameter(torch.tensor(1.)) - - self.cond_emb = nn.Sequential(nn.Conv1d(mel_channels, model_dim//2, kernel_size=5, stride=2, padding=2), - nn.Conv1d(model_dim//2, model_dim, kernel_size=3, stride=2, padding=1)) - self.conditioning_transformer = CollapsingTransformer(model_dim, model_dim, transformer_heads, dropout, conditioning_enc_depth, cond_mask_percentage) - self.to_conditioning_latent = nn.Linear(latent_dim, latent_dim, bias=False) - - if mel_codes is None: - self.speech_emb = nn.Conv1d(mel_channels, model_dim, kernel_size=5, padding=2) - else: - self.speech_emb = ConvFormatEmbedding(mel_codes, model_dim) - self.speech_transformer = CollapsingTransformer(model_dim, latent_dim, transformer_heads, dropout, speech_enc_depth, speech_mask_percentage) - self.to_speech_latent = nn.Linear(latent_dim, latent_dim, bias=False) - - def get_grad_norm_parameter_groups(self): - return { - 'conditioning': list(self.conditioning_transformer.parameters()), - 'speech': 
list(self.speech_transformer.parameters()), - } - - def forward( - self, - mel_cond, - mel_input, - return_loss=False - ): - cond_emb = self.cond_emb(mel_cond).permute(0,2,1) - enc_cond = self.conditioning_transformer(cond_emb) - cond_latents = self.to_conditioning_latent(enc_cond) - - speech_emb = self.speech_emb(mel_input).permute(0,2,1) - enc_speech = self.speech_transformer(speech_emb) - speech_latents = self.to_speech_latent(enc_speech) - - - cond_latents, speech_latents = map(lambda t: F.normalize(t, p=2, dim=-1), (cond_latents, speech_latents)) - temp = self.temperature.exp() - - if not return_loss: - sim = einsum('n d, n d -> n', cond_latents, speech_latents) * temp - return sim - - sim = einsum('i d, j d -> i j', cond_latents, speech_latents) * temp - labels = torch.arange(cond_latents.shape[0], device=mel_input.device) - loss = (F.cross_entropy(sim, labels) + F.cross_entropy(sim.t(), labels)) / 2 - - return loss - - -if __name__ == '__main__': - clvp = CVVP() - clvp(torch.randn(2,80,100), - torch.randn(2,80,95), - return_loss=True) \ No newline at end of file diff --git a/spaces/jbilcke-hf/MusicGen/audiocraft/quantization/__init__.py b/spaces/jbilcke-hf/MusicGen/audiocraft/quantization/__init__.py deleted file mode 100644 index 836d6eb518978480c6b95d6f29ce4f84a9428793..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/MusicGen/audiocraft/quantization/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -# flake8: noqa -from .vq import ResidualVectorQuantizer -from .base import BaseQuantizer, DummyQuantizer, QuantizedResult diff --git a/spaces/jbrinkma/deepmind-pushworld/demo/puzzle_player.css b/spaces/jbrinkma/deepmind-pushworld/demo/puzzle_player.css deleted file mode 100644 index ab6dc26008e98259aca843d699714223a704b90c..0000000000000000000000000000000000000000 --- a/spaces/jbrinkma/deepmind-pushworld/demo/puzzle_player.css +++ /dev/null @@ -1,410 +0,0 @@ -/*source code copied from https://deepmind-pushworld.github.io/play/*/ - -/* Overwrite the default to keep the scrollbar always visible */ - -::-webkit-scrollbar { --webkit-appearance: none; -width: 8px; -} - -::-webkit-scrollbar-thumb { -border-radius: 4px; -background-color: rgba(0,0,0,.5); --webkit-box-shadow: 0 0 1px rgba(255,255,255,.5); -} - -.pushworld_puzzles { -border:1px solid #aaa; -border-radius: 6px; -margin: 8px auto; -overflow: hidden; -width: 100%; -} - -.select-difficulty { -background-color: #f6f6f6; -padding-top: 125px; -height: 100%; -} - -.previews { -overflow-y: auto; -} - -.pushworld_puzzles { -min-width: 340px; -} - -.pushworld_puzzles .panel { -height: 100%; -} - -#play_canvas { -height: 500px; -} - -/* Non-mobile */ -@media (hover: hover) and (pointer: fine) { -.pushworld_puzzles { - height: 604px; -} -.previews { - height: 517px; -} -#touch-keypad-container { - display: none; -} -} - -/* Mobile devices */ -@media (hover: none), (pointer: coarse) { -.pushworld_puzzles { - height: 756px; -} -.previews { - height: 669px; -} -#touch-keypad-container { - display: block; -} -} - -/* On mobile, try to keep the puzzle player within the height of the screen. 
*/ -@media (hover: none) and (max-height: 770px) and (min-height: 720px), - (pointer: coarse) and (max-height: 770px) and (min-height: 720px) { -.pushworld_puzzles { - height: 706px; -} -.previews { - height: 619px; -} -#play_canvas { - height: 450px; -} -} -@media (hover: none) and (max-height: 720px) and (min-height: 670px), - (pointer: coarse) and (max-height: 720px) and (min-height: 670px) { -.pushworld_puzzles { - height: 656px; -} -.previews { - height: 569px; -} -#play_canvas { - height: 400px; -} -} -@media (hover: none) and (max-height: 670px) and (min-height: 620px), - (pointer: coarse) and (max-height: 670px) and (min-height: 620px) { -.pushworld_puzzles { - height: 606px; -} -.previews { - height: 519px; -} -#play_canvas { - height: 350px; -} -} -@media (hover: none) and (max-height: 620px) and (min-height: 570px), - (pointer: coarse) and (max-height: 620px) and (min-height: 570px) { -.pushworld_puzzles { - height: 556px; -} -.previews { - height: 469px; -} -#play_canvas { - height: 300px; -} -} -@media (hover: none) and (max-height: 570px), - (pointer: coarse) and (max-height: 570px) { -.pushworld_puzzles { - height: 506px; -} -.previews { - height: 419px; -} -#play_canvas { - height: 250px; -} -} - -.pushworld_preview { -float: left; -border: 2px solid transparent; -border-radius: 8px; -margin: 4px 0px 0px 4px; -cursor: pointer; -overflow: hidden; -width: 150px; -height: 150px; -position: relative; -} - -.pushworld_preview .puzzle-index { -position: absolute; -height: 26px; -background-color: rgba(240, 240, 240, 0.9); -color: black; -width: 26px; -font-size: 16px; -top: 2px; -left: 2px; -vertical-align: middle; -text-align: center; -border-radius: 12px; -border: 1px solid #777; -} - -.pushworld_preview:hover, .pushworld_preview:focus { -border-color: #ddd; -background-color: #eee; -} - -.pushworld_preview:active { -border-color: #aaa; -background-color: #eee; -} - -.pushworld_preview * { -cursor: pointer; -} - -.pushworld_preview canvas { -margin: 4px; -} - -.pushworld_puzzles .btn { -margin-left: 4px; -margin-right: 4px; -} - -.btn .icon { -font-size: 19px; -} - -/* -.pushworld_preview canvas { -margin-left: 8px; -margin-right: 8px; -} - -.pushworld_preview .name { -text-align: center; -font-weight: 500; -font-size: 13px; -margin: 5px; -overflow: hidden; -white-space: nowrap; -} -*/ - -.pushworld_puzzles .puzzle_panel { -display: none; -} - -.pushworld_puzzles .puzzle { -position: relative; -} - -.preview_list { -position: relative; -} - -#pw-preview-template { -display: none; -} - -#play_canvas { -margin-top: 10px; -margin-bottom: 4px; -width: 100%; -} - -.pw-puzzle-loading { -margin: 62px; -} - -.pw-puzzle-list-loading { -margin: 80px auto; -display: block; -} - -.pushworld_preview canvas { -display: none; -} - - -/* Modals */ - -/* The Modal (background) */ -.modal { -display: none; /* Hidden by default */ -position: absolute; /* Stay in place */ -z-index: 1; /* Sit on top */ -left: 0; -top: 0; -width: 100%; /* Full width */ -height: 100%; /* Full height */ -overflow: auto; /* Enable scroll if needed */ -background-color: rgb(0,0,0); /* Fallback color */ -background-color: rgba(0,0,0,0.4); /* Black w/ opacity */ -} - -/* Modal Content */ -.modal-content { -background-color: #fefefe; -margin: auto; -padding: 20px; -border: 1px solid #999; -text-align: center; -border-radius: 10px; -} - -.arrow-keys { -margin: auto; -width: 132px; -} - -.hidden { -visibility: hidden; -} - -#instructions_modal .text { -clear: both; -padding: 10px; -} - -#instructions_modal button { 
-font-size: 17px; -} - -.arrow-keys .key { -padding: 0px; -border: 1px solid #888; -border-radius: 6px; -margin: 2px; -float: left; -background-color: #eee; -height: 33px; -width: 40px; -font-size: 20px; -text-align: center; -} - -#touch-keypad-container { -width: 100%; -touch-action: manipulation; -} - -#touch-keypad { -width: 282px; -height: 152px; -} - -#touch-keypad.arrow-keys .key { -height: 70px; -width: 90px; -font-size: 56px; -writing-mode: vertical-rl; /* improves vertical alignment */ -} - -#touch-keypad .up-down-group { -float: left; -width: 94px; -} - -#touch-keypad .key { -touch-action: manipulation; -} - -#touch-keypad .left { -margin-top: 37px; -} -#touch-keypad .right { -margin-top: 37px; -} - -.loading .modal-content { -font-size: 30px; -width: 200px; -margin-top: 190px; -} - -#solved_modal .modal-content { -font-size: 30px; -width: 200px; -margin-top: 190px; -} - -#instructions_modal .modal-content { -font-size: 20px; -max-width: 350px; -margin-top: 95px; -} - -#instructions_modal a { -color: red; -} - - -.pushworld_puzzles .left-buttons { -float: left; -height: 32px; -} - -.pushworld_puzzles .right-buttons { -float: right; -} - -/* Photon overrides */ - -.pushworld_puzzles .btn { -font-size: 16px; -cursor: pointer; -touch-action: manipulation; -} - -.pushworld_puzzles .title { -font-size: 16px; -color: #333; -font-weight: bold; -height: 100%; -text-align: center; -margin: 0px auto; -} - -.pushworld_puzzles .toolbar.title-toolbar { - height: 38px; -} - -.vcenter { -vertical-align: middle; - -/* Internet Explorer 10 */ -display:-ms-flexbox; --ms-flex-pack:center; --ms-flex-align:center; - -/* Firefox */ -display:-moz-box; --moz-box-pack:center; --moz-box-align:center; - -/* Safari, Opera, and Chrome */ -display:-webkit-box; --webkit-box-pack:center; --webkit-box-align:center; - -/* W3C */ -display:box; -box-pack:center; -box-align:center; -} - -.pushworld_puzzles .toolbar { -border-bottom: 1px solid #aaa; -height: 45px; -} diff --git a/spaces/jhwen/bingo/src/pages/api/create.ts b/spaces/jhwen/bingo/src/pages/api/create.ts deleted file mode 100644 index e44581b1865576e73a32bc819534617d2575c8c9..0000000000000000000000000000000000000000 --- a/spaces/jhwen/bingo/src/pages/api/create.ts +++ /dev/null @@ -1,47 +0,0 @@ -'use server' - -import { NextApiRequest, NextApiResponse } from 'next' -import { fetch, debug } from '@/lib/isomorphic' -import { createHeaders, randomIP } from '@/lib/utils' -import { sleep } from '@/lib/bots/bing/utils' - -const API_ENDPOINT = 'https://www.bing.com/turing/conversation/create' -// const API_ENDPOINT = 'https://edgeservices.bing.com/edgesvc/turing/conversation/create'; - -export default async function handler(req: NextApiRequest, res: NextApiResponse) { - try { - let count = 0 - const headers = createHeaders(req.cookies) - do { - headers['x-forwarded-for'] = headers['x-forwarded-for'] || randomIP() - debug(`try ${count+1}`, headers['x-forwarded-for']) - const response = await fetch(API_ENDPOINT, { method: 'GET', headers }) - if (response.status === 200) { - res.setHeader('set-cookie', [headers.cookie, `BING_IP=${headers['x-forwarded-for']}`] - .map(cookie => `${cookie}; Max-Age=${86400 * 30}; Path=/; SameSite=None; Secure`)) - debug('headers', headers) - res.writeHead(200, { - 'Content-Type': 'application/json', - }) - res.end(await response.text()) - return - } - await sleep(2000) - headers['x-forwarded-for'] = '' - } while(count++ < 10) - res.end(JSON.stringify({ - result: { - value: 'TryLater', - message: `Please try again after a while` - } - 
})) - } catch (e) { - console.log('error', e) - return res.end(JSON.stringify({ - result: { - value: 'UnauthorizedRequest', - message: `${e}` - } - })) - } -} diff --git a/spaces/jmourad/TXT2IMG-MJ-Desc/app.py b/spaces/jmourad/TXT2IMG-MJ-Desc/app.py deleted file mode 100644 index 74103b6bc32502f8b812d01552c2d4ac5523b241..0000000000000000000000000000000000000000 --- a/spaces/jmourad/TXT2IMG-MJ-Desc/app.py +++ /dev/null @@ -1,77 +0,0 @@ -# Importar bibliotecas -import torch -import re -import random -import requests -import shutil -from clip_interrogator import Config, Interrogator -from transformers import pipeline, set_seed, AutoTokenizer, AutoModelForSeq2SeqLM -from PIL import Image -import gradio as gr - -# Configurar CLIP -config = Config() -config.device = 'cuda' if torch.cuda.is_available() else 'cpu' -config.blip_offload = False if torch.cuda.is_available() else True -config.chunk_size = 2048 -config.flavor_intermediate_count = 512 -config.blip_num_beams = 64 -config.clip_model_name = "ViT-H-14/laion2b_s32b_b79k" -ci = Interrogator(config) - -# Función para generar prompt desde imagen -def get_prompt_from_image(image, mode): - image = image.convert('RGB') - if mode == 'best': - prompt = ci.interrogate(image) - elif mode == 'classic': - prompt = ci.interrogate_classic(image) - elif mode == 'fast': - prompt = ci.interrogate_fast(image) - elif mode == 'negative': - prompt = ci.interrogate_negative(image) - return prompt - -# Función para generar texto -text_pipe = pipeline('text-generation', model='succinctly/text2image-prompt-generator') - -def text_generate(input): - seed = random.randint(100, 1000000) - set_seed(seed) - for count in range(6): - sequences = text_pipe(input, max_length=random.randint(60, 90), num_return_sequences=8) - list = [] - for sequence in sequences: - line = sequence['generated_text'].strip() - if line != input and len(line) > (len(input) + 4) and line.endswith((':', '-', '—')) is False: - list.append(line) - - result = "\n".join(list) - result = re.sub('[^ ]+\.[^ ]+','', result) - result = result.replace('<', '').replace('>', '') - if result != '': - return result - if count == 5: - return result - -# Crear interfaz gradio -with gr.Blocks() as block: - with gr.Column(): - gr.HTML('

    MidJourney / SD2 Helper Tool

    ') - with gr.Tab('Generate from Image'): - with gr.Row(): - input_image = gr.Image(type='pil') - with gr.Column(): - input_mode = gr.Radio(['best', 'fast', 'classic', 'negative'], value='best', label='Mode') - img_btn = gr.Button('Discover Image Prompt') - output_image = gr.Textbox(lines=6, label='Generated Prompt') - - with gr.Tab('Generate from Text'): - input_text = gr.Textbox(lines=6, label='Your Idea', placeholder='Enter your content here...') - output_text = gr.Textbox(lines=6, label='Generated Prompt') - text_btn = gr.Button('Generate Prompt') - - img_btn.click(fn=get_prompt_from_image, inputs=[input_image, input_mode], outputs=output_image) - text_btn.click(fn=text_generate, inputs=input_text, outputs=output_text) - -block.queue(max_size=64).launch(show_api=False, enable_queue=True, debug=True, share=False, server_name='0.0.0.0') \ No newline at end of file diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PIL/GimpPaletteFile.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PIL/GimpPaletteFile.py deleted file mode 100644 index d388928945a0f6711de2b1c8d1ed50ce192a8219..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PIL/GimpPaletteFile.py +++ /dev/null @@ -1,56 +0,0 @@ -# -# Python Imaging Library -# $Id$ -# -# stuff to read GIMP palette files -# -# History: -# 1997-08-23 fl Created -# 2004-09-07 fl Support GIMP 2.0 palette files. -# -# Copyright (c) Secret Labs AB 1997-2004. All rights reserved. -# Copyright (c) Fredrik Lundh 1997-2004. -# -# See the README file for information on usage and redistribution. -# - -import re - -from ._binary import o8 - - -class GimpPaletteFile: - """File handler for GIMP's palette format.""" - - rawmode = "RGB" - - def __init__(self, fp): - self.palette = [o8(i) * 3 for i in range(256)] - - if fp.readline()[:12] != b"GIMP Palette": - msg = "not a GIMP palette file" - raise SyntaxError(msg) - - for i in range(256): - s = fp.readline() - if not s: - break - - # skip fields and comment lines - if re.match(rb"\w+:|#", s): - continue - if len(s) > 100: - msg = "bad palette file" - raise SyntaxError(msg) - - v = tuple(map(int, s.split()[:3])) - if len(v) != 3: - msg = "bad palette entry" - raise ValueError(msg) - - self.palette[i] = o8(v[0]) + o8(v[1]) + o8(v[2]) - - self.palette = b"".join(self.palette) - - def getpalette(self): - return self.palette, self.rawmode diff --git a/spaces/johnberg/CLIPInverter/adapter/clipadapter.py b/spaces/johnberg/CLIPInverter/adapter/clipadapter.py deleted file mode 100644 index 68e3fe538febbdd7fad9a1b898c7bc42c2a8ecb0..0000000000000000000000000000000000000000 --- a/spaces/johnberg/CLIPInverter/adapter/clipadapter.py +++ /dev/null @@ -1,60 +0,0 @@ -from torch import nn -from models.stylegan2.model import PixelNorm -from torch.nn import Linear, LayerNorm, LeakyReLU, Sequential, Module, Conv2d, GroupNorm - -class TextModulationModule(Module): - def __init__(self, in_channels): - super(TextModulationModule, self).__init__() - self.conv = Conv2d(in_channels, in_channels, 3, stride=1, padding=1, bias=False) - self.norm = GroupNorm(32, in_channels) - self.gamma_function = Sequential(Linear(512, 512), LayerNorm([512]), LeakyReLU(), Linear(512, in_channels)) - self.beta_function = Sequential(Linear(512, 512), LayerNorm([512]), LeakyReLU(), Linear(512, in_channels)) - self.leakyrelu = LeakyReLU() - - def forward(self, x, embedding): - x = self.conv(x) - x = self.norm(x) - log_gamma = 
self.gamma_function(embedding.float()) - gamma = log_gamma.exp().unsqueeze(2).unsqueeze(3) - beta = self.beta_function(embedding.float()).unsqueeze(2).unsqueeze(3) - out = x * (1 + gamma) + beta - out = self.leakyrelu(out) - return out - -class SubTextMapper(Module): - def __init__(self, opts, in_channels): - super(SubTextMapper, self).__init__() - self.opts = opts - self.pixelnorm = PixelNorm() - self.modulation_module_list = nn.ModuleList([TextModulationModule(in_channels) for _ in range(1)]) - - def forward(self, x, embedding): - x = self.pixelnorm(x) - for modulation_module in self.modulation_module_list: - x = modulation_module(x, embedding) - return x - -class CLIPAdapter(Module): - def __init__(self, opts): - super(CLIPAdapter, self).__init__() - self.opts = opts - - if not opts.no_coarse_mapper: - self.coarse_mapping = SubTextMapper(opts, 512) - if not opts.no_medium_mapper: - self.medium_mapping = SubTextMapper(opts, 256) - if not opts.no_fine_mapper: - self.fine_mapping = SubTextMapper(opts, 128) - - - def forward(self, features, txt_embed): - txt_embed = txt_embed.detach() - c1, c2, c3 = features - - if not self.opts.no_coarse_mapper: - c3 = self.coarse_mapping(c3, txt_embed) - if not self.opts.no_medium_mapper: - c2 = self.medium_mapping(c2, txt_embed) - if not self.opts.no_fine_mapper: - c1 = self.fine_mapping(c1, txt_embed) - return (c1,c2,c3) \ No newline at end of file diff --git a/spaces/jone/Music_Source_Separation/bytesep/data/samplers.py b/spaces/jone/Music_Source_Separation/bytesep/data/samplers.py deleted file mode 100644 index 6b3cf99ecdf7f5e392da7b0cd2cc88ee4b8c90b5..0000000000000000000000000000000000000000 --- a/spaces/jone/Music_Source_Separation/bytesep/data/samplers.py +++ /dev/null @@ -1,188 +0,0 @@ -import pickle -from typing import Dict, List, NoReturn - -import numpy as np -import torch.distributed as dist - - -class SegmentSampler: - def __init__( - self, - indexes_path: str, - segment_samples: int, - mixaudio_dict: Dict, - batch_size: int, - steps_per_epoch: int, - random_seed=1234, - ): - r"""Sample training indexes of sources. - - Args: - indexes_path: str, path of indexes dict - segment_samplers: int - mixaudio_dict, dict, including hyper-parameters for mix-audio data - augmentation, e.g., {'voclas': 2, 'accompaniment': 2} - batch_size: int - steps_per_epoch: int, #steps_per_epoch is called an `epoch` - random_seed: int - """ - self.segment_samples = segment_samples - self.mixaudio_dict = mixaudio_dict - self.batch_size = batch_size - self.steps_per_epoch = steps_per_epoch - - self.meta_dict = pickle.load(open(indexes_path, "rb")) - # E.g., { - # 'vocals': [ - # {'hdf5_path': 'songA.h5', 'key_in_hdf5': 'vocals', 'begin_sample': 0, 'end_sample': 132300}, - # {'hdf5_path': 'songB.h5', 'key_in_hdf5': 'vocals', 'begin_sample': 4410, 'end_sample': 445410}, - # ... - # ], - # 'accompaniment': [ - # {'hdf5_path': 'songA.h5', 'key_in_hdf5': 'vocals', 'begin_sample': 0, 'end_sample': 132300}, - # {'hdf5_path': 'songB.h5', 'key_in_hdf5': 'vocals', 'begin_sample': 4410, 'end_sample': 445410}, - # ... - # ] - # } - - self.source_types = self.meta_dict.keys() - # E.g., ['vocals', 'accompaniment'] - - self.pointers_dict = {source_type: 0 for source_type in self.source_types} - # E.g., {'vocals': 0, 'accompaniment': 0} - - self.indexes_dict = { - source_type: np.arange(len(self.meta_dict[source_type])) - for source_type in self.source_types - } - # E.g. 
{ - # 'vocals': [0, 1, ..., 225751], - # 'accompaniment': [0, 1, ..., 225751] - # } - - self.random_state = np.random.RandomState(random_seed) - - # Shuffle indexes. - for source_type in self.source_types: - self.random_state.shuffle(self.indexes_dict[source_type]) - print("{}: {}".format(source_type, len(self.indexes_dict[source_type]))) - - def __iter__(self) -> List[Dict]: - r"""Yield a batch of meta info. - - Returns: - batch_meta_list: (batch_size,) e.g., when mix-audio is 2, looks like [ - {'vocals': [ - {'hdf5_path': 'songA.h5', 'key_in_hdf5': 'vocals', 'begin_sample': 13406400, 'end_sample': 13538700}, - {'hdf5_path': 'songB.h5', 'key_in_hdf5': 'vocals', 'begin_sample': 4440870, 'end_sample': 4573170}] - 'accompaniment': [ - {'hdf5_path': 'songE.h5', 'key_in_hdf5': 'accompaniment', 'begin_sample': 14579460, 'end_sample': 14711760}, - {'hdf5_path': 'songF.h5', 'key_in_hdf5': 'accompaniment', 'begin_sample': 3995460, 'end_sample': 4127760}] - } - ... - ] - """ - batch_size = self.batch_size - - while True: - batch_meta_dict = {source_type: [] for source_type in self.source_types} - - for source_type in self.source_types: - # E.g., ['vocals', 'accompaniment'] - - # Loop until get a mini-batch. - while len(batch_meta_dict[source_type]) != batch_size: - - largest_index = ( - len(self.indexes_dict[source_type]) - - self.mixaudio_dict[source_type] - ) - # E.g., 225750 = 225752 - 2 - - if self.pointers_dict[source_type] > largest_index: - - # Reset pointer, and shuffle indexes. - self.pointers_dict[source_type] = 0 - self.random_state.shuffle(self.indexes_dict[source_type]) - - source_metas = [] - mix_audios_num = self.mixaudio_dict[source_type] - - for _ in range(mix_audios_num): - - pointer = self.pointers_dict[source_type] - # E.g., 1 - - index = self.indexes_dict[source_type][pointer] - # E.g., 12231 - - self.pointers_dict[source_type] += 1 - - source_meta = self.meta_dict[source_type][index] - # E.g., ['song_A.h5', 198450, 330750] - - # source_metas.append(new_source_meta) - source_metas.append(source_meta) - - batch_meta_dict[source_type].append(source_metas) - # When mix-audio is 2, batch_meta_dict looks like: { - # 'vocals': [ - # [{'hdf5_path': 'songA.h5', 'key_in_hdf5': 'vocals', 'begin_sample': 13406400, 'end_sample': 13538700}, - # {'hdf5_path': 'songB.h5', 'key_in_hdf5': 'vocals', 'begin_sample': 4440870, 'end_sample': 4573170}], - # [{'hdf5_path': 'songC.h5', 'key_in_hdf5': 'vocals', 'begin_sample': 1186290, 'end_sample': 1318590}, - # {'hdf5_path': 'songD.h5', 'key_in_hdf5': 'vocals', 'begin_sample': 8462790, 'end_sample': 8595090}] - # ] - # 'accompaniment': [ - # [{'hdf5_path': 'songE.h5', 'key_in_hdf5': 'vocals', 'begin_sample': 24232950, 'end_sample': 24365250}, - # {'hdf5_path': 'songF.h5', 'key_in_hdf5': 'vocals', 'begin_sample': 1569960, 'end_sample': 1702260}], - # [{'hdf5_path': 'songG.h5', 'key_in_hdf5': 'vocals', 'begin_sample': 2795940, 'end_sample': 2928240}, - # {'hdf5_path': 'songH.h5', 'key_in_hdf5': 'vocals', 'begin_sample': 10923570, 'end_sample': 11055870}] - # ] - # } - - batch_meta_list = [ - { - source_type: batch_meta_dict[source_type][i] - for source_type in self.source_types - } - for i in range(batch_size) - ] - # When mix-audio is 2, batch_meta_list looks like: [ - # {'vocals': [ - # {'hdf5_path': 'songA.h5', 'key_in_hdf5': 'vocals', 'begin_sample': 13406400, 'end_sample': 13538700}, - # {'hdf5_path': 'songB.h5', 'key_in_hdf5': 'vocals', 'begin_sample': 4440870, 'end_sample': 4573170}] - # 'accompaniment': [ - # {'hdf5_path': 'songE.h5', 
'key_in_hdf5': 'vocals', 'begin_sample': 14579460, 'end_sample': 14711760}, - # {'hdf5_path': 'songF.h5', 'key_in_hdf5': 'vocals', 'begin_sample': 3995460, 'end_sample': 4127760}] - # } - # ... - # ] - - yield batch_meta_list - - def __len__(self) -> int: - return self.steps_per_epoch - - def state_dict(self) -> Dict: - state = {'pointers_dict': self.pointers_dict, 'indexes_dict': self.indexes_dict} - return state - - def load_state_dict(self, state) -> NoReturn: - self.pointers_dict = state['pointers_dict'] - self.indexes_dict = state['indexes_dict'] - - -class DistributedSamplerWrapper: - def __init__(self, sampler): - r"""Distributed wrapper of sampler.""" - self.sampler = sampler - - def __iter__(self): - num_replicas = dist.get_world_size() - rank = dist.get_rank() - - for indices in self.sampler: - yield indices[rank::num_replicas] - - def __len__(self) -> int: - return len(self.sampler) diff --git a/spaces/keanteng/job/backend/functions.py b/spaces/keanteng/job/backend/functions.py deleted file mode 100644 index 0daf279c2ddb43f76106249f6979cdc7e56c353b..0000000000000000000000000000000000000000 --- a/spaces/keanteng/job/backend/functions.py +++ /dev/null @@ -1,513 +0,0 @@ -""" -MIT License - -Copyright (c) 2023 Khor Kean Teng, Ang Zhi Nuo, Connie Hui Kang Yi, Ling Sing Cheng, Tan Yu Jing - -Permission is hereby granted, free of charge, to any person obtaining a copy -of this software and associated documentation files (the "Software"), to deal -in the Software without restriction, including without limitation the rights -to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -copies of the Software, and to permit persons to whom the Software is -furnished to do so, subject to the following conditions: - -The above copyright notice and this permission notice shall be included in all -copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -SOFTWARE. 
- -""" - -import streamlit as st -import pandas as pd -import numpy as np -import geopandas as gpd -from geopy.geocoders import Nominatim -import leafmap.foliumap as leafmap -from shapely.geometry import Polygon, MultiPolygon -import google.generativeai as palm -try: - from backend.configs import * -except ImportError: - pass -import re -from sklearn.feature_extraction.text import TfidfVectorizer -from sklearn.metrics.pairwise import sigmoid_kernel -from sklearn.metrics.pairwise import cosine_similarity -import fuzzywuzzy -from fuzzywuzzy import process -from nltk.tokenize import word_tokenize -from nltk.corpus import stopwords -from nltk.stem import PorterStemmer -import nltk -nltk.download("punkt") -nltk.download("stopwords") - -# forecasting -from datetime import * -from statsmodels.tsa.holtwinters import ExponentialSmoothing -import matplotlib.pyplot as ax1 -from statsmodels.tsa.holtwinters import SimpleExpSmoothing -from statsmodels.tsa.stattools import adfuller -from pmdarima import auto_arima -from sklearn.metrics import mean_squared_error, mean_absolute_error -import matplotlib.pyplot as plt - - -# convert to lat long -def geocoder(location_input): - """ - Perform geocoding on location input. The input should be an address. - - Args: - location_input (string): string input - - Returns: - dataframe: a geodataframe with latitude and longitude and geometry - """ - - try: - geolocator = Nominatim(user_agent="my_app") - location = geolocator.geocode(location_input) - - location_df = pd.DataFrame( - { - "City": location_input, - "Latitude": [location.latitude], - "Longitude": [location.longitude], - } - ) - - location_df = gpd.GeoDataFrame( - location_df, - geometry=gpd.points_from_xy(location_df.Longitude, location_df.Latitude), - ) - - return location_df - - except: - location_df = pd.DataFrame( - { - "City": location_input, - "Latitude": [None], - "Longitude": [None], - } - ) - return location_df - - -# check intersection -def intersection_check(location_df, df): - """ - Perform intersection check between location_df and df. The input should be a geodataframe. - - Args: - location_df (geodataframe): a geodataframe, the location input of your choice - df (geodataframe): a geodataframe, the shapefile of Malaysia - - Returns: - geodataframe: a geodataframe with the intersected area. It contains the geometry and the state and district name. - """ - - for index in range(len(df)): - if df["geometry"][index].contains(location_df["geometry"][0]) == True: - df = df.loc[index, :].to_frame().transpose() - df = df[["nam", "laa", "geometry"]] - df = gpd.GeoDataFrame(df, geometry=df.geometry, crs="EPSG:4326") - df.reset_index() - df["geometry"] = MultiPolygon([df["geometry"].to_numpy()[0]]) - return df - - -# for testing -# df = gpd.read_file("data/shapefile/polbnda_mys.shp") -# a = intersection_check(geocoder('Johor Bahru'), df) -# print(a) - - -# palm model -# configure API -@st.cache_resource -def configure_api(api_key = "Your API Key"): - """ - Configure the API key for palm model. - """ - palm.configure(api_key = api_key) - - -# generate palm model -def skill_suggest_model(location_input): - """ - Suggest skills based on user location input. 
- - Args: - location_input (string): a string, the location input of your choice - - Returns: - list: a list of skills based on the location input - """ - models = [ - m - for m in palm.list_models() - if "generateText" in m.supported_generation_methods - ] - model = models[0].name # using text-bison-001 - - text = """ - You will now only response by giving a list as such [element1, element2, element3, ...]. - - You will be prompted with name of city around the world. For example, - Example1: Paris - - You will then response with a list where each element in the list will either be a noun or an - adjective. These words will describe the skills needed to work in that city. For example, - - Example1: Silicon Valley - Answer: [programming, python, deep learning, web development] - - Please answer the following questions: - """ - - prompt = text + "" + location_input - - # generate text - completion = palm.generate_text( - model=model, - prompt=prompt, - temperature=0, - # The maximum length of the response - max_output_tokens=800, - ) - - return completion.result - - -# testing -# configure_api() -# a = skill_suggest_model('Kuala Lumpur') -# print(a) - - -def job_suggest_model(skills_list): - """ - Suggest jobs based on a list of skills. - - Args: - list of skills (list): a list, the skills input of your choice - - Returns: - list: a list of jobs based on the skills input - """ - models = [ - m - for m in palm.list_models() - if "generateText" in m.supported_generation_methods - ] - model = models[0].name # using text-bison-001 - - text = """ - You will now only response by giving a list of only 3 elements as such [element1, element2, element3]. - - You are an expert hiring manager. You will be prompted with skills related to a person job. For example, - Example1: [finance, accounting, banking] - - You will then response with a list where each element is the possible job roles. For example, - - Example1: [finance, accounting, banking] - Answer: [financial analyst, accountant, investmetn banking associate] - - Please answer the following questions: - """ - - prompt = text + "" + skills_list - - # generate text - completion = palm.generate_text( - model=model, - prompt=prompt, - temperature=0, - # The maximum length of the response - max_output_tokens=800, - ) - - return completion.result - - -# testing -# configure_api() -# skill = ['programming', 'python', 'deep learning', 'web development'] -# a = job_suggest_model(str(skill)) -# print(a) - - -def list_cleaning(list_input): - """ - Remove first and last character of the string - - Args: - string_input (string): a string - - Returns: - string: a string with the first and last character removed - """ - list_input = list_input[:-1] - list_input = list_input[1:] - - return list_input - - -# job prediction model -# text cleaning -def text_cleaning(text): - """ - Clean the text by removing special characters. - - Args: - text (dataframe): a dataframe with text - - Returns: - dataframe: a dataframe with text cleaned - """ - text = re.sub(r""", "", text) - text = re.sub(r".hack//", "", text) - text = re.sub(r"'", "", text) - text = re.sub(r"A's", "", text) - text = re.sub(r"I'", "I'", text) - text = re.sub(r"&", "and", text) - - return text - - -# recommendation engine -def give_rec(titlename, sig, jobdata): - """ - Give recommendation based on the job title. 
- - Args: - titlename (string): a string, the job title of your choice - sig (model): a model, the sigmoid kernel model - jobdata (dataframe): a dataframe, the job posting dataset - - Returns: - dataframe: a dataframe with the top 5 recommended jobs - """ - # Get the index corresponding to original_title - indices = pd.Series(jobdata.index, index=jobdata["title"]).drop_duplicates() - indices_frame = pd.DataFrame(indices).reset_index().drop_duplicates(subset="title") - - # rename the columns - indices_frame.columns = ["title", "views"] - indices_frame["work type"] = jobdata["work_type"] - indices_frame["location"] = jobdata["location"] - - # Get the index corresponding to original_title - idx = indices_frame[indices_frame["title"] == titlename]["views"] - idx = idx.index.to_numpy()[0] - - # Get the pairwsie similarity scores - sig_scores = list(enumerate(sig[idx])) - - # Sort the movies - sig_scores = sorted(sig_scores, key=lambda x: x[1], reverse=True) - - # Scores of the 10 most similar movies - sig_scores = sig_scores[1:11] - - # Movie indices - anime_indices = [i[0] for i in sig_scores] - - # Top 10 most similar movies - return pd.DataFrame( - { - "Job Title": jobdata["title"].iloc[anime_indices].values, - "View": jobdata["views"].iloc[anime_indices].values, - "Work Type": jobdata["work_type"].iloc[anime_indices].values, - "Location": jobdata["location"].iloc[anime_indices].values, - } - ) - - -# job recommendation engine workflow -@st.cache_resource -def job_recom_engine(jobdata): - """ - Job recommendation engine workflow. - - Args: - jobdata (dataframe): a dataframe, the job posting dataset - - Returns: - dataframe: a dataframe with the top 5 recommended jobs - """ - # filter views less than 5 - jobdata = jobdata[jobdata["views"] >= 5] - - # data cleaning - jobdata["description"] = jobdata["description"].apply(text_cleaning) - jobdata["tokenized_Description"] = jobdata["tokenized_Description"].apply( - text_cleaning - ) - - # load model - tfv = TfidfVectorizer( - min_df=3, - max_features=None, - strip_accents="unicode", - analyzer="word", - token_pattern=r"\w{1,}", - ngram_range=(1, 3), - stop_words="english", - ) - - # Filling NaNs with empty string - jobdata["tokenized_Description"] = jobdata["tokenized_Description"].fillna("") - genres_str = jobdata["tokenized_Description"].str.split(",").astype(str) - tfv_matrix = tfv.fit_transform(genres_str) - - # sigmoid kernel - sig = sigmoid_kernel(tfv_matrix, tfv_matrix) - - # recommended_jobs = ( - # give_rec(titlename=job_key, sig=sig, jobdata=jobdata) - # .sort_values(by="View", ascending=False) - # ) - - return sig - - -# testing -# jobdata = pd.read_csv('data/job_posting_clean.csv') -# a = job_recom_engine(jobdata, 'Data Scientist') -# print(a) - - -# fuzzy matching -def job_matcher(jobdata, column="title", string_to_match=None, min_ratio=85): - # get a list of unique strings - jobdata = jobdata[jobdata["views"] >= 5] - strings = jobdata[column].unique() - - # get the top 10 closest matches to our input string - matches = fuzzywuzzy.process.extract( - string_to_match, - strings, - limit=10, - scorer=fuzzywuzzy.fuzz.token_sort_ratio, - ) - - # only get matches with a ratio > 88 - close_matches = [matches[0] for matches in matches if matches[1] >= min_ratio] - - # get the rows of all the close matches in our dataframe - rows_with_matches = jobdata[column].isin(close_matches) - - match_data = jobdata.loc[rows_with_matches]["title"] - - return match_data.to_frame().head().reset_index() - - -# testing -# jobdata = 
pd.read_csv('data/job_posting_clean.csv') -# a = job_matcher(jobdata = jobdata, column = 'title', string_to_match = 'financial analyst') -# print(a['title'][0]) - - -# resume analysis model -def preprocess_text(text_input): - """ - Preprocess text by removing special characters, stop words, and stemming. - - Args: - text_input (string): a string, the text input of your choice - - Returns: - string: a string with the special characters, stop words, and stemming removed - """ - tokens = text_input.lower() - tokens = word_tokenize(tokens) - tokens = [word.lower() for word in tokens if word.isalnum()] - - # remove stop words - stop_words = set(stopwords.words("english")) - tokens = [word for word in tokens if not word in stop_words] - stemmer = PorterStemmer() - tokens = [stemmer.stem(word) for word in tokens] - - return " ".join(tokens) - - -def compare_skills(user_skills, sector_skills): - """ - Compare skills between user and sector. - - Args: - user_skills (string): a string, the user skills input of your choice - sector_skills (string): a string, the sector skills input of your choice - - Returns: - numeric: a numeric value, the cosine similarity between the user and sector skills - """ - # preprocess text - user_skills = preprocess_text(user_skills) - sector_skills = preprocess_text(sector_skills) - - # vectorize text and calculate cosine similarity - vectorizer = TfidfVectorizer(stop_words="english", analyzer="word") - tfidf_matrix = vectorizer.fit_transform([user_skills, sector_skills]) - cosine_sim = cosine_similarity(tfidf_matrix[0], tfidf_matrix[1]) - - return cosine_sim[0][0] - - -# forecasting model -def ets_fore_a(data, i): - # ets test - ets_model = ExponentialSmoothing( - data, trend="add", seasonal="add", seasonal_periods=i - ) - ets_model_fit = ets_model.fit() - ets_pred = ets_model_fit.forecast(8) - return ets_pred - - -def ets_fore_b(data, i): - # ets test - ets_model = ExponentialSmoothing( - data, trend="add", seasonal="mul", seasonal_periods=i - ) - ets_model_fit = ets_model.fit() - ets_pred = ets_model_fit.forecast(8) - return ets_pred - - -def ets_fore_c(data, i): - # ets test - ets_model = ExponentialSmoothing( - data, trend="mul", seasonal="add", seasonal_periods=i - ) - ets_model_fit = ets_model.fit() - ets_pred = ets_model_fit.forecast(8) - return ets_pred - - -def ets_fore_d(data, i): - # ets test - ets_model = ExponentialSmoothing( - data, trend="mul", seasonal="mul", seasonal_periods=i - ) - ets_model_fit = ets_model.fit() - ets_pred = ets_model_fit.forecast(8) - return ets_pred - - -def ARIMA_fore(data): - # arima test - stepwise_model = auto_arima(data, start_p=1, start_q=1, stepwise=True) - stepwise_model.fit(data) - arima_pred = stepwise_model.predict(n_periods=8) - return arima_pred diff --git a/spaces/ken4005/Uhi-ChatGPT/modules/openai_func.py b/spaces/ken4005/Uhi-ChatGPT/modules/openai_func.py deleted file mode 100644 index fb07b16235476360ccc48849f5f9e761630efec3..0000000000000000000000000000000000000000 --- a/spaces/ken4005/Uhi-ChatGPT/modules/openai_func.py +++ /dev/null @@ -1,82 +0,0 @@ -import requests -import logging -from modules.presets import ( - timeout_all, - USAGE_API_URL, - BALANCE_API_URL, - standard_error_msg, - connection_timeout_prompt, - error_retrieve_prompt, - read_timeout_prompt -) - -from modules import shared -from modules.utils import get_proxies -import os, datetime - -def get_billing_data(openai_api_key, billing_url): - headers = { - "Content-Type": "application/json", - "Authorization": f"Bearer {openai_api_key}" - } - - timeout 
= timeout_all - proxies = get_proxies() - response = requests.get( - billing_url, - headers=headers, - timeout=timeout, - proxies=proxies, - ) - - if response.status_code == 200: - data = response.json() - return data - else: - raise Exception(f"API request failed with status code {response.status_code}: {response.text}") - - -def get_usage(openai_api_key): - try: - balance_data=get_billing_data(openai_api_key, BALANCE_API_URL) - logging.debug(balance_data) - try: - balance = balance_data["total_available"] if balance_data["total_available"] else 0 - total_used = balance_data["total_used"] if balance_data["total_used"] else 0 - usage_percent = round(total_used / (total_used+balance) * 100, 2) - except Exception as e: - logging.error(f"API使用情况解析失败:"+str(e)) - balance = 0 - total_used=0 - return f"**API使用情况解析失败**" - if balance == 0: - last_day_of_month = datetime.datetime.now().strftime("%Y-%m-%d") - first_day_of_month = datetime.datetime.now().replace(day=1).strftime("%Y-%m-%d") - usage_url = f"{USAGE_API_URL}?start_date={first_day_of_month}&end_date={last_day_of_month}" - try: - usage_data = get_billing_data(openai_api_key, usage_url) - except Exception as e: - logging.error(f"获取API使用情况失败:"+str(e)) - return f"**获取API使用情况失败**" - return f"**本月使用金额** \u3000 ${usage_data['total_usage'] / 100}" - - # return f"**免费额度**(已用/余额)\u3000${total_used} / ${balance}" - return f"""\ - 免费额度使用情况 -
    -
    - {usage_percent}% -
    -
    -
    已用 ${total_used}可用 ${balance}
    - """ - - except requests.exceptions.ConnectTimeout: - status_text = standard_error_msg + connection_timeout_prompt + error_retrieve_prompt - return status_text - except requests.exceptions.ReadTimeout: - status_text = standard_error_msg + read_timeout_prompt + error_retrieve_prompt - return status_text - except Exception as e: - logging.error(f"获取API使用情况失败:"+str(e)) - return standard_error_msg + error_retrieve_prompt diff --git a/spaces/keras-io/ocr-for-captcha/app.py b/spaces/keras-io/ocr-for-captcha/app.py deleted file mode 100644 index e8e06188b734ba1d46c08a7d3a6dd0eda19236c5..0000000000000000000000000000000000000000 --- a/spaces/keras-io/ocr-for-captcha/app.py +++ /dev/null @@ -1,70 +0,0 @@ -import tensorflow as tf -from tensorflow import keras -from tensorflow.keras import layers - -from huggingface_hub import from_pretrained_keras - -import numpy as np -import gradio as gr - -max_length = 5 -img_width = 200 -img_height = 50 - -model = from_pretrained_keras("keras-io/ocr-for-captcha", compile=False) - -prediction_model = keras.models.Model( - model.get_layer(name="image").input, model.get_layer(name="dense2").output -) - -with open("vocab.txt", "r") as f: - vocab = f.read().splitlines() - -# Mapping integers back to original characters -num_to_char = layers.StringLookup( - vocabulary=vocab, mask_token=None, invert=True -) - -def decode_batch_predictions(pred): - input_len = np.ones(pred.shape[0]) * pred.shape[1] - # Use greedy search. For complex tasks, you can use beam search - results = keras.backend.ctc_decode(pred, input_length=input_len, greedy=True)[0][0][ - :, :max_length - ] - # Iterate over the results and get back the text - output_text = [] - for res in results: - res = tf.strings.reduce_join(num_to_char(res)).numpy().decode("utf-8") - output_text.append(res) - return output_text - -def classify_image(img_path): - # 1. Read image - img = tf.io.read_file(img_path) - # 2. Decode and convert to grayscale - img = tf.io.decode_png(img, channels=1) - # 3. Convert to float32 in [0, 1] range - img = tf.image.convert_image_dtype(img, tf.float32) - # 4. Resize to the desired size - img = tf.image.resize(img, [img_height, img_width]) - # 5. Transpose the image because we want the time - # dimension to correspond to the width of the image. - img = tf.transpose(img, perm=[1, 0, 2]) - img = tf.expand_dims(img, axis=0) - preds = prediction_model.predict(img) - pred_text = decode_batch_predictions(preds) - return pred_text[0] - -image = gr.inputs.Image(type='filepath') -text = gr.outputs.Textbox() - -iface = gr.Interface(classify_image,image,text, - title="OCR for CAPTCHA", - description = "Keras Implementation of OCR model for reading captcha 🤖🦹🏻", - article = "Author: Anurag Singh. 
Based on the keras example from A_K_Nain", - examples = ["dd764.png","3p4nn.png"] -) - - -iface.launch() - diff --git a/spaces/kevinwang676/Bark-with-Voice-Cloning/training/training_prepare.py b/spaces/kevinwang676/Bark-with-Voice-Cloning/training/training_prepare.py deleted file mode 100644 index da4b30622d096fe636a0db358c43336eeef4d959..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/Bark-with-Voice-Cloning/training/training_prepare.py +++ /dev/null @@ -1,73 +0,0 @@ -import random -import uuid -import numpy -import os -import random -import fnmatch - -from tqdm.auto import tqdm -from scipy.io import wavfile - -from bark.generation import load_model, SAMPLE_RATE -from bark.api import semantic_to_waveform - -from bark import text_to_semantic -from bark.generation import load_model - -from training.data import load_books, random_split_chunk - -output = 'training/data/output' -output_wav = 'training/data/output_wav' - - -def prepare_semantics_from_text(num_generations): - loaded_data = load_books(True) - - print('Loading semantics model') - load_model(use_gpu=True, use_small=False, force_reload=False, model_type='text') - - if not os.path.isdir(output): - os.mkdir(output) - - loop = 1 - while 1: - filename = uuid.uuid4().hex + '.npy' - file_name = os.path.join(output, filename) - text = '' - while not len(text) > 0: - text = random_split_chunk(loaded_data) # Obtain a short chunk of text - text = text.strip() - print(f'{loop} Generating semantics for text:', text) - loop+=1 - semantics = text_to_semantic(text, temp=round(random.uniform(0.6, 0.8), ndigits=2)) - numpy.save(file_name, semantics) - - -def prepare_wavs_from_semantics(): - if not os.path.isdir(output): - raise Exception('No \'output\' folder, make sure you run create_data.py first!') - if not os.path.isdir(output_wav): - os.mkdir(output_wav) - - print('Loading coarse model') - load_model(use_gpu=True, use_small=False, force_reload=False, model_type='coarse') - print('Loading fine model') - load_model(use_gpu=True, use_small=False, force_reload=False, model_type='fine') - - files = fnmatch.filter(os.listdir(output), '*.npy') - current = 1 - total = len(files) - - for i, f in tqdm(enumerate(files), total=len(files)): - real_name = '.'.join(f.split('.')[:-1]) # Cut off the extension - file_name = os.path.join(output, f) - out_file = os.path.join(output_wav, f'{real_name}.wav') - if not os.path.isfile(out_file) and os.path.isfile(file_name): # Don't process files that have already been processed, to be able to continue previous generations - print(f'Processing ({i+1}/{total}) -> {f}') - wav = semantic_to_waveform(numpy.load(file_name), temp=round(random.uniform(0.6, 0.8), ndigits=2)) - # Change to PCM16 - # wav = (wav * 32767).astype(np.int16) - wavfile.write(out_file, SAMPLE_RATE, wav) - - print('Done!') - diff --git a/spaces/kevinwang676/VITS2-Mandarin/text/english.py b/spaces/kevinwang676/VITS2-Mandarin/text/english.py deleted file mode 100644 index 6817392ba8a9eb830351de89fb7afc5ad72f5e42..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/VITS2-Mandarin/text/english.py +++ /dev/null @@ -1,188 +0,0 @@ -""" from https://github.com/keithito/tacotron """ - -''' -Cleaners are transformations that run over the input text at both training and eval time. - -Cleaners can be selected by passing a comma-delimited list of cleaner names as the "cleaners" -hyperparameter. Some cleaners are English-specific. You'll typically want to use: - 1. "english_cleaners" for English text - 2. 
"transliteration_cleaners" for non-English text that can be transliterated to ASCII using - the Unidecode library (https://pypi.python.org/pypi/Unidecode) - 3. "basic_cleaners" if you do not want to transliterate (in this case, you should also update - the symbols in symbols.py to match your data). -''' - - -# Regular expression matching whitespace: - - -import re -import inflect -from unidecode import unidecode -import eng_to_ipa as ipa -_inflect = inflect.engine() -_comma_number_re = re.compile(r'([0-9][0-9\,]+[0-9])') -_decimal_number_re = re.compile(r'([0-9]+\.[0-9]+)') -_pounds_re = re.compile(r'£([0-9\,]*[0-9]+)') -_dollars_re = re.compile(r'\$([0-9\.\,]*[0-9]+)') -_ordinal_re = re.compile(r'[0-9]+(st|nd|rd|th)') -_number_re = re.compile(r'[0-9]+') - -# List of (regular expression, replacement) pairs for abbreviations: -_abbreviations = [(re.compile('\\b%s\\.' % x[0], re.IGNORECASE), x[1]) for x in [ - ('mrs', 'misess'), - ('mr', 'mister'), - ('dr', 'doctor'), - ('st', 'saint'), - ('co', 'company'), - ('jr', 'junior'), - ('maj', 'major'), - ('gen', 'general'), - ('drs', 'doctors'), - ('rev', 'reverend'), - ('lt', 'lieutenant'), - ('hon', 'honorable'), - ('sgt', 'sergeant'), - ('capt', 'captain'), - ('esq', 'esquire'), - ('ltd', 'limited'), - ('col', 'colonel'), - ('ft', 'fort'), -]] - - -# List of (ipa, lazy ipa) pairs: -_lazy_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('r', 'ɹ'), - ('æ', 'e'), - ('ɑ', 'a'), - ('ɔ', 'o'), - ('ð', 'z'), - ('θ', 's'), - ('ɛ', 'e'), - ('ɪ', 'i'), - ('ʊ', 'u'), - ('ʒ', 'ʥ'), - ('ʤ', 'ʥ'), - ('ˈ', '↓'), -]] - -# List of (ipa, lazy ipa2) pairs: -_lazy_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('r', 'ɹ'), - ('ð', 'z'), - ('θ', 's'), - ('ʒ', 'ʑ'), - ('ʤ', 'dʑ'), - ('ˈ', '↓'), -]] - -# List of (ipa, ipa2) pairs -_ipa_to_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('r', 'ɹ'), - ('ʤ', 'dʒ'), - ('ʧ', 'tʃ') -]] - - -def expand_abbreviations(text): - for regex, replacement in _abbreviations: - text = re.sub(regex, replacement, text) - return text - - -def collapse_whitespace(text): - return re.sub(r'\s+', ' ', text) - - -def _remove_commas(m): - return m.group(1).replace(',', '') - - -def _expand_decimal_point(m): - return m.group(1).replace('.', ' point ') - - -def _expand_dollars(m): - match = m.group(1) - parts = match.split('.') - if len(parts) > 2: - return match + ' dollars' # Unexpected format - dollars = int(parts[0]) if parts[0] else 0 - cents = int(parts[1]) if len(parts) > 1 and parts[1] else 0 - if dollars and cents: - dollar_unit = 'dollar' if dollars == 1 else 'dollars' - cent_unit = 'cent' if cents == 1 else 'cents' - return '%s %s, %s %s' % (dollars, dollar_unit, cents, cent_unit) - elif dollars: - dollar_unit = 'dollar' if dollars == 1 else 'dollars' - return '%s %s' % (dollars, dollar_unit) - elif cents: - cent_unit = 'cent' if cents == 1 else 'cents' - return '%s %s' % (cents, cent_unit) - else: - return 'zero dollars' - - -def _expand_ordinal(m): - return _inflect.number_to_words(m.group(0)) - - -def _expand_number(m): - num = int(m.group(0)) - if num > 1000 and num < 3000: - if num == 2000: - return 'two thousand' - elif num > 2000 and num < 2010: - return 'two thousand ' + _inflect.number_to_words(num % 100) - elif num % 100 == 0: - return _inflect.number_to_words(num // 100) + ' hundred' - else: - return _inflect.number_to_words(num, andword='', zero='oh', group=2).replace(', ', ' ') - else: - return _inflect.number_to_words(num, andword='') - - -def normalize_numbers(text): - text = re.sub(_comma_number_re, 
_remove_commas, text) - text = re.sub(_pounds_re, r'\1 pounds', text) - text = re.sub(_dollars_re, _expand_dollars, text) - text = re.sub(_decimal_number_re, _expand_decimal_point, text) - text = re.sub(_ordinal_re, _expand_ordinal, text) - text = re.sub(_number_re, _expand_number, text) - return text - - -def mark_dark_l(text): - return re.sub(r'l([^aeiouæɑɔəɛɪʊ ]*(?: |$))', lambda x: 'ɫ'+x.group(1), text) - - -def english_to_ipa(text): - text = unidecode(text).lower() - text = expand_abbreviations(text) - text = normalize_numbers(text) - phonemes = ipa.convert(text) - phonemes = collapse_whitespace(phonemes) - return phonemes - - -def english_to_lazy_ipa(text): - text = english_to_ipa(text) - for regex, replacement in _lazy_ipa: - text = re.sub(regex, replacement, text) - return text - - -def english_to_ipa2(text): - text = english_to_ipa(text) - text = mark_dark_l(text) - for regex, replacement in _ipa_to_ipa2: - text = re.sub(regex, replacement, text) - return text.replace('...', '…') - - -def english_to_lazy_ipa2(text): - text = english_to_ipa(text) - for regex, replacement in _lazy_ipa2: - text = re.sub(regex, replacement, text) - return text diff --git a/spaces/kevinwang676/VoiceChangers/src/face3d/models/arcface_torch/train.py b/spaces/kevinwang676/VoiceChangers/src/face3d/models/arcface_torch/train.py deleted file mode 100644 index 55eca2d0ad9463415970e09bccab8b722e496704..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/VoiceChangers/src/face3d/models/arcface_torch/train.py +++ /dev/null @@ -1,141 +0,0 @@ -import argparse -import logging -import os - -import torch -import torch.distributed as dist -import torch.nn.functional as F -import torch.utils.data.distributed -from torch.nn.utils import clip_grad_norm_ - -import losses -from backbones import get_model -from dataset import MXFaceDataset, SyntheticDataset, DataLoaderX -from partial_fc import PartialFC -from utils.utils_amp import MaxClipGradScaler -from utils.utils_callbacks import CallBackVerification, CallBackLogging, CallBackModelCheckpoint -from utils.utils_config import get_config -from utils.utils_logging import AverageMeter, init_logging - - -def main(args): - cfg = get_config(args.config) - try: - world_size = int(os.environ['WORLD_SIZE']) - rank = int(os.environ['RANK']) - dist.init_process_group('nccl') - except KeyError: - world_size = 1 - rank = 0 - dist.init_process_group(backend='nccl', init_method="tcp://127.0.0.1:12584", rank=rank, world_size=world_size) - - local_rank = args.local_rank - torch.cuda.set_device(local_rank) - os.makedirs(cfg.output, exist_ok=True) - init_logging(rank, cfg.output) - - if cfg.rec == "synthetic": - train_set = SyntheticDataset(local_rank=local_rank) - else: - train_set = MXFaceDataset(root_dir=cfg.rec, local_rank=local_rank) - - train_sampler = torch.utils.data.distributed.DistributedSampler(train_set, shuffle=True) - train_loader = DataLoaderX( - local_rank=local_rank, dataset=train_set, batch_size=cfg.batch_size, - sampler=train_sampler, num_workers=2, pin_memory=True, drop_last=True) - backbone = get_model(cfg.network, dropout=0.0, fp16=cfg.fp16, num_features=cfg.embedding_size).to(local_rank) - - if cfg.resume: - try: - backbone_pth = os.path.join(cfg.output, "backbone.pth") - backbone.load_state_dict(torch.load(backbone_pth, map_location=torch.device(local_rank))) - if rank == 0: - logging.info("backbone resume successfully!") - except (FileNotFoundError, KeyError, IndexError, RuntimeError): - if rank == 0: - logging.info("resume fail, backbone init 
successfully!") - - backbone = torch.nn.parallel.DistributedDataParallel( - module=backbone, broadcast_buffers=False, device_ids=[local_rank]) - backbone.train() - margin_softmax = losses.get_loss(cfg.loss) - module_partial_fc = PartialFC( - rank=rank, local_rank=local_rank, world_size=world_size, resume=cfg.resume, - batch_size=cfg.batch_size, margin_softmax=margin_softmax, num_classes=cfg.num_classes, - sample_rate=cfg.sample_rate, embedding_size=cfg.embedding_size, prefix=cfg.output) - - opt_backbone = torch.optim.SGD( - params=[{'params': backbone.parameters()}], - lr=cfg.lr / 512 * cfg.batch_size * world_size, - momentum=0.9, weight_decay=cfg.weight_decay) - opt_pfc = torch.optim.SGD( - params=[{'params': module_partial_fc.parameters()}], - lr=cfg.lr / 512 * cfg.batch_size * world_size, - momentum=0.9, weight_decay=cfg.weight_decay) - - num_image = len(train_set) - total_batch_size = cfg.batch_size * world_size - cfg.warmup_step = num_image // total_batch_size * cfg.warmup_epoch - cfg.total_step = num_image // total_batch_size * cfg.num_epoch - - def lr_step_func(current_step): - cfg.decay_step = [x * num_image // total_batch_size for x in cfg.decay_epoch] - if current_step < cfg.warmup_step: - return current_step / cfg.warmup_step - else: - return 0.1 ** len([m for m in cfg.decay_step if m <= current_step]) - - scheduler_backbone = torch.optim.lr_scheduler.LambdaLR( - optimizer=opt_backbone, lr_lambda=lr_step_func) - scheduler_pfc = torch.optim.lr_scheduler.LambdaLR( - optimizer=opt_pfc, lr_lambda=lr_step_func) - - for key, value in cfg.items(): - num_space = 25 - len(key) - logging.info(": " + key + " " * num_space + str(value)) - - val_target = cfg.val_targets - callback_verification = CallBackVerification(2000, rank, val_target, cfg.rec) - callback_logging = CallBackLogging(50, rank, cfg.total_step, cfg.batch_size, world_size, None) - callback_checkpoint = CallBackModelCheckpoint(rank, cfg.output) - - loss = AverageMeter() - start_epoch = 0 - global_step = 0 - grad_amp = MaxClipGradScaler(cfg.batch_size, 128 * cfg.batch_size, growth_interval=100) if cfg.fp16 else None - for epoch in range(start_epoch, cfg.num_epoch): - train_sampler.set_epoch(epoch) - for step, (img, label) in enumerate(train_loader): - global_step += 1 - features = F.normalize(backbone(img)) - x_grad, loss_v = module_partial_fc.forward_backward(label, features, opt_pfc) - if cfg.fp16: - features.backward(grad_amp.scale(x_grad)) - grad_amp.unscale_(opt_backbone) - clip_grad_norm_(backbone.parameters(), max_norm=5, norm_type=2) - grad_amp.step(opt_backbone) - grad_amp.update() - else: - features.backward(x_grad) - clip_grad_norm_(backbone.parameters(), max_norm=5, norm_type=2) - opt_backbone.step() - - opt_pfc.step() - module_partial_fc.update() - opt_backbone.zero_grad() - opt_pfc.zero_grad() - loss.update(loss_v, 1) - callback_logging(global_step, loss, epoch, cfg.fp16, scheduler_backbone.get_last_lr()[0], grad_amp) - callback_verification(global_step, backbone) - scheduler_backbone.step() - scheduler_pfc.step() - callback_checkpoint(global_step, backbone, module_partial_fc) - dist.destroy_process_group() - - -if __name__ == "__main__": - torch.backends.cudnn.benchmark = True - parser = argparse.ArgumentParser(description='PyTorch ArcFace Training') - parser.add_argument('config', type=str, help='py config file') - parser.add_argument('--local_rank', type=int, default=0, help='local_rank') - main(parser.parse_args()) diff --git a/spaces/kevinwang676/vits-fast-finetuning-pcr/text/sanskrit.py 
b/spaces/kevinwang676/vits-fast-finetuning-pcr/text/sanskrit.py deleted file mode 100644 index 0223aaac384a2f850f5bc20651fc18eb964607d0..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/vits-fast-finetuning-pcr/text/sanskrit.py +++ /dev/null @@ -1,62 +0,0 @@ -import re -from indic_transliteration import sanscript - - -# List of (iast, ipa) pairs: -_iast_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('a', 'ə'), - ('ā', 'aː'), - ('ī', 'iː'), - ('ū', 'uː'), - ('ṛ', 'ɹ`'), - ('ṝ', 'ɹ`ː'), - ('ḷ', 'l`'), - ('ḹ', 'l`ː'), - ('e', 'eː'), - ('o', 'oː'), - ('k', 'k⁼'), - ('k⁼h', 'kʰ'), - ('g', 'g⁼'), - ('g⁼h', 'gʰ'), - ('ṅ', 'ŋ'), - ('c', 'ʧ⁼'), - ('ʧ⁼h', 'ʧʰ'), - ('j', 'ʥ⁼'), - ('ʥ⁼h', 'ʥʰ'), - ('ñ', 'n^'), - ('ṭ', 't`⁼'), - ('t`⁼h', 't`ʰ'), - ('ḍ', 'd`⁼'), - ('d`⁼h', 'd`ʰ'), - ('ṇ', 'n`'), - ('t', 't⁼'), - ('t⁼h', 'tʰ'), - ('d', 'd⁼'), - ('d⁼h', 'dʰ'), - ('p', 'p⁼'), - ('p⁼h', 'pʰ'), - ('b', 'b⁼'), - ('b⁼h', 'bʰ'), - ('y', 'j'), - ('ś', 'ʃ'), - ('ṣ', 's`'), - ('r', 'ɾ'), - ('l̤', 'l`'), - ('h', 'ɦ'), - ("'", ''), - ('~', '^'), - ('ṃ', '^') -]] - - -def devanagari_to_ipa(text): - text = text.replace('ॐ', 'ओम्') - text = re.sub(r'\s*।\s*$', '.', text) - text = re.sub(r'\s*।\s*', ', ', text) - text = re.sub(r'\s*॥', '.', text) - text = sanscript.transliterate(text, sanscript.DEVANAGARI, sanscript.IAST) - for regex, replacement in _iast_to_ipa: - text = re.sub(regex, replacement, text) - text = re.sub('(.)[`ː]*ḥ', lambda x: x.group(0) - [:-1]+'h'+x.group(1)+'*', text) - return text diff --git a/spaces/khanrc/tcl/app.py b/spaces/khanrc/tcl/app.py deleted file mode 100644 index b87ed47668ab4748597e3d05c73f2aa7367825eb..0000000000000000000000000000000000000000 --- a/spaces/khanrc/tcl/app.py +++ /dev/null @@ -1,188 +0,0 @@ -# burrow some code from https://huggingface.co/spaces/xvjiarui/ODISE/tree/main -import os -import sys -from importlib.util import find_spec - -print("Prepare demo ...") -if not os.path.exists("tcl.pth"): - print("Download TCL checkpoint ...") - os.system("wget -q https://github.com/kakaobrain/tcl/releases/download/v1.0.0/tcl.pth") - -if not (find_spec("mmcv") and find_spec("mmseg")): - print("Install mmcv & mmseg ...") - os.system("mim install mmcv-full==1.6.2 mmsegmentation==0.27.0") - -if not find_spec("detectron2"): - print("Install detectron ...") - os.system("pip install git+https://github.com/facebookresearch/detectron2.git") - -sys.path.insert(0, "./tcl/") - -print(" -- done.") - -import json -from contextlib import ExitStack -import gradio as gr -import torch - -from detectron2.evaluation import inference_context - -from predictor import build_demo_model - - -model = build_demo_model() -if torch.cuda.is_available(): - device = torch.device("cuda") -else: - device = torch.device("cpu") - -print(f"device: {device}") -model.to(device) - - -title = "TCL: Text-grounded Contrastive Learning" -title2 = "for Unsupervised Open-world Semantic Segmentation" -title = title + "
    " + title2 -description_head = """ -

    Paper | Code

    -""" - -description_body = f""" -Gradio Demo for "Learning to Generate Text-grounded Mask for Open-world Semantic Segmentation from Only Image-Text Pairs". - -Explore TCL's capability to perform open-world semantic segmentation **without any mask annotations**. Choose from provided examples or upload your own image. Use the query format `bg; class1; class2; ...`, with `;` as the separator, and the `bg` background query being optional (as in the third example). - -This demo highlights the strengths and limitations of unsupervised open-world segmentation methods. Although TCL can handle arbitrary concepts, accurately capturing object boundaries without mask annotation remains a challenge. -""" - -if device.type == "cpu": - description_body += f"\nThis demo is running on a free CPU device. Inference times may take around 5-10 seconds." - -description = description_head + description_body - -article = """ -

    Learning to Generate Text-grounded Mask for Open-world Semantic Segmentation from Only Image-Text Pairs | Github Repo

    -""" - -voc_examples = [ - ["examples/voc_59.jpg", "bg; cat; dog"], - ["examples/voc_97.jpg", "bg; car"], - ["examples/voc_266.jpg", "bg; dog"], - ["examples/voc_294.jpg", "bg; bird"], - ["examples/voc_864.jpg", "bg; cat"], - ["examples/voc_1029.jpg", "bg; bus"], -] - -examples = [ - [ - "examples/dogs.jpg", - "bg; corgi; shepherd", - ], - [ - "examples/dogs.jpg", - "bg; dog", - ], - [ - "examples/dogs.jpg", - "corgi; shepherd; lawn, trees, and fallen leaves", - ], - [ - "examples/banana.jpg", - "bg; banana", - ], - [ - "examples/banana.jpg", - "bg; red banana; green banana; yellow banana", - ], - [ - "examples/frodo_sam_gollum.jpg", - "bg; frodo; gollum; samwise", - ], - [ - "examples/frodo_sam_gollum.jpg", - "bg; rocks; monster; boys with cape" - ], - [ - "examples/mb_mj.jpg", - "bg; marlon brando; michael jackson", - ], -] - -examples = examples + voc_examples - - -def inference(img, query): - query = query.split(";") - query = [v.strip() for v in query] - - with ExitStack() as stack: - stack.enter_context(inference_context(model)) - stack.enter_context(torch.no_grad()) - - if device.type == "cuda": - stack.enter_context(torch.autocast("cuda")) - - visualized_output = model.forward_vis(img, query) - - return visualized_output - - -theme = gr.themes.Soft(text_size=gr.themes.sizes.text_md, primary_hue="teal") -with gr.Blocks(title=title, theme=theme) as demo: - gr.Markdown("

    " + title + "

    ") - # gr.Markdown("

    " + title2 + "

    ") - gr.Markdown(description) - input_components = [] - output_components = [] - - with gr.Row(): - with gr.Column(scale=4, variant="panel"): - output_image_gr = gr.outputs.Image(label="Segmentation", type="pil").style(height=300) - output_components.append(output_image_gr) - - with gr.Row(): - input_gr = gr.inputs.Image(type="pil") - query_gr = gr.inputs.Textbox(default="", label="Query") - input_components.extend([input_gr, query_gr]) - - with gr.Row(): - clear_btn = gr.Button("Clear") - submit_btn = gr.Button("Submit", variant="primary") - - inputs = [c for c in input_components if not isinstance(c, gr.State)] - outputs = [c for c in output_components if not isinstance(c, gr.State)] - with gr.Column(scale=2): - examples_handler = gr.Examples( - examples=examples, - inputs=inputs, - outputs=outputs, - fn=inference, - # cache_examples=True, - examples_per_page=7, - ) - - gr.Markdown(article) - - submit_btn.click( - inference, - input_components, - output_components, - scroll_to_output=True, - ) - - clear_btn.click( - None, - [], - (input_components + output_components), - _js=f"""() => {json.dumps( - [component.cleared_value if hasattr(component, "cleared_value") else None - for component in input_components + output_components] + ( - [gr.Column.update(visible=True)] - ) - + ([gr.Column.update(visible=False)]) - )} - """, - ) - -demo.launch() -# demo.launch(server_name="0.0.0.0", server_port=9718) diff --git a/spaces/kirch/Text2Video-Zero/annotator/uniformer/configs/_base_/models/encnet_r50-d8.py b/spaces/kirch/Text2Video-Zero/annotator/uniformer/configs/_base_/models/encnet_r50-d8.py deleted file mode 100644 index be777123a886503172a95fe0719e956a147bbd68..0000000000000000000000000000000000000000 --- a/spaces/kirch/Text2Video-Zero/annotator/uniformer/configs/_base_/models/encnet_r50-d8.py +++ /dev/null @@ -1,48 +0,0 @@ -# model settings -norm_cfg = dict(type='SyncBN', requires_grad=True) -model = dict( - type='EncoderDecoder', - pretrained='open-mmlab://resnet50_v1c', - backbone=dict( - type='ResNetV1c', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - dilations=(1, 1, 2, 4), - strides=(1, 2, 1, 1), - norm_cfg=norm_cfg, - norm_eval=False, - style='pytorch', - contract_dilation=True), - decode_head=dict( - type='EncHead', - in_channels=[512, 1024, 2048], - in_index=(1, 2, 3), - channels=512, - num_codes=32, - use_se_loss=True, - add_lateral=False, - dropout_ratio=0.1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0), - loss_se_decode=dict( - type='CrossEntropyLoss', use_sigmoid=True, loss_weight=0.2)), - auxiliary_head=dict( - type='FCNHead', - in_channels=1024, - in_index=2, - channels=256, - num_convs=1, - concat_input=False, - dropout_ratio=0.1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)), - # model training and testing settings - train_cfg=dict(), - test_cfg=dict(mode='whole')) diff --git a/spaces/klcqy/anime-ai-detect/README.md b/spaces/klcqy/anime-ai-detect/README.md deleted file mode 100644 index 952c183fd69ccb1664b4236b6132fc6d0358c7de..0000000000000000000000000000000000000000 --- a/spaces/klcqy/anime-ai-detect/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Anime Ai Detect -emoji: 🤖 -colorFrom: green -colorTo: purple -sdk: gradio -sdk_version: 3.15.0 -app_file: app.py -pinned: true -duplicated_from: saltacc/anime-ai-detect ---- - -Check out the configuration reference 
at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/koajoel/PolyFormer/fairseq/examples/paraphraser/README.md b/spaces/koajoel/PolyFormer/fairseq/examples/paraphraser/README.md deleted file mode 100644 index 3810311f30f99f0a07fd8e5d3723bffeba9948c3..0000000000000000000000000000000000000000 --- a/spaces/koajoel/PolyFormer/fairseq/examples/paraphraser/README.md +++ /dev/null @@ -1,46 +0,0 @@ -# Paraphrasing with round-trip translation and mixture of experts - -Machine translation models can be used to paraphrase text by translating it to -an intermediate language and back (round-trip translation). - -This example shows how to paraphrase text by first passing it to an -English-French translation model, followed by a French-English [mixture of -experts translation model](/examples/translation_moe). - -##### 0. Setup - -Clone fairseq from source and install necessary dependencies: -```bash -git clone https://github.com/pytorch/fairseq.git -cd fairseq -pip install --editable . -pip install sacremoses sentencepiece -``` - -##### 1. Download models -```bash -wget https://dl.fbaipublicfiles.com/fairseq/models/paraphraser.en-fr.tar.gz -wget https://dl.fbaipublicfiles.com/fairseq/models/paraphraser.fr-en.hMoEup.tar.gz -tar -xzvf paraphraser.en-fr.tar.gz -tar -xzvf paraphraser.fr-en.hMoEup.tar.gz -``` - -##### 2. Paraphrase -```bash -python examples/paraphraser/paraphrase.py \ - --en2fr paraphraser.en-fr \ - --fr2en paraphraser.fr-en.hMoEup -# Example input: -# The new date for the Games, postponed for a year in response to the coronavirus pandemic, gives athletes time to recalibrate their training schedules. -# Example outputs: -# Delayed one year in response to the coronavirus pandemic, the new date of the Games gives athletes time to rebalance their training schedule. -# The new date of the Games, which was rescheduled one year in response to the coronavirus (CV) pandemic, gives athletes time to rebalance their training schedule. -# The new date of the Games, postponed one year in response to the coronavirus pandemic, provides athletes with time to rebalance their training schedule. -# The Games' new date, postponed one year in response to the coronavirus pandemic, gives athletes time to rebalance their training schedule. -# The new Games date, postponed one year in response to the coronavirus pandemic, gives the athletes time to rebalance their training schedule. -# The new date of the Games, which was postponed one year in response to the coronavirus pandemic, gives the athletes time to rebalance their training schedule. -# The new date of the Games, postponed one year in response to the coronavirus pandemic, gives athletes time to rebalance their training schedule. -# The new date of the Games, postponed one year in response to the coronavirus pandemic, gives athletes time to re-balance their training schedule. -# The new date of the Games, postponed one year in response to the coronavirus pandemic, gives the athletes time to rebalance their schedule of training. -# The new date of the Games, postponed one year in response to the pandemic of coronavirus, gives the athletes time to rebalance their training schedule. 
-``` diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/click/__init__.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/click/__init__.py deleted file mode 100644 index e3ef423b61f87b03d689ffc6d56fc30495a30228..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/click/__init__.py +++ /dev/null @@ -1,73 +0,0 @@ -""" -Click is a simple Python module inspired by the stdlib optparse to make -writing command line scripts fun. Unlike other modules, it's based -around a simple API that does not come with too much magic and is -composable. -""" -from .core import Argument as Argument -from .core import BaseCommand as BaseCommand -from .core import Command as Command -from .core import CommandCollection as CommandCollection -from .core import Context as Context -from .core import Group as Group -from .core import MultiCommand as MultiCommand -from .core import Option as Option -from .core import Parameter as Parameter -from .decorators import argument as argument -from .decorators import command as command -from .decorators import confirmation_option as confirmation_option -from .decorators import group as group -from .decorators import help_option as help_option -from .decorators import make_pass_decorator as make_pass_decorator -from .decorators import option as option -from .decorators import pass_context as pass_context -from .decorators import pass_obj as pass_obj -from .decorators import password_option as password_option -from .decorators import version_option as version_option -from .exceptions import Abort as Abort -from .exceptions import BadArgumentUsage as BadArgumentUsage -from .exceptions import BadOptionUsage as BadOptionUsage -from .exceptions import BadParameter as BadParameter -from .exceptions import ClickException as ClickException -from .exceptions import FileError as FileError -from .exceptions import MissingParameter as MissingParameter -from .exceptions import NoSuchOption as NoSuchOption -from .exceptions import UsageError as UsageError -from .formatting import HelpFormatter as HelpFormatter -from .formatting import wrap_text as wrap_text -from .globals import get_current_context as get_current_context -from .parser import OptionParser as OptionParser -from .termui import clear as clear -from .termui import confirm as confirm -from .termui import echo_via_pager as echo_via_pager -from .termui import edit as edit -from .termui import getchar as getchar -from .termui import launch as launch -from .termui import pause as pause -from .termui import progressbar as progressbar -from .termui import prompt as prompt -from .termui import secho as secho -from .termui import style as style -from .termui import unstyle as unstyle -from .types import BOOL as BOOL -from .types import Choice as Choice -from .types import DateTime as DateTime -from .types import File as File -from .types import FLOAT as FLOAT -from .types import FloatRange as FloatRange -from .types import INT as INT -from .types import IntRange as IntRange -from .types import ParamType as ParamType -from .types import Path as Path -from .types import STRING as STRING -from .types import Tuple as Tuple -from .types import UNPROCESSED as UNPROCESSED -from .types import UUID as UUID -from .utils import echo as echo -from .utils import format_filename as format_filename -from .utils import get_app_dir as get_app_dir -from .utils import get_binary_stream as get_binary_stream -from .utils import 
get_text_stream as get_text_stream -from .utils import open_file as open_file - -__version__ = "8.1.3" diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/S_I_N_G_.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/S_I_N_G_.py deleted file mode 100644 index 7420da7e5dcec81b835ab0e8e2c775dbce860cbd..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/S_I_N_G_.py +++ /dev/null @@ -1,93 +0,0 @@ -from fontTools.misc import sstruct -from fontTools.misc.textTools import bytechr, byteord, tobytes, tostr, safeEval -from . import DefaultTable - -SINGFormat = """ - > # big endian - tableVersionMajor: H - tableVersionMinor: H - glyphletVersion: H - permissions: h - mainGID: H - unitsPerEm: H - vertAdvance: h - vertOrigin: h - uniqueName: 28s - METAMD5: 16s - nameLength: 1s -""" -# baseGlyphName is a byte string which follows the record above. - - -class table_S_I_N_G_(DefaultTable.DefaultTable): - - dependencies = [] - - def decompile(self, data, ttFont): - dummy, rest = sstruct.unpack2(SINGFormat, data, self) - self.uniqueName = self.decompileUniqueName(self.uniqueName) - self.nameLength = byteord(self.nameLength) - assert len(rest) == self.nameLength - self.baseGlyphName = tostr(rest) - - rawMETAMD5 = self.METAMD5 - self.METAMD5 = "[" + hex(byteord(self.METAMD5[0])) - for char in rawMETAMD5[1:]: - self.METAMD5 = self.METAMD5 + ", " + hex(byteord(char)) - self.METAMD5 = self.METAMD5 + "]" - - def decompileUniqueName(self, data): - name = "" - for char in data: - val = byteord(char) - if val == 0: - break - if (val > 31) or (val < 128): - name += chr(val) - else: - octString = oct(val) - if len(octString) > 3: - octString = octString[1:] # chop off that leading zero. 
- elif len(octString) < 3: - octString.zfill(3) - name += "\\" + octString - return name - - def compile(self, ttFont): - d = self.__dict__.copy() - d["nameLength"] = bytechr(len(self.baseGlyphName)) - d["uniqueName"] = self.compilecompileUniqueName(self.uniqueName, 28) - METAMD5List = eval(self.METAMD5) - d["METAMD5"] = b"" - for val in METAMD5List: - d["METAMD5"] += bytechr(val) - assert len(d["METAMD5"]) == 16, "Failed to pack 16 byte MD5 hash in SING table" - data = sstruct.pack(SINGFormat, d) - data = data + tobytes(self.baseGlyphName) - return data - - def compilecompileUniqueName(self, name, length): - nameLen = len(name) - if length <= nameLen: - name = name[: length - 1] + "\000" - else: - name += (nameLen - length) * "\000" - return name - - def toXML(self, writer, ttFont): - writer.comment("Most of this table will be recalculated by the compiler") - writer.newline() - formatstring, names, fixes = sstruct.getformat(SINGFormat) - for name in names: - value = getattr(self, name) - writer.simpletag(name, value=value) - writer.newline() - writer.simpletag("baseGlyphName", value=self.baseGlyphName) - writer.newline() - - def fromXML(self, name, attrs, content, ttFont): - value = attrs["value"] - if name in ["uniqueName", "METAMD5", "baseGlyphName"]: - setattr(self, name, value) - else: - setattr(self, name, safeEval(value)) diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/_type1font.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/_type1font.py deleted file mode 100644 index 0413cb0016a0973e1564bff299acb6a64a46ea00..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/_type1font.py +++ /dev/null @@ -1,877 +0,0 @@ -""" -A class representing a Type 1 font. - -This version reads pfa and pfb files and splits them for embedding in -pdf files. It also supports SlantFont and ExtendFont transformations, -similarly to pdfTeX and friends. There is no support yet for subsetting. - -Usage:: - - font = Type1Font(filename) - clear_part, encrypted_part, finale = font.parts - slanted_font = font.transform({'slant': 0.167}) - extended_font = font.transform({'extend': 1.2}) - -Sources: - -* Adobe Technical Note #5040, Supporting Downloadable PostScript - Language Fonts. - -* Adobe Type 1 Font Format, Adobe Systems Incorporated, third printing, - v1.1, 1993. ISBN 0-201-57044-0. -""" - -import binascii -import functools -import logging -import re -import string -import struct - -import numpy as np - -from matplotlib.cbook import _format_approx -from . import _api - -_log = logging.getLogger(__name__) - - -class _Token: - """ - A token in a PostScript stream. - - Attributes - ---------- - pos : int - Position, i.e. offset from the beginning of the data. - raw : str - Raw text of the token. - kind : str - Description of the token (for debugging or testing). - """ - __slots__ = ('pos', 'raw') - kind = '?' 
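# Illustrative note, not part of the original module: the _Token subclasses that
# follow override ``kind`` plus the ``is_*``/``value()`` helpers, so a parser can
# dispatch on token behaviour instead of re-inspecting raw text. A minimal sketch
# of the intended usage, using only names defined below in this file:
#
#     tok = _NameToken(0, '/FontName')
#     tok.kind             # 'name'
#     tok.is_slash_name()  # True  (raw starts with '/')
#     tok.value()          # 'FontName'  (leading slash stripped)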
- - def __init__(self, pos, raw): - _log.debug('type1font._Token %s at %d: %r', self.kind, pos, raw) - self.pos = pos - self.raw = raw - - def __str__(self): - return f"<{self.kind} {self.raw} @{self.pos}>" - - def endpos(self): - """Position one past the end of the token""" - return self.pos + len(self.raw) - - def is_keyword(self, *names): - """Is this a name token with one of the names?""" - return False - - def is_slash_name(self): - """Is this a name token that starts with a slash?""" - return False - - def is_delim(self): - """Is this a delimiter token?""" - return False - - def is_number(self): - """Is this a number token?""" - return False - - def value(self): - return self.raw - - -class _NameToken(_Token): - kind = 'name' - - def is_slash_name(self): - return self.raw.startswith('/') - - def value(self): - return self.raw[1:] - - -class _BooleanToken(_Token): - kind = 'boolean' - - def value(self): - return self.raw == 'true' - - -class _KeywordToken(_Token): - kind = 'keyword' - - def is_keyword(self, *names): - return self.raw in names - - -class _DelimiterToken(_Token): - kind = 'delimiter' - - def is_delim(self): - return True - - def opposite(self): - return {'[': ']', ']': '[', - '{': '}', '}': '{', - '<<': '>>', '>>': '<<' - }[self.raw] - - -class _WhitespaceToken(_Token): - kind = 'whitespace' - - -class _StringToken(_Token): - kind = 'string' - _escapes_re = re.compile(r'\\([\\()nrtbf]|[0-7]{1,3})') - _replacements = {'\\': '\\', '(': '(', ')': ')', 'n': '\n', - 'r': '\r', 't': '\t', 'b': '\b', 'f': '\f'} - _ws_re = re.compile('[\0\t\r\f\n ]') - - @classmethod - def _escape(cls, match): - group = match.group(1) - try: - return cls._replacements[group] - except KeyError: - return chr(int(group, 8)) - - @functools.lru_cache() - def value(self): - if self.raw[0] == '(': - return self._escapes_re.sub(self._escape, self.raw[1:-1]) - else: - data = self._ws_re.sub('', self.raw[1:-1]) - if len(data) % 2 == 1: - data += '0' - return binascii.unhexlify(data) - - -class _BinaryToken(_Token): - kind = 'binary' - - def value(self): - return self.raw[1:] - - -class _NumberToken(_Token): - kind = 'number' - - def is_number(self): - return True - - def value(self): - if '.' not in self.raw: - return int(self.raw) - else: - return float(self.raw) - - -def _tokenize(data: bytes, skip_ws: bool): - """ - A generator that produces _Token instances from Type-1 font code. - - The consumer of the generator may send an integer to the tokenizer to - indicate that the next token should be _BinaryToken of the given length. - - Parameters - ---------- - data : bytes - The data of the font to tokenize. - - skip_ws : bool - If true, the generator will drop any _WhitespaceTokens from the output. 
- """ - - text = data.decode('ascii', 'replace') - whitespace_or_comment_re = re.compile(r'[\0\t\r\f\n ]+|%[^\r\n]*') - token_re = re.compile(r'/{0,2}[^]\0\t\r\f\n ()<>{}/%[]+') - instring_re = re.compile(r'[()\\]') - hex_re = re.compile(r'^<[0-9a-fA-F\0\t\r\f\n ]*>$') - oct_re = re.compile(r'[0-7]{1,3}') - pos = 0 - next_binary = None - - while pos < len(text): - if next_binary is not None: - n = next_binary - next_binary = (yield _BinaryToken(pos, data[pos:pos+n])) - pos += n - continue - match = whitespace_or_comment_re.match(text, pos) - if match: - if not skip_ws: - next_binary = (yield _WhitespaceToken(pos, match.group())) - pos = match.end() - elif text[pos] == '(': - # PostScript string rules: - # - parentheses must be balanced - # - backslashes escape backslashes and parens - # - also codes \n\r\t\b\f and octal escapes are recognized - # - other backslashes do not escape anything - start = pos - pos += 1 - depth = 1 - while depth: - match = instring_re.search(text, pos) - if match is None: - raise ValueError( - f'Unterminated string starting at {start}') - pos = match.end() - if match.group() == '(': - depth += 1 - elif match.group() == ')': - depth -= 1 - else: # a backslash - char = text[pos] - if char in r'\()nrtbf': - pos += 1 - else: - octal = oct_re.match(text, pos) - if octal: - pos = octal.end() - else: - pass # non-escaping backslash - next_binary = (yield _StringToken(start, text[start:pos])) - elif text[pos:pos + 2] in ('<<', '>>'): - next_binary = (yield _DelimiterToken(pos, text[pos:pos + 2])) - pos += 2 - elif text[pos] == '<': - start = pos - try: - pos = text.index('>', pos) + 1 - except ValueError as e: - raise ValueError(f'Unterminated hex string starting at {start}' - ) from e - if not hex_re.match(text[start:pos]): - raise ValueError(f'Malformed hex string starting at {start}') - next_binary = (yield _StringToken(pos, text[start:pos])) - else: - match = token_re.match(text, pos) - if match: - raw = match.group() - if raw.startswith('/'): - next_binary = (yield _NameToken(pos, raw)) - elif match.group() in ('true', 'false'): - next_binary = (yield _BooleanToken(pos, raw)) - else: - try: - float(raw) - next_binary = (yield _NumberToken(pos, raw)) - except ValueError: - next_binary = (yield _KeywordToken(pos, raw)) - pos = match.end() - else: - next_binary = (yield _DelimiterToken(pos, text[pos])) - pos += 1 - - -class _BalancedExpression(_Token): - pass - - -def _expression(initial, tokens, data): - """ - Consume some number of tokens and return a balanced PostScript expression. - - Parameters - ---------- - initial : _Token - The token that triggered parsing a balanced expression. - tokens : iterator of _Token - Following tokens. - data : bytes - Underlying data that the token positions point to. - - Returns - ------- - _BalancedExpression - """ - delim_stack = [] - token = initial - while True: - if token.is_delim(): - if token.raw in ('[', '{'): - delim_stack.append(token) - elif token.raw in (']', '}'): - if not delim_stack: - raise RuntimeError(f"unmatched closing token {token}") - match = delim_stack.pop() - if match.raw != token.opposite(): - raise RuntimeError( - f"opening token {match} closed by {token}" - ) - if not delim_stack: - break - else: - raise RuntimeError(f'unknown delimiter {token}') - elif not delim_stack: - break - token = next(tokens) - return _BalancedExpression( - initial.pos, - data[initial.pos:token.endpos()].decode('ascii', 'replace') - ) - - -class Type1Font: - """ - A class representing a Type-1 font, for use by backends. 
- - Attributes - ---------- - parts : tuple - A 3-tuple of the cleartext part, the encrypted part, and the finale of - zeros. - - decrypted : bytes - The decrypted form of ``parts[1]``. - - prop : dict[str, Any] - A dictionary of font properties. Noteworthy keys include: - - - FontName: PostScript name of the font - - Encoding: dict from numeric codes to glyph names - - FontMatrix: bytes object encoding a matrix - - UniqueID: optional font identifier, dropped when modifying the font - - CharStrings: dict from glyph names to byte code - - Subrs: array of byte code subroutines - - OtherSubrs: bytes object encoding some PostScript code - """ - __slots__ = ('parts', 'decrypted', 'prop', '_pos', '_abbr') - # the _pos dict contains (begin, end) indices to parts[0] + decrypted - # so that they can be replaced when transforming the font; - # but since sometimes a definition appears in both parts[0] and decrypted, - # _pos[name] is an array of such pairs - # - # _abbr maps three standard abbreviations to their particular names in - # this font (e.g. 'RD' is named '-|' in some fonts) - - def __init__(self, input): - """ - Initialize a Type-1 font. - - Parameters - ---------- - input : str or 3-tuple - Either a pfb file name, or a 3-tuple of already-decoded Type-1 - font `~.Type1Font.parts`. - """ - if isinstance(input, tuple) and len(input) == 3: - self.parts = input - else: - with open(input, 'rb') as file: - data = self._read(file) - self.parts = self._split(data) - - self.decrypted = self._decrypt(self.parts[1], 'eexec') - self._abbr = {'RD': 'RD', 'ND': 'ND', 'NP': 'NP'} - self._parse() - - def _read(self, file): - """Read the font from a file, decoding into usable parts.""" - rawdata = file.read() - if not rawdata.startswith(b'\x80'): - return rawdata - - data = b'' - while rawdata: - if not rawdata.startswith(b'\x80'): - raise RuntimeError('Broken pfb file (expected byte 128, ' - 'got %d)' % rawdata[0]) - type = rawdata[1] - if type in (1, 2): - length, = struct.unpack('> 8)) - key = ((key+byte) * 52845 + 22719) & 0xffff - - return bytes(plaintext[ndiscard:]) - - @staticmethod - def _encrypt(plaintext, key, ndiscard=4): - """ - Encrypt plaintext using the Type-1 font algorithm. - - The algorithm is described in Adobe's "Adobe Type 1 Font Format". - The key argument can be an integer, or one of the strings - 'eexec' and 'charstring', which map to the key specified for the - corresponding part of Type-1 fonts. - - The ndiscard argument should be an integer, usually 4. That - number of bytes is prepended to the plaintext before encryption. - This function prepends NUL bytes for reproducibility, even though - the original algorithm uses random bytes, presumably to avoid - cryptanalysis. - """ - - key = _api.check_getitem({'eexec': 55665, 'charstring': 4330}, key=key) - ciphertext = [] - for byte in b'\0' * ndiscard + plaintext: - c = byte ^ (key >> 8) - ciphertext.append(c) - key = ((key + c) * 52845 + 22719) & 0xffff - - return bytes(ciphertext) - - def _parse(self): - """ - Find the values of various font properties. This limited kind - of parsing is described in Chapter 10 "Adobe Type Manager - Compatibility" of the Type-1 spec. - """ - # Start with reasonable defaults - prop = {'Weight': 'Regular', 'ItalicAngle': 0.0, 'isFixedPitch': False, - 'UnderlinePosition': -100, 'UnderlineThickness': 50} - pos = {} - data = self.parts[0] + self.decrypted - - source = _tokenize(data, True) - while True: - # See if there is a key to be assigned a value - # e.g. 
/FontName in /FontName /Helvetica def - try: - token = next(source) - except StopIteration: - break - if token.is_delim(): - # skip over this - we want top-level keys only - _expression(token, source, data) - if token.is_slash_name(): - key = token.value() - keypos = token.pos - else: - continue - - # Some values need special parsing - if key in ('Subrs', 'CharStrings', 'Encoding', 'OtherSubrs'): - prop[key], endpos = { - 'Subrs': self._parse_subrs, - 'CharStrings': self._parse_charstrings, - 'Encoding': self._parse_encoding, - 'OtherSubrs': self._parse_othersubrs - }[key](source, data) - pos.setdefault(key, []).append((keypos, endpos)) - continue - - try: - token = next(source) - except StopIteration: - break - - if isinstance(token, _KeywordToken): - # constructs like - # FontDirectory /Helvetica known {...} {...} ifelse - # mean the key was not really a key - continue - - if token.is_delim(): - value = _expression(token, source, data).raw - else: - value = token.value() - - # look for a 'def' possibly preceded by access modifiers - try: - kw = next( - kw for kw in source - if not kw.is_keyword('readonly', 'noaccess', 'executeonly') - ) - except StopIteration: - break - - # sometimes noaccess def and readonly def are abbreviated - if kw.is_keyword('def', self._abbr['ND'], self._abbr['NP']): - prop[key] = value - pos.setdefault(key, []).append((keypos, kw.endpos())) - - # detect the standard abbreviations - if value == '{noaccess def}': - self._abbr['ND'] = key - elif value == '{noaccess put}': - self._abbr['NP'] = key - elif value == '{string currentfile exch readstring pop}': - self._abbr['RD'] = key - - # Fill in the various *Name properties - if 'FontName' not in prop: - prop['FontName'] = (prop.get('FullName') or - prop.get('FamilyName') or - 'Unknown') - if 'FullName' not in prop: - prop['FullName'] = prop['FontName'] - if 'FamilyName' not in prop: - extras = ('(?i)([ -](regular|plain|italic|oblique|(semi)?bold|' - '(ultra)?light|extra|condensed))+$') - prop['FamilyName'] = re.sub(extras, '', prop['FullName']) - # Decrypt the encrypted parts - ndiscard = prop.get('lenIV', 4) - cs = prop['CharStrings'] - for key, value in cs.items(): - cs[key] = self._decrypt(value, 'charstring', ndiscard) - if 'Subrs' in prop: - prop['Subrs'] = [ - self._decrypt(value, 'charstring', ndiscard) - for value in prop['Subrs'] - ] - - self.prop = prop - self._pos = pos - - def _parse_subrs(self, tokens, _data): - count_token = next(tokens) - if not count_token.is_number(): - raise RuntimeError( - f"Token following /Subrs must be a number, was {count_token}" - ) - count = count_token.value() - array = [None] * count - next(t for t in tokens if t.is_keyword('array')) - for _ in range(count): - next(t for t in tokens if t.is_keyword('dup')) - index_token = next(tokens) - if not index_token.is_number(): - raise RuntimeError( - "Token following dup in Subrs definition must be a " - f"number, was {index_token}" - ) - nbytes_token = next(tokens) - if not nbytes_token.is_number(): - raise RuntimeError( - "Second token following dup in Subrs definition must " - f"be a number, was {nbytes_token}" - ) - token = next(tokens) - if not token.is_keyword(self._abbr['RD']): - raise RuntimeError( - f"Token preceding subr must be {self._abbr['RD']}, " - f"was {token}" - ) - binary_token = tokens.send(1+nbytes_token.value()) - array[index_token.value()] = binary_token.value() - - return array, next(tokens).endpos() - - @staticmethod - def _parse_charstrings(tokens, _data): - count_token = next(tokens) - if not 
count_token.is_number(): - raise RuntimeError( - "Token following /CharStrings must be a number, " - f"was {count_token}" - ) - count = count_token.value() - charstrings = {} - next(t for t in tokens if t.is_keyword('begin')) - while True: - token = next(t for t in tokens - if t.is_keyword('end') or t.is_slash_name()) - if token.raw == 'end': - return charstrings, token.endpos() - glyphname = token.value() - nbytes_token = next(tokens) - if not nbytes_token.is_number(): - raise RuntimeError( - f"Token following /{glyphname} in CharStrings definition " - f"must be a number, was {nbytes_token}" - ) - next(tokens) # usually RD or |- - binary_token = tokens.send(1+nbytes_token.value()) - charstrings[glyphname] = binary_token.value() - - @staticmethod - def _parse_encoding(tokens, _data): - # this only works for encodings that follow the Adobe manual - # but some old fonts include non-compliant data - we log a warning - # and return a possibly incomplete encoding - encoding = {} - while True: - token = next(t for t in tokens - if t.is_keyword('StandardEncoding', 'dup', 'def')) - if token.is_keyword('StandardEncoding'): - return _StandardEncoding, token.endpos() - if token.is_keyword('def'): - return encoding, token.endpos() - index_token = next(tokens) - if not index_token.is_number(): - _log.warning( - f"Parsing encoding: expected number, got {index_token}" - ) - continue - name_token = next(tokens) - if not name_token.is_slash_name(): - _log.warning( - f"Parsing encoding: expected slash-name, got {name_token}" - ) - continue - encoding[index_token.value()] = name_token.value() - - @staticmethod - def _parse_othersubrs(tokens, data): - init_pos = None - while True: - token = next(tokens) - if init_pos is None: - init_pos = token.pos - if token.is_delim(): - _expression(token, tokens, data) - elif token.is_keyword('def', 'ND', '|-'): - return data[init_pos:token.endpos()], token.endpos() - - def transform(self, effects): - """ - Return a new font that is slanted and/or extended. - - Parameters - ---------- - effects : dict - A dict with optional entries: - - - 'slant' : float, default: 0 - Tangent of the angle that the font is to be slanted to the - right. Negative values slant to the left. - - 'extend' : float, default: 1 - Scaling factor for the font width. Values less than 1 condense - the glyphs. 
- - Returns - ------- - `Type1Font` - """ - fontname = self.prop['FontName'] - italicangle = self.prop['ItalicAngle'] - - array = [ - float(x) for x in (self.prop['FontMatrix'] - .lstrip('[').rstrip(']').split()) - ] - oldmatrix = np.eye(3, 3) - oldmatrix[0:3, 0] = array[::2] - oldmatrix[0:3, 1] = array[1::2] - modifier = np.eye(3, 3) - - if 'slant' in effects: - slant = effects['slant'] - fontname += '_Slant_%d' % int(1000 * slant) - italicangle = round( - float(italicangle) - np.arctan(slant) / np.pi * 180, - 5 - ) - modifier[1, 0] = slant - - if 'extend' in effects: - extend = effects['extend'] - fontname += '_Extend_%d' % int(1000 * extend) - modifier[0, 0] = extend - - newmatrix = np.dot(modifier, oldmatrix) - array[::2] = newmatrix[0:3, 0] - array[1::2] = newmatrix[0:3, 1] - fontmatrix = ( - '[%s]' % ' '.join(_format_approx(x, 6) for x in array) - ) - replacements = ( - [(x, '/FontName/%s def' % fontname) - for x in self._pos['FontName']] - + [(x, '/ItalicAngle %a def' % italicangle) - for x in self._pos['ItalicAngle']] - + [(x, '/FontMatrix %s readonly def' % fontmatrix) - for x in self._pos['FontMatrix']] - + [(x, '') for x in self._pos.get('UniqueID', [])] - ) - - data = bytearray(self.parts[0]) - data.extend(self.decrypted) - len0 = len(self.parts[0]) - for (pos0, pos1), value in sorted(replacements, reverse=True): - data[pos0:pos1] = value.encode('ascii', 'replace') - if pos0 < len(self.parts[0]): - if pos1 >= len(self.parts[0]): - raise RuntimeError( - f"text to be replaced with {value} spans " - "the eexec boundary" - ) - len0 += len(value) - pos1 + pos0 - - data = bytes(data) - return Type1Font(( - data[:len0], - self._encrypt(data[len0:], 'eexec'), - self.parts[2] - )) - - -_StandardEncoding = { - **{ord(letter): letter for letter in string.ascii_letters}, - 0: '.notdef', - 32: 'space', - 33: 'exclam', - 34: 'quotedbl', - 35: 'numbersign', - 36: 'dollar', - 37: 'percent', - 38: 'ampersand', - 39: 'quoteright', - 40: 'parenleft', - 41: 'parenright', - 42: 'asterisk', - 43: 'plus', - 44: 'comma', - 45: 'hyphen', - 46: 'period', - 47: 'slash', - 48: 'zero', - 49: 'one', - 50: 'two', - 51: 'three', - 52: 'four', - 53: 'five', - 54: 'six', - 55: 'seven', - 56: 'eight', - 57: 'nine', - 58: 'colon', - 59: 'semicolon', - 60: 'less', - 61: 'equal', - 62: 'greater', - 63: 'question', - 64: 'at', - 91: 'bracketleft', - 92: 'backslash', - 93: 'bracketright', - 94: 'asciicircum', - 95: 'underscore', - 96: 'quoteleft', - 123: 'braceleft', - 124: 'bar', - 125: 'braceright', - 126: 'asciitilde', - 161: 'exclamdown', - 162: 'cent', - 163: 'sterling', - 164: 'fraction', - 165: 'yen', - 166: 'florin', - 167: 'section', - 168: 'currency', - 169: 'quotesingle', - 170: 'quotedblleft', - 171: 'guillemotleft', - 172: 'guilsinglleft', - 173: 'guilsinglright', - 174: 'fi', - 175: 'fl', - 177: 'endash', - 178: 'dagger', - 179: 'daggerdbl', - 180: 'periodcentered', - 182: 'paragraph', - 183: 'bullet', - 184: 'quotesinglbase', - 185: 'quotedblbase', - 186: 'quotedblright', - 187: 'guillemotright', - 188: 'ellipsis', - 189: 'perthousand', - 191: 'questiondown', - 193: 'grave', - 194: 'acute', - 195: 'circumflex', - 196: 'tilde', - 197: 'macron', - 198: 'breve', - 199: 'dotaccent', - 200: 'dieresis', - 202: 'ring', - 203: 'cedilla', - 205: 'hungarumlaut', - 206: 'ogonek', - 207: 'caron', - 208: 'emdash', - 225: 'AE', - 227: 'ordfeminine', - 232: 'Lslash', - 233: 'Oslash', - 234: 'OE', - 235: 'ordmasculine', - 241: 'ae', - 245: 'dotlessi', - 248: 'lslash', - 249: 'oslash', - 250: 'oe', - 251: 'germandbls', 
-} diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/backends/_backend_pdf_ps.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/backends/_backend_pdf_ps.py deleted file mode 100644 index 30d952e7fe34eeb56a5d3220306a7cc2fffdd7e2..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/backends/_backend_pdf_ps.py +++ /dev/null @@ -1,140 +0,0 @@ -""" -Common functionality between the PDF and PS backends. -""" - -from io import BytesIO -import functools - -from fontTools import subset - -import matplotlib as mpl -from .. import font_manager, ft2font -from .._afm import AFM -from ..backend_bases import RendererBase - - -@functools.lru_cache(50) -def _cached_get_afm_from_fname(fname): - with open(fname, "rb") as fh: - return AFM(fh) - - -def get_glyphs_subset(fontfile, characters): - """ - Subset a TTF font - - Reads the named fontfile and restricts the font to the characters. - Returns a serialization of the subset font as file-like object. - - Parameters - ---------- - symbol : str - Path to the font file - characters : str - Continuous set of characters to include in subset - """ - - options = subset.Options(glyph_names=True, recommended_glyphs=True) - - # prevent subsetting FontForge Timestamp and other tables - options.drop_tables += ['FFTM', 'PfEd', 'BDF'] - - # if fontfile is a ttc, specify font number - if fontfile.endswith(".ttc"): - options.font_number = 0 - - with subset.load_font(fontfile, options) as font: - subsetter = subset.Subsetter(options=options) - subsetter.populate(text=characters) - subsetter.subset(font) - fh = BytesIO() - font.save(fh, reorderTables=False) - return fh - - -class CharacterTracker: - """ - Helper for font subsetting by the pdf and ps backends. - - Maintains a mapping of font paths to the set of character codepoints that - are being used from that font. - """ - - def __init__(self): - self.used = {} - - def track(self, font, s): - """Record that string *s* is being typeset using font *font*.""" - char_to_font = font._get_fontmap(s) - for _c, _f in char_to_font.items(): - self.used.setdefault(_f.fname, set()).add(ord(_c)) - - def track_glyph(self, font, glyph): - """Record that codepoint *glyph* is being typeset using font *font*.""" - self.used.setdefault(font.fname, set()).add(glyph) - - -class RendererPDFPSBase(RendererBase): - # The following attributes must be defined by the subclasses: - # - _afm_font_dir - # - _use_afm_rc_name - - def __init__(self, width, height): - super().__init__() - self.width = width - self.height = height - - def flipy(self): - # docstring inherited - return False # y increases from bottom to top. - - def option_scale_image(self): - # docstring inherited - return True # PDF and PS support arbitrary image scaling. - - def option_image_nocomposite(self): - # docstring inherited - # Decide whether to composite image based on rcParam value. 
- return not mpl.rcParams["image.composite_image"] - - def get_canvas_width_height(self): - # docstring inherited - return self.width * 72.0, self.height * 72.0 - - def get_text_width_height_descent(self, s, prop, ismath): - # docstring inherited - if ismath == "TeX": - return super().get_text_width_height_descent(s, prop, ismath) - elif ismath: - parse = self._text2path.mathtext_parser.parse(s, 72, prop) - return parse.width, parse.height, parse.depth - elif mpl.rcParams[self._use_afm_rc_name]: - font = self._get_font_afm(prop) - l, b, w, h, d = font.get_str_bbox_and_descent(s) - scale = prop.get_size_in_points() / 1000 - w *= scale - h *= scale - d *= scale - return w, h, d - else: - font = self._get_font_ttf(prop) - font.set_text(s, 0.0, flags=ft2font.LOAD_NO_HINTING) - w, h = font.get_width_height() - d = font.get_descent() - scale = 1 / 64 - w *= scale - h *= scale - d *= scale - return w, h, d - - def _get_font_afm(self, prop): - fname = font_manager.findfont( - prop, fontext="afm", directory=self._afm_font_dir) - return _cached_get_afm_from_fname(fname) - - def _get_font_ttf(self, prop): - fnames = font_manager.fontManager._find_fonts_by_props(prop) - font = font_manager.get_font(fnames) - font.clear() - font.set_size(prop.get_size_in_points(), 72) - return font diff --git a/spaces/leemeng/stablelm-jp-alpha/README.md b/spaces/leemeng/stablelm-jp-alpha/README.md deleted file mode 100644 index 5be7242948f9181f38e3244359425dacd17e3647..0000000000000000000000000000000000000000 --- a/spaces/leemeng/stablelm-jp-alpha/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Stablelm Jp Alpha -emoji: 📊 -colorFrom: yellow -colorTo: gray -sdk: gradio -sdk_version: 3.28.3 -app_file: app.py -pinned: false -license: other ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/legoandmars/glide-inpainting/glide_text2im/tokenizer/__init__.py b/spaces/legoandmars/glide-inpainting/glide_text2im/tokenizer/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/leurez/moss/CHANGELOG.md b/spaces/leurez/moss/CHANGELOG.md deleted file mode 100644 index fc2aaceaa8b1f1c052ea753292b9dc5c76491376..0000000000000000000000000000000000000000 --- a/spaces/leurez/moss/CHANGELOG.md +++ /dev/null @@ -1,578 +0,0 @@ -## v2.10.9 - -`2023-04-03` - -> 更新默认 `accessToken` 反代地址为 [[acheong08](https://github.com/acheong08)] 的 `https://bypass.churchless.tech/api/conversation` - -## Enhancement -- 添加 `socks5` 代理认证 [[yimiaoxiehou](https://github.com/Chanzhaoyu/chatgpt-web/pull/999)] -- 添加 `socks` 代理用户名密码的配置 [[hank-cp](https://github.com/Chanzhaoyu/chatgpt-web/pull/890)] -- 添加可选日志打印 [[zcong1993](https://github.com/Chanzhaoyu/chatgpt-web/pull/1041)] -- 更新侧边栏按钮本地化[[simonwu53](https://github.com/Chanzhaoyu/chatgpt-web/pull/911)] -- 优化代码块滚动条高度 [[Fog3211](https://github.com/Chanzhaoyu/chatgpt-web/pull/1153)] -## BugFix -- 修复 `PWA` 问题 [[bingo235](https://github.com/Chanzhaoyu/chatgpt-web/pull/807)] -- 修复 `ESM` 错误 [[kidonng](https://github.com/Chanzhaoyu/chatgpt-web/pull/826)] -- 修复反向代理开启时限流失效的问题 [[gitgitgogogo](https://github.com/Chanzhaoyu/chatgpt-web/pull/863)] -- 修复 `docker` 构建时 `.env` 可能被忽略的问题 [[zaiMoe](https://github.com/Chanzhaoyu/chatgpt-web/pull/877)] -- 修复导出异常错误 [[KingTwinkle](https://github.com/Chanzhaoyu/chatgpt-web/pull/938)] -- 修复空值异常 [[vchenpeng](https://github.com/Chanzhaoyu/chatgpt-web/pull/1103)] -- 移动端上的体验问题 - -## Other -- `Docker` 容器名字名义 
[[LOVECHEN](https://github.com/Chanzhaoyu/chatgpt-web/pull/1035)] -- `kubernetes` 部署配置 [[CaoYunzhou](https://github.com/Chanzhaoyu/chatgpt-web/pull/1001)] -- 感谢 [[assassinliujie](https://github.com/Chanzhaoyu/chatgpt-web/pull/962)] 和 [[puppywang](https://github.com/Chanzhaoyu/chatgpt-web/pull/1017)] 的某些贡献 -- 更新 `kubernetes/deploy.yaml` [[idawnwon](https://github.com/Chanzhaoyu/chatgpt-web/pull/1085)] -- 文档更新 [[#yi-ge](https://github.com/Chanzhaoyu/chatgpt-web/pull/883)] -- 文档更新 [[weifeng12x](https://github.com/Chanzhaoyu/chatgpt-web/pull/880)] -- 依赖更新 - -## v2.10.8 - -`2023-03-23` - -如遇问题,请删除 `node_modules` 重新安装依赖。 - -## Feature -- 显示回复消息原文的选项 [[yilozt](https://github.com/Chanzhaoyu/chatgpt-web/pull/672)] -- 添加单 `IP` 每小时请求限制。环境变量: `MAX_REQUEST_PER_HOUR` [[zhuxindong ](https://github.com/Chanzhaoyu/chatgpt-web/pull/718)] -- 前端添加角色设定,仅 `API` 方式可见 [[quzard](https://github.com/Chanzhaoyu/chatgpt-web/pull/768)] -- `OPENAI_API_MODEL` 变量现在对 `ChatGPTUnofficialProxyAPI` 也生效,注意:`Token` 和 `API` 的模型命名不一致,不能直接填入 `gpt-3.5` 或者 `gpt-4` [[hncboy](https://github.com/Chanzhaoyu/chatgpt-web/pull/632)] -- 添加繁体中文 `Prompts` [[PeterDaveHello](https://github.com/Chanzhaoyu/chatgpt-web/pull/796)] - -## Enhancement -- 重置回答时滚动定位至该回答 [[shunyue1320](https://github.com/Chanzhaoyu/chatgpt-web/pull/781)] -- 当 `API` 是 `gpt-4` 时增加可用的 `Max Tokens` [[simonwu53](https://github.com/Chanzhaoyu/chatgpt-web/pull/729)] -- 判断和忽略回复字符 [[liut](https://github.com/Chanzhaoyu/chatgpt-web/pull/474)] -- 切换会话时,自动聚焦输入框 [[JS-an](https://github.com/Chanzhaoyu/chatgpt-web/pull/735)] -- 渲染的链接新窗口打开 -- 查询余额可选 `API_BASE_URL` 代理地址 -- `config` 接口添加验证防止被无限制调用 -- `PWA` 默认不开启,现在需手动修改 `.env` 文件 `VITE_GLOB_APP_PWA` 变量 -- 当网络连接时,刷新页面,`500` 错误页自动跳转到主页 - -## BugFix -- `scrollToBottom` 调回 `scrollToBottomIfAtBottom` [[shunyue1320](https://github.com/Chanzhaoyu/chatgpt-web/pull/771)] -- 重置异常的 `loading` 会话 - -## Common -- 创建 `start.cmd` 在 `windows` 下也可以运行 [vulgatecnn](https://github.com/Chanzhaoyu/chatgpt-web/pull/656)] -- 添加 `visual-studio-code` 中调试配置 [[ChandlerVer5](https://github.com/Chanzhaoyu/chatgpt-web/pull/296)] -- 修复文档中 `docker` 端口为本地 [[kilvn](https://github.com/Chanzhaoyu/chatgpt-web/pull/802)] -## Other -- 依赖更新 - - -## v2.10.7 - -`2023-03-17` - -## BugFix -- 回退 `chatgpt` 版本,原因:导致 `OPENAI_API_BASE_URL` 代理失效 -- 修复缺省状态的 `usingContext` 默认值 - -## v2.10.6 - -`2023-03-17` - -## Feature -- 显示 `API` 余额 [[pzcn](https://github.com/Chanzhaoyu/chatgpt-web/pull/582)] - -## Enhancement -- 美化滚动条样式和 `UI` 保持一致 [[haydenull](https://github.com/Chanzhaoyu/chatgpt-web/pull/617)] -- 优化移动端 `Prompt` 样式 [[CornerSkyless](https://github.com/Chanzhaoyu/chatgpt-web/pull/608)] -- 上下文开关改为全局开关,现在记录在本地缓存中 -- 配置信息按接口类型显示 - -## Perf -- 优化函数方法 [[kirklin](https://github.com/Chanzhaoyu/chatgpt-web/pull/583)] -- 字符错误 [[pdsuwwz](https://github.com/Chanzhaoyu/chatgpt-web/pull/585)] -- 文档描述错误 [[lizhongyuan3](https://github.com/Chanzhaoyu/chatgpt-web/pull/636)] - -## BugFix -- 修复 `Prompt` 导入、导出兼容性错误 -- 修复 `highlight.js` 控制台兼容性警告 - -## Other -- 依赖更新 - -## v2.10.5 - -`2023-03-13` - -更新依赖,`access_token` 默认代理为 [acheong08](https://github.com/acheong08) 的 `https://bypass.duti.tech/api/conversation` - -## Feature -- `Prompt` 商店在线导入可以导入两种 `recommend.json`里提到的模板 [simonwu53](https://github.com/Chanzhaoyu/chatgpt-web/pull/521) -- 支持 `HTTPS_PROXY` [whatwewant](https://github.com/Chanzhaoyu/chatgpt-web/pull/308) -- `Prompt` 添加查询筛选 - -## Enhancement -- 调整输入框最大行数 [yi-ge](https://github.com/Chanzhaoyu/chatgpt-web/pull/502) -- 优化 `docker` 打包 [whatwewant](https://github.com/Chanzhaoyu/chatgpt-web/pull/520) -- 
`Prompt` 添加翻译和优化布局 -- 「繁体中文」补全和审阅 [PeterDaveHello](https://github.com/Chanzhaoyu/chatgpt-web/pull/542) -- 语言选择调整为下路框形式 -- 权限输入框类型调整为密码形式 - -## BugFix -- `JSON` 导入检查 [Nothing1024](https://github.com/Chanzhaoyu/chatgpt-web/pull/523) -- 修复 `AUTH_SECRET_KEY` 模式下跨域异常并添加对 `node.js 19` 版本的支持 [yi-ge](https://github.com/Chanzhaoyu/chatgpt-web/pull/499) -- 确定清空上下文时不应该重置会话标题 - -## Other -- 调整文档 -- 更新依赖 - -## v2.10.4 - -`2023-03-11` - -## Feature -- 感谢 [Nothing1024](https://github.com/Chanzhaoyu/chatgpt-web/pull/268) 添加 `Prompt` 模板和 `Prompt` 商店支持 - -## Enhancement -- 设置添加关闭按钮[#495] - -## Demo - -![Prompt](https://camo.githubusercontent.com/6a51af751eb29238cb7ef4f8fbd89f63db837562f97f33273095424e62dc9194/68747470733a2f2f73312e6c6f63696d672e636f6d2f323032332f30332f30342f333036326665633163613562632e676966) - -## v2.10.3 - -`2023-03-10` - -> 声明:除 `ChatGPTUnofficialProxyAPI` 使用的非官方代理外,本项目代码包括上游引用包均开源在 `GitHub`,如果你觉得本项目有监控后门或有问题导致你的账号、API被封,那我很抱歉。我可能`BUG`写的多,但我不缺德。此次主要为前端界面调整,周末愉快。 - -## Feature -- 支持长回复 [[yi-ge](https://github.com/Chanzhaoyu/chatgpt-web/pull/450)][[详情](https://github.com/Chanzhaoyu/chatgpt-web/pull/450)] -- 支持 `PWA` [[chenxch](https://github.com/Chanzhaoyu/chatgpt-web/pull/452)] - -## Enhancement -- 调整移动端按钮和优化布局 -- 调整 `iOS` 上安全距离 -- 简化 `docker-compose` 部署 [[cloudGrin](https://github.com/Chanzhaoyu/chatgpt-web/pull/466)] - -## BugFix -- 修复清空会话侧边栏标题不会重置的问题 [[RyanXinOne](https://github.com/Chanzhaoyu/chatgpt-web/pull/453)] -- 修复设置文字过长时导致的设置按钮消失的问题 - -## Other -- 更新依赖 - -## v2.10.2 - -`2023-03-09` - -衔接 `2.10.1` 版本[详情](https://github.com/Chanzhaoyu/chatgpt-web/releases/tag/v2.10.1) - -## Enhancement -- 移动端下输入框获得焦点时左侧按钮隐藏 - -## BugFix -- 修复 `2.10.1` 中添加 `OPENAI_API_MODEL` 变量的判断错误,会导致默认模型指定失效,抱歉 -- 回退 `2.10.1` 中前端变量影响 `Docker` 打包 - -## v2.10.1 - -`2023-03-09` - -注意:删除了 `.env` 文件改用 `.env.example` 代替,如果是手动部署的同学现在需要手动创建 `.env` 文件并从 `.env.example` 中复制需要的变量,并且 `.env` 文件现在会在 `Git` 提交中被忽略,原因如下: - -- 在项目中添加 `.env` 从一开始就是个错误的示范 -- 如果是 `Fork` 项目进行修改测试总是会被 `Git` 修改提示给打扰 -- 感谢 [yi-ge](https://github.com/Chanzhaoyu/chatgpt-web/pull/395) 的提醒和修改 - - -这两天开始,官方已经开始对第三方代理进行了拉闸, `accessToken` 即将或已经开始可能会不可使用。异常 `API` 使用也开始封号,封号缘由不明,如果出现使用 `API` 提示错误,请查看后端控制台信息,或留意邮箱。 - -## Feature -- 感谢 [CornerSkyless](https://github.com/Chanzhaoyu/chatgpt-web/pull/393) 添加是否发送上下文开关功能 - -## Enhancement -- 感谢 [nagaame](https://github.com/Chanzhaoyu/chatgpt-web/pull/415) 优化`docker`打包镜像文件过大的问题 -- 感谢 [xieccc](https://github.com/Chanzhaoyu/chatgpt-web/pull/404) 新增 `API` 模型配置变量 `OPENAI_API_MODEL` -- 感谢 [acongee](https://github.com/Chanzhaoyu/chatgpt-web/pull/394) 优化输出时滚动条问题 - -## BugFix -- 感谢 [CornerSkyless](https://github.com/Chanzhaoyu/chatgpt-web/pull/392) 修复导出图片会丢失头像的问题 -- 修复深色模式导出图片的样式问题 - - -## v2.10.0 - -`2023-03-07` - -- 老规矩,手动部署的同学需要删除 `node_modules` 安装包重新安装降低出错概率,其他部署不受影响,但是可能会有缓存问题。 -- 虽然说了更新放缓,但是 `issues` 不看, `PR` 不改我睡不着,我的邮箱从每天早上`8`点到凌晨`12`永远在滴滴滴,所以求求各位,超时的`issues`自己关闭下哈,我真的需要缓冲一下。 -- 演示图片请看最后 - -## Feature -- 添加权限功能,用法:`service/.env` 中的 `AUTH_SECRET_KEY` 变量添加密码 -- 感谢 [PeterDaveHello](https://github.com/Chanzhaoyu/chatgpt-web/pull/348) 添加「繁体中文」翻译 -- 感谢 [GermMC](https://github.com/Chanzhaoyu/chatgpt-web/pull/369) 添加聊天记录导入、导出、清空的功能 -- 感谢 [CornerSkyless](https://github.com/Chanzhaoyu/chatgpt-web/pull/374) 添加会话保存为本地图片的功能 - - -## Enhancement -- 感谢 [CornerSkyless](https://github.com/Chanzhaoyu/chatgpt-web/pull/363) 添加 `ctrl+enter` 发送消息 -- 现在新消息只有在结束了之后才滚动到底部,而不是之前的强制性 -- 优化部分代码 - -## BugFix -- 转义状态码前端显示,防止直接暴露 `key`(我可能需要更多的状态码补充) - -## Other -- 更新依赖到最新 - -## 演示 -> 不是界面最新效果,有美化改动 - -权限 - 
-![权限](https://user-images.githubusercontent.com/24789441/223438518-80d58d42-e344-4e39-b87c-251ff73925ed.png) - -聊天记录导出 - -![聊天记录导出](https://user-images.githubusercontent.com/57023771/223372153-6d8e9ec1-d82c-42af-b4bd-232e50504a25.gif) - -保存图片到本地 - -![保存图片到本地](https://user-images.githubusercontent.com/13901424/223423555-b69b95ef-8bcf-4951-a7c9-98aff2677e18.gif) - -## v2.9.3 - -`2023-03-06` - -## Enhancement -- 感谢 [ChandlerVer5](https://github.com/Chanzhaoyu/chatgpt-web/pull/305) 使用 `markdown-it` 替换 `marked`,解决代码块闪烁的问题 -- 感谢 [shansing](https://github.com/Chanzhaoyu/chatgpt-web/pull/277) 改善文档 -- 感谢 [nalf3in](https://github.com/Chanzhaoyu/chatgpt-web/pull/293) 添加英文翻译 - -## BugFix -- 感谢[sepcnt ](https://github.com/Chanzhaoyu/chatgpt-web/pull/279) 修复切换记录时编辑状态未关闭的问题 -- 修复复制代码的兼容性报错问题 -- 修复部分优化小问题 - -## v2.9.2 - -`2023-03-04` - -手动部署的同学,务必删除根目录和`service`中的`node_modules`重新安装依赖,降低出现问题的概率,自动部署的不需要做改动。 - -### Feature -- 感谢 [hyln9](https://github.com/Chanzhaoyu/chatgpt-web/pull/247) 添加对渲染 `LaTex` 数学公式的支持 -- 感谢 [ottocsb](https://github.com/Chanzhaoyu/chatgpt-web/pull/227) 添加支持 `webAPP` (苹果添加到主页书签访问)支持 -- 添加 `OPENAI_API_BASE_URL` 可选环境变量[#249] -## Enhancement -- 优化在高分屏上主题内容的最大宽度[#257] -- 现在文字按单词截断[#215][#225] -### BugFix -- 修复动态生成时代码块不能被复制的问题[#251][#260] -- 修复 `iOS` 移动端输入框不会被键盘顶起的问题[#256] -- 修复控制台渲染警告 -## Other -- 更新依赖至最新 -- 修改 `README` 内容 - -## v2.9.1 - -`2023-03-02` - -### Feature -- 代码块添加当前代码语言显示和复制功能[#197][#196] -- 完善多语言,现在可以切换中英文显示 - -## Enhancement -- 由[Zo3i](https://github.com/Chanzhaoyu/chatgpt-web/pull/187) 完善 `docker-compose` 部署文档 - -### BugFix -- 由 [ottocsb](https://github.com/Chanzhaoyu/chatgpt-web/pull/200) 修复头像修改不同步的问题 -## Other -- 更新依赖至最新 -- 修改 `README` 内容 -## v2.9.0 - -`2023-03-02` - -### Feature -- 现在能复制带格式的消息文本 -- 新设计的设定页面,可以自定义姓名、描述、头像(链接方式) -- 新增`403`和`404`页面以便扩展 - -## Enhancement -- 更新 `chatgpt` 使 `ChatGPTAPI` 支持 `gpt-3.5-turbo-0301`(默认) -- 取消了前端超时限制设定 - -## v2.8.3 - -`2023-03-01` - -### Feature -- 消息已输出内容不会因为中断而消失[#167] -- 添加复制消息按钮[#133] - -### Other -- `README` 添加声明内容 - -## v2.8.2 - -`2023-02-28` -### Enhancement -- 代码主题调整为 `One Dark - light|dark` 适配深色模式 -### BugFix -- 修复普通文本代码渲染和深色模式下的问题[#139][#154] - -## v2.8.1 - -`2023-02-27` - -### BugFix -- 修复 `API` 版本不是 `Markdown` 时,普通 `HTML` 代码会被渲染的问题 [#146] - -## v2.8.0 - -`2023-02-27` - -- 感谢 [puppywang](https://github.com/Chanzhaoyu/chatgpt-web/commit/628187f5c3348bda0d0518f90699a86525d19018) 修复了 `2.7.0` 版本中关于流输出数据的问题(使用 `nginx` 需要自行配置 `octet-stream` 相关内容) - -- 关于为什么使用 `octet-stream` 而不是 `sse`,是因为更好的兼容之前的模式。 - -- 建议更新到此版本获得比较完整的体验 - -### Enhancement -- 优化了部份代码和类型提示 -- 输入框添加换行提示 -- 移动端输入框现在回车为换行,而不是直接提交 -- 移动端双击标题返回顶部,箭头返回底部 - -### BugFix -- 流输出数据下的问题[#122] -- 修复了 `API Key` 下部份代码不换行的问题 -- 修复移动端深色模式部份样式问题[#123][#126] -- 修复主题模式图标不一致的问题[#126] - -## v2.7.3 - -`2023-02-25` - -### Feature -- 适配系统深色模式 [#118](https://github.com/Chanzhaoyu/chatgpt-web/issues/103) -### BugFix -- 修复用户消息能被渲染为 `HTML` 问题 [#117](https://github.com/Chanzhaoyu/chatgpt-web/issues/117) - -## v2.7.2 - -`2023-02-24` -### Enhancement -- 消息使用 [github-markdown-css](https://www.npmjs.com/package/github-markdown-css) 进行美化,现在支持全语法 -- 移除测试无用函数 - -## v2.7.1 - -`2023-02-23` - -因为消息流在 `accessToken` 中存在解析失败和消息不完整等一系列的问题,调整回正常消息形式 - -### Feature -- 现在可以中断请求过长没有答复的消息 -- 现在可以删除单条消息 -- 设置中显示当前版本信息 - -### BugFix -- 回退 `2.7.0` 的消息不稳定的问题 - -## v2.7.0 - -`2023-02-23` - -### Feature -- 使用消息流返回信息,反应更迅速 - -### Enhancement -- 样式的一点小改动 - -## v2.6.2 - -`2023-02-22` -### BugFix -- 还原修改代理导致的异常问题 - -## v2.6.1 - -`2023-02-22` - -### Feature -- 新增 `Railway` 部署模版 - -### BugFix -- 手动打包 `Proxy` 问题 - -## 
v2.6.0 - -`2023-02-21` -### Feature -- 新增对 `网页 accessToken` 调用 `ChatGPT`,更智能不过不太稳定 [#51](https://github.com/Chanzhaoyu/chatgpt-web/issues/51) -- 前端页面设置按钮显示查看当前后端服务配置 - -### Enhancement -- 新增 `TIMEOUT_MS` 环境变量设定后端超时时常(单位:毫秒)[#62](https://github.com/Chanzhaoyu/chatgpt-web/issues/62) - -## v2.5.2 - -`2023-02-21` -### Feature -- 增加对 `markdown` 格式的支持 [Demo](https://github.com/Chanzhaoyu/chatgpt-web/pull/77) -### BugFix -- 重载会话时滚动条保持 - -## v2.5.1 - -`2023-02-21` - -### Enhancement -- 调整路由模式为 `hash` -- 调整新增会话添加到 -- 调整移动端样式 - - -## v2.5.0 - -`2023-02-20` - -### Feature -- 会话 `loading` 现在显示为光标动画 -- 会话现在可以再次生成回复 -- 会话异常可以再次进行请求 -- 所有删除选项添加确认操作 - -### Enhancement -- 调整 `chat` 为路由页面而不是组件形式 -- 更新依赖至最新 -- 调整移动端体验 - -### BugFix -- 修复移动端左侧菜单显示不完整的问题 - -## v2.4.1 - -`2023-02-18` - -### Enhancement -- 调整部份移动端上的样式 -- 输入框支持换行 - -## v2.4.0 - -`2023-02-17` - -### Feature -- 响应式支持移动端 -### Enhancement -- 修改部份描述错误 - -## v2.3.3 - -`2023-02-16` - -### Feature -- 添加 `README` 部份说明和贡献列表 -- 添加 `docker` 镜像 -- 添加 `GitHub Action` 自动化构建 - -### BugFix -- 回退依赖更新导致的 [Eslint 报错](https://github.com/eslint/eslint/issues/16896) - -## v2.3.2 - -`2023-02-16` - -### Enhancement -- 更新依赖至最新 -- 优化部份内容 - -## v2.3.1 - -`2023-02-15` - -### BugFix -- 修复多会话状态下一些意想不到的问题 - -## v2.3.0 - -`2023-02-15` -### Feature -- 代码类型信息高亮显示 -- 支持 `node ^16` 版本 -- 移动端响应式初步支持 -- `vite` 中 `proxy` 代理 - -### Enhancement -- 调整超时处理范围 - -### BugFix -- 修复取消请求错误提示会添加到信息中 -- 修复部份情况下提交请求不可用 -- 修复侧边栏宽度变化闪烁的问题 - -## v2.2.0 - -`2023-02-14` -### Feature -- 会话和上下文本地储存 -- 侧边栏本地储存 - -## v2.1.0 - -`2023-02-14` -### Enhancement -- 更新依赖至最新 -- 联想功能移动至前端提交,后端只做转发 - -### BugFix -- 修复部份项目检测有关 `Bug` -- 修复清除上下文按钮失效 - -## v2.0.0 - -`2023-02-13` -### Refactor -重构并优化大部分内容 - -## v1.0.5 - -`2023-02-12` - -### Enhancement -- 输入框焦点,连续提交 - -### BugFix -- 修复信息框样式问题 -- 修复中文输入法提交问题 - -## v1.0.4 - -`2023-02-11` - -### Feature -- 支持上下文联想 - -## v1.0.3 - -`2023-02-11` - -### Enhancement -- 拆分 `service` 文件以便扩展 -- 调整 `Eslint` 相关验证 - -### BugFix -- 修复部份控制台报错 - -## v1.0.2 - -`2023-02-10` - -### BugFix -- 修复新增信息容器不会自动滚动到问题 -- 修复文本过长不换行到问题 [#1](https://github.com/Chanzhaoyu/chatgpt-web/issues/1) diff --git a/spaces/lewiswu1209/MockingBird/vocoder/fregan/inference.py b/spaces/lewiswu1209/MockingBird/vocoder/fregan/inference.py deleted file mode 100644 index 780a613376a7c411e75bd6d7a468a3eb1e893a57..0000000000000000000000000000000000000000 --- a/spaces/lewiswu1209/MockingBird/vocoder/fregan/inference.py +++ /dev/null @@ -1,74 +0,0 @@ -from __future__ import absolute_import, division, print_function, unicode_literals - -import os -import json -import torch -from utils.util import AttrDict -from vocoder.fregan.generator import FreGAN - -generator = None # type: FreGAN -output_sample_rate = None -_device = None - - -def load_checkpoint(filepath, device): - assert os.path.isfile(filepath) - print("Loading '{}'".format(filepath)) - checkpoint_dict = torch.load(filepath, map_location=device) - print("Complete.") - return checkpoint_dict - - -def load_model(weights_fpath, config_fpath=None, verbose=True): - global generator, _device, output_sample_rate - - if verbose: - print("Building fregan") - - if config_fpath == None: - model_config_fpaths = list(weights_fpath.parent.rglob("*.json")) - if len(model_config_fpaths) > 0: - config_fpath = model_config_fpaths[0] - else: - config_fpath = "./vocoder/fregan/config.json" - with open(config_fpath) as f: - data = f.read() - json_config = json.loads(data) - h = AttrDict(json_config) - output_sample_rate = h.sampling_rate - torch.manual_seed(h.seed) - - if 
torch.cuda.is_available(): - # _model = _model.cuda() - _device = torch.device('cuda') - else: - _device = torch.device('cpu') - - generator = FreGAN(h).to(_device) - state_dict_g = load_checkpoint( - weights_fpath, _device - ) - generator.load_state_dict(state_dict_g['generator']) - generator.eval() - generator.remove_weight_norm() - - -def is_loaded(): - return generator is not None - - -def infer_waveform(mel, progress_callback=None): - - if generator is None: - raise Exception("Please load fre-gan in memory before using it") - - mel = torch.FloatTensor(mel).to(_device) - mel = mel.unsqueeze(0) - - with torch.no_grad(): - y_g_hat = generator(mel) - audio = y_g_hat.squeeze() - audio = audio.cpu().numpy() - - return audio, output_sample_rate - diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Active Reading Skills 3rd Edition Answer Key.zip.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Active Reading Skills 3rd Edition Answer Key.zip.md deleted file mode 100644 index f0d2b27e6aa1009c3d6fce5960176ad7fc1084bd..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Active Reading Skills 3rd Edition Answer Key.zip.md +++ /dev/null @@ -1,18 +0,0 @@ -

    Active Reading Skills 3rd Edition Answer Key.zip


    Download Zip ★★★★★ https://bytlly.com/2uGxIf



    -
-This pdf contains the following information. Page 1 of 300 and supporting data files.... - -This game is fun for kids, but more fun for parents. It's fun for the whole family! It's a memory card game in which you and your little kids will play the part of the parents, so that you'll learn to behave like them. My Kids The Pajama Memory Card Game can entertain you and your kids for hours. - -Parents ask for a fun way to reinforce parent-child-bonding time. My kids love playing this game. This fun game is an ideal way to develop good behavior, especially for kids who are wild and aggressive. There is nothing more satisfying than watching your wild little baby sit down and become a well-behaved little tiger. It is especially fun for the parents. How it Works My Kids The Pajama Memory Card Game is an entertaining game for parents and children alike. It can provide hours of fun for a child and ensure that the child will pay attention to you for the rest of the day. The game is ideal for children who are age six or younger. No motor skills are required, and you can play with your child in your arms or in your lap. Each game is unique, and kids love to play all the games again and again. My Kids The Pajama Memory Card Game is designed to help children learn good behavior, so that they become independent, well-behaved and mature children. It is equally effective as a punishment tool, so that you can control your children when you feel that they are acting up. - -... - -4fefd39f24
    -
    -
    -

    diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Ezx Rock Solid Keygen !!HOT!!.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Ezx Rock Solid Keygen !!HOT!!.md deleted file mode 100644 index a2b141de353ecec6cec6869a7cf049feb954dd4a..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Ezx Rock Solid Keygen !!HOT!!.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Ezx Rock Solid Keygen


    DOWNLOAD ··· https://bytlly.com/2uGxyw



    -
-1 - Toontrack EZX. Toontrack EZX is a drum sampling program which contains full drum kits and instruments samples. It has been recorded at the Toontrack studios in Burbank, California. Roland TD - Toontrack Drum-Meter (Serial . Toontrack EZX - New York Rhythm - Rock: 4fefd39f24
    -
    -
    -

    diff --git a/spaces/ljjggr/bingo/src/lib/hooks/chat-history.ts b/spaces/ljjggr/bingo/src/lib/hooks/chat-history.ts deleted file mode 100644 index c6fbf3fecfa86fe553f56acc8253236b8f22a775..0000000000000000000000000000000000000000 --- a/spaces/ljjggr/bingo/src/lib/hooks/chat-history.ts +++ /dev/null @@ -1,62 +0,0 @@ -import { zip } from 'lodash-es' -import { ChatMessageModel, BotId } from '@/lib/bots/bing/types' -import { Storage } from '../storage' - -/** - * conversations:$botId => Conversation[] - * conversation:$botId:$cid:messages => ChatMessageModel[] - */ - -interface Conversation { - id: string - createdAt: number -} - -type ConversationWithMessages = Conversation & { messages: ChatMessageModel[] } - -async function loadHistoryConversations(botId: BotId): Promise { - const key = `conversations:${botId}` - const { [key]: value } = await Storage.get(key) - return value || [] -} - -async function deleteHistoryConversation(botId: BotId, cid: string) { - const conversations = await loadHistoryConversations(botId) - const newConversations = conversations.filter((c) => c.id !== cid) - await Storage.set({ [`conversations:${botId}`]: newConversations }) -} - -async function loadConversationMessages(botId: BotId, cid: string): Promise { - const key = `conversation:${botId}:${cid}:messages` - const { [key]: value } = await Storage.get(key) - return value || [] -} - -export async function setConversationMessages(botId: BotId, cid: string, messages: ChatMessageModel[]) { - const conversations = await loadHistoryConversations(botId) - if (!conversations.some((c) => c.id === cid)) { - conversations.unshift({ id: cid, createdAt: Date.now() }) - await Storage.set({ [`conversations:${botId}`]: conversations }) - } - const key = `conversation:${botId}:${cid}:messages` - await Storage.set({ [key]: messages }) -} - -export async function loadHistoryMessages(botId: BotId): Promise { - const conversations = await loadHistoryConversations(botId) - const messagesList = await Promise.all(conversations.map((c) => loadConversationMessages(botId, c.id))) - return zip(conversations, messagesList).map(([c, messages]) => ({ - id: c!.id, - createdAt: c!.createdAt, - messages: messages!, - })) -} - -export async function deleteHistoryMessage(botId: BotId, conversationId: string, messageId: string) { - const messages = await loadConversationMessages(botId, conversationId) - const newMessages = messages.filter((m) => m.id !== messageId) - await setConversationMessages(botId, conversationId, newMessages) - if (!newMessages.length) { - await deleteHistoryConversation(botId, conversationId) - } -} diff --git a/spaces/lllqqq/so-vits-svc-models-pcr/vencoder/WhisperPPG.py b/spaces/lllqqq/so-vits-svc-models-pcr/vencoder/WhisperPPG.py deleted file mode 100644 index aa988b0a6d05696ea519d1652e5801302ba8a6c6..0000000000000000000000000000000000000000 --- a/spaces/lllqqq/so-vits-svc-models-pcr/vencoder/WhisperPPG.py +++ /dev/null @@ -1,30 +0,0 @@ -from vencoder.encoder import SpeechEncoder -import torch - -from vencoder.whisper.model import Whisper, ModelDimensions -from vencoder.whisper.audio import pad_or_trim, log_mel_spectrogram - - -class WhisperPPG(SpeechEncoder): - def __init__(self,vec_path = "pretrain/medium.pt",device=None): - if device is None: - self.dev = torch.device("cuda" if torch.cuda.is_available() else "cpu") - else: - self.dev = torch.device(device) - checkpoint = torch.load(vec_path, map_location=device) - dims = ModelDimensions(**checkpoint["dims"]) - model = Whisper(dims) - 
model.load_state_dict(checkpoint["model_state_dict"]) - self.hidden_dim = dims - self.model = model.to(self.dev) - - def encoder(self, wav): - audio = wav - audln = audio.shape[0] - ppgln = audln // 320 - audio = pad_or_trim(audio) - mel = log_mel_spectrogram(audio).to(self.dev) - with torch.no_grad(): - ppg = self.model.encoder(mel.unsqueeze(0)).squeeze().data.cpu().float().numpy() - ppg = torch.FloatTensor(ppg[:ppgln,]).to(self.dev) - return ppg[None,:,:].transpose(1, 2) diff --git a/spaces/ludusc/latent-space-theories/torch_utils/ops/filtered_lrelu.py b/spaces/ludusc/latent-space-theories/torch_utils/ops/filtered_lrelu.py deleted file mode 100644 index 6701cd72d1f0683a43f56b59ed3337dd3d6f0d3c..0000000000000000000000000000000000000000 --- a/spaces/ludusc/latent-space-theories/torch_utils/ops/filtered_lrelu.py +++ /dev/null @@ -1,274 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -import os -import numpy as np -import torch -import warnings - -from .. import custom_ops -from .. import misc -from . import upfirdn2d -from . import bias_act - -#---------------------------------------------------------------------------- - -_plugin = None - -def _init(): - global _plugin - if _plugin is None: - _plugin = custom_ops.get_plugin( - module_name='filtered_lrelu_plugin', - sources=['filtered_lrelu.cpp', 'filtered_lrelu_wr.cu', 'filtered_lrelu_rd.cu', 'filtered_lrelu_ns.cu'], - headers=['filtered_lrelu.h', 'filtered_lrelu.cu'], - source_dir=os.path.dirname(__file__), - extra_cuda_cflags=['--use_fast_math', '--allow-unsupported-compiler'], - ) - return True - -def _get_filter_size(f): - if f is None: - return 1, 1 - assert isinstance(f, torch.Tensor) - assert 1 <= f.ndim <= 2 - return f.shape[-1], f.shape[0] # width, height - -def _parse_padding(padding): - if isinstance(padding, int): - padding = [padding, padding] - assert isinstance(padding, (list, tuple)) - assert all(isinstance(x, (int, np.integer)) for x in padding) - padding = [int(x) for x in padding] - if len(padding) == 2: - px, py = padding - padding = [px, px, py, py] - px0, px1, py0, py1 = padding - return px0, px1, py0, py1 - -#---------------------------------------------------------------------------- - -def filtered_lrelu(x, fu=None, fd=None, b=None, up=1, down=1, padding=0, gain=np.sqrt(2), slope=0.2, clamp=None, flip_filter=False, impl='cuda'): - r"""Filtered leaky ReLU for a batch of 2D images. - - Performs the following sequence of operations for each channel: - - 1. Add channel-specific bias if provided (`b`). - - 2. Upsample the image by inserting N-1 zeros after each pixel (`up`). - - 3. Pad the image with the specified number of zeros on each side (`padding`). - Negative padding corresponds to cropping the image. - - 4. Convolve the image with the specified upsampling FIR filter (`fu`), shrinking it - so that the footprint of all output pixels lies within the input image. - - 5. Multiply each value by the provided gain factor (`gain`). - - 6. Apply leaky ReLU activation function to each value. - - 7. Clamp each value between -clamp and +clamp, if `clamp` parameter is provided. - - 8. 
Convolve the image with the specified downsampling FIR filter (`fd`), shrinking - it so that the footprint of all output pixels lies within the input image. - - 9. Downsample the image by keeping every Nth pixel (`down`). - - The fused op is considerably more efficient than performing the same calculation - using standard PyTorch ops. It supports gradients of arbitrary order. - - Args: - x: Float32/float16/float64 input tensor of the shape - `[batch_size, num_channels, in_height, in_width]`. - fu: Float32 upsampling FIR filter of the shape - `[filter_height, filter_width]` (non-separable), - `[filter_taps]` (separable), or - `None` (identity). - fd: Float32 downsampling FIR filter of the shape - `[filter_height, filter_width]` (non-separable), - `[filter_taps]` (separable), or - `None` (identity). - b: Bias vector, or `None` to disable. Must be a 1D tensor of the same type - as `x`. The length of vector must must match the channel dimension of `x`. - up: Integer upsampling factor (default: 1). - down: Integer downsampling factor. (default: 1). - padding: Padding with respect to the upsampled image. Can be a single number - or a list/tuple `[x, y]` or `[x_before, x_after, y_before, y_after]` - (default: 0). - gain: Overall scaling factor for signal magnitude (default: sqrt(2)). - slope: Slope on the negative side of leaky ReLU (default: 0.2). - clamp: Maximum magnitude for leaky ReLU output (default: None). - flip_filter: False = convolution, True = correlation (default: False). - impl: Implementation to use. Can be `'ref'` or `'cuda'` (default: `'cuda'`). - - Returns: - Tensor of the shape `[batch_size, num_channels, out_height, out_width]`. - """ - assert isinstance(x, torch.Tensor) - assert impl in ['ref', 'cuda'] - if impl == 'cuda' and x.device.type == 'cuda' and _init(): - return _filtered_lrelu_cuda(up=up, down=down, padding=padding, gain=gain, slope=slope, clamp=clamp, flip_filter=flip_filter).apply(x, fu, fd, b, None, 0, 0) - return _filtered_lrelu_ref(x, fu=fu, fd=fd, b=b, up=up, down=down, padding=padding, gain=gain, slope=slope, clamp=clamp, flip_filter=flip_filter) - -#---------------------------------------------------------------------------- - -@misc.profiled_function -def _filtered_lrelu_ref(x, fu=None, fd=None, b=None, up=1, down=1, padding=0, gain=np.sqrt(2), slope=0.2, clamp=None, flip_filter=False): - """Slow and memory-inefficient reference implementation of `filtered_lrelu()` using - existing `upfirdn2n()` and `bias_act()` ops. - """ - assert isinstance(x, torch.Tensor) and x.ndim == 4 - fu_w, fu_h = _get_filter_size(fu) - fd_w, fd_h = _get_filter_size(fd) - if b is not None: - assert isinstance(b, torch.Tensor) and b.dtype == x.dtype - misc.assert_shape(b, [x.shape[1]]) - assert isinstance(up, int) and up >= 1 - assert isinstance(down, int) and down >= 1 - px0, px1, py0, py1 = _parse_padding(padding) - assert gain == float(gain) and gain > 0 - assert slope == float(slope) and slope >= 0 - assert clamp is None or (clamp == float(clamp) and clamp >= 0) - - # Calculate output size. - batch_size, channels, in_h, in_w = x.shape - in_dtype = x.dtype - out_w = (in_w * up + (px0 + px1) - (fu_w - 1) - (fd_w - 1) + (down - 1)) // down - out_h = (in_h * up + (py0 + py1) - (fu_h - 1) - (fd_h - 1) + (down - 1)) // down - - # Compute using existing ops. - x = bias_act.bias_act(x=x, b=b) # Apply bias. - x = upfirdn2d.upfirdn2d(x=x, f=fu, up=up, padding=[px0, px1, py0, py1], gain=up**2, flip_filter=flip_filter) # Upsample. 
- x = bias_act.bias_act(x=x, act='lrelu', alpha=slope, gain=gain, clamp=clamp) # Bias, leaky ReLU, clamp. - x = upfirdn2d.upfirdn2d(x=x, f=fd, down=down, flip_filter=flip_filter) # Downsample. - - # Check output shape & dtype. - misc.assert_shape(x, [batch_size, channels, out_h, out_w]) - assert x.dtype == in_dtype - return x - -#---------------------------------------------------------------------------- - -_filtered_lrelu_cuda_cache = dict() - -def _filtered_lrelu_cuda(up=1, down=1, padding=0, gain=np.sqrt(2), slope=0.2, clamp=None, flip_filter=False): - """Fast CUDA implementation of `filtered_lrelu()` using custom ops. - """ - assert isinstance(up, int) and up >= 1 - assert isinstance(down, int) and down >= 1 - px0, px1, py0, py1 = _parse_padding(padding) - assert gain == float(gain) and gain > 0 - gain = float(gain) - assert slope == float(slope) and slope >= 0 - slope = float(slope) - assert clamp is None or (clamp == float(clamp) and clamp >= 0) - clamp = float(clamp if clamp is not None else 'inf') - - # Lookup from cache. - key = (up, down, px0, px1, py0, py1, gain, slope, clamp, flip_filter) - if key in _filtered_lrelu_cuda_cache: - return _filtered_lrelu_cuda_cache[key] - - # Forward op. - class FilteredLReluCuda(torch.autograd.Function): - @staticmethod - def forward(ctx, x, fu, fd, b, si, sx, sy): # pylint: disable=arguments-differ - assert isinstance(x, torch.Tensor) and x.ndim == 4 - - # Replace empty up/downsample kernels with full 1x1 kernels (faster than separable). - if fu is None: - fu = torch.ones([1, 1], dtype=torch.float32, device=x.device) - if fd is None: - fd = torch.ones([1, 1], dtype=torch.float32, device=x.device) - assert 1 <= fu.ndim <= 2 - assert 1 <= fd.ndim <= 2 - - # Replace separable 1x1 kernels with full 1x1 kernels when scale factor is 1. - if up == 1 and fu.ndim == 1 and fu.shape[0] == 1: - fu = fu.square()[None] - if down == 1 and fd.ndim == 1 and fd.shape[0] == 1: - fd = fd.square()[None] - - # Missing sign input tensor. - if si is None: - si = torch.empty([0]) - - # Missing bias tensor. - if b is None: - b = torch.zeros([x.shape[1]], dtype=x.dtype, device=x.device) - - # Construct internal sign tensor only if gradients are needed. - write_signs = (si.numel() == 0) and (x.requires_grad or b.requires_grad) - - # Warn if input storage strides are not in decreasing order due to e.g. channels-last layout. - strides = [x.stride(i) for i in range(x.ndim) if x.size(i) > 1] - if any(a < b for a, b in zip(strides[:-1], strides[1:])): - warnings.warn("low-performance memory layout detected in filtered_lrelu input", RuntimeWarning) - - # Call C++/Cuda plugin if datatype is supported. - if x.dtype in [torch.float16, torch.float32]: - if torch.cuda.current_stream(x.device) != torch.cuda.default_stream(x.device): - warnings.warn("filtered_lrelu called with non-default cuda stream but concurrent execution is not supported", RuntimeWarning) - y, so, return_code = _plugin.filtered_lrelu(x, fu, fd, b, si, up, down, px0, px1, py0, py1, sx, sy, gain, slope, clamp, flip_filter, write_signs) - else: - return_code = -1 - - # No Cuda kernel found? Fall back to generic implementation. Still more memory efficient than the reference implementation because - # only the bit-packed sign tensor is retained for gradient computation. - if return_code < 0: - warnings.warn("filtered_lrelu called with parameters that have no optimized CUDA kernel, using generic fallback", RuntimeWarning) - - y = x.add(b.unsqueeze(-1).unsqueeze(-1)) # Add bias. 
- y = upfirdn2d.upfirdn2d(x=y, f=fu, up=up, padding=[px0, px1, py0, py1], gain=up**2, flip_filter=flip_filter) # Upsample. - so = _plugin.filtered_lrelu_act_(y, si, sx, sy, gain, slope, clamp, write_signs) # Activation function and sign handling. Modifies y in-place. - y = upfirdn2d.upfirdn2d(x=y, f=fd, down=down, flip_filter=flip_filter) # Downsample. - - # Prepare for gradient computation. - ctx.save_for_backward(fu, fd, (si if si.numel() else so)) - ctx.x_shape = x.shape - ctx.y_shape = y.shape - ctx.s_ofs = sx, sy - return y - - @staticmethod - def backward(ctx, dy): # pylint: disable=arguments-differ - fu, fd, si = ctx.saved_tensors - _, _, xh, xw = ctx.x_shape - _, _, yh, yw = ctx.y_shape - sx, sy = ctx.s_ofs - dx = None # 0 - dfu = None; assert not ctx.needs_input_grad[1] - dfd = None; assert not ctx.needs_input_grad[2] - db = None # 3 - dsi = None; assert not ctx.needs_input_grad[4] - dsx = None; assert not ctx.needs_input_grad[5] - dsy = None; assert not ctx.needs_input_grad[6] - - if ctx.needs_input_grad[0] or ctx.needs_input_grad[3]: - pp = [ - (fu.shape[-1] - 1) + (fd.shape[-1] - 1) - px0, - xw * up - yw * down + px0 - (up - 1), - (fu.shape[0] - 1) + (fd.shape[0] - 1) - py0, - xh * up - yh * down + py0 - (up - 1), - ] - gg = gain * (up ** 2) / (down ** 2) - ff = (not flip_filter) - sx = sx - (fu.shape[-1] - 1) + px0 - sy = sy - (fu.shape[0] - 1) + py0 - dx = _filtered_lrelu_cuda(up=down, down=up, padding=pp, gain=gg, slope=slope, clamp=None, flip_filter=ff).apply(dy, fd, fu, None, si, sx, sy) - - if ctx.needs_input_grad[3]: - db = dx.sum([0, 2, 3]) - - return dx, dfu, dfd, db, dsi, dsx, dsy - - # Add to cache. - _filtered_lrelu_cuda_cache[key] = FilteredLReluCuda - return FilteredLReluCuda - -#---------------------------------------------------------------------------- diff --git a/spaces/lvwerra/in-the-stack/app.py b/spaces/lvwerra/in-the-stack/app.py deleted file mode 100644 index 408c64177ae325d921996aa0cc5f37bc843d8f6c..0000000000000000000000000000000000000000 --- a/spaces/lvwerra/in-the-stack/app.py +++ /dev/null @@ -1,63 +0,0 @@ -from datasets import load_dataset -import streamlit as st -from huggingface_hub import hf_hub_download -import gzip -import json -import time - -t_0 = time.time() - -@st.cache(allow_output_mutation=True) -def load_all_usernames(filename): - filepath = hf_hub_download(repo_id="bigcode/the-stack-username-to-repo", filename=filename, repo_type="dataset") - - with gzip.open(filepath, 'r') as f: - usernames = f.read().decode('utf-8') - usernames = json.loads(usernames) - - return usernames - -#st.image("./banner.png", use_column_width=True) -filename = "username_to_repo.json.gz" - -st.markdown("**_The Stack is an open governance interface between the AI community and the open source community._**") -st.title("Am I in The Stack?") -st.markdown("As part of the BigCode project, we released and maintain [The Stack](https://huggingface.co/datasets/bigcode/the-stack), a 3.1 TB dataset of permissively licensed source code in 30 programming languages. One of our goals in this project is to give people agency over their source code by letting them decide whether or not it should be used to develop and evaluate machine learning models, as we acknowledge that not all developers may wish to have their data used for that purpose.") - -st.markdown("This tool lets you check if a repository under a given username is part of The Stack dataset. Would you like to have your data removed from future versions of The Stack? 
You can opt-out following the instructions [here](https://www.bigcode-project.org/docs/about/the-stack/#how-can-i-request-that-my-data-be-removed-from-the-stack).") - -t_start = time.time() -usernames = load_all_usernames(filename) -print("Time load", time.time()-t_start) - -username = st.text_input("Your GitHub Username:") -repos = [] -if st.button("Check!"):# or username: - t_start = time.time() - if username in usernames: - repos = usernames[username] - repo_word = "repository" if len(repos)==1 else "repositories" - st.markdown(f"**Yes**, there is code from **{len(repos)} {repo_word}** in The Stack:") - for repo in repos: - print(repo) - st.markdown(f"`{repo}`") - else: - st.text("**No**, your code is not in The Stack.") - print("Time to check", time.time()-t_start) - -#if st.button(""): -if len(repos)>0: - with st.expander("I want to remove my data from the Stack!"): - st.markdown("Select which repositories you would like to have removed:") - exclude_repo = [] - for repo in repos: - exclude_repo.append(st.checkbox(repo, value=True)) - - - st.markdown("Open an issue with the below text in the opt-out repo [here](https://github.com/bigcode-project/opt-out/issues/new):") - issue_text = "I want to remove the following repositories.\n\n" - issue_text += " - "+ "\n - ".join([repo for (repo, exclude) in zip(repos, exclude_repo) if exclude]) - st.code(issue_text) - - -print("Full time", time.time()-t_0) \ No newline at end of file diff --git a/spaces/ma-xu/LIVE/pybind11/tests/conftest.py b/spaces/ma-xu/LIVE/pybind11/tests/conftest.py deleted file mode 100644 index a2350d041f5d3d57dede9ff23c3177eae2914048..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/pybind11/tests/conftest.py +++ /dev/null @@ -1,200 +0,0 @@ -# -*- coding: utf-8 -*- -"""pytest configuration - -Extends output capture as needed by pybind11: ignore constructors, optional unordered lines. -Adds docstring and exceptions message sanitizers: ignore Python 2 vs 3 differences. 
-""" - -import contextlib -import difflib -import gc -import re -import textwrap - -import pytest - -import env - -# Early diagnostic for failed imports -import pybind11_tests # noqa: F401 - -_unicode_marker = re.compile(r'u(\'[^\']*\')') -_long_marker = re.compile(r'([0-9])L') -_hexadecimal = re.compile(r'0x[0-9a-fA-F]+') - -# Avoid collecting Python3 only files -collect_ignore = [] -if env.PY2: - collect_ignore.append("test_async.py") - - -def _strip_and_dedent(s): - """For triple-quote strings""" - return textwrap.dedent(s.lstrip('\n').rstrip()) - - -def _split_and_sort(s): - """For output which does not require specific line order""" - return sorted(_strip_and_dedent(s).splitlines()) - - -def _make_explanation(a, b): - """Explanation for a failed assert -- the a and b arguments are List[str]""" - return ["--- actual / +++ expected"] + [line.strip('\n') for line in difflib.ndiff(a, b)] - - -class Output(object): - """Basic output post-processing and comparison""" - def __init__(self, string): - self.string = string - self.explanation = [] - - def __str__(self): - return self.string - - def __eq__(self, other): - # Ignore constructor/destructor output which is prefixed with "###" - a = [line for line in self.string.strip().splitlines() if not line.startswith("###")] - b = _strip_and_dedent(other).splitlines() - if a == b: - return True - else: - self.explanation = _make_explanation(a, b) - return False - - -class Unordered(Output): - """Custom comparison for output without strict line ordering""" - def __eq__(self, other): - a = _split_and_sort(self.string) - b = _split_and_sort(other) - if a == b: - return True - else: - self.explanation = _make_explanation(a, b) - return False - - -class Capture(object): - def __init__(self, capfd): - self.capfd = capfd - self.out = "" - self.err = "" - - def __enter__(self): - self.capfd.readouterr() - return self - - def __exit__(self, *args): - self.out, self.err = self.capfd.readouterr() - - def __eq__(self, other): - a = Output(self.out) - b = other - if a == b: - return True - else: - self.explanation = a.explanation - return False - - def __str__(self): - return self.out - - def __contains__(self, item): - return item in self.out - - @property - def unordered(self): - return Unordered(self.out) - - @property - def stderr(self): - return Output(self.err) - - -@pytest.fixture -def capture(capsys): - """Extended `capsys` with context manager and custom equality operators""" - return Capture(capsys) - - -class SanitizedString(object): - def __init__(self, sanitizer): - self.sanitizer = sanitizer - self.string = "" - self.explanation = [] - - def __call__(self, thing): - self.string = self.sanitizer(thing) - return self - - def __eq__(self, other): - a = self.string - b = _strip_and_dedent(other) - if a == b: - return True - else: - self.explanation = _make_explanation(a.splitlines(), b.splitlines()) - return False - - -def _sanitize_general(s): - s = s.strip() - s = s.replace("pybind11_tests.", "m.") - s = s.replace("unicode", "str") - s = _long_marker.sub(r"\1", s) - s = _unicode_marker.sub(r"\1", s) - return s - - -def _sanitize_docstring(thing): - s = thing.__doc__ - s = _sanitize_general(s) - return s - - -@pytest.fixture -def doc(): - """Sanitize docstrings and add custom failure explanation""" - return SanitizedString(_sanitize_docstring) - - -def _sanitize_message(thing): - s = str(thing) - s = _sanitize_general(s) - s = _hexadecimal.sub("0", s) - return s - - -@pytest.fixture -def msg(): - """Sanitize messages and add custom failure 
explanation""" - return SanitizedString(_sanitize_message) - - -# noinspection PyUnusedLocal -def pytest_assertrepr_compare(op, left, right): - """Hook to insert custom failure explanation""" - if hasattr(left, 'explanation'): - return left.explanation - - -@contextlib.contextmanager -def suppress(exception): - """Suppress the desired exception""" - try: - yield - except exception: - pass - - -def gc_collect(): - ''' Run the garbage collector twice (needed when running - reference counting tests with PyPy) ''' - gc.collect() - gc.collect() - - -def pytest_configure(): - pytest.suppress = suppress - pytest.gc_collect = gc_collect diff --git a/spaces/ma-xu/LIVE/thrust/thrust/iterator/detail/any_system_tag.h b/spaces/ma-xu/LIVE/thrust/thrust/iterator/detail/any_system_tag.h deleted file mode 100644 index 27640b5e0dd83881cbd16d19229c409307bc7da8..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/iterator/detail/any_system_tag.h +++ /dev/null @@ -1,34 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include -#include - -namespace thrust -{ - -struct any_system_tag - : thrust::execution_policy -{ - // allow any_system_tag to convert to any type at all - // XXX make this safer using enable_if> upon c++11 - template operator T () const {return T();} -}; - -} // end thrust - diff --git a/spaces/marcusj83/MusicGenbruh/audiocraft/modules/lstm.py b/spaces/marcusj83/MusicGenbruh/audiocraft/modules/lstm.py deleted file mode 100644 index c0866175950c1ca4f6cca98649525e6481853bba..0000000000000000000000000000000000000000 --- a/spaces/marcusj83/MusicGenbruh/audiocraft/modules/lstm.py +++ /dev/null @@ -1,25 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from torch import nn - - -class StreamableLSTM(nn.Module): - """LSTM without worrying about the hidden state, nor the layout of the data. - Expects input as convolutional layout. 
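- Concretely, `forward` expects a tensor of shape `[batch, dimension, time]`, permutes it
- to `[time, batch, dimension]` for `nn.LSTM`, optionally adds the input back as a skip
- connection, and permutes the result back to `[batch, dimension, time]`.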
- """ - def __init__(self, dimension: int, num_layers: int = 2, skip: bool = True): - super().__init__() - self.skip = skip - self.lstm = nn.LSTM(dimension, dimension, num_layers) - - def forward(self, x): - x = x.permute(2, 0, 1) - y, _ = self.lstm(x) - if self.skip: - y = y + x - y = y.permute(1, 2, 0) - return y diff --git a/spaces/marioboy/neil-breen/synthesizer/synthesizer_dataset.py b/spaces/marioboy/neil-breen/synthesizer/synthesizer_dataset.py deleted file mode 100644 index 9d552d16d0b6757871189037bf0b981c8dfebbaf..0000000000000000000000000000000000000000 --- a/spaces/marioboy/neil-breen/synthesizer/synthesizer_dataset.py +++ /dev/null @@ -1,92 +0,0 @@ -import torch -from torch.utils.data import Dataset -import numpy as np -from pathlib import Path -from synthesizer.utils.text import text_to_sequence - - -class SynthesizerDataset(Dataset): - def __init__(self, metadata_fpath: Path, mel_dir: Path, embed_dir: Path, hparams): - print("Using inputs from:\n\t%s\n\t%s\n\t%s" % (metadata_fpath, mel_dir, embed_dir)) - - with metadata_fpath.open("r") as metadata_file: - metadata = [line.split("|") for line in metadata_file] - - mel_fnames = [x[1] for x in metadata if int(x[4])] - mel_fpaths = [mel_dir.joinpath(fname) for fname in mel_fnames] - embed_fnames = [x[2] for x in metadata if int(x[4])] - embed_fpaths = [embed_dir.joinpath(fname) for fname in embed_fnames] - self.samples_fpaths = list(zip(mel_fpaths, embed_fpaths)) - self.samples_texts = [x[5].strip() for x in metadata if int(x[4])] - self.metadata = metadata - self.hparams = hparams - - print("Found %d samples" % len(self.samples_fpaths)) - - def __getitem__(self, index): - # Sometimes index may be a list of 2 (not sure why this happens) - # If that is the case, return a single item corresponding to first element in index - if index is list: - index = index[0] - - mel_path, embed_path = self.samples_fpaths[index] - mel = np.load(mel_path).T.astype(np.float32) - - # Load the embed - embed = np.load(embed_path) - - # Get the text and clean it - text = text_to_sequence(self.samples_texts[index], self.hparams.tts_cleaner_names) - - # Convert the list returned by text_to_sequence to a numpy array - text = np.asarray(text).astype(np.int32) - - return text, mel.astype(np.float32), embed.astype(np.float32), index - - def __len__(self): - return len(self.samples_fpaths) - - -def collate_synthesizer(batch, r, hparams): - # Text - x_lens = [len(x[0]) for x in batch] - max_x_len = max(x_lens) - - chars = [pad1d(x[0], max_x_len) for x in batch] - chars = np.stack(chars) - - # Mel spectrogram - spec_lens = [x[1].shape[-1] for x in batch] - max_spec_len = max(spec_lens) + 1 - if max_spec_len % r != 0: - max_spec_len += r - max_spec_len % r - - # WaveRNN mel spectrograms are normalized to [0, 1] so zero padding adds silence - # By default, SV2TTS uses symmetric mels, where -1*max_abs_value is silence. 
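- # For example, with max_abs_value = 4 the padded frames are filled with -4 (silence on
- # the symmetric scale) rather than 0, which sits in the middle of the value range.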
- if hparams.symmetric_mels: - mel_pad_value = -1 * hparams.max_abs_value - else: - mel_pad_value = 0 - - mel = [pad2d(x[1], max_spec_len, pad_value=mel_pad_value) for x in batch] - mel = np.stack(mel) - - # Speaker embedding (SV2TTS) - embeds = [x[2] for x in batch] - - # Index (for vocoder preprocessing) - indices = [x[3] for x in batch] - - - # Convert all to tensor - chars = torch.tensor(chars).long() - mel = torch.tensor(mel) - embeds = torch.tensor(embeds) - - return chars, mel, embeds, indices - -def pad1d(x, max_len, pad_value=0): - return np.pad(x, (0, max_len - len(x)), mode="constant", constant_values=pad_value) - -def pad2d(x, max_len, pad_value=0): - return np.pad(x, ((0, 0), (0, max_len - x.shape[-1])), mode="constant", constant_values=pad_value) diff --git a/spaces/matthoffner/AudioCraft_Plus/scripts/mos.py b/spaces/matthoffner/AudioCraft_Plus/scripts/mos.py deleted file mode 100644 index a711c9ece23e72ed3a07032c7834ef7c56ab4f11..0000000000000000000000000000000000000000 --- a/spaces/matthoffner/AudioCraft_Plus/scripts/mos.py +++ /dev/null @@ -1,286 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - - -""" -To run this script, from the root of the repo. Make sure to have Flask installed - - FLASK_DEBUG=1 FLASK_APP=scripts.mos flask run -p 4567 - # or if you have gunicorn - gunicorn -w 4 -b 127.0.0.1:8895 -t 120 'scripts.mos:app' --access-logfile - - -""" -from collections import defaultdict -from functools import wraps -from hashlib import sha1 -import json -import math -from pathlib import Path -import random -import typing as tp - -from flask import Flask, redirect, render_template, request, session, url_for - -from audiocraft import train -from audiocraft.utils.samples.manager import get_samples_for_xps - - -SAMPLES_PER_PAGE = 8 -MAX_RATING = 5 -storage = Path(train.main.dora.dir / 'mos_storage') -storage.mkdir(exist_ok=True) -surveys = storage / 'surveys' -surveys.mkdir(exist_ok=True) -magma_root = Path(train.__file__).parent.parent -app = Flask('mos', static_folder=str(magma_root / 'scripts/static'), - template_folder=str(magma_root / 'scripts/templates')) -app.secret_key = b'audiocraft makes the best songs' - - -def normalize_path(path: Path): - """Just to make path a bit nicer, make them relative to the Dora root dir. - """ - path = path.resolve() - dora_dir = train.main.dora.dir.resolve() / 'xps' - return path.relative_to(dora_dir) - - -def get_full_path(normalized_path: Path): - """Revert `normalize_path`. - """ - return train.main.dora.dir.resolve() / 'xps' / normalized_path - - -def get_signature(xps: tp.List[str]): - """Return a signature for a list of XP signatures. - """ - return sha1(json.dumps(xps).encode()).hexdigest()[:10] - - -def ensure_logged(func): - """Ensure user is logged in. - """ - @wraps(func) - def _wrapped(*args, **kwargs): - user = session.get('user') - if user is None: - return redirect(url_for('login', redirect_to=request.url)) - return func(*args, **kwargs) - return _wrapped - - -@app.route('/login', methods=['GET', 'POST']) -def login(): - """Login user if not already, then redirect. 
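- The username is stored in the Flask session, and the optional `redirect_to` query
- parameter preserves the page the user originally requested.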
- """ - user = session.get('user') - if user is None: - error = None - if request.method == 'POST': - user = request.form['user'] - if not user: - error = 'User cannot be empty' - if user is None or error: - return render_template('login.html', error=error) - assert user - session['user'] = user - redirect_to = request.args.get('redirect_to') - if redirect_to is None: - redirect_to = url_for('index') - return redirect(redirect_to) - - -@app.route('/', methods=['GET', 'POST']) -@ensure_logged -def index(): - """Offer to create a new study. - """ - errors = [] - if request.method == 'POST': - xps_or_grids = [part.strip() for part in request.form['xps'].split()] - xps = set() - for xp_or_grid in xps_or_grids: - xp_path = train.main.dora.dir / 'xps' / xp_or_grid - if xp_path.exists(): - xps.add(xp_or_grid) - continue - grid_path = train.main.dora.dir / 'grids' / xp_or_grid - if grid_path.exists(): - for child in grid_path.iterdir(): - if child.is_symlink(): - xps.add(child.name) - continue - errors.append(f'{xp_or_grid} is neither an XP nor a grid!') - assert xps or errors - blind = 'true' if request.form.get('blind') == 'on' else 'false' - xps = list(xps) - if not errors: - signature = get_signature(xps) - manifest = { - 'xps': xps, - } - survey_path = surveys / signature - survey_path.mkdir(exist_ok=True) - with open(survey_path / 'manifest.json', 'w') as f: - json.dump(manifest, f, indent=2) - return redirect(url_for('survey', blind=blind, signature=signature)) - return render_template('index.html', errors=errors) - - -@app.route('/survey/', methods=['GET', 'POST']) -@ensure_logged -def survey(signature): - success = request.args.get('success', False) - seed = int(request.args.get('seed', 4321)) - blind = request.args.get('blind', 'false') in ['true', 'on', 'True'] - exclude_prompted = request.args.get('exclude_prompted', 'false') in ['true', 'on', 'True'] - exclude_unprompted = request.args.get('exclude_unprompted', 'false') in ['true', 'on', 'True'] - max_epoch = int(request.args.get('max_epoch', '-1')) - survey_path = surveys / signature - assert survey_path.exists(), survey_path - - user = session['user'] - result_folder = survey_path / 'results' - result_folder.mkdir(exist_ok=True) - result_file = result_folder / f'{user}_{seed}.json' - - with open(survey_path / 'manifest.json') as f: - manifest = json.load(f) - - xps = [train.main.get_xp_from_sig(xp) for xp in manifest['xps']] - names, ref_name = train.main.get_names(xps) - - samples_kwargs = { - 'exclude_prompted': exclude_prompted, - 'exclude_unprompted': exclude_unprompted, - 'max_epoch': max_epoch, - } - matched_samples = get_samples_for_xps(xps, epoch=-1, **samples_kwargs) # fetch latest epoch - models_by_id = { - id: [{ - 'xp': xps[idx], - 'xp_name': names[idx], - 'model_id': f'{xps[idx].sig}-{sample.id}', - 'sample': sample, - 'is_prompted': sample.prompt is not None, - 'errors': [], - } for idx, sample in enumerate(samples)] - for id, samples in matched_samples.items() - } - experiments = [ - {'xp': xp, 'name': names[idx], 'epoch': list(matched_samples.values())[0][idx].epoch} - for idx, xp in enumerate(xps) - ] - - keys = list(matched_samples.keys()) - keys.sort() - rng = random.Random(seed) - rng.shuffle(keys) - model_ids = keys[:SAMPLES_PER_PAGE] - - if blind: - for key in model_ids: - rng.shuffle(models_by_id[key]) - - ok = True - if request.method == 'POST': - all_samples_results = [] - for id in model_ids: - models = models_by_id[id] - result = { - 'id': id, - 'is_prompted': models[0]['is_prompted'], - 'models': {} - } - 
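- # One result entry per sample id; per-model ratings are filled in below, keyed by
- # each XP's signature.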
all_samples_results.append(result) - for model in models: - rating = request.form[model['model_id']] - if rating: - rating = int(rating) - assert rating <= MAX_RATING and rating >= 1 - result['models'][model['xp'].sig] = rating - model['rating'] = rating - else: - ok = False - model['errors'].append('Please rate this model.') - if ok: - result = { - 'results': all_samples_results, - 'seed': seed, - 'user': user, - 'blind': blind, - 'exclude_prompted': exclude_prompted, - 'exclude_unprompted': exclude_unprompted, - } - print(result) - with open(result_file, 'w') as f: - json.dump(result, f) - seed = seed + 1 - return redirect(url_for( - 'survey', signature=signature, blind=blind, seed=seed, - exclude_prompted=exclude_prompted, exclude_unprompted=exclude_unprompted, - max_epoch=max_epoch, success=True)) - - ratings = list(range(1, MAX_RATING + 1)) - return render_template( - 'survey.html', ratings=ratings, blind=blind, seed=seed, signature=signature, success=success, - exclude_prompted=exclude_prompted, exclude_unprompted=exclude_unprompted, max_epoch=max_epoch, - experiments=experiments, models_by_id=models_by_id, model_ids=model_ids, errors=[], - ref_name=ref_name, already_filled=result_file.exists()) - - -@app.route('/audio/') -def audio(path: str): - full_path = Path('/') / path - assert full_path.suffix in [".mp3", ".wav"] - return full_path.read_bytes(), {'Content-Type': 'audio/mpeg'} - - -def mean(x): - return sum(x) / len(x) - - -def std(x): - m = mean(x) - return math.sqrt(sum((i - m)**2 for i in x) / len(x)) - - -@app.route('/results/') -@ensure_logged -def results(signature): - - survey_path = surveys / signature - assert survey_path.exists(), survey_path - result_folder = survey_path / 'results' - result_folder.mkdir(exist_ok=True) - - # ratings per model, then per user. - ratings_per_model = defaultdict(list) - users = [] - for result_file in result_folder.iterdir(): - if result_file.suffix != '.json': - continue - with open(result_file) as f: - results = json.load(f) - users.append(results['user']) - for result in results['results']: - for sig, rating in result['models'].items(): - ratings_per_model[sig].append(rating) - - fmt = '{:.2f}' - models = [] - for model in sorted(ratings_per_model.keys()): - ratings = ratings_per_model[model] - - models.append({ - 'sig': model, - 'samples': len(ratings), - 'mean_rating': fmt.format(mean(ratings)), - # the value 1.96 was probably chosen to achieve some - # confidence interval assuming gaussianity. 
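- # The reported value is the half-width of an approximate 95% confidence interval for
- # the mean rating: the standard error is std / sqrt(n), and a normal distribution
- # places ~95% of its mass within 1.96 standard errors. For example, std = 1.0 over
- # n = 25 ratings gives 1.96 * 1.0 / 5 = 0.392.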
- 'std_rating': fmt.format(1.96 * std(ratings) / len(ratings)**0.5), - }) - return render_template('results.html', signature=signature, models=models, users=users) diff --git a/spaces/matthoffner/chatbot-mini/utils/app/settings.ts b/spaces/matthoffner/chatbot-mini/utils/app/settings.ts deleted file mode 100644 index 2d21816ba7fb8d4db9d22b169e01b30f8ba366ec..0000000000000000000000000000000000000000 --- a/spaces/matthoffner/chatbot-mini/utils/app/settings.ts +++ /dev/null @@ -1,25 +0,0 @@ -import { Settings } from '@/types/settings'; - -const STORAGE_KEY = 'settings'; - -export const getSettings = (): any => { - /* - let settings: Settings = { - theme: 'dark', - }; - const settingsJson = localStorage.getItem(STORAGE_KEY); - if (settingsJson) { - try { - let savedSettings = JSON.parse(settingsJson) as Settings; - settings = Object.assign(settings, savedSettings); - } catch (e) { - console.error(e); - } - } - return settings; - */ -}; - -export const saveSettings = (settings: Settings) => { - localStorage.setItem(STORAGE_KEY, JSON.stringify(settings)); -}; diff --git a/spaces/matthoffner/chatbot/components/Markdown/CodeBlock.tsx b/spaces/matthoffner/chatbot/components/Markdown/CodeBlock.tsx deleted file mode 100644 index 1b53e8b4d1351ae2f890c4239887091b4ec51b57..0000000000000000000000000000000000000000 --- a/spaces/matthoffner/chatbot/components/Markdown/CodeBlock.tsx +++ /dev/null @@ -1,94 +0,0 @@ -import { IconCheck, IconClipboard, IconDownload } from '@tabler/icons-react'; -import { FC, memo, useState } from 'react'; -import { Prism as SyntaxHighlighter } from 'react-syntax-highlighter'; -import { oneDark } from 'react-syntax-highlighter/dist/cjs/styles/prism'; - -import { useTranslation } from 'next-i18next'; - -import { - generateRandomString, - programmingLanguages, -} from '@/utils/app/codeblock'; - -interface Props { - language: string; - value: string; -} - -export const CodeBlock: FC = memo(({ language, value }) => { - const { t } = useTranslation('markdown'); - const [isCopied, setIsCopied] = useState(false); - - const copyToClipboard = () => { - if (!navigator.clipboard || !navigator.clipboard.writeText) { - return; - } - - navigator.clipboard.writeText(value).then(() => { - setIsCopied(true); - - setTimeout(() => { - setIsCopied(false); - }, 2000); - }); - }; - const downloadAsFile = () => { - const fileExtension = programmingLanguages[language] || '.file'; - const suggestedFileName = `file-${generateRandomString( - 3, - true, - )}${fileExtension}`; - const fileName = window.prompt( - t('Enter file name') || '', - suggestedFileName, - ); - - if (!fileName) { - // user pressed cancel on prompt - return; - } - - const blob = new Blob([value], { type: 'text/plain' }); - const url = URL.createObjectURL(blob); - const link = document.createElement('a'); - link.download = fileName; - link.href = url; - link.style.display = 'none'; - document.body.appendChild(link); - link.click(); - document.body.removeChild(link); - URL.revokeObjectURL(url); - }; - return ( -
    - {/* JSX markup was lost when this file was extracted: the original return value appears to have wrapped {language} in a header bar alongside the copy and download buttons defined above, followed by a SyntaxHighlighter block rendering {value}. */}
    - ); -}); -CodeBlock.displayName = 'CodeBlock'; diff --git a/spaces/matthoffner/chatbot/components/Spinner/index.ts b/spaces/matthoffner/chatbot/components/Spinner/index.ts deleted file mode 100644 index f90663a519f138da6e80f382b3afee6d13029fd8..0000000000000000000000000000000000000000 --- a/spaces/matthoffner/chatbot/components/Spinner/index.ts +++ /dev/null @@ -1 +0,0 @@ -export { default } from './Spinner'; diff --git a/spaces/matthoffner/chatbot/types/chat.ts b/spaces/matthoffner/chatbot/types/chat.ts deleted file mode 100644 index 1233f2cbe347464ba4937d7a0272ea533ded116b..0000000000000000000000000000000000000000 --- a/spaces/matthoffner/chatbot/types/chat.ts +++ /dev/null @@ -1,26 +0,0 @@ -import { OpenAIModel } from './openai'; - -export interface Message { - role: Role; - content: string; -} - -export type Role = 'assistant' | 'user'; - -export interface ChatBody { - model: OpenAIModel; - messages: Message[]; - key: string; - prompt: string; - temperature: number; -} - -export interface Conversation { - id: string; - name: string; - messages: Message[]; - model: OpenAIModel; - prompt: string; - temperature: number; - folderId: string | null; -} diff --git a/spaces/matthoffner/open-codetree/components/Playground/IframeLoaderScreen.tsx b/spaces/matthoffner/open-codetree/components/Playground/IframeLoaderScreen.tsx deleted file mode 100644 index 2c16de20765f0aa8222802faea93b57afcd20d18..0000000000000000000000000000000000000000 --- a/spaces/matthoffner/open-codetree/components/Playground/IframeLoaderScreen.tsx +++ /dev/null @@ -1,32 +0,0 @@ -import React from "react"; - -export const IframeLoaderScreen = () => { - return ( -
    - {/* JSX markup was lost when this file was extracted: the original return value appears to have been a block of nested, styled <div> elements forming the loading animation shown while the preview iframe loads. */}
    - ); -}; diff --git a/spaces/meraih/English-Japanese-Anime-TTS/losses.py b/spaces/meraih/English-Japanese-Anime-TTS/losses.py deleted file mode 100644 index fb22a0e834dd87edaa37bb8190eee2c3c7abe0d5..0000000000000000000000000000000000000000 --- a/spaces/meraih/English-Japanese-Anime-TTS/losses.py +++ /dev/null @@ -1,61 +0,0 @@ -import torch -from torch.nn import functional as F - -import commons - - -def feature_loss(fmap_r, fmap_g): - loss = 0 - for dr, dg in zip(fmap_r, fmap_g): - for rl, gl in zip(dr, dg): - rl = rl.float().detach() - gl = gl.float() - loss += torch.mean(torch.abs(rl - gl)) - - return loss * 2 - - -def discriminator_loss(disc_real_outputs, disc_generated_outputs): - loss = 0 - r_losses = [] - g_losses = [] - for dr, dg in zip(disc_real_outputs, disc_generated_outputs): - dr = dr.float() - dg = dg.float() - r_loss = torch.mean((1-dr)**2) - g_loss = torch.mean(dg**2) - loss += (r_loss + g_loss) - r_losses.append(r_loss.item()) - g_losses.append(g_loss.item()) - - return loss, r_losses, g_losses - - -def generator_loss(disc_outputs): - loss = 0 - gen_losses = [] - for dg in disc_outputs: - dg = dg.float() - l = torch.mean((1-dg)**2) - gen_losses.append(l) - loss += l - - return loss, gen_losses - - -def kl_loss(z_p, logs_q, m_p, logs_p, z_mask): - """ - z_p, logs_q: [b, h, t_t] - m_p, logs_p: [b, h, t_t] - """ - z_p = z_p.float() - logs_q = logs_q.float() - m_p = m_p.float() - logs_p = logs_p.float() - z_mask = z_mask.float() - - kl = logs_p - logs_q - 0.5 - kl += 0.5 * ((z_p - m_p)**2) * torch.exp(-2. * logs_p) - kl = torch.sum(kl * z_mask) - l = kl / torch.sum(z_mask) - return l diff --git a/spaces/merve/dataset-worldviews/public/base-rate/style.css b/spaces/merve/dataset-worldviews/public/base-rate/style.css deleted file mode 100644 index 5ba8d020fda588a3e7f61ff3fab4d377aa3bd4f2..0000000000000000000000000000000000000000 --- a/spaces/merve/dataset-worldviews/public/base-rate/style.css +++ /dev/null @@ -1,134 +0,0 @@ -/* Copyright 2020 Google LLC. All Rights Reserved. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
-==============================================================================*/ - - - -.tooltip { - top: -1000px; - position: fixed; - padding: 10px; - background: rgba(255, 255, 255, .90); - border: 1px solid lightgray; - pointer-events: none; - width: auto; - -} -.tooltip-hidden{ - opacity: 0; - transition: all .3s; - transition-delay: .1s; -} - -@media (max-width: 590px){ - div.tooltip{ - bottom: -1px; - width: calc(100%); - left: -1px !important; - right: -1px !important; - top: auto !important; - width: auto !important; - } -} - -svg{ - overflow: visible; -} - -.domain{ - display: none; -} - -#big-matrix text{ - font-family: 'Google Sans', sans-serif; - /*pointer-events: none;*/ - text-shadow: 0 1px 0 #fff, 1px 0 0 #fff, 0 -1px 0 #fff, -1px 0 0 #fff; - text-shadow: 0 1px 0 rgba(255,255,255, .6), 1px 0 0 rgba(255,255,255, .6), 0 -1px 0 rgba(255,255,255, .6), -1px 0 0 rgba(255,255,255, .6); -} - - -body{ - max-width: 900px; -} - -h1{ -} - -h1{ - /*text-align: center;*/ -} - -h3{ - font-size: 20px; -} -#big-matrix{ - text-align: center; - margin-top: 40px; - font-family: 'Google Sans', sans-serif; - -} -div.big-container{ - display: inline-block; - margin: 10px; -} - -#metrics{ - text-align: center; -} -div.metrics-container{ - display: inline-block; - margin: 10px; -} - -div.metrics-container > div{ - display: inline-block; - vertical-align: middle; - pointer-events: none; -} - - - - -.drag{ - cursor: pointer; - fill-opacity: 0; - fill: #f0f; - stroke-opacity: 0; -} - -svg.dragging{ - cursor: pointer; -} - -sl{ - /*background: #000; */ - color: #000; - border: 1px solid #eee; - width: 1em; - display: inline-block; - padding-left: 2px; - padding-right: 2px; - font-style: normal; -} - -#instructions{ - margin-top: 10px; - margin-bottom: 10px; - text-align: center; -} - - - - - diff --git a/spaces/merve/hidden-bias/source/anonymization/make-slides.js b/spaces/merve/hidden-bias/source/anonymization/make-slides.js deleted file mode 100644 index 3feff55ba9248cee61cd7ec881fade8ef661e67c..0000000000000000000000000000000000000000 --- a/spaces/merve/hidden-bias/source/anonymization/make-slides.js +++ /dev/null @@ -1,98 +0,0 @@ -window.makeSlides = function(){ - var slides = [ - { - xKey: 'grid', - circleDelayFn: d => axii.ageScale(d.age), - showFlipRect: 0, - populationTarget: 144, - headsProbTarget: .5, - }, - { - xKey: 'age', - showAgeAxis: 1, - }, - { - xKey: 'ageState', - showStateAxis: 1, - }, - { - showUniqueBox: 1 - }, - { - xKey: 'ageStateSeason', - showUniqueBox: 1, - showUniqueSeasonBox: 1, - showSeasonAxis: 1, - }, - { - xKey: 'heads', - showUniqueBox: 0, - showUniqueSeasonBox: 0, - showSeasonAxis: 0, - showAgeAxis: 0, - showStateAxis: 0, - showHeadAxis: 1, - }, - { - showFlipCircle: 1, - showHeadCaptionAxis: 1, - }, - - // Flip coin - { - xKey: 'plagerizedShifted', - showHeadAxis: 0, - showHeadCaptionAxis: 0, - showHistogramAxis: 1, - }, - - // Exactly how far off can these estimates be after adding noise? Flip more coins to see the distribution. - { - enterHistogram: 1, - showHistogram: 1, - // showPlagerizedAxis: 0, - showEstimate: 1, - }, - - // Reducing the random noise increases our point estimate, but risks leaking information about students. - { - animateHeadsProbSlider: 1, - animatePopulationSlider: 1, - enterHistogram: 0, - name: 'noise', - headsProbTarget: .35, - }, - - // If we collect information from lots of people, we can have high accuracy and protect everyone's privacy. 
- { - showEstimate: 0, - showAllStudents: 1, - name: 'population', - animateHeadsProbSlider: -1, - animatePopulationSlider: 1, - populationTarget: 400, - }, - - ] - - var keys = [] - slides.forEach((d, i) => { - keys = keys.concat(d3.keys(d)) - d.index = i - }) - _.uniq(keys).forEach(str => { - var prev = null - slides.forEach(d => { - if (typeof(d[str]) === 'undefined'){ - d[str] = prev - } - prev = d[str] - }) - }) - - return slides -} - - - -if (window.init) window.init() diff --git a/spaces/merve/measuring-fairness/public/third_party/swoopy-drag.js b/spaces/merve/measuring-fairness/public/third_party/swoopy-drag.js deleted file mode 100644 index 3c740601b5111efdf47f0fd5da9d41de58ceb757..0000000000000000000000000000000000000000 --- a/spaces/merve/measuring-fairness/public/third_party/swoopy-drag.js +++ /dev/null @@ -1,193 +0,0 @@ -// https://github.com/1wheel/swoopy-drag Copyright (c) 2016 Adam Pearce - -(function (global, factory) { - typeof exports === 'object' && typeof module !== 'undefined' ? factory(exports, require('d3')) : - typeof define === 'function' && define.amd ? define(['exports', 'd3'], factory) : - (factory((global.d3 = global.d3 || {}),global.d3)); -}(this, function (exports,d3) { 'use strict'; - - function swoopyDrag(){ - var x = function(d){ return d } - var y = function(d){ return d } - - var annotations = [] - var annotationSel - - var draggable = false - - var dispatch = d3.dispatch('drag') - - var textDrag = d3.drag() - .on('drag', function(d){ - var x = d3.event.x - var y = d3.event.y - d.textOffset = [x, y].map(Math.round) - - d3.select(this).call(translate, d.textOffset) - - dispatch.call('drag') - }) - .subject(function(d){ return {x: d.textOffset[0], y: d.textOffset[1]} }) - - var circleDrag = d3.drag() - .on('drag', function(d){ - var x = d3.event.x - var y = d3.event.y - d.pos = [x, y].map(Math.round) - - var parentSel = d3.select(this.parentNode) - - var path = '' - var points = parentSel.selectAll('circle').data() - if (points[0].type == 'A'){ - path = calcCirclePath(points) - } else{ - points.forEach(function(d){ path = path + d.type + d.pos }) - } - - parentSel.select('path').attr('d', path).datum().path = path - d3.select(this).call(translate, d.pos) - - dispatch.call('drag') - }) - .subject(function(d){ return {x: d.pos[0], y: d.pos[1]} }) - - - var rv = function(sel){ - annotationSel = sel.html('').selectAll('g') - .data(annotations).enter() - .append('g') - .call(translate, function(d){ return [x(d), y(d)] }) - - var textSel = annotationSel.append('text') - .call(translate, ƒ('textOffset')) - .text(ƒ('text')) - - annotationSel.append('path') - .attr('d', ƒ('path')) - - if (!draggable) return - - annotationSel.style('cursor', 'pointer') - textSel.call(textDrag) - - annotationSel.selectAll('circle').data(function(d){ - var points = [] - - if (~d.path.indexOf('A')){ - //handle arc paths seperatly -- only one circle supported - var pathNode = d3.select(this).select('path').node() - var l = pathNode.getTotalLength() - - points = [0, .5, 1].map(function(d){ - var p = pathNode.getPointAtLength(d*l) - return {pos: [p.x, p.y], type: 'A'} - }) - } else{ - var i = 1 - var type = 'M' - var commas = 0 - - for (var j = 1; j < d.path.length; j++){ - var curChar = d.path[j] - if (curChar == ',') commas++ - if (curChar == 'L' || curChar == 'C' || commas == 2){ - points.push({pos: d.path.slice(i, j).split(','), type: type}) - type = curChar - i = j + 1 - commas = 0 - } - } - - points.push({pos: d.path.slice(i, j).split(','), type: type}) - } - - return points - 
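- // Each path command parsed above (M/L/C, or three sampled points for an arc) becomes
- // a draggable control circle below; dragging a circle rebuilds the annotation's path
- // string from the updated point positions.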
}).enter().append('circle') - .attr('r', 8) - .attr('fill', 'rgba(0,0,0,0)') - .attr('stroke', '#333') - .attr('stroke-dasharray', '2 2') - .call(translate, ƒ('pos')) - .call(circleDrag) - - dispatch.call('drag') - } - - - rv.annotations = function(_x){ - if (typeof(_x) == 'undefined') return annotations - annotations = _x - return rv - } - rv.x = function(_x){ - if (typeof(_x) == 'undefined') return x - x = _x - return rv - } - rv.y = function(_x){ - if (typeof(_x) == 'undefined') return y - y = _x - return rv - } - rv.draggable = function(_x){ - if (typeof(_x) == 'undefined') return draggable - draggable = _x - return rv - } - rv.on = function() { - var value = dispatch.on.apply(dispatch, arguments); - return value === dispatch ? rv : value; - } - - return rv - - //convert 3 points to an Arc Path - function calcCirclePath(points){ - var a = points[0].pos - var b = points[2].pos - var c = points[1].pos - - var A = dist(b, c) - var B = dist(c, a) - var C = dist(a, b) - - var angle = Math.acos((A*A + B*B - C*C)/(2*A*B)) - - //calc radius of circle - var K = .5*A*B*Math.sin(angle) - var r = A*B*C/4/K - r = Math.round(r*1000)/1000 - - //large arc flag - var laf = +(Math.PI/2 > angle) - - //sweep flag - var saf = +((b[0] - a[0])*(c[1] - a[1]) - (b[1] - a[1])*(c[0] - a[0]) < 0) - - return ['M', a, 'A', r, r, 0, laf, saf, b].join(' ') - } - - function dist(a, b){ - return Math.sqrt( - Math.pow(a[0] - b[0], 2) + - Math.pow(a[1] - b[1], 2)) - } - - - //no jetpack dependency - function translate(sel, pos){ - sel.attr('transform', function(d){ - var posStr = typeof(pos) == 'function' ? pos(d) : pos - return 'translate(' + posStr + ')' - }) - } - - function ƒ(str){ return function(d){ return d[str] } } - } - - exports.swoopyDrag = swoopyDrag; - - Object.defineProperty(exports, '__esModule', { value: true }); - -})); diff --git a/spaces/merve/measuring-fairness/public/third_party/umap.js b/spaces/merve/measuring-fairness/public/third_party/umap.js deleted file mode 100644 index 13bb989b285114e7a79d0a213422997c19a3c2f0..0000000000000000000000000000000000000000 --- a/spaces/merve/measuring-fairness/public/third_party/umap.js +++ /dev/null @@ -1,6864 +0,0 @@ -// https://github.com/pair-code/umap-js Copyright 2019 Google -(function webpackUniversalModuleDefinition(root, factory) { - if(typeof exports === 'object' && typeof module === 'object') - module.exports = factory(); - else if(typeof define === 'function' && define.amd) - define([], factory); - else { - var a = factory(); - for(var i in a) (typeof exports === 'object' ? 
exports : root)[i] = a[i]; - } -})(window, function() { -return /******/ (function(modules) { // webpackBootstrap -/******/ // The module cache -/******/ var installedModules = {}; -/******/ -/******/ // The require function -/******/ function __webpack_require__(moduleId) { -/******/ -/******/ // Check if module is in cache -/******/ if(installedModules[moduleId]) { -/******/ return installedModules[moduleId].exports; -/******/ } -/******/ // Create a new module (and put it into the cache) -/******/ var module = installedModules[moduleId] = { -/******/ i: moduleId, -/******/ l: false, -/******/ exports: {} -/******/ }; -/******/ -/******/ // Execute the module function -/******/ modules[moduleId].call(module.exports, module, module.exports, __webpack_require__); -/******/ -/******/ // Flag the module as loaded -/******/ module.l = true; -/******/ -/******/ // Return the exports of the module -/******/ return module.exports; -/******/ } -/******/ -/******/ -/******/ // expose the modules object (__webpack_modules__) -/******/ __webpack_require__.m = modules; -/******/ -/******/ // expose the module cache -/******/ __webpack_require__.c = installedModules; -/******/ -/******/ // define getter function for harmony exports -/******/ __webpack_require__.d = function(exports, name, getter) { -/******/ if(!__webpack_require__.o(exports, name)) { -/******/ Object.defineProperty(exports, name, { enumerable: true, get: getter }); -/******/ } -/******/ }; -/******/ -/******/ // define __esModule on exports -/******/ __webpack_require__.r = function(exports) { -/******/ if(typeof Symbol !== 'undefined' && Symbol.toStringTag) { -/******/ Object.defineProperty(exports, Symbol.toStringTag, { value: 'Module' }); -/******/ } -/******/ Object.defineProperty(exports, '__esModule', { value: true }); -/******/ }; -/******/ -/******/ // create a fake namespace object -/******/ // mode & 1: value is a module id, require it -/******/ // mode & 2: merge all properties of value into the ns -/******/ // mode & 4: return value when already ns object -/******/ // mode & 8|1: behave like require -/******/ __webpack_require__.t = function(value, mode) { -/******/ if(mode & 1) value = __webpack_require__(value); -/******/ if(mode & 8) return value; -/******/ if((mode & 4) && typeof value === 'object' && value && value.__esModule) return value; -/******/ var ns = Object.create(null); -/******/ __webpack_require__.r(ns); -/******/ Object.defineProperty(ns, 'default', { enumerable: true, value: value }); -/******/ if(mode & 2 && typeof value != 'string') for(var key in value) __webpack_require__.d(ns, key, function(key) { return value[key]; }.bind(null, key)); -/******/ return ns; -/******/ }; -/******/ -/******/ // getDefaultExport function for compatibility with non-harmony modules -/******/ __webpack_require__.n = function(module) { -/******/ var getter = module && module.__esModule ? 
-/******/ function getDefault() { return module['default']; } : -/******/ function getModuleExports() { return module; }; -/******/ __webpack_require__.d(getter, 'a', getter); -/******/ return getter; -/******/ }; -/******/ -/******/ // Object.prototype.hasOwnProperty.call -/******/ __webpack_require__.o = function(object, property) { return Object.prototype.hasOwnProperty.call(object, property); }; -/******/ -/******/ // __webpack_public_path__ -/******/ __webpack_require__.p = ""; -/******/ -/******/ -/******/ // Load entry module and return exports -/******/ return __webpack_require__(__webpack_require__.s = 5); -/******/ }) -/************************************************************************/ -/******/ ([ -/* 0 */ -/***/ (function(module, exports, __webpack_require__) { - -"use strict"; - - -const toString = Object.prototype.toString; - -function isAnyArray(object) { - return toString.call(object).endsWith('Array]'); -} - -module.exports = isAnyArray; - - -/***/ }), -/* 1 */ -/***/ (function(module, exports, __webpack_require__) { - -"use strict"; - -var __values = (this && this.__values) || function (o) { - var m = typeof Symbol === "function" && o[Symbol.iterator], i = 0; - if (m) return m.call(o); - return { - next: function () { - if (o && i >= o.length) o = void 0; - return { value: o && o[i++], done: !o }; - } - }; -}; -Object.defineProperty(exports, "__esModule", { value: true }); -function tauRandInt(n, random) { - return Math.floor(random() * n); -} -exports.tauRandInt = tauRandInt; -function tauRand(random) { - return random(); -} -exports.tauRand = tauRand; -function norm(vec) { - var e_1, _a; - var result = 0; - try { - for (var vec_1 = __values(vec), vec_1_1 = vec_1.next(); !vec_1_1.done; vec_1_1 = vec_1.next()) { - var item = vec_1_1.value; - result += Math.pow(item, 2); - } - } - catch (e_1_1) { e_1 = { error: e_1_1 }; } - finally { - try { - if (vec_1_1 && !vec_1_1.done && (_a = vec_1.return)) _a.call(vec_1); - } - finally { if (e_1) throw e_1.error; } - } - return Math.sqrt(result); -} -exports.norm = norm; -function empty(n) { - var output = []; - for (var i = 0; i < n; i++) { - output.push(undefined); - } - return output; -} -exports.empty = empty; -function range(n) { - return empty(n).map(function (_, i) { return i; }); -} -exports.range = range; -function filled(n, v) { - return empty(n).map(function () { return v; }); -} -exports.filled = filled; -function zeros(n) { - return filled(n, 0); -} -exports.zeros = zeros; -function ones(n) { - return filled(n, 1); -} -exports.ones = ones; -function linear(a, b, len) { - return empty(len).map(function (_, i) { - return a + i * ((b - a) / (len - 1)); - }); -} -exports.linear = linear; -function sum(input) { - return input.reduce(function (sum, val) { return sum + val; }); -} -exports.sum = sum; -function mean(input) { - return sum(input) / input.length; -} -exports.mean = mean; -function max(input) { - var max = 0; - for (var i = 0; i < input.length; i++) { - max = input[i] > max ? input[i] : max; - } - return max; -} -exports.max = max; -function max2d(input) { - var max = 0; - for (var i = 0; i < input.length; i++) { - for (var j = 0; j < input[i].length; j++) { - max = input[i][j] > max ? 
input[i][j] : max; - } - } - return max; -} -exports.max2d = max2d; -function rejectionSample(nSamples, poolSize, random) { - var result = zeros(nSamples); - for (var i = 0; i < nSamples; i++) { - var rejectSample = true; - while (rejectSample) { - var j = tauRandInt(poolSize, random); - var broken = false; - for (var k = 0; k < i; k++) { - if (j === result[k]) { - broken = true; - break; - } - } - if (!broken) { - rejectSample = false; - } - result[i] = j; - } - } - return result; -} -exports.rejectionSample = rejectionSample; -function reshape2d(x, a, b) { - var rows = []; - var count = 0; - var index = 0; - if (x.length !== a * b) { - throw new Error('Array dimensions must match input length.'); - } - for (var i = 0; i < a; i++) { - var col = []; - for (var j = 0; j < b; j++) { - col.push(x[index]); - index += 1; - } - rows.push(col); - count += 1; - } - return rows; -} -exports.reshape2d = reshape2d; - - -/***/ }), -/* 2 */ -/***/ (function(module, exports, __webpack_require__) { - -"use strict"; - -var __importStar = (this && this.__importStar) || function (mod) { - if (mod && mod.__esModule) return mod; - var result = {}; - if (mod != null) for (var k in mod) if (Object.hasOwnProperty.call(mod, k)) result[k] = mod[k]; - result["default"] = mod; - return result; -}; -Object.defineProperty(exports, "__esModule", { value: true }); -var utils = __importStar(__webpack_require__(1)); -function makeHeap(nPoints, size) { - var makeArrays = function (fillValue) { - return utils.empty(nPoints).map(function () { - return utils.filled(size, fillValue); - }); - }; - var heap = []; - heap.push(makeArrays(-1)); - heap.push(makeArrays(Infinity)); - heap.push(makeArrays(0)); - return heap; -} -exports.makeHeap = makeHeap; -function rejectionSample(nSamples, poolSize, random) { - var result = utils.zeros(nSamples); - for (var i = 0; i < nSamples; i++) { - var rejectSample = true; - var j = 0; - while (rejectSample) { - j = utils.tauRandInt(poolSize, random); - var broken = false; - for (var k = 0; k < i; k++) { - if (j === result[k]) { - broken = true; - break; - } - } - if (!broken) - rejectSample = false; - } - result[i] = j; - } - return result; -} -exports.rejectionSample = rejectionSample; -function heapPush(heap, row, weight, index, flag) { - row = Math.floor(row); - var indices = heap[0][row]; - var weights = heap[1][row]; - var isNew = heap[2][row]; - if (weight >= weights[0]) { - return 0; - } - for (var i = 0; i < indices.length; i++) { - if (index === indices[i]) { - return 0; - } - } - return uncheckedHeapPush(heap, row, weight, index, flag); -} -exports.heapPush = heapPush; -function uncheckedHeapPush(heap, row, weight, index, flag) { - var indices = heap[0][row]; - var weights = heap[1][row]; - var isNew = heap[2][row]; - if (weight >= weights[0]) { - return 0; - } - weights[0] = weight; - indices[0] = index; - isNew[0] = flag; - var i = 0; - var iSwap = 0; - while (true) { - var ic1 = 2 * i + 1; - var ic2 = ic1 + 1; - var heapShape2 = heap[0][0].length; - if (ic1 >= heapShape2) { - break; - } - else if (ic2 >= heapShape2) { - if (weights[ic1] > weight) { - iSwap = ic1; - } - else { - break; - } - } - else if (weights[ic1] >= weights[ic2]) { - if (weight < weights[ic1]) { - iSwap = ic1; - } - else { - break; - } - } - else { - if (weight < weights[ic2]) { - iSwap = ic2; - } - else { - break; - } - } - weights[i] = weights[iSwap]; - indices[i] = indices[iSwap]; - isNew[i] = isNew[iSwap]; - i = iSwap; - } - weights[i] = weight; - indices[i] = index; - isNew[i] = flag; - return 1; -} 
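-// The heap built by makeHeap(nPoints, size) is a triple of 2D arrays: heap[0][row] holds
-// candidate neighbor indices, heap[1][row] their distances (arranged as a max-heap, so
-// heap[1][row][0] is the worst distance currently kept), and heap[2][row] flags marking
-// new candidates (used by buildCandidates below). heapPush(heap, row, weight, index, flag)
-// therefore only inserts `index` when `weight` improves on the current worst entry, then
-// restores the heap property. For example, heapPush(heap, 0, 0.25, 7, 1) records point 7
-// as a candidate neighbor of point 0 at distance 0.25, provided 0.25 < heap[1][0][0].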
-exports.uncheckedHeapPush = uncheckedHeapPush; -function buildCandidates(currentGraph, nVertices, nNeighbors, maxCandidates, random) { - var candidateNeighbors = makeHeap(nVertices, maxCandidates); - for (var i = 0; i < nVertices; i++) { - for (var j = 0; j < nNeighbors; j++) { - if (currentGraph[0][i][j] < 0) { - continue; - } - var idx = currentGraph[0][i][j]; - var isn = currentGraph[2][i][j]; - var d = utils.tauRand(random); - heapPush(candidateNeighbors, i, d, idx, isn); - heapPush(candidateNeighbors, idx, d, i, isn); - currentGraph[2][i][j] = 0; - } - } - return candidateNeighbors; -} -exports.buildCandidates = buildCandidates; -function deheapSort(heap) { - var indices = heap[0]; - var weights = heap[1]; - for (var i = 0; i < indices.length; i++) { - var indHeap = indices[i]; - var distHeap = weights[i]; - for (var j = 0; j < indHeap.length - 1; j++) { - var indHeapIndex = indHeap.length - j - 1; - var distHeapIndex = distHeap.length - j - 1; - var temp1 = indHeap[0]; - indHeap[0] = indHeap[indHeapIndex]; - indHeap[indHeapIndex] = temp1; - var temp2 = distHeap[0]; - distHeap[0] = distHeap[distHeapIndex]; - distHeap[distHeapIndex] = temp2; - siftDown(distHeap, indHeap, distHeapIndex, 0); - } - } - return { indices: indices, weights: weights }; -} -exports.deheapSort = deheapSort; -function siftDown(heap1, heap2, ceiling, elt) { - while (elt * 2 + 1 < ceiling) { - var leftChild = elt * 2 + 1; - var rightChild = leftChild + 1; - var swap = elt; - if (heap1[swap] < heap1[leftChild]) { - swap = leftChild; - } - if (rightChild < ceiling && heap1[swap] < heap1[rightChild]) { - swap = rightChild; - } - if (swap === elt) { - break; - } - else { - var temp1 = heap1[elt]; - heap1[elt] = heap1[swap]; - heap1[swap] = temp1; - var temp2 = heap2[elt]; - heap2[elt] = heap2[swap]; - heap2[swap] = temp2; - elt = swap; - } - } -} -function smallestFlagged(heap, row) { - var ind = heap[0][row]; - var dist = heap[1][row]; - var flag = heap[2][row]; - var minDist = Infinity; - var resultIndex = -1; - for (var i = 0; i > ind.length; i++) { - if (flag[i] === 1 && dist[i] < minDist) { - minDist = dist[i]; - resultIndex = i; - } - } - if (resultIndex >= 0) { - flag[resultIndex] = 0; - return Math.floor(ind[resultIndex]); - } - else { - return -1; - } -} -exports.smallestFlagged = smallestFlagged; - - -/***/ }), -/* 3 */ -/***/ (function(module, exports, __webpack_require__) { - -"use strict"; - -var __read = (this && this.__read) || function (o, n) { - var m = typeof Symbol === "function" && o[Symbol.iterator]; - if (!m) return o; - var i = m.call(o), r, ar = [], e; - try { - while ((n === void 0 || n-- > 0) && !(r = i.next()).done) ar.push(r.value); - } - catch (error) { e = { error: error }; } - finally { - try { - if (r && !r.done && (m = i["return"])) m.call(i); - } - finally { if (e) throw e.error; } - } - return ar; -}; -var __spread = (this && this.__spread) || function () { - for (var ar = [], i = 0; i < arguments.length; i++) ar = ar.concat(__read(arguments[i])); - return ar; -}; -var __values = (this && this.__values) || function (o) { - var m = typeof Symbol === "function" && o[Symbol.iterator], i = 0; - if (m) return m.call(o); - return { - next: function () { - if (o && i >= o.length) o = void 0; - return { value: o && o[i++], done: !o }; - } - }; -}; -var __importStar = (this && this.__importStar) || function (mod) { - if (mod && mod.__esModule) return mod; - var result = {}; - if (mod != null) for (var k in mod) if (Object.hasOwnProperty.call(mod, k)) result[k] = mod[k]; - result["default"] = 
mod; - return result; -}; -Object.defineProperty(exports, "__esModule", { value: true }); -var _a; -var utils = __importStar(__webpack_require__(1)); -var SparseMatrix = (function () { - function SparseMatrix(rows, cols, values, dims) { - this.entries = new Map(); - this.nRows = 0; - this.nCols = 0; - this.rows = __spread(rows); - this.cols = __spread(cols); - this.values = __spread(values); - for (var i = 0; i < values.length; i++) { - var key = this.makeKey(this.rows[i], this.cols[i]); - this.entries.set(key, i); - } - this.nRows = dims[0]; - this.nCols = dims[1]; - } - SparseMatrix.prototype.makeKey = function (row, col) { - return row + ":" + col; - }; - SparseMatrix.prototype.checkDims = function (row, col) { - var withinBounds = row < this.nRows && col < this.nCols; - if (!withinBounds) { - throw new Error('array index out of bounds'); - } - }; - SparseMatrix.prototype.set = function (row, col, value) { - this.checkDims(row, col); - var key = this.makeKey(row, col); - if (!this.entries.has(key)) { - this.rows.push(row); - this.cols.push(col); - this.values.push(value); - this.entries.set(key, this.values.length - 1); - } - else { - var index = this.entries.get(key); - this.values[index] = value; - } - }; - SparseMatrix.prototype.get = function (row, col, defaultValue) { - if (defaultValue === void 0) { defaultValue = 0; } - this.checkDims(row, col); - var key = this.makeKey(row, col); - if (this.entries.has(key)) { - var index = this.entries.get(key); - return this.values[index]; - } - else { - return defaultValue; - } - }; - SparseMatrix.prototype.getDims = function () { - return [this.nRows, this.nCols]; - }; - SparseMatrix.prototype.getRows = function () { - return __spread(this.rows); - }; - SparseMatrix.prototype.getCols = function () { - return __spread(this.cols); - }; - SparseMatrix.prototype.getValues = function () { - return __spread(this.values); - }; - SparseMatrix.prototype.forEach = function (fn) { - for (var i = 0; i < this.values.length; i++) { - fn(this.values[i], this.rows[i], this.cols[i]); - } - }; - SparseMatrix.prototype.map = function (fn) { - var vals = []; - for (var i = 0; i < this.values.length; i++) { - vals.push(fn(this.values[i], this.rows[i], this.cols[i])); - } - var dims = [this.nRows, this.nCols]; - return new SparseMatrix(this.rows, this.cols, vals, dims); - }; - SparseMatrix.prototype.toArray = function () { - var _this = this; - var rows = utils.empty(this.nRows); - var output = rows.map(function () { - return utils.zeros(_this.nCols); - }); - for (var i = 0; i < this.values.length; i++) { - output[this.rows[i]][this.cols[i]] = this.values[i]; - } - return output; - }; - return SparseMatrix; -}()); -exports.SparseMatrix = SparseMatrix; -function transpose(matrix) { - var cols = []; - var rows = []; - var vals = []; - matrix.forEach(function (value, row, col) { - cols.push(row); - rows.push(col); - vals.push(value); - }); - var dims = [matrix.nCols, matrix.nRows]; - return new SparseMatrix(rows, cols, vals, dims); -} -exports.transpose = transpose; -function identity(size) { - var _a = __read(size, 1), rows = _a[0]; - var matrix = new SparseMatrix([], [], [], size); - for (var i = 0; i < rows; i++) { - matrix.set(i, i, 1); - } - return matrix; -} -exports.identity = identity; -function pairwiseMultiply(a, b) { - return elementWise(a, b, function (x, y) { return x * y; }); -} -exports.pairwiseMultiply = pairwiseMultiply; -function add(a, b) { - return elementWise(a, b, function (x, y) { return x + y; }); -} -exports.add = add; -function subtract(a, 
b) { - return elementWise(a, b, function (x, y) { return x - y; }); -} -exports.subtract = subtract; -function maximum(a, b) { - return elementWise(a, b, function (x, y) { return (x > y ? x : y); }); -} -exports.maximum = maximum; -function multiplyScalar(a, scalar) { - return a.map(function (value) { - return value * scalar; - }); -} -exports.multiplyScalar = multiplyScalar; -function eliminateZeros(m) { - var zeroIndices = new Set(); - var values = m.getValues(); - var rows = m.getRows(); - var cols = m.getCols(); - for (var i = 0; i < values.length; i++) { - if (values[i] === 0) { - zeroIndices.add(i); - } - } - var removeByZeroIndex = function (_, index) { return !zeroIndices.has(index); }; - var nextValues = values.filter(removeByZeroIndex); - var nextRows = rows.filter(removeByZeroIndex); - var nextCols = cols.filter(removeByZeroIndex); - return new SparseMatrix(nextRows, nextCols, nextValues, m.getDims()); -} -exports.eliminateZeros = eliminateZeros; -function normalize(m, normType) { - if (normType === void 0) { normType = "l2"; } - var e_1, _a; - var normFn = normFns[normType]; - var colsByRow = new Map(); - m.forEach(function (_, row, col) { - var cols = colsByRow.get(row) || []; - cols.push(col); - colsByRow.set(row, cols); - }); - var nextMatrix = new SparseMatrix([], [], [], m.getDims()); - var _loop_1 = function (row) { - var cols = colsByRow.get(row).sort(); - var vals = cols.map(function (col) { return m.get(row, col); }); - var norm = normFn(vals); - for (var i = 0; i < norm.length; i++) { - nextMatrix.set(row, cols[i], norm[i]); - } - }; - try { - for (var _b = __values(colsByRow.keys()), _c = _b.next(); !_c.done; _c = _b.next()) { - var row = _c.value; - _loop_1(row); - } - } - catch (e_1_1) { e_1 = { error: e_1_1 }; } - finally { - try { - if (_c && !_c.done && (_a = _b.return)) _a.call(_b); - } - finally { if (e_1) throw e_1.error; } - } - return nextMatrix; -} -exports.normalize = normalize; -var normFns = (_a = {}, - _a["max"] = function (xs) { - var max = -Infinity; - for (var i = 0; i < xs.length; i++) { - max = xs[i] > max ? 
xs[i] : max; - } - return xs.map(function (x) { return x / max; }); - }, - _a["l1"] = function (xs) { - var sum = 0; - for (var i = 0; i < xs.length; i++) { - sum += xs[i]; - } - return xs.map(function (x) { return x / sum; }); - }, - _a["l2"] = function (xs) { - var sum = 0; - for (var i = 0; i < xs.length; i++) { - sum += Math.pow(xs[i], 2); - } - return xs.map(function (x) { return Math.sqrt(Math.pow(x, 2) / sum); }); - }, - _a); -function elementWise(a, b, op) { - var visited = new Set(); - var rows = []; - var cols = []; - var vals = []; - var operate = function (row, col) { - rows.push(row); - cols.push(col); - var nextValue = op(a.get(row, col), b.get(row, col)); - vals.push(nextValue); - }; - var valuesA = a.getValues(); - var rowsA = a.getRows(); - var colsA = a.getCols(); - for (var i = 0; i < valuesA.length; i++) { - var row = rowsA[i]; - var col = colsA[i]; - var key = row + ":" + col; - visited.add(key); - operate(row, col); - } - var valuesB = b.getValues(); - var rowsB = b.getRows(); - var colsB = b.getCols(); - for (var i = 0; i < valuesB.length; i++) { - var row = rowsB[i]; - var col = colsB[i]; - var key = row + ":" + col; - if (visited.has(key)) - continue; - operate(row, col); - } - var dims = [a.nRows, a.nCols]; - return new SparseMatrix(rows, cols, vals, dims); -} -function getCSR(x) { - var entries = []; - x.forEach(function (value, row, col) { - entries.push({ value: value, row: row, col: col }); - }); - entries.sort(function (a, b) { - if (a.row === b.row) { - return a.col - b.col; - } - else { - return a.row - b.col; - } - }); - var indices = []; - var values = []; - var indptr = []; - var currentRow = -1; - for (var i = 0; i < entries.length; i++) { - var _a = entries[i], row = _a.row, col = _a.col, value = _a.value; - if (row !== currentRow) { - currentRow = row; - indptr.push(i); - } - indices.push(col); - values.push(value); - } - return { indices: indices, values: values, indptr: indptr }; -} -exports.getCSR = getCSR; - - -/***/ }), -/* 4 */ -/***/ (function(module, exports, __webpack_require__) { - -"use strict"; - -var __read = (this && this.__read) || function (o, n) { - var m = typeof Symbol === "function" && o[Symbol.iterator]; - if (!m) return o; - var i = m.call(o), r, ar = [], e; - try { - while ((n === void 0 || n-- > 0) && !(r = i.next()).done) ar.push(r.value); - } - catch (error) { e = { error: error }; } - finally { - try { - if (r && !r.done && (m = i["return"])) m.call(i); - } - finally { if (e) throw e.error; } - } - return ar; -}; -var __spread = (this && this.__spread) || function () { - for (var ar = [], i = 0; i < arguments.length; i++) ar = ar.concat(__read(arguments[i])); - return ar; -}; -var __values = (this && this.__values) || function (o) { - var m = typeof Symbol === "function" && o[Symbol.iterator], i = 0; - if (m) return m.call(o); - return { - next: function () { - if (o && i >= o.length) o = void 0; - return { value: o && o[i++], done: !o }; - } - }; -}; -var __importStar = (this && this.__importStar) || function (mod) { - if (mod && mod.__esModule) return mod; - var result = {}; - if (mod != null) for (var k in mod) if (Object.hasOwnProperty.call(mod, k)) result[k] = mod[k]; - result["default"] = mod; - return result; -}; -Object.defineProperty(exports, "__esModule", { value: true }); -var utils = __importStar(__webpack_require__(1)); -var FlatTree = (function () { - function FlatTree(hyperplanes, offsets, children, indices) { - this.hyperplanes = hyperplanes; - this.offsets = offsets; - this.children = children; - 
this.indices = indices; - } - return FlatTree; -}()); -exports.FlatTree = FlatTree; -function makeForest(data, nNeighbors, nTrees, random) { - var leafSize = Math.max(10, nNeighbors); - var trees = utils - .range(nTrees) - .map(function (_, i) { return makeTree(data, leafSize, i, random); }); - var forest = trees.map(function (tree) { return flattenTree(tree, leafSize); }); - return forest; -} -exports.makeForest = makeForest; -function makeTree(data, leafSize, n, random) { - if (leafSize === void 0) { leafSize = 30; } - var indices = utils.range(data.length); - var tree = makeEuclideanTree(data, indices, leafSize, n, random); - return tree; -} -function makeEuclideanTree(data, indices, leafSize, q, random) { - if (leafSize === void 0) { leafSize = 30; } - if (indices.length > leafSize) { - var splitResults = euclideanRandomProjectionSplit(data, indices, random); - var indicesLeft = splitResults.indicesLeft, indicesRight = splitResults.indicesRight, hyperplane = splitResults.hyperplane, offset = splitResults.offset; - var leftChild = makeEuclideanTree(data, indicesLeft, leafSize, q + 1, random); - var rightChild = makeEuclideanTree(data, indicesRight, leafSize, q + 1, random); - var node = { leftChild: leftChild, rightChild: rightChild, isLeaf: false, hyperplane: hyperplane, offset: offset }; - return node; - } - else { - var node = { indices: indices, isLeaf: true }; - return node; - } -} -function euclideanRandomProjectionSplit(data, indices, random) { - var dim = data[0].length; - var leftIndex = utils.tauRandInt(indices.length, random); - var rightIndex = utils.tauRandInt(indices.length, random); - rightIndex += leftIndex === rightIndex ? 1 : 0; - rightIndex = rightIndex % indices.length; - var left = indices[leftIndex]; - var right = indices[rightIndex]; - var hyperplaneOffset = 0; - var hyperplaneVector = utils.zeros(dim); - for (var i = 0; i < hyperplaneVector.length; i++) { - hyperplaneVector[i] = data[left][i] - data[right][i]; - hyperplaneOffset -= - (hyperplaneVector[i] * (data[left][i] + data[right][i])) / 2.0; - } - var nLeft = 0; - var nRight = 0; - var side = utils.zeros(indices.length); - for (var i = 0; i < indices.length; i++) { - var margin = hyperplaneOffset; - for (var d = 0; d < dim; d++) { - margin += hyperplaneVector[d] * data[indices[i]][d]; - } - if (margin === 0) { - side[i] = utils.tauRandInt(2, random); - if (side[i] === 0) { - nLeft += 1; - } - else { - nRight += 1; - } - } - else if (margin > 0) { - side[i] = 0; - nLeft += 1; - } - else { - side[i] = 1; - nRight += 1; - } - } - var indicesLeft = utils.zeros(nLeft); - var indicesRight = utils.zeros(nRight); - nLeft = 0; - nRight = 0; - for (var i in utils.range(side.length)) { - if (side[i] === 0) { - indicesLeft[nLeft] = indices[i]; - nLeft += 1; - } - else { - indicesRight[nRight] = indices[i]; - nRight += 1; - } - } - return { - indicesLeft: indicesLeft, - indicesRight: indicesRight, - hyperplane: hyperplaneVector, - offset: hyperplaneOffset, - }; -} -function flattenTree(tree, leafSize) { - var nNodes = numNodes(tree); - var nLeaves = numLeaves(tree); - var hyperplanes = utils - .range(nNodes) - .map(function () { return utils.zeros(tree.hyperplane.length); }); - var offsets = utils.zeros(nNodes); - var children = utils.range(nNodes).map(function () { return [-1, -1]; }); - var indices = utils - .range(nLeaves) - .map(function () { return utils.range(leafSize).map(function () { return -1; }); }); - recursiveFlatten(tree, hyperplanes, offsets, children, indices, 0, 0); - return new FlatTree(hyperplanes, 
offsets, children, indices); -} -function recursiveFlatten(tree, hyperplanes, offsets, children, indices, nodeNum, leafNum) { - var _a; - if (tree.isLeaf) { - children[nodeNum][0] = -leafNum; - (_a = indices[leafNum]).splice.apply(_a, __spread([0, tree.indices.length], tree.indices)); - leafNum += 1; - return { nodeNum: nodeNum, leafNum: leafNum }; - } - else { - hyperplanes[nodeNum] = tree.hyperplane; - offsets[nodeNum] = tree.offset; - children[nodeNum][0] = nodeNum + 1; - var oldNodeNum = nodeNum; - var res = recursiveFlatten(tree.leftChild, hyperplanes, offsets, children, indices, nodeNum + 1, leafNum); - nodeNum = res.nodeNum; - leafNum = res.leafNum; - children[oldNodeNum][1] = nodeNum + 1; - res = recursiveFlatten(tree.rightChild, hyperplanes, offsets, children, indices, nodeNum + 1, leafNum); - return { nodeNum: res.nodeNum, leafNum: res.leafNum }; - } -} -function numNodes(tree) { - if (tree.isLeaf) { - return 1; - } - else { - return 1 + numNodes(tree.leftChild) + numNodes(tree.rightChild); - } -} -function numLeaves(tree) { - if (tree.isLeaf) { - return 1; - } - else { - return numLeaves(tree.leftChild) + numLeaves(tree.rightChild); - } -} -function makeLeafArray(rpForest) { - var e_1, _a; - if (rpForest.length > 0) { - var output = []; - try { - for (var rpForest_1 = __values(rpForest), rpForest_1_1 = rpForest_1.next(); !rpForest_1_1.done; rpForest_1_1 = rpForest_1.next()) { - var tree = rpForest_1_1.value; - output.push.apply(output, __spread(tree.indices)); - } - } - catch (e_1_1) { e_1 = { error: e_1_1 }; } - finally { - try { - if (rpForest_1_1 && !rpForest_1_1.done && (_a = rpForest_1.return)) _a.call(rpForest_1); - } - finally { if (e_1) throw e_1.error; } - } - return output; - } - else { - return [[-1]]; - } -} -exports.makeLeafArray = makeLeafArray; -function selectSide(hyperplane, offset, point, random) { - var margin = offset; - for (var d = 0; d < point.length; d++) { - margin += hyperplane[d] * point[d]; - } - if (margin === 0) { - var side = utils.tauRandInt(2, random); - return side; - } - else if (margin > 0) { - return 0; - } - else { - return 1; - } -} -function searchFlatTree(point, tree, random) { - var node = 0; - while (tree.children[node][0] > 0) { - var side = selectSide(tree.hyperplanes[node], tree.offsets[node], point, random); - if (side === 0) { - node = tree.children[node][0]; - } - else { - node = tree.children[node][1]; - } - } - var index = -1 * tree.children[node][0]; - return tree.indices[index]; -} -exports.searchFlatTree = searchFlatTree; - - -/***/ }), -/* 5 */ -/***/ (function(module, exports, __webpack_require__) { - -"use strict"; - -Object.defineProperty(exports, "__esModule", { value: true }); -var umap_1 = __webpack_require__(6); -exports.UMAP = umap_1.UMAP; - - -/***/ }), -/* 6 */ -/***/ (function(module, exports, __webpack_require__) { - -"use strict"; - -var __awaiter = (this && this.__awaiter) || function (thisArg, _arguments, P, generator) { - return new (P || (P = Promise))(function (resolve, reject) { - function fulfilled(value) { try { step(generator.next(value)); } catch (e) { reject(e); } } - function rejected(value) { try { step(generator["throw"](value)); } catch (e) { reject(e); } } - function step(result) { result.done ? 
resolve(result.value) : new P(function (resolve) { resolve(result.value); }).then(fulfilled, rejected); } - step((generator = generator.apply(thisArg, _arguments || [])).next()); - }); -}; -var __generator = (this && this.__generator) || function (thisArg, body) { - var _ = { label: 0, sent: function() { if (t[0] & 1) throw t[1]; return t[1]; }, trys: [], ops: [] }, f, y, t, g; - return g = { next: verb(0), "throw": verb(1), "return": verb(2) }, typeof Symbol === "function" && (g[Symbol.iterator] = function() { return this; }), g; - function verb(n) { return function (v) { return step([n, v]); }; } - function step(op) { - if (f) throw new TypeError("Generator is already executing."); - while (_) try { - if (f = 1, y && (t = op[0] & 2 ? y["return"] : op[0] ? y["throw"] || ((t = y["return"]) && t.call(y), 0) : y.next) && !(t = t.call(y, op[1])).done) return t; - if (y = 0, t) op = [op[0] & 2, t.value]; - switch (op[0]) { - case 0: case 1: t = op; break; - case 4: _.label++; return { value: op[1], done: false }; - case 5: _.label++; y = op[1]; op = [0]; continue; - case 7: op = _.ops.pop(); _.trys.pop(); continue; - default: - if (!(t = _.trys, t = t.length > 0 && t[t.length - 1]) && (op[0] === 6 || op[0] === 2)) { _ = 0; continue; } - if (op[0] === 3 && (!t || (op[1] > t[0] && op[1] < t[3]))) { _.label = op[1]; break; } - if (op[0] === 6 && _.label < t[1]) { _.label = t[1]; t = op; break; } - if (t && _.label < t[2]) { _.label = t[2]; _.ops.push(op); break; } - if (t[2]) _.ops.pop(); - _.trys.pop(); continue; - } - op = body.call(thisArg, _); - } catch (e) { op = [6, e]; y = 0; } finally { f = t = 0; } - if (op[0] & 5) throw op[1]; return { value: op[0] ? op[1] : void 0, done: true }; - } -}; -var __read = (this && this.__read) || function (o, n) { - var m = typeof Symbol === "function" && o[Symbol.iterator]; - if (!m) return o; - var i = m.call(o), r, ar = [], e; - try { - while ((n === void 0 || n-- > 0) && !(r = i.next()).done) ar.push(r.value); - } - catch (error) { e = { error: error }; } - finally { - try { - if (r && !r.done && (m = i["return"])) m.call(i); - } - finally { if (e) throw e.error; } - } - return ar; -}; -var __spread = (this && this.__spread) || function () { - for (var ar = [], i = 0; i < arguments.length; i++) ar = ar.concat(__read(arguments[i])); - return ar; -}; -var __importStar = (this && this.__importStar) || function (mod) { - if (mod && mod.__esModule) return mod; - var result = {}; - if (mod != null) for (var k in mod) if (Object.hasOwnProperty.call(mod, k)) result[k] = mod[k]; - result["default"] = mod; - return result; -}; -var __importDefault = (this && this.__importDefault) || function (mod) { - return (mod && mod.__esModule) ? 
mod : { "default": mod }; -}; -Object.defineProperty(exports, "__esModule", { value: true }); -var heap = __importStar(__webpack_require__(2)); -var matrix = __importStar(__webpack_require__(3)); -var nnDescent = __importStar(__webpack_require__(7)); -var tree = __importStar(__webpack_require__(4)); -var utils = __importStar(__webpack_require__(1)); -var ml_levenberg_marquardt_1 = __importDefault(__webpack_require__(8)); -var SMOOTH_K_TOLERANCE = 1e-5; -var MIN_K_DIST_SCALE = 1e-3; -var UMAP = (function () { - function UMAP(params) { - if (params === void 0) { params = {}; } - var _this = this; - this.learningRate = 1.0; - this.localConnectivity = 1.0; - this.minDist = 0.1; - this.nComponents = 2; - this.nEpochs = 0; - this.nNeighbors = 15; - this.negativeSampleRate = 5; - this.random = Math.random; - this.repulsionStrength = 1.0; - this.setOpMixRatio = 1.0; - this.spread = 1.0; - this.transformQueueSize = 4.0; - this.targetMetric = "categorical"; - this.targetWeight = 0.5; - this.targetNNeighbors = this.nNeighbors; - this.distanceFn = euclidean; - this.isInitialized = false; - this.rpForest = []; - this.embedding = []; - this.optimizationState = new OptimizationState(); - var setParam = function (key) { - if (params[key] !== undefined) - _this[key] = params[key]; - }; - setParam('distanceFn'); - setParam('learningRate'); - setParam('localConnectivity'); - setParam('minDist'); - setParam('nComponents'); - setParam('nEpochs'); - setParam('nNeighbors'); - setParam('negativeSampleRate'); - setParam('random'); - setParam('repulsionStrength'); - setParam('setOpMixRatio'); - setParam('spread'); - setParam('transformQueueSize'); - } - UMAP.prototype.fit = function (X) { - this.initializeFit(X); - this.optimizeLayout(); - return this.embedding; - }; - UMAP.prototype.fitAsync = function (X, callback) { - if (callback === void 0) { callback = function () { return true; }; } - return __awaiter(this, void 0, void 0, function () { - return __generator(this, function (_a) { - switch (_a.label) { - case 0: - this.initializeFit(X); - return [4, this.optimizeLayoutAsync(callback)]; - case 1: - _a.sent(); - return [2, this.embedding]; - } - }); - }); - }; - UMAP.prototype.setSupervisedProjection = function (Y, params) { - if (params === void 0) { params = {}; } - this.Y = Y; - this.targetMetric = params.targetMetric || this.targetMetric; - this.targetWeight = params.targetWeight || this.targetWeight; - this.targetNNeighbors = params.targetNNeighbors || this.targetNNeighbors; - }; - UMAP.prototype.setPrecomputedKNN = function (knnIndices, knnDistances) { - this.knnIndices = knnIndices; - this.knnDistances = knnDistances; - }; - UMAP.prototype.initializeFit = function (X) { - if (this.X === X && this.isInitialized) { - return this.getNEpochs(); - } - this.X = X; - if (!this.knnIndices && !this.knnDistances) { - var knnResults = this.nearestNeighbors(X); - this.knnIndices = knnResults.knnIndices; - this.knnDistances = knnResults.knnDistances; - } - this.graph = this.fuzzySimplicialSet(X, this.nNeighbors, this.setOpMixRatio); - this.makeSearchFns(); - this.searchGraph = this.makeSearchGraph(X); - this.processGraphForSupervisedProjection(); - var _a = this.initializeSimplicialSetEmbedding(), head = _a.head, tail = _a.tail, epochsPerSample = _a.epochsPerSample; - this.optimizationState.head = head; - this.optimizationState.tail = tail; - this.optimizationState.epochsPerSample = epochsPerSample; - this.initializeOptimization(); - this.prepareForOptimizationLoop(); - this.isInitialized = true; - return 
this.getNEpochs(); - }; - UMAP.prototype.makeSearchFns = function () { - var _a = nnDescent.makeInitializations(this.distanceFn), initFromTree = _a.initFromTree, initFromRandom = _a.initFromRandom; - this.initFromTree = initFromTree; - this.initFromRandom = initFromRandom; - this.search = nnDescent.makeInitializedNNSearch(this.distanceFn); - }; - UMAP.prototype.makeSearchGraph = function (X) { - var knnIndices = this.knnIndices; - var knnDistances = this.knnDistances; - var dims = [X.length, X.length]; - var searchGraph = new matrix.SparseMatrix([], [], [], dims); - for (var i = 0; i < knnIndices.length; i++) { - var knn = knnIndices[i]; - var distances = knnDistances[i]; - for (var j = 0; j < knn.length; j++) { - var neighbor = knn[j]; - var distance = distances[j]; - if (distance > 0) { - searchGraph.set(i, neighbor, distance); - } - } - } - var transpose = matrix.transpose(searchGraph); - return matrix.maximum(searchGraph, transpose); - }; - UMAP.prototype.transform = function (toTransform) { - var _this = this; - var rawData = this.X; - if (rawData === undefined || rawData.length === 0) { - throw new Error('No data has been fit.'); - } - var nNeighbors = Math.floor(this.nNeighbors * this.transformQueueSize); - var init = nnDescent.initializeSearch(this.rpForest, rawData, toTransform, nNeighbors, this.initFromRandom, this.initFromTree, this.random); - var result = this.search(rawData, this.searchGraph, init, toTransform); - var _a = heap.deheapSort(result), indices = _a.indices, distances = _a.weights; - indices = indices.map(function (x) { return x.slice(0, _this.nNeighbors); }); - distances = distances.map(function (x) { return x.slice(0, _this.nNeighbors); }); - var adjustedLocalConnectivity = Math.max(0, this.localConnectivity - 1); - var _b = this.smoothKNNDistance(distances, this.nNeighbors, adjustedLocalConnectivity), sigmas = _b.sigmas, rhos = _b.rhos; - var _c = this.computeMembershipStrengths(indices, distances, sigmas, rhos), rows = _c.rows, cols = _c.cols, vals = _c.vals; - var size = [toTransform.length, rawData.length]; - var graph = new matrix.SparseMatrix(rows, cols, vals, size); - var normed = matrix.normalize(graph, "l1"); - var csrMatrix = matrix.getCSR(normed); - var nPoints = toTransform.length; - var eIndices = utils.reshape2d(csrMatrix.indices, nPoints, this.nNeighbors); - var eWeights = utils.reshape2d(csrMatrix.values, nPoints, this.nNeighbors); - var embedding = initTransform(eIndices, eWeights, this.embedding); - var nEpochs = this.nEpochs - ? this.nEpochs / 3 - : graph.nRows <= 10000 - ? 100 - : 30; - var graphMax = graph - .getValues() - .reduce(function (max, val) { return (val > max ? val : max); }, 0); - graph = graph.map(function (value) { return (value < graphMax / nEpochs ? 
0 : value); }); - graph = matrix.eliminateZeros(graph); - var epochsPerSample = this.makeEpochsPerSample(graph.getValues(), nEpochs); - var head = graph.getRows(); - var tail = graph.getCols(); - this.assignOptimizationStateParameters({ - headEmbedding: embedding, - tailEmbedding: this.embedding, - head: head, - tail: tail, - currentEpoch: 0, - nEpochs: nEpochs, - nVertices: graph.getDims()[1], - epochsPerSample: epochsPerSample, - }); - this.prepareForOptimizationLoop(); - return this.optimizeLayout(); - }; - UMAP.prototype.processGraphForSupervisedProjection = function () { - var _a = this, Y = _a.Y, X = _a.X; - if (Y) { - if (Y.length !== X.length) { - throw new Error('Length of X and y must be equal'); - } - if (this.targetMetric === "categorical") { - var lt = this.targetWeight < 1.0; - var farDist = lt ? 2.5 * (1.0 / (1.0 - this.targetWeight)) : 1.0e12; - this.graph = this.categoricalSimplicialSetIntersection(this.graph, Y, farDist); - } - } - }; - UMAP.prototype.step = function () { - var currentEpoch = this.optimizationState.currentEpoch; - if (currentEpoch < this.getNEpochs()) { - this.optimizeLayoutStep(currentEpoch); - } - return this.optimizationState.currentEpoch; - }; - UMAP.prototype.getEmbedding = function () { - return this.embedding; - }; - UMAP.prototype.nearestNeighbors = function (X) { - var _a = this, distanceFn = _a.distanceFn, nNeighbors = _a.nNeighbors; - var log2 = function (n) { return Math.log(n) / Math.log(2); }; - var metricNNDescent = nnDescent.makeNNDescent(distanceFn, this.random); - var round = function (n) { - return n === 0.5 ? 0 : Math.round(n); - }; - var nTrees = 5 + Math.floor(round(Math.pow(X.length, 0.5) / 20.0)); - var nIters = Math.max(5, Math.floor(Math.round(log2(X.length)))); - this.rpForest = tree.makeForest(X, nNeighbors, nTrees, this.random); - var leafArray = tree.makeLeafArray(this.rpForest); - var _b = metricNNDescent(X, leafArray, nNeighbors, nIters), indices = _b.indices, weights = _b.weights; - return { knnIndices: indices, knnDistances: weights }; - }; - UMAP.prototype.fuzzySimplicialSet = function (X, nNeighbors, setOpMixRatio) { - if (setOpMixRatio === void 0) { setOpMixRatio = 1.0; } - var _a = this, _b = _a.knnIndices, knnIndices = _b === void 0 ? [] : _b, _c = _a.knnDistances, knnDistances = _c === void 0 ? 
[] : _c, localConnectivity = _a.localConnectivity; - var _d = this.smoothKNNDistance(knnDistances, nNeighbors, localConnectivity), sigmas = _d.sigmas, rhos = _d.rhos; - var _e = this.computeMembershipStrengths(knnIndices, knnDistances, sigmas, rhos), rows = _e.rows, cols = _e.cols, vals = _e.vals; - var size = [X.length, X.length]; - var sparseMatrix = new matrix.SparseMatrix(rows, cols, vals, size); - var transpose = matrix.transpose(sparseMatrix); - var prodMatrix = matrix.pairwiseMultiply(sparseMatrix, transpose); - var a = matrix.subtract(matrix.add(sparseMatrix, transpose), prodMatrix); - var b = matrix.multiplyScalar(a, setOpMixRatio); - var c = matrix.multiplyScalar(prodMatrix, 1.0 - setOpMixRatio); - var result = matrix.add(b, c); - return result; - }; - UMAP.prototype.categoricalSimplicialSetIntersection = function (simplicialSet, target, farDist, unknownDist) { - if (unknownDist === void 0) { unknownDist = 1.0; } - var intersection = fastIntersection(simplicialSet, target, unknownDist, farDist); - intersection = matrix.eliminateZeros(intersection); - return resetLocalConnectivity(intersection); - }; - UMAP.prototype.smoothKNNDistance = function (distances, k, localConnectivity, nIter, bandwidth) { - if (localConnectivity === void 0) { localConnectivity = 1.0; } - if (nIter === void 0) { nIter = 64; } - if (bandwidth === void 0) { bandwidth = 1.0; } - var target = (Math.log(k) / Math.log(2)) * bandwidth; - var rho = utils.zeros(distances.length); - var result = utils.zeros(distances.length); - for (var i = 0; i < distances.length; i++) { - var lo = 0.0; - var hi = Infinity; - var mid = 1.0; - var ithDistances = distances[i]; - var nonZeroDists = ithDistances.filter(function (d) { return d > 0.0; }); - if (nonZeroDists.length >= localConnectivity) { - var index = Math.floor(localConnectivity); - var interpolation = localConnectivity - index; - if (index > 0) { - rho[i] = nonZeroDists[index - 1]; - if (interpolation > SMOOTH_K_TOLERANCE) { - rho[i] += - interpolation * (nonZeroDists[index] - nonZeroDists[index - 1]); - } - } - else { - rho[i] = interpolation * nonZeroDists[0]; - } - } - else if (nonZeroDists.length > 0) { - rho[i] = utils.max(nonZeroDists); - } - for (var n = 0; n < nIter; n++) { - var psum = 0.0; - for (var j = 1; j < distances[i].length; j++) { - var d = distances[i][j] - rho[i]; - if (d > 0) { - psum += Math.exp(-(d / mid)); - } - else { - psum += 1.0; - } - } - if (Math.abs(psum - target) < SMOOTH_K_TOLERANCE) { - break; - } - if (psum > target) { - hi = mid; - mid = (lo + hi) / 2.0; - } - else { - lo = mid; - if (hi === Infinity) { - mid *= 2; - } - else { - mid = (lo + hi) / 2.0; - } - } - } - result[i] = mid; - if (rho[i] > 0.0) { - var meanIthDistances = utils.mean(ithDistances); - if (result[i] < MIN_K_DIST_SCALE * meanIthDistances) { - result[i] = MIN_K_DIST_SCALE * meanIthDistances; - } - } - else { - var meanDistances = utils.mean(distances.map(utils.mean)); - if (result[i] < MIN_K_DIST_SCALE * meanDistances) { - result[i] = MIN_K_DIST_SCALE * meanDistances; - } - } - } - return { sigmas: result, rhos: rho }; - }; - UMAP.prototype.computeMembershipStrengths = function (knnIndices, knnDistances, sigmas, rhos) { - var nSamples = knnIndices.length; - var nNeighbors = knnIndices[0].length; - var rows = utils.zeros(nSamples * nNeighbors); - var cols = utils.zeros(nSamples * nNeighbors); - var vals = utils.zeros(nSamples * nNeighbors); - for (var i = 0; i < nSamples; i++) { - for (var j = 0; j < nNeighbors; j++) { - var val = 0; - if (knnIndices[i][j] === -1) 
{ - continue; - } - if (knnIndices[i][j] === i) { - val = 0.0; - } - else if (knnDistances[i][j] - rhos[i] <= 0.0) { - val = 1.0; - } - else { - val = Math.exp(-((knnDistances[i][j] - rhos[i]) / sigmas[i])); - } - rows[i * nNeighbors + j] = i; - cols[i * nNeighbors + j] = knnIndices[i][j]; - vals[i * nNeighbors + j] = val; - } - } - return { rows: rows, cols: cols, vals: vals }; - }; - UMAP.prototype.initializeSimplicialSetEmbedding = function () { - var _this = this; - var nEpochs = this.getNEpochs(); - var nComponents = this.nComponents; - var graphValues = this.graph.getValues(); - var graphMax = 0; - for (var i = 0; i < graphValues.length; i++) { - var value = graphValues[i]; - if (graphMax < graphValues[i]) { - graphMax = value; - } - } - var graph = this.graph.map(function (value) { - if (value < graphMax / nEpochs) { - return 0; - } - else { - return value; - } - }); - this.embedding = utils.zeros(graph.nRows).map(function () { - return utils.zeros(nComponents).map(function () { - return utils.tauRand(_this.random) * 20 + -10; - }); - }); - var weights = []; - var head = []; - var tail = []; - for (var i = 0; i < graph.nRows; i++) { - for (var j = 0; j < graph.nCols; j++) { - var value = graph.get(i, j); - if (value) { - weights.push(value); - tail.push(i); - head.push(j); - } - } - } - var epochsPerSample = this.makeEpochsPerSample(weights, nEpochs); - return { head: head, tail: tail, epochsPerSample: epochsPerSample }; - }; - UMAP.prototype.makeEpochsPerSample = function (weights, nEpochs) { - var result = utils.filled(weights.length, -1.0); - var max = utils.max(weights); - var nSamples = weights.map(function (w) { return (w / max) * nEpochs; }); - nSamples.forEach(function (n, i) { - if (n > 0) - result[i] = nEpochs / nSamples[i]; - }); - return result; - }; - UMAP.prototype.assignOptimizationStateParameters = function (state) { - Object.assign(this.optimizationState, state); - }; - UMAP.prototype.prepareForOptimizationLoop = function () { - var _a = this, repulsionStrength = _a.repulsionStrength, learningRate = _a.learningRate, negativeSampleRate = _a.negativeSampleRate; - var _b = this.optimizationState, epochsPerSample = _b.epochsPerSample, headEmbedding = _b.headEmbedding, tailEmbedding = _b.tailEmbedding; - var dim = headEmbedding[0].length; - var moveOther = headEmbedding.length === tailEmbedding.length; - var epochsPerNegativeSample = epochsPerSample.map(function (e) { return e / negativeSampleRate; }); - var epochOfNextNegativeSample = __spread(epochsPerNegativeSample); - var epochOfNextSample = __spread(epochsPerSample); - this.assignOptimizationStateParameters({ - epochOfNextSample: epochOfNextSample, - epochOfNextNegativeSample: epochOfNextNegativeSample, - epochsPerNegativeSample: epochsPerNegativeSample, - moveOther: moveOther, - initialAlpha: learningRate, - alpha: learningRate, - gamma: repulsionStrength, - dim: dim, - }); - }; - UMAP.prototype.initializeOptimization = function () { - var headEmbedding = this.embedding; - var tailEmbedding = this.embedding; - var _a = this.optimizationState, head = _a.head, tail = _a.tail, epochsPerSample = _a.epochsPerSample; - var nEpochs = this.getNEpochs(); - var nVertices = this.graph.nCols; - var _b = findABParams(this.spread, this.minDist), a = _b.a, b = _b.b; - this.assignOptimizationStateParameters({ - headEmbedding: headEmbedding, - tailEmbedding: tailEmbedding, - head: head, - tail: tail, - epochsPerSample: epochsPerSample, - a: a, - b: b, - nEpochs: nEpochs, - nVertices: nVertices, - }); - }; - 
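// --- Editor's note: illustrative usage sketch (not part of the deleted bundle) ---
// The methods above (initializeFit -> fuzzySimplicialSet -> smoothKNNDistance ->
// computeMembershipStrengths -> initializeSimplicialSetEmbedding -> initializeOptimization)
// build the fuzzy k-nearest-neighbor graph and seed a random low-dimensional embedding
// before the epoch loop in optimizeLayoutStep below refines it. A minimal sketch of how
// this class is typically driven, assuming the bundle is consumed as the `umap-js` npm
// package; `data` and `embedding` are illustrative names only:
//
//   var UMAP = require('umap-js').UMAP;
//   var data = [
//     [0.0, 0.1, 0.2],
//     [0.9, 0.8, 0.7],
//     [0.1, 0.0, 0.3],
//     // ... more rows of equal length; nNeighbors defaults to 15 above,
//     // so the input needs more points than that unless nNeighbors is lowered
//   ];
//   var umap = new UMAP({ nComponents: 2, minDist: 0.1 });
//   var embedding = umap.fit(data); // one [x, y] point per input row
//
// fit() runs every epoch synchronously; fitAsync(data, function (epoch) { return epoch < 200; })
// drives the same loop through Promises and stops early once the callback returns false,
// mirroring optimizeLayoutAsync further down in this module.
// ----------------------------------------------------------------------------------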
UMAP.prototype.optimizeLayoutStep = function (n) { - var optimizationState = this.optimizationState; - var head = optimizationState.head, tail = optimizationState.tail, headEmbedding = optimizationState.headEmbedding, tailEmbedding = optimizationState.tailEmbedding, epochsPerSample = optimizationState.epochsPerSample, epochOfNextSample = optimizationState.epochOfNextSample, epochOfNextNegativeSample = optimizationState.epochOfNextNegativeSample, epochsPerNegativeSample = optimizationState.epochsPerNegativeSample, moveOther = optimizationState.moveOther, initialAlpha = optimizationState.initialAlpha, alpha = optimizationState.alpha, gamma = optimizationState.gamma, a = optimizationState.a, b = optimizationState.b, dim = optimizationState.dim, nEpochs = optimizationState.nEpochs, nVertices = optimizationState.nVertices; - var clipValue = 4.0; - for (var i = 0; i < epochsPerSample.length; i++) { - if (epochOfNextSample[i] > n) { - continue; - } - var j = head[i]; - var k = tail[i]; - var current = headEmbedding[j]; - var other = tailEmbedding[k]; - var distSquared = rDist(current, other); - var gradCoeff = 0; - if (distSquared > 0) { - gradCoeff = -2.0 * a * b * Math.pow(distSquared, b - 1.0); - gradCoeff /= a * Math.pow(distSquared, b) + 1.0; - } - for (var d = 0; d < dim; d++) { - var gradD = clip(gradCoeff * (current[d] - other[d]), clipValue); - current[d] += gradD * alpha; - if (moveOther) { - other[d] += -gradD * alpha; - } - } - epochOfNextSample[i] += epochsPerSample[i]; - var nNegSamples = Math.floor((n - epochOfNextNegativeSample[i]) / epochsPerNegativeSample[i]); - for (var p = 0; p < nNegSamples; p++) { - var k_1 = utils.tauRandInt(nVertices, this.random); - var other_1 = tailEmbedding[k_1]; - var distSquared_1 = rDist(current, other_1); - var gradCoeff_1 = 0.0; - if (distSquared_1 > 0.0) { - gradCoeff_1 = 2.0 * gamma * b; - gradCoeff_1 /= - (0.001 + distSquared_1) * (a * Math.pow(distSquared_1, b) + 1); - } - else if (j === k_1) { - continue; - } - for (var d = 0; d < dim; d++) { - var gradD = 4.0; - if (gradCoeff_1 > 0.0) { - gradD = clip(gradCoeff_1 * (current[d] - other_1[d]), clipValue); - } - current[d] += gradD * alpha; - } - } - epochOfNextNegativeSample[i] += nNegSamples * epochsPerNegativeSample[i]; - } - optimizationState.alpha = initialAlpha * (1.0 - n / nEpochs); - optimizationState.currentEpoch += 1; - return headEmbedding; - }; - UMAP.prototype.optimizeLayoutAsync = function (epochCallback) { - var _this = this; - if (epochCallback === void 0) { epochCallback = function () { return true; }; } - return new Promise(function (resolve, reject) { - var step = function () { return __awaiter(_this, void 0, void 0, function () { - var _a, nEpochs, currentEpoch, epochCompleted, shouldStop, isFinished; - return __generator(this, function (_b) { - try { - _a = this.optimizationState, nEpochs = _a.nEpochs, currentEpoch = _a.currentEpoch; - this.embedding = this.optimizeLayoutStep(currentEpoch); - epochCompleted = this.optimizationState.currentEpoch; - shouldStop = epochCallback(epochCompleted) === false; - isFinished = epochCompleted === nEpochs; - if (!shouldStop && !isFinished) { - step(); - } - else { - return [2, resolve(isFinished)]; - } - } - catch (err) { - reject(err); - } - return [2]; - }); - }); }; - step(); - }); - }; - UMAP.prototype.optimizeLayout = function (epochCallback) { - if (epochCallback === void 0) { epochCallback = function () { return true; }; } - var isFinished = false; - var embedding = []; - while (!isFinished) { - var _a = this.optimizationState, 
nEpochs = _a.nEpochs, currentEpoch = _a.currentEpoch; - embedding = this.optimizeLayoutStep(currentEpoch); - var epochCompleted = this.optimizationState.currentEpoch; - var shouldStop = epochCallback(epochCompleted) === false; - isFinished = epochCompleted === nEpochs || shouldStop; - } - return embedding; - }; - UMAP.prototype.getNEpochs = function () { - var graph = this.graph; - if (this.nEpochs > 0) { - return this.nEpochs; - } - var length = graph.nRows; - if (length <= 2500) { - return 500; - } - else if (length <= 5000) { - return 400; - } - else if (length <= 7500) { - return 300; - } - else { - return 200; - } - }; - return UMAP; -}()); -exports.UMAP = UMAP; -function euclidean(x, y) { - var result = 0; - for (var i = 0; i < x.length; i++) { - result += Math.pow((x[i] - y[i]), 2); - } - return Math.sqrt(result); -} -exports.euclidean = euclidean; -function cosine(x, y) { - var result = 0.0; - var normX = 0.0; - var normY = 0.0; - for (var i = 0; i < x.length; i++) { - result += x[i] * y[i]; - normX += Math.pow(x[i], 2); - normY += Math.pow(y[i], 2); - } - if (normX === 0 && normY === 0) { - return 0; - } - else if (normX === 0 || normY === 0) { - return 1.0; - } - else { - return 1.0 - result / Math.sqrt(normX * normY); - } -} -exports.cosine = cosine; -var OptimizationState = (function () { - function OptimizationState() { - this.currentEpoch = 0; - this.headEmbedding = []; - this.tailEmbedding = []; - this.head = []; - this.tail = []; - this.epochsPerSample = []; - this.epochOfNextSample = []; - this.epochOfNextNegativeSample = []; - this.epochsPerNegativeSample = []; - this.moveOther = true; - this.initialAlpha = 1.0; - this.alpha = 1.0; - this.gamma = 1.0; - this.a = 1.5769434603113077; - this.b = 0.8950608779109733; - this.dim = 2; - this.nEpochs = 500; - this.nVertices = 0; - } - return OptimizationState; -}()); -function clip(x, clipValue) { - if (x > clipValue) - return clipValue; - else if (x < -clipValue) - return -clipValue; - else - return x; -} -function rDist(x, y) { - var result = 0.0; - for (var i = 0; i < x.length; i++) { - result += Math.pow(x[i] - y[i], 2); - } - return result; -} -function findABParams(spread, minDist) { - var curve = function (_a) { - var _b = __read(_a, 2), a = _b[0], b = _b[1]; - return function (x) { - return 1.0 / (1.0 + a * Math.pow(x, (2 * b))); - }; - }; - var xv = utils - .linear(0, spread * 3, 300) - .map(function (val) { return (val < minDist ? 1.0 : val); }); - var yv = utils.zeros(xv.length).map(function (val, index) { - var gte = xv[index] >= minDist; - return gte ? 
Math.exp(-(xv[index] - minDist) / spread) : val; - }); - var initialValues = [0.5, 0.5]; - var data = { x: xv, y: yv }; - var options = { - damping: 1.5, - initialValues: initialValues, - gradientDifference: 10e-2, - maxIterations: 100, - errorTolerance: 10e-3, - }; - var parameterValues = ml_levenberg_marquardt_1.default(data, curve, options).parameterValues; - var _a = __read(parameterValues, 2), a = _a[0], b = _a[1]; - return { a: a, b: b }; -} -exports.findABParams = findABParams; -function fastIntersection(graph, target, unknownDist, farDist) { - if (unknownDist === void 0) { unknownDist = 1.0; } - if (farDist === void 0) { farDist = 5.0; } - return graph.map(function (value, row, col) { - if (target[row] === -1 || target[col] === -1) { - return value * Math.exp(-unknownDist); - } - else if (target[row] !== target[col]) { - return value * Math.exp(-farDist); - } - else { - return value; - } - }); -} -exports.fastIntersection = fastIntersection; -function resetLocalConnectivity(simplicialSet) { - simplicialSet = matrix.normalize(simplicialSet, "max"); - var transpose = matrix.transpose(simplicialSet); - var prodMatrix = matrix.pairwiseMultiply(transpose, simplicialSet); - simplicialSet = matrix.add(simplicialSet, matrix.subtract(transpose, prodMatrix)); - return matrix.eliminateZeros(simplicialSet); -} -exports.resetLocalConnectivity = resetLocalConnectivity; -function initTransform(indices, weights, embedding) { - var result = utils - .zeros(indices.length) - .map(function (z) { return utils.zeros(embedding[0].length); }); - for (var i = 0; i < indices.length; i++) { - for (var j = 0; j < indices[0].length; j++) { - for (var d = 0; d < embedding[0].length; d++) { - var a = indices[i][j]; - result[i][d] += weights[i][j] * embedding[a][d]; - } - } - } - return result; -} -exports.initTransform = initTransform; - - -/***/ }), -/* 7 */ -/***/ (function(module, exports, __webpack_require__) { - -"use strict"; - -var __values = (this && this.__values) || function (o) { - var m = typeof Symbol === "function" && o[Symbol.iterator], i = 0; - if (m) return m.call(o); - return { - next: function () { - if (o && i >= o.length) o = void 0; - return { value: o && o[i++], done: !o }; - } - }; -}; -var __importStar = (this && this.__importStar) || function (mod) { - if (mod && mod.__esModule) return mod; - var result = {}; - if (mod != null) for (var k in mod) if (Object.hasOwnProperty.call(mod, k)) result[k] = mod[k]; - result["default"] = mod; - return result; -}; -Object.defineProperty(exports, "__esModule", { value: true }); -var heap = __importStar(__webpack_require__(2)); -var matrix = __importStar(__webpack_require__(3)); -var tree = __importStar(__webpack_require__(4)); -var utils = __importStar(__webpack_require__(1)); -function makeNNDescent(distanceFn, random) { - return function nNDescent(data, leafArray, nNeighbors, nIters, maxCandidates, delta, rho, rpTreeInit) { - if (nIters === void 0) { nIters = 10; } - if (maxCandidates === void 0) { maxCandidates = 50; } - if (delta === void 0) { delta = 0.001; } - if (rho === void 0) { rho = 0.5; } - if (rpTreeInit === void 0) { rpTreeInit = true; } - var nVertices = data.length; - var currentGraph = heap.makeHeap(data.length, nNeighbors); - for (var i = 0; i < data.length; i++) { - var indices = heap.rejectionSample(nNeighbors, data.length, random); - for (var j = 0; j < indices.length; j++) { - var d = distanceFn(data[i], data[indices[j]]); - heap.heapPush(currentGraph, i, d, indices[j], 1); - heap.heapPush(currentGraph, indices[j], d, i, 1); - 
} - } - if (rpTreeInit) { - for (var n = 0; n < leafArray.length; n++) { - for (var i = 0; i < leafArray[n].length; i++) { - if (leafArray[n][i] < 0) { - break; - } - for (var j = i + 1; j < leafArray[n].length; j++) { - if (leafArray[n][j] < 0) { - break; - } - var d = distanceFn(data[leafArray[n][i]], data[leafArray[n][j]]); - heap.heapPush(currentGraph, leafArray[n][i], d, leafArray[n][j], 1); - heap.heapPush(currentGraph, leafArray[n][j], d, leafArray[n][i], 1); - } - } - } - } - for (var n = 0; n < nIters; n++) { - var candidateNeighbors = heap.buildCandidates(currentGraph, nVertices, nNeighbors, maxCandidates, random); - var c = 0; - for (var i = 0; i < nVertices; i++) { - for (var j = 0; j < maxCandidates; j++) { - var p = Math.floor(candidateNeighbors[0][i][j]); - if (p < 0 || utils.tauRand(random) < rho) { - continue; - } - for (var k = 0; k < maxCandidates; k++) { - var q = Math.floor(candidateNeighbors[0][i][k]); - var cj = candidateNeighbors[2][i][j]; - var ck = candidateNeighbors[2][i][k]; - if (q < 0 || (!cj && !ck)) { - continue; - } - var d = distanceFn(data[p], data[q]); - c += heap.heapPush(currentGraph, p, d, q, 1); - c += heap.heapPush(currentGraph, q, d, p, 1); - } - } - } - if (c <= delta * nNeighbors * data.length) { - break; - } - } - var sorted = heap.deheapSort(currentGraph); - return sorted; - }; -} -exports.makeNNDescent = makeNNDescent; -function makeInitializations(distanceFn) { - function initFromRandom(nNeighbors, data, queryPoints, _heap, random) { - for (var i = 0; i < queryPoints.length; i++) { - var indices = utils.rejectionSample(nNeighbors, data.length, random); - for (var j = 0; j < indices.length; j++) { - if (indices[j] < 0) { - continue; - } - var d = distanceFn(data[indices[j]], queryPoints[i]); - heap.heapPush(_heap, i, d, indices[j], 1); - } - } - } - function initFromTree(_tree, data, queryPoints, _heap, random) { - for (var i = 0; i < queryPoints.length; i++) { - var indices = tree.searchFlatTree(queryPoints[i], _tree, random); - for (var j = 0; j < indices.length; j++) { - if (indices[j] < 0) { - return; - } - var d = distanceFn(data[indices[j]], queryPoints[i]); - heap.heapPush(_heap, i, d, indices[j], 1); - } - } - return; - } - return { initFromRandom: initFromRandom, initFromTree: initFromTree }; -} -exports.makeInitializations = makeInitializations; -function makeInitializedNNSearch(distanceFn) { - return function nnSearchFn(data, graph, initialization, queryPoints) { - var e_1, _a; - var _b = matrix.getCSR(graph), indices = _b.indices, indptr = _b.indptr; - for (var i = 0; i < queryPoints.length; i++) { - var tried = new Set(initialization[0][i]); - while (true) { - var vertex = heap.smallestFlagged(initialization, i); - if (vertex === -1) { - break; - } - var candidates = indices.slice(indptr[vertex], indptr[vertex + 1]); - try { - for (var candidates_1 = __values(candidates), candidates_1_1 = candidates_1.next(); !candidates_1_1.done; candidates_1_1 = candidates_1.next()) { - var candidate = candidates_1_1.value; - if (candidate === vertex || - candidate === -1 || - tried.has(candidate)) { - continue; - } - var d = distanceFn(data[candidate], queryPoints[i]); - heap.uncheckedHeapPush(initialization, i, d, candidate, 1); - tried.add(candidate); - } - } - catch (e_1_1) { e_1 = { error: e_1_1 }; } - finally { - try { - if (candidates_1_1 && !candidates_1_1.done && (_a = candidates_1.return)) _a.call(candidates_1); - } - finally { if (e_1) throw e_1.error; } - } - } - } - return initialization; - }; -} -exports.makeInitializedNNSearch = 
makeInitializedNNSearch; -function initializeSearch(forest, data, queryPoints, nNeighbors, initFromRandom, initFromTree, random) { - var e_2, _a; - var results = heap.makeHeap(queryPoints.length, nNeighbors); - initFromRandom(nNeighbors, data, queryPoints, results, random); - if (forest) { - try { - for (var forest_1 = __values(forest), forest_1_1 = forest_1.next(); !forest_1_1.done; forest_1_1 = forest_1.next()) { - var tree_1 = forest_1_1.value; - initFromTree(tree_1, data, queryPoints, results, random); - } - } - catch (e_2_1) { e_2 = { error: e_2_1 }; } - finally { - try { - if (forest_1_1 && !forest_1_1.done && (_a = forest_1.return)) _a.call(forest_1); - } - finally { if (e_2) throw e_2.error; } - } - } - return results; -} -exports.initializeSearch = initializeSearch; - - -/***/ }), -/* 8 */ -/***/ (function(module, exports, __webpack_require__) { - -"use strict"; - - -var mlMatrix = __webpack_require__(9); - -/** - * Calculate current error - * @ignore - * @param {{x:Array, y:Array}} data - Array of points to fit in the format [x1, x2, ... ], [y1, y2, ... ] - * @param {Array} parameters - Array of current parameter values - * @param {function} parameterizedFunction - The parameters and returns a function with the independent variable as a parameter - * @return {number} - */ -function errorCalculation( - data, - parameters, - parameterizedFunction -) { - var error = 0; - const func = parameterizedFunction(parameters); - - for (var i = 0; i < data.x.length; i++) { - error += Math.abs(data.y[i] - func(data.x[i])); - } - - return error; -} - -/** - * Difference of the matrix function over the parameters - * @ignore - * @param {{x:Array, y:Array}} data - Array of points to fit in the format [x1, x2, ... ], [y1, y2, ... ] - * @param {Array} evaluatedData - Array of previous evaluated function values - * @param {Array} params - Array of previous parameter values - * @param {number} gradientDifference - Adjustment for decrease the damping parameter - * @param {function} paramFunction - The parameters and returns a function with the independent variable as a parameter - * @return {Matrix} - */ -function gradientFunction( - data, - evaluatedData, - params, - gradientDifference, - paramFunction -) { - const n = params.length; - const m = data.x.length; - - var ans = new Array(n); - - for (var param = 0; param < n; param++) { - ans[param] = new Array(m); - var auxParams = params.concat(); - auxParams[param] += gradientDifference; - var funcParam = paramFunction(auxParams); - - for (var point = 0; point < m; point++) { - ans[param][point] = evaluatedData[point] - funcParam(data.x[point]); - } - } - return new mlMatrix.Matrix(ans); -} - -/** - * Matrix function over the samples - * @ignore - * @param {{x:Array, y:Array}} data - Array of points to fit in the format [x1, x2, ... ], [y1, y2, ... ] - * @param {Array} evaluatedData - Array of previous evaluated function values - * @return {Matrix} - */ -function matrixFunction(data, evaluatedData) { - const m = data.x.length; - - var ans = new Array(m); - - for (var point = 0; point < m; point++) { - ans[point] = data.y[point] - evaluatedData[point]; - } - - return new mlMatrix.Matrix([ans]); -} - -/** - * Iteration for Levenberg-Marquardt - * @ignore - * @param {{x:Array, y:Array}} data - Array of points to fit in the format [x1, x2, ... ], [y1, y2, ... 
] - * @param {Array} params - Array of previous parameter values - * @param {number} damping - Levenberg-Marquardt parameter - * @param {number} gradientDifference - Adjustment for decrease the damping parameter - * @param {function} parameterizedFunction - The parameters and returns a function with the independent variable as a parameter - * @return {Array} - */ -function step( - data, - params, - damping, - gradientDifference, - parameterizedFunction -) { - var identity = mlMatrix.Matrix.eye(params.length).mul( - damping * gradientDifference * gradientDifference - ); - - var l = data.x.length; - var evaluatedData = new Array(l); - const func = parameterizedFunction(params); - for (var i = 0; i < l; i++) { - evaluatedData[i] = func(data.x[i]); - } - var gradientFunc = gradientFunction( - data, - evaluatedData, - params, - gradientDifference, - parameterizedFunction - ); - var matrixFunc = matrixFunction(data, evaluatedData).transposeView(); - var inverseMatrix = mlMatrix.inverse( - identity.add(gradientFunc.mmul(gradientFunc.transposeView())) - ); - params = new mlMatrix.Matrix([params]); - params = params.sub( - inverseMatrix - .mmul(gradientFunc) - .mmul(matrixFunc) - .mul(gradientDifference) - .transposeView() - ); - - return params.to1DArray(); -} - -/** - * Curve fitting algorithm - * @param {{x:Array, y:Array}} data - Array of points to fit in the format [x1, x2, ... ], [y1, y2, ... ] - * @param {function} parameterizedFunction - The parameters and returns a function with the independent variable as a parameter - * @param {object} [options] - Options object - * @param {number} [options.damping] - Levenberg-Marquardt parameter - * @param {number} [options.gradientDifference = 10e-2] - Adjustment for decrease the damping parameter - * @param {Array} [options.initialValues] - Array of initial parameter values - * @param {number} [options.maxIterations = 100] - Maximum of allowed iterations - * @param {number} [options.errorTolerance = 10e-3] - Minimum uncertainty allowed for each point - * @return {{parameterValues: Array, parameterError: number, iterations: number}} - */ -function levenbergMarquardt( - data, - parameterizedFunction, - options = {} -) { - let { - maxIterations = 100, - gradientDifference = 10e-2, - damping = 0, - errorTolerance = 10e-3, - initialValues - } = options; - - if (damping <= 0) { - throw new Error('The damping option must be a positive number'); - } else if (!data.x || !data.y) { - throw new Error('The data parameter must have x and y elements'); - } else if ( - !Array.isArray(data.x) || - data.x.length < 2 || - !Array.isArray(data.y) || - data.y.length < 2 - ) { - throw new Error( - 'The data parameter elements must be an array with more than 2 points' - ); - } else { - let dataLen = data.x.length; - if (dataLen !== data.y.length) { - throw new Error('The data parameter elements must have the same size'); - } - } - - var parameters = - initialValues || new Array(parameterizedFunction.length).fill(1); - - if (!Array.isArray(parameters)) { - throw new Error('initialValues must be an array'); - } - - var error = errorCalculation(data, parameters, parameterizedFunction); - - var converged = error <= errorTolerance; - - for ( - var iteration = 0; - iteration < maxIterations && !converged; - iteration++ - ) { - parameters = step( - data, - parameters, - damping, - gradientDifference, - parameterizedFunction - ); - error = errorCalculation(data, parameters, parameterizedFunction); - converged = error <= errorTolerance; - } - - return { - parameterValues: 
parameters, - parameterError: error, - iterations: iteration - }; -} - -module.exports = levenbergMarquardt; - - -/***/ }), -/* 9 */ -/***/ (function(module, __webpack_exports__, __webpack_require__) { - -"use strict"; -__webpack_require__.r(__webpack_exports__); - -// EXTERNAL MODULE: ./node_modules/is-any-array/src/index.js -var src = __webpack_require__(0); -var src_default = /*#__PURE__*/__webpack_require__.n(src); - -// CONCATENATED MODULE: ./node_modules/ml-array-max/lib-es6/index.js - - -/** - * Computes the maximum of the given values - * @param {Array} input - * @return {number} - */ - -function lib_es6_max(input) { - if (!src_default()(input)) { - throw new TypeError('input must be an array'); - } - - if (input.length === 0) { - throw new TypeError('input must not be empty'); - } - - var max = input[0]; - - for (var i = 1; i < input.length; i++) { - if (input[i] > max) max = input[i]; - } - - return max; -} - -/* harmony default export */ var lib_es6 = (lib_es6_max); - -// CONCATENATED MODULE: ./node_modules/ml-array-min/lib-es6/index.js - - -/** - * Computes the minimum of the given values - * @param {Array} input - * @return {number} - */ - -function lib_es6_min(input) { - if (!src_default()(input)) { - throw new TypeError('input must be an array'); - } - - if (input.length === 0) { - throw new TypeError('input must not be empty'); - } - - var min = input[0]; - - for (var i = 1; i < input.length; i++) { - if (input[i] < min) min = input[i]; - } - - return min; -} - -/* harmony default export */ var ml_array_min_lib_es6 = (lib_es6_min); - -// CONCATENATED MODULE: ./node_modules/ml-array-rescale/lib-es6/index.js - - - - -function rescale(input) { - var options = arguments.length > 1 && arguments[1] !== undefined ? arguments[1] : {}; - - if (!src_default()(input)) { - throw new TypeError('input must be an array'); - } else if (input.length === 0) { - throw new TypeError('input must not be empty'); - } - - var output; - - if (options.output !== undefined) { - if (!src_default()(options.output)) { - throw new TypeError('output option must be an array if specified'); - } - - output = options.output; - } else { - output = new Array(input.length); - } - - var currentMin = ml_array_min_lib_es6(input); - var currentMax = lib_es6(input); - - if (currentMin === currentMax) { - throw new RangeError('minimum and maximum input values are equal. Cannot rescale a constant array'); - } - - var _options$min = options.min, - minValue = _options$min === void 0 ? options.autoMinMax ? currentMin : 0 : _options$min, - _options$max = options.max, - maxValue = _options$max === void 0 ? options.autoMinMax ? 
currentMax : 1 : _options$max; - - if (minValue >= maxValue) { - throw new RangeError('min option must be smaller than max option'); - } - - var factor = (maxValue - minValue) / (currentMax - currentMin); - - for (var i = 0; i < input.length; i++) { - output[i] = (input[i] - currentMin) * factor + minValue; - } - - return output; -} - -/* harmony default export */ var ml_array_rescale_lib_es6 = (rescale); - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/dc/lu.js - - -/** - * @class LuDecomposition - * @link https://github.com/lutzroeder/Mapack/blob/master/Source/LuDecomposition.cs - * @param {Matrix} matrix - */ -class lu_LuDecomposition { - constructor(matrix) { - matrix = WrapperMatrix2D_WrapperMatrix2D.checkMatrix(matrix); - - var lu = matrix.clone(); - var rows = lu.rows; - var columns = lu.columns; - var pivotVector = new Array(rows); - var pivotSign = 1; - var i, j, k, p, s, t, v; - var LUcolj, kmax; - - for (i = 0; i < rows; i++) { - pivotVector[i] = i; - } - - LUcolj = new Array(rows); - - for (j = 0; j < columns; j++) { - for (i = 0; i < rows; i++) { - LUcolj[i] = lu.get(i, j); - } - - for (i = 0; i < rows; i++) { - kmax = Math.min(i, j); - s = 0; - for (k = 0; k < kmax; k++) { - s += lu.get(i, k) * LUcolj[k]; - } - LUcolj[i] -= s; - lu.set(i, j, LUcolj[i]); - } - - p = j; - for (i = j + 1; i < rows; i++) { - if (Math.abs(LUcolj[i]) > Math.abs(LUcolj[p])) { - p = i; - } - } - - if (p !== j) { - for (k = 0; k < columns; k++) { - t = lu.get(p, k); - lu.set(p, k, lu.get(j, k)); - lu.set(j, k, t); - } - - v = pivotVector[p]; - pivotVector[p] = pivotVector[j]; - pivotVector[j] = v; - - pivotSign = -pivotSign; - } - - if (j < rows && lu.get(j, j) !== 0) { - for (i = j + 1; i < rows; i++) { - lu.set(i, j, lu.get(i, j) / lu.get(j, j)); - } - } - } - - this.LU = lu; - this.pivotVector = pivotVector; - this.pivotSign = pivotSign; - } - - /** - * - * @return {boolean} - */ - isSingular() { - var data = this.LU; - var col = data.columns; - for (var j = 0; j < col; j++) { - if (data[j][j] === 0) { - return true; - } - } - return false; - } - - /** - * - * @param {Matrix} value - * @return {Matrix} - */ - solve(value) { - value = matrix_Matrix.checkMatrix(value); - - var lu = this.LU; - var rows = lu.rows; - - if (rows !== value.rows) { - throw new Error('Invalid matrix dimensions'); - } - if (this.isSingular()) { - throw new Error('LU matrix is singular'); - } - - var count = value.columns; - var X = value.subMatrixRow(this.pivotVector, 0, count - 1); - var columns = lu.columns; - var i, j, k; - - for (k = 0; k < columns; k++) { - for (i = k + 1; i < columns; i++) { - for (j = 0; j < count; j++) { - X[i][j] -= X[k][j] * lu[i][k]; - } - } - } - for (k = columns - 1; k >= 0; k--) { - for (j = 0; j < count; j++) { - X[k][j] /= lu[k][k]; - } - for (i = 0; i < k; i++) { - for (j = 0; j < count; j++) { - X[i][j] -= X[k][j] * lu[i][k]; - } - } - } - return X; - } - - /** - * - * @return {number} - */ - get determinant() { - var data = this.LU; - if (!data.isSquare()) { - throw new Error('Matrix must be square'); - } - var determinant = this.pivotSign; - var col = data.columns; - for (var j = 0; j < col; j++) { - determinant *= data[j][j]; - } - return determinant; - } - - /** - * - * @return {Matrix} - */ - get lowerTriangularMatrix() { - var data = this.LU; - var rows = data.rows; - var columns = data.columns; - var X = new matrix_Matrix(rows, columns); - for (var i = 0; i < rows; i++) { - for (var j = 0; j < columns; j++) { - if (i > j) { - X[i][j] = data[i][j]; - } else if (i === j) { - 
X[i][j] = 1; - } else { - X[i][j] = 0; - } - } - } - return X; - } - - /** - * - * @return {Matrix} - */ - get upperTriangularMatrix() { - var data = this.LU; - var rows = data.rows; - var columns = data.columns; - var X = new matrix_Matrix(rows, columns); - for (var i = 0; i < rows; i++) { - for (var j = 0; j < columns; j++) { - if (i <= j) { - X[i][j] = data[i][j]; - } else { - X[i][j] = 0; - } - } - } - return X; - } - - /** - * - * @return {Array} - */ - get pivotPermutationVector() { - return this.pivotVector.slice(); - } -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/dc/util.js -function hypotenuse(a, b) { - var r = 0; - if (Math.abs(a) > Math.abs(b)) { - r = b / a; - return Math.abs(a) * Math.sqrt(1 + r * r); - } - if (b !== 0) { - r = a / b; - return Math.abs(b) * Math.sqrt(1 + r * r); - } - return 0; -} - -function getFilled2DArray(rows, columns, value) { - var array = new Array(rows); - for (var i = 0; i < rows; i++) { - array[i] = new Array(columns); - for (var j = 0; j < columns; j++) { - array[i][j] = value; - } - } - return array; -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/dc/svd.js - - - - -/** - * @class SingularValueDecomposition - * @see https://github.com/accord-net/framework/blob/development/Sources/Accord.Math/Decompositions/SingularValueDecomposition.cs - * @param {Matrix} value - * @param {object} [options] - * @param {boolean} [options.computeLeftSingularVectors=true] - * @param {boolean} [options.computeRightSingularVectors=true] - * @param {boolean} [options.autoTranspose=false] - */ -class svd_SingularValueDecomposition { - constructor(value, options = {}) { - value = WrapperMatrix2D_WrapperMatrix2D.checkMatrix(value); - - var m = value.rows; - var n = value.columns; - - const { - computeLeftSingularVectors = true, - computeRightSingularVectors = true, - autoTranspose = false - } = options; - - var wantu = Boolean(computeLeftSingularVectors); - var wantv = Boolean(computeRightSingularVectors); - - var swapped = false; - var a; - if (m < n) { - if (!autoTranspose) { - a = value.clone(); - // eslint-disable-next-line no-console - console.warn( - 'Computing SVD on a matrix with more columns than rows. 
Consider enabling autoTranspose' - ); - } else { - a = value.transpose(); - m = a.rows; - n = a.columns; - swapped = true; - var aux = wantu; - wantu = wantv; - wantv = aux; - } - } else { - a = value.clone(); - } - - var nu = Math.min(m, n); - var ni = Math.min(m + 1, n); - var s = new Array(ni); - var U = getFilled2DArray(m, nu, 0); - var V = getFilled2DArray(n, n, 0); - - var e = new Array(n); - var work = new Array(m); - - var si = new Array(ni); - for (let i = 0; i < ni; i++) si[i] = i; - - var nct = Math.min(m - 1, n); - var nrt = Math.max(0, Math.min(n - 2, m)); - var mrc = Math.max(nct, nrt); - - for (let k = 0; k < mrc; k++) { - if (k < nct) { - s[k] = 0; - for (let i = k; i < m; i++) { - s[k] = hypotenuse(s[k], a[i][k]); - } - if (s[k] !== 0) { - if (a[k][k] < 0) { - s[k] = -s[k]; - } - for (let i = k; i < m; i++) { - a[i][k] /= s[k]; - } - a[k][k] += 1; - } - s[k] = -s[k]; - } - - for (let j = k + 1; j < n; j++) { - if (k < nct && s[k] !== 0) { - let t = 0; - for (let i = k; i < m; i++) { - t += a[i][k] * a[i][j]; - } - t = -t / a[k][k]; - for (let i = k; i < m; i++) { - a[i][j] += t * a[i][k]; - } - } - e[j] = a[k][j]; - } - - if (wantu && k < nct) { - for (let i = k; i < m; i++) { - U[i][k] = a[i][k]; - } - } - - if (k < nrt) { - e[k] = 0; - for (let i = k + 1; i < n; i++) { - e[k] = hypotenuse(e[k], e[i]); - } - if (e[k] !== 0) { - if (e[k + 1] < 0) { - e[k] = 0 - e[k]; - } - for (let i = k + 1; i < n; i++) { - e[i] /= e[k]; - } - e[k + 1] += 1; - } - e[k] = -e[k]; - if (k + 1 < m && e[k] !== 0) { - for (let i = k + 1; i < m; i++) { - work[i] = 0; - } - for (let i = k + 1; i < m; i++) { - for (let j = k + 1; j < n; j++) { - work[i] += e[j] * a[i][j]; - } - } - for (let j = k + 1; j < n; j++) { - let t = -e[j] / e[k + 1]; - for (let i = k + 1; i < m; i++) { - a[i][j] += t * work[i]; - } - } - } - if (wantv) { - for (let i = k + 1; i < n; i++) { - V[i][k] = e[i]; - } - } - } - } - - let p = Math.min(n, m + 1); - if (nct < n) { - s[nct] = a[nct][nct]; - } - if (m < p) { - s[p - 1] = 0; - } - if (nrt + 1 < p) { - e[nrt] = a[nrt][p - 1]; - } - e[p - 1] = 0; - - if (wantu) { - for (let j = nct; j < nu; j++) { - for (let i = 0; i < m; i++) { - U[i][j] = 0; - } - U[j][j] = 1; - } - for (let k = nct - 1; k >= 0; k--) { - if (s[k] !== 0) { - for (let j = k + 1; j < nu; j++) { - let t = 0; - for (let i = k; i < m; i++) { - t += U[i][k] * U[i][j]; - } - t = -t / U[k][k]; - for (let i = k; i < m; i++) { - U[i][j] += t * U[i][k]; - } - } - for (let i = k; i < m; i++) { - U[i][k] = -U[i][k]; - } - U[k][k] = 1 + U[k][k]; - for (let i = 0; i < k - 1; i++) { - U[i][k] = 0; - } - } else { - for (let i = 0; i < m; i++) { - U[i][k] = 0; - } - U[k][k] = 1; - } - } - } - - if (wantv) { - for (let k = n - 1; k >= 0; k--) { - if (k < nrt && e[k] !== 0) { - for (let j = k + 1; j < n; j++) { - let t = 0; - for (let i = k + 1; i < n; i++) { - t += V[i][k] * V[i][j]; - } - t = -t / V[k + 1][k]; - for (let i = k + 1; i < n; i++) { - V[i][j] += t * V[i][k]; - } - } - } - for (let i = 0; i < n; i++) { - V[i][k] = 0; - } - V[k][k] = 1; - } - } - - var pp = p - 1; - var iter = 0; - var eps = Number.EPSILON; - while (p > 0) { - let k, kase; - for (k = p - 2; k >= -1; k--) { - if (k === -1) { - break; - } - const alpha = - Number.MIN_VALUE + eps * Math.abs(s[k] + Math.abs(s[k + 1])); - if (Math.abs(e[k]) <= alpha || Number.isNaN(e[k])) { - e[k] = 0; - break; - } - } - if (k === p - 2) { - kase = 4; - } else { - let ks; - for (ks = p - 1; ks >= k; ks--) { - if (ks === k) { - break; - } - let t = - (ks !== p ? 
Math.abs(e[ks]) : 0) + - (ks !== k + 1 ? Math.abs(e[ks - 1]) : 0); - if (Math.abs(s[ks]) <= eps * t) { - s[ks] = 0; - break; - } - } - if (ks === k) { - kase = 3; - } else if (ks === p - 1) { - kase = 1; - } else { - kase = 2; - k = ks; - } - } - - k++; - - switch (kase) { - case 1: { - let f = e[p - 2]; - e[p - 2] = 0; - for (let j = p - 2; j >= k; j--) { - let t = hypotenuse(s[j], f); - let cs = s[j] / t; - let sn = f / t; - s[j] = t; - if (j !== k) { - f = -sn * e[j - 1]; - e[j - 1] = cs * e[j - 1]; - } - if (wantv) { - for (let i = 0; i < n; i++) { - t = cs * V[i][j] + sn * V[i][p - 1]; - V[i][p - 1] = -sn * V[i][j] + cs * V[i][p - 1]; - V[i][j] = t; - } - } - } - break; - } - case 2: { - let f = e[k - 1]; - e[k - 1] = 0; - for (let j = k; j < p; j++) { - let t = hypotenuse(s[j], f); - let cs = s[j] / t; - let sn = f / t; - s[j] = t; - f = -sn * e[j]; - e[j] = cs * e[j]; - if (wantu) { - for (let i = 0; i < m; i++) { - t = cs * U[i][j] + sn * U[i][k - 1]; - U[i][k - 1] = -sn * U[i][j] + cs * U[i][k - 1]; - U[i][j] = t; - } - } - } - break; - } - case 3: { - const scale = Math.max( - Math.abs(s[p - 1]), - Math.abs(s[p - 2]), - Math.abs(e[p - 2]), - Math.abs(s[k]), - Math.abs(e[k]) - ); - const sp = s[p - 1] / scale; - const spm1 = s[p - 2] / scale; - const epm1 = e[p - 2] / scale; - const sk = s[k] / scale; - const ek = e[k] / scale; - const b = ((spm1 + sp) * (spm1 - sp) + epm1 * epm1) / 2; - const c = sp * epm1 * (sp * epm1); - let shift = 0; - if (b !== 0 || c !== 0) { - if (b < 0) { - shift = 0 - Math.sqrt(b * b + c); - } else { - shift = Math.sqrt(b * b + c); - } - shift = c / (b + shift); - } - let f = (sk + sp) * (sk - sp) + shift; - let g = sk * ek; - for (let j = k; j < p - 1; j++) { - let t = hypotenuse(f, g); - if (t === 0) t = Number.MIN_VALUE; - let cs = f / t; - let sn = g / t; - if (j !== k) { - e[j - 1] = t; - } - f = cs * s[j] + sn * e[j]; - e[j] = cs * e[j] - sn * s[j]; - g = sn * s[j + 1]; - s[j + 1] = cs * s[j + 1]; - if (wantv) { - for (let i = 0; i < n; i++) { - t = cs * V[i][j] + sn * V[i][j + 1]; - V[i][j + 1] = -sn * V[i][j] + cs * V[i][j + 1]; - V[i][j] = t; - } - } - t = hypotenuse(f, g); - if (t === 0) t = Number.MIN_VALUE; - cs = f / t; - sn = g / t; - s[j] = t; - f = cs * e[j] + sn * s[j + 1]; - s[j + 1] = -sn * e[j] + cs * s[j + 1]; - g = sn * e[j + 1]; - e[j + 1] = cs * e[j + 1]; - if (wantu && j < m - 1) { - for (let i = 0; i < m; i++) { - t = cs * U[i][j] + sn * U[i][j + 1]; - U[i][j + 1] = -sn * U[i][j] + cs * U[i][j + 1]; - U[i][j] = t; - } - } - } - e[p - 2] = f; - iter = iter + 1; - break; - } - case 4: { - if (s[k] <= 0) { - s[k] = s[k] < 0 ? -s[k] : 0; - if (wantv) { - for (let i = 0; i <= pp; i++) { - V[i][k] = -V[i][k]; - } - } - } - while (k < pp) { - if (s[k] >= s[k + 1]) { - break; - } - let t = s[k]; - s[k] = s[k + 1]; - s[k + 1] = t; - if (wantv && k < n - 1) { - for (let i = 0; i < n; i++) { - t = V[i][k + 1]; - V[i][k + 1] = V[i][k]; - V[i][k] = t; - } - } - if (wantu && k < m - 1) { - for (let i = 0; i < m; i++) { - t = U[i][k + 1]; - U[i][k + 1] = U[i][k]; - U[i][k] = t; - } - } - k++; - } - iter = 0; - p--; - break; - } - // no default - } - } - - if (swapped) { - var tmp = V; - V = U; - U = tmp; - } - - this.m = m; - this.n = n; - this.s = s; - this.U = U; - this.V = V; - } - - /** - * Solve a problem of least square (Ax=b) by using the SVD. Useful when A is singular. When A is not singular, it would be better to use qr.solve(value). 
- * Example : We search to approximate x, with A matrix shape m*n, x vector size n, b vector size m (m > n). We will use : - * var svd = SingularValueDecomposition(A); - * var x = svd.solve(b); - * @param {Matrix} value - Matrix 1D which is the vector b (in the equation Ax = b) - * @return {Matrix} - The vector x - */ - solve(value) { - var Y = value; - var e = this.threshold; - var scols = this.s.length; - var Ls = matrix_Matrix.zeros(scols, scols); - - for (let i = 0; i < scols; i++) { - if (Math.abs(this.s[i]) <= e) { - Ls[i][i] = 0; - } else { - Ls[i][i] = 1 / this.s[i]; - } - } - - var U = this.U; - var V = this.rightSingularVectors; - - var VL = V.mmul(Ls); - var vrows = V.rows; - var urows = U.length; - var VLU = matrix_Matrix.zeros(vrows, urows); - - for (let i = 0; i < vrows; i++) { - for (let j = 0; j < urows; j++) { - let sum = 0; - for (let k = 0; k < scols; k++) { - sum += VL[i][k] * U[j][k]; - } - VLU[i][j] = sum; - } - } - - return VLU.mmul(Y); - } - - /** - * - * @param {Array} value - * @return {Matrix} - */ - solveForDiagonal(value) { - return this.solve(matrix_Matrix.diag(value)); - } - - /** - * Get the inverse of the matrix. We compute the inverse of a matrix using SVD when this matrix is singular or ill-conditioned. Example : - * var svd = SingularValueDecomposition(A); - * var inverseA = svd.inverse(); - * @return {Matrix} - The approximation of the inverse of the matrix - */ - inverse() { - var V = this.V; - var e = this.threshold; - var vrows = V.length; - var vcols = V[0].length; - var X = new matrix_Matrix(vrows, this.s.length); - - for (let i = 0; i < vrows; i++) { - for (let j = 0; j < vcols; j++) { - if (Math.abs(this.s[j]) > e) { - X[i][j] = V[i][j] / this.s[j]; - } else { - X[i][j] = 0; - } - } - } - - var U = this.U; - - var urows = U.length; - var ucols = U[0].length; - var Y = new matrix_Matrix(vrows, urows); - - for (let i = 0; i < vrows; i++) { - for (let j = 0; j < urows; j++) { - let sum = 0; - for (let k = 0; k < ucols; k++) { - sum += X[i][k] * U[j][k]; - } - Y[i][j] = sum; - } - } - - return Y; - } - - /** - * - * @return {number} - */ - get condition() { - return this.s[0] / this.s[Math.min(this.m, this.n) - 1]; - } - - /** - * - * @return {number} - */ - get norm2() { - return this.s[0]; - } - - /** - * - * @return {number} - */ - get rank() { - var tol = Math.max(this.m, this.n) * this.s[0] * Number.EPSILON; - var r = 0; - var s = this.s; - for (var i = 0, ii = s.length; i < ii; i++) { - if (s[i] > tol) { - r++; - } - } - return r; - } - - /** - * - * @return {Array} - */ - get diagonal() { - return this.s; - } - - /** - * - * @return {number} - */ - get threshold() { - return Number.EPSILON / 2 * Math.max(this.m, this.n) * this.s[0]; - } - - /** - * - * @return {Matrix} - */ - get leftSingularVectors() { - if (!matrix_Matrix.isMatrix(this.U)) { - this.U = new matrix_Matrix(this.U); - } - return this.U; - } - - /** - * - * @return {Matrix} - */ - get rightSingularVectors() { - if (!matrix_Matrix.isMatrix(this.V)) { - this.V = new matrix_Matrix(this.V); - } - return this.V; - } - - /** - * - * @return {Matrix} - */ - get diagonalMatrix() { - return matrix_Matrix.diag(this.s); - } -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/util.js - - -/** - * @private - * Check that a row index is not out of bounds - * @param {Matrix} matrix - * @param {number} index - * @param {boolean} [outer] - */ -function checkRowIndex(matrix, index, outer) { - var max = outer ? 
matrix.rows : matrix.rows - 1; - if (index < 0 || index > max) { - throw new RangeError('Row index out of range'); - } -} - -/** - * @private - * Check that a column index is not out of bounds - * @param {Matrix} matrix - * @param {number} index - * @param {boolean} [outer] - */ -function checkColumnIndex(matrix, index, outer) { - var max = outer ? matrix.columns : matrix.columns - 1; - if (index < 0 || index > max) { - throw new RangeError('Column index out of range'); - } -} - -/** - * @private - * Check that the provided vector is an array with the right length - * @param {Matrix} matrix - * @param {Array|Matrix} vector - * @return {Array} - * @throws {RangeError} - */ -function checkRowVector(matrix, vector) { - if (vector.to1DArray) { - vector = vector.to1DArray(); - } - if (vector.length !== matrix.columns) { - throw new RangeError( - 'vector size must be the same as the number of columns' - ); - } - return vector; -} - -/** - * @private - * Check that the provided vector is an array with the right length - * @param {Matrix} matrix - * @param {Array|Matrix} vector - * @return {Array} - * @throws {RangeError} - */ -function checkColumnVector(matrix, vector) { - if (vector.to1DArray) { - vector = vector.to1DArray(); - } - if (vector.length !== matrix.rows) { - throw new RangeError('vector size must be the same as the number of rows'); - } - return vector; -} - -function checkIndices(matrix, rowIndices, columnIndices) { - return { - row: checkRowIndices(matrix, rowIndices), - column: checkColumnIndices(matrix, columnIndices) - }; -} - -function checkRowIndices(matrix, rowIndices) { - if (typeof rowIndices !== 'object') { - throw new TypeError('unexpected type for row indices'); - } - - var rowOut = rowIndices.some((r) => { - return r < 0 || r >= matrix.rows; - }); - - if (rowOut) { - throw new RangeError('row indices are out of range'); - } - - if (!Array.isArray(rowIndices)) rowIndices = Array.from(rowIndices); - - return rowIndices; -} - -function checkColumnIndices(matrix, columnIndices) { - if (typeof columnIndices !== 'object') { - throw new TypeError('unexpected type for column indices'); - } - - var columnOut = columnIndices.some((c) => { - return c < 0 || c >= matrix.columns; - }); - - if (columnOut) { - throw new RangeError('column indices are out of range'); - } - if (!Array.isArray(columnIndices)) columnIndices = Array.from(columnIndices); - - return columnIndices; -} - -function checkRange(matrix, startRow, endRow, startColumn, endColumn) { - if (arguments.length !== 5) { - throw new RangeError('expected 4 arguments'); - } - checkNumber('startRow', startRow); - checkNumber('endRow', endRow); - checkNumber('startColumn', startColumn); - checkNumber('endColumn', endColumn); - if ( - startRow > endRow || - startColumn > endColumn || - startRow < 0 || - startRow >= matrix.rows || - endRow < 0 || - endRow >= matrix.rows || - startColumn < 0 || - startColumn >= matrix.columns || - endColumn < 0 || - endColumn >= matrix.columns - ) { - throw new RangeError('Submatrix indices are out of range'); - } -} - -function getRange(from, to) { - var arr = new Array(to - from + 1); - for (var i = 0; i < arr.length; i++) { - arr[i] = from + i; - } - return arr; -} - -function sumByRow(matrix) { - var sum = matrix_Matrix.zeros(matrix.rows, 1); - for (var i = 0; i < matrix.rows; ++i) { - for (var j = 0; j < matrix.columns; ++j) { - sum.set(i, 0, sum.get(i, 0) + matrix.get(i, j)); - } - } - return sum; -} - -function sumByColumn(matrix) { - var sum = matrix_Matrix.zeros(1, matrix.columns); - for 
(var i = 0; i < matrix.rows; ++i) { - for (var j = 0; j < matrix.columns; ++j) { - sum.set(0, j, sum.get(0, j) + matrix.get(i, j)); - } - } - return sum; -} - -function sumAll(matrix) { - var v = 0; - for (var i = 0; i < matrix.rows; i++) { - for (var j = 0; j < matrix.columns; j++) { - v += matrix.get(i, j); - } - } - return v; -} - -function checkNumber(name, value) { - if (typeof value !== 'number') { - throw new TypeError(`${name} must be a number`); - } -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/views/base.js - - - -class base_BaseView extends AbstractMatrix() { - constructor(matrix, rows, columns) { - super(); - this.matrix = matrix; - this.rows = rows; - this.columns = columns; - } - - static get [Symbol.species]() { - return matrix_Matrix; - } -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/views/transpose.js - - -class transpose_MatrixTransposeView extends base_BaseView { - constructor(matrix) { - super(matrix, matrix.columns, matrix.rows); - } - - set(rowIndex, columnIndex, value) { - this.matrix.set(columnIndex, rowIndex, value); - return this; - } - - get(rowIndex, columnIndex) { - return this.matrix.get(columnIndex, rowIndex); - } -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/views/row.js - - -class row_MatrixRowView extends base_BaseView { - constructor(matrix, row) { - super(matrix, 1, matrix.columns); - this.row = row; - } - - set(rowIndex, columnIndex, value) { - this.matrix.set(this.row, columnIndex, value); - return this; - } - - get(rowIndex, columnIndex) { - return this.matrix.get(this.row, columnIndex); - } -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/views/sub.js - - - - -class sub_MatrixSubView extends base_BaseView { - constructor(matrix, startRow, endRow, startColumn, endColumn) { - checkRange(matrix, startRow, endRow, startColumn, endColumn); - super(matrix, endRow - startRow + 1, endColumn - startColumn + 1); - this.startRow = startRow; - this.startColumn = startColumn; - } - - set(rowIndex, columnIndex, value) { - this.matrix.set( - this.startRow + rowIndex, - this.startColumn + columnIndex, - value - ); - return this; - } - - get(rowIndex, columnIndex) { - return this.matrix.get( - this.startRow + rowIndex, - this.startColumn + columnIndex - ); - } -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/views/selection.js - - - - -class selection_MatrixSelectionView extends base_BaseView { - constructor(matrix, rowIndices, columnIndices) { - var indices = checkIndices(matrix, rowIndices, columnIndices); - super(matrix, indices.row.length, indices.column.length); - this.rowIndices = indices.row; - this.columnIndices = indices.column; - } - - set(rowIndex, columnIndex, value) { - this.matrix.set( - this.rowIndices[rowIndex], - this.columnIndices[columnIndex], - value - ); - return this; - } - - get(rowIndex, columnIndex) { - return this.matrix.get( - this.rowIndices[rowIndex], - this.columnIndices[columnIndex] - ); - } -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/views/rowSelection.js - - - - -class rowSelection_MatrixRowSelectionView extends base_BaseView { - constructor(matrix, rowIndices) { - rowIndices = checkRowIndices(matrix, rowIndices); - super(matrix, rowIndices.length, matrix.columns); - this.rowIndices = rowIndices; - } - - set(rowIndex, columnIndex, value) { - this.matrix.set(this.rowIndices[rowIndex], columnIndex, value); - return this; - } - - get(rowIndex, columnIndex) { - return this.matrix.get(this.rowIndices[rowIndex], columnIndex); - } -} - -// CONCATENATED MODULE: 
./node_modules/ml-matrix/src/views/columnSelection.js - - - - -class columnSelection_MatrixColumnSelectionView extends base_BaseView { - constructor(matrix, columnIndices) { - columnIndices = checkColumnIndices(matrix, columnIndices); - super(matrix, matrix.rows, columnIndices.length); - this.columnIndices = columnIndices; - } - - set(rowIndex, columnIndex, value) { - this.matrix.set(rowIndex, this.columnIndices[columnIndex], value); - return this; - } - - get(rowIndex, columnIndex) { - return this.matrix.get(rowIndex, this.columnIndices[columnIndex]); - } -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/views/column.js - - -class column_MatrixColumnView extends base_BaseView { - constructor(matrix, column) { - super(matrix, matrix.rows, 1); - this.column = column; - } - - set(rowIndex, columnIndex, value) { - this.matrix.set(rowIndex, this.column, value); - return this; - } - - get(rowIndex) { - return this.matrix.get(rowIndex, this.column); - } -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/views/flipRow.js - - -class flipRow_MatrixFlipRowView extends base_BaseView { - constructor(matrix) { - super(matrix, matrix.rows, matrix.columns); - } - - set(rowIndex, columnIndex, value) { - this.matrix.set(this.rows - rowIndex - 1, columnIndex, value); - return this; - } - - get(rowIndex, columnIndex) { - return this.matrix.get(this.rows - rowIndex - 1, columnIndex); - } -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/views/flipColumn.js - - -class flipColumn_MatrixFlipColumnView extends base_BaseView { - constructor(matrix) { - super(matrix, matrix.rows, matrix.columns); - } - - set(rowIndex, columnIndex, value) { - this.matrix.set(rowIndex, this.columns - columnIndex - 1, value); - return this; - } - - get(rowIndex, columnIndex) { - return this.matrix.get(rowIndex, this.columns - columnIndex - 1); - } -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/abstractMatrix.js - - - - - - - - - - - - - - - -function AbstractMatrix(superCtor) { - if (superCtor === undefined) superCtor = Object; - - /** - * Real matrix - * @class Matrix - * @param {number|Array|Matrix} nRows - Number of rows of the new matrix, - * 2D array containing the data or Matrix instance to clone - * @param {number} [nColumns] - Number of columns of the new matrix - */ - class Matrix extends superCtor { - static get [Symbol.species]() { - return this; - } - - /** - * Constructs a Matrix with the chosen dimensions from a 1D array - * @param {number} newRows - Number of rows - * @param {number} newColumns - Number of columns - * @param {Array} newData - A 1D array containing data for the matrix - * @return {Matrix} - The new matrix - */ - static from1DArray(newRows, newColumns, newData) { - var length = newRows * newColumns; - if (length !== newData.length) { - throw new RangeError('Data length does not match given dimensions'); - } - var newMatrix = new this(newRows, newColumns); - for (var row = 0; row < newRows; row++) { - for (var column = 0; column < newColumns; column++) { - newMatrix.set(row, column, newData[row * newColumns + column]); - } - } - return newMatrix; - } - - /** - * Creates a row vector, a matrix with only one row. - * @param {Array} newData - A 1D array containing data for the vector - * @return {Matrix} - The new matrix - */ - static rowVector(newData) { - var vector = new this(1, newData.length); - for (var i = 0; i < newData.length; i++) { - vector.set(0, i, newData[i]); - } - return vector; - } - - /** - * Creates a column vector, a matrix with only one column. 
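- * @example
- * // usage sketch, example values assumed
- * Matrix.columnVector([1, 2, 3]); // 3x1 matrix: [[1], [2], [3]]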
- * @param {Array} newData - A 1D array containing data for the vector - * @return {Matrix} - The new matrix - */ - static columnVector(newData) { - var vector = new this(newData.length, 1); - for (var i = 0; i < newData.length; i++) { - vector.set(i, 0, newData[i]); - } - return vector; - } - - /** - * Creates an empty matrix with the given dimensions. Values will be undefined. Same as using new Matrix(rows, columns). - * @param {number} rows - Number of rows - * @param {number} columns - Number of columns - * @return {Matrix} - The new matrix - */ - static empty(rows, columns) { - return new this(rows, columns); - } - - /** - * Creates a matrix with the given dimensions. Values will be set to zero. - * @param {number} rows - Number of rows - * @param {number} columns - Number of columns - * @return {Matrix} - The new matrix - */ - static zeros(rows, columns) { - return this.empty(rows, columns).fill(0); - } - - /** - * Creates a matrix with the given dimensions. Values will be set to one. - * @param {number} rows - Number of rows - * @param {number} columns - Number of columns - * @return {Matrix} - The new matrix - */ - static ones(rows, columns) { - return this.empty(rows, columns).fill(1); - } - - /** - * Creates a matrix with the given dimensions. Values will be randomly set. - * @param {number} rows - Number of rows - * @param {number} columns - Number of columns - * @param {function} [rng=Math.random] - Random number generator - * @return {Matrix} The new matrix - */ - static rand(rows, columns, rng) { - if (rng === undefined) rng = Math.random; - var matrix = this.empty(rows, columns); - for (var i = 0; i < rows; i++) { - for (var j = 0; j < columns; j++) { - matrix.set(i, j, rng()); - } - } - return matrix; - } - - /** - * Creates a matrix with the given dimensions. Values will be random integers. - * @param {number} rows - Number of rows - * @param {number} columns - Number of columns - * @param {number} [maxValue=1000] - Maximum value - * @param {function} [rng=Math.random] - Random number generator - * @return {Matrix} The new matrix - */ - static randInt(rows, columns, maxValue, rng) { - if (maxValue === undefined) maxValue = 1000; - if (rng === undefined) rng = Math.random; - var matrix = this.empty(rows, columns); - for (var i = 0; i < rows; i++) { - for (var j = 0; j < columns; j++) { - var value = Math.floor(rng() * maxValue); - matrix.set(i, j, value); - } - } - return matrix; - } - - /** - * Creates an identity matrix with the given dimension. Values of the diagonal will be 1 and others will be 0. - * @param {number} rows - Number of rows - * @param {number} [columns=rows] - Number of columns - * @param {number} [value=1] - Value to fill the diagonal with - * @return {Matrix} - The new identity matrix - */ - static eye(rows, columns, value) { - if (columns === undefined) columns = rows; - if (value === undefined) value = 1; - var min = Math.min(rows, columns); - var matrix = this.zeros(rows, columns); - for (var i = 0; i < min; i++) { - matrix.set(i, i, value); - } - return matrix; - } - - /** - * Creates a diagonal matrix based on the given array. 
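- * @example
- * // usage sketch, example values assumed
- * Matrix.diag([1, 2, 3]); // 3x3 matrix with 1, 2, 3 on the diagonal, zeros elsewhere
- * Matrix.diag([1, 2], 3, 3); // 3x3 matrix where only the first two diagonal entries are set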
- * @param {Array} data - Array containing the data for the diagonal - * @param {number} [rows] - Number of rows (Default: data.length) - * @param {number} [columns] - Number of columns (Default: rows) - * @return {Matrix} - The new diagonal matrix - */ - static diag(data, rows, columns) { - var l = data.length; - if (rows === undefined) rows = l; - if (columns === undefined) columns = rows; - var min = Math.min(l, rows, columns); - var matrix = this.zeros(rows, columns); - for (var i = 0; i < min; i++) { - matrix.set(i, i, data[i]); - } - return matrix; - } - - /** - * Returns a matrix whose elements are the minimum between matrix1 and matrix2 - * @param {Matrix} matrix1 - * @param {Matrix} matrix2 - * @return {Matrix} - */ - static min(matrix1, matrix2) { - matrix1 = this.checkMatrix(matrix1); - matrix2 = this.checkMatrix(matrix2); - var rows = matrix1.rows; - var columns = matrix1.columns; - var result = new this(rows, columns); - for (var i = 0; i < rows; i++) { - for (var j = 0; j < columns; j++) { - result.set(i, j, Math.min(matrix1.get(i, j), matrix2.get(i, j))); - } - } - return result; - } - - /** - * Returns a matrix whose elements are the maximum between matrix1 and matrix2 - * @param {Matrix} matrix1 - * @param {Matrix} matrix2 - * @return {Matrix} - */ - static max(matrix1, matrix2) { - matrix1 = this.checkMatrix(matrix1); - matrix2 = this.checkMatrix(matrix2); - var rows = matrix1.rows; - var columns = matrix1.columns; - var result = new this(rows, columns); - for (var i = 0; i < rows; i++) { - for (var j = 0; j < columns; j++) { - result.set(i, j, Math.max(matrix1.get(i, j), matrix2.get(i, j))); - } - } - return result; - } - - /** - * Check that the provided value is a Matrix and tries to instantiate one if not - * @param {*} value - The value to check - * @return {Matrix} - */ - static checkMatrix(value) { - return Matrix.isMatrix(value) ? value : new this(value); - } - - /** - * Returns true if the argument is a Matrix, false otherwise - * @param {*} value - The value to check - * @return {boolean} - */ - static isMatrix(value) { - return (value != null) && (value.klass === 'Matrix'); - } - - /** - * @prop {number} size - The number of elements in the matrix. - */ - get size() { - return this.rows * this.columns; - } - - /** - * Applies a callback for each element of the matrix. The function is called in the matrix (this) context. 
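- * @example
- * // usage sketch; the callback runs with the matrix itself bound to `this`
- * matrix.apply(function (i, j) { if (i !== j) this.set(i, j, 0); }); // keep only the diagonal, in place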
- * @param {function} callback - Function that will be called with two parameters : i (row) and j (column) - * @return {Matrix} this - */ - apply(callback) { - if (typeof callback !== 'function') { - throw new TypeError('callback must be a function'); - } - var ii = this.rows; - var jj = this.columns; - for (var i = 0; i < ii; i++) { - for (var j = 0; j < jj; j++) { - callback.call(this, i, j); - } - } - return this; - } - - /** - * Returns a new 1D array filled row by row with the matrix values - * @return {Array} - */ - to1DArray() { - var array = new Array(this.size); - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - array[i * this.columns + j] = this.get(i, j); - } - } - return array; - } - - /** - * Returns a 2D array containing a copy of the data - * @return {Array} - */ - to2DArray() { - var copy = new Array(this.rows); - for (var i = 0; i < this.rows; i++) { - copy[i] = new Array(this.columns); - for (var j = 0; j < this.columns; j++) { - copy[i][j] = this.get(i, j); - } - } - return copy; - } - - /** - * @return {boolean} true if the matrix has one row - */ - isRowVector() { - return this.rows === 1; - } - - /** - * @return {boolean} true if the matrix has one column - */ - isColumnVector() { - return this.columns === 1; - } - - /** - * @return {boolean} true if the matrix has one row or one column - */ - isVector() { - return (this.rows === 1) || (this.columns === 1); - } - - /** - * @return {boolean} true if the matrix has the same number of rows and columns - */ - isSquare() { - return this.rows === this.columns; - } - - /** - * @return {boolean} true if the matrix is square and has the same values on both sides of the diagonal - */ - isSymmetric() { - if (this.isSquare()) { - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j <= i; j++) { - if (this.get(i, j) !== this.get(j, i)) { - return false; - } - } - } - return true; - } - return false; - } - - /** - * Sets a given element of the matrix. mat.set(3,4,1) is equivalent to mat[3][4]=1 - * @abstract - * @param {number} rowIndex - Index of the row - * @param {number} columnIndex - Index of the column - * @param {number} value - The new value for the element - * @return {Matrix} this - */ - set(rowIndex, columnIndex, value) { // eslint-disable-line no-unused-vars - throw new Error('set method is unimplemented'); - } - - /** - * Returns the given element of the matrix. mat.get(3,4) is equivalent to matrix[3][4] - * @abstract - * @param {number} rowIndex - Index of the row - * @param {number} columnIndex - Index of the column - * @return {number} - */ - get(rowIndex, columnIndex) { // eslint-disable-line no-unused-vars - throw new Error('get method is unimplemented'); - } - - /** - * Creates a new matrix that is a repetition of the current matrix. 
New matrix has rowRep times the number of - * rows of the matrix, and colRep times the number of columns of the matrix - * @param {number} rowRep - Number of times the rows should be repeated - * @param {number} colRep - Number of times the columns should be re - * @return {Matrix} - * @example - * var matrix = new Matrix([[1,2]]); - * matrix.repeat(2); // [[1,2],[1,2]] - */ - repeat(rowRep, colRep) { - rowRep = rowRep || 1; - colRep = colRep || 1; - var matrix = new this.constructor[Symbol.species](this.rows * rowRep, this.columns * colRep); - for (var i = 0; i < rowRep; i++) { - for (var j = 0; j < colRep; j++) { - matrix.setSubMatrix(this, this.rows * i, this.columns * j); - } - } - return matrix; - } - - /** - * Fills the matrix with a given value. All elements will be set to this value. - * @param {number} value - New value - * @return {Matrix} this - */ - fill(value) { - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - this.set(i, j, value); - } - } - return this; - } - - /** - * Negates the matrix. All elements will be multiplied by (-1) - * @return {Matrix} this - */ - neg() { - return this.mulS(-1); - } - - /** - * Returns a new array from the given row index - * @param {number} index - Row index - * @return {Array} - */ - getRow(index) { - checkRowIndex(this, index); - var row = new Array(this.columns); - for (var i = 0; i < this.columns; i++) { - row[i] = this.get(index, i); - } - return row; - } - - /** - * Returns a new row vector from the given row index - * @param {number} index - Row index - * @return {Matrix} - */ - getRowVector(index) { - return this.constructor.rowVector(this.getRow(index)); - } - - /** - * Sets a row at the given index - * @param {number} index - Row index - * @param {Array|Matrix} array - Array or vector - * @return {Matrix} this - */ - setRow(index, array) { - checkRowIndex(this, index); - array = checkRowVector(this, array); - for (var i = 0; i < this.columns; i++) { - this.set(index, i, array[i]); - } - return this; - } - - /** - * Swaps two rows - * @param {number} row1 - First row index - * @param {number} row2 - Second row index - * @return {Matrix} this - */ - swapRows(row1, row2) { - checkRowIndex(this, row1); - checkRowIndex(this, row2); - for (var i = 0; i < this.columns; i++) { - var temp = this.get(row1, i); - this.set(row1, i, this.get(row2, i)); - this.set(row2, i, temp); - } - return this; - } - - /** - * Returns a new array from the given column index - * @param {number} index - Column index - * @return {Array} - */ - getColumn(index) { - checkColumnIndex(this, index); - var column = new Array(this.rows); - for (var i = 0; i < this.rows; i++) { - column[i] = this.get(i, index); - } - return column; - } - - /** - * Returns a new column vector from the given column index - * @param {number} index - Column index - * @return {Matrix} - */ - getColumnVector(index) { - return this.constructor.columnVector(this.getColumn(index)); - } - - /** - * Sets a column at the given index - * @param {number} index - Column index - * @param {Array|Matrix} array - Array or vector - * @return {Matrix} this - */ - setColumn(index, array) { - checkColumnIndex(this, index); - array = checkColumnVector(this, array); - for (var i = 0; i < this.rows; i++) { - this.set(i, index, array[i]); - } - return this; - } - - /** - * Swaps two columns - * @param {number} column1 - First column index - * @param {number} column2 - Second column index - * @return {Matrix} this - */ - swapColumns(column1, column2) { - checkColumnIndex(this, 
column1); - checkColumnIndex(this, column2); - for (var i = 0; i < this.rows; i++) { - var temp = this.get(i, column1); - this.set(i, column1, this.get(i, column2)); - this.set(i, column2, temp); - } - return this; - } - - /** - * Adds the values of a vector to each row - * @param {Array|Matrix} vector - Array or vector - * @return {Matrix} this - */ - addRowVector(vector) { - vector = checkRowVector(this, vector); - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - this.set(i, j, this.get(i, j) + vector[j]); - } - } - return this; - } - - /** - * Subtracts the values of a vector from each row - * @param {Array|Matrix} vector - Array or vector - * @return {Matrix} this - */ - subRowVector(vector) { - vector = checkRowVector(this, vector); - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - this.set(i, j, this.get(i, j) - vector[j]); - } - } - return this; - } - - /** - * Multiplies the values of a vector with each row - * @param {Array|Matrix} vector - Array or vector - * @return {Matrix} this - */ - mulRowVector(vector) { - vector = checkRowVector(this, vector); - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - this.set(i, j, this.get(i, j) * vector[j]); - } - } - return this; - } - - /** - * Divides the values of each row by those of a vector - * @param {Array|Matrix} vector - Array or vector - * @return {Matrix} this - */ - divRowVector(vector) { - vector = checkRowVector(this, vector); - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - this.set(i, j, this.get(i, j) / vector[j]); - } - } - return this; - } - - /** - * Adds the values of a vector to each column - * @param {Array|Matrix} vector - Array or vector - * @return {Matrix} this - */ - addColumnVector(vector) { - vector = checkColumnVector(this, vector); - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - this.set(i, j, this.get(i, j) + vector[i]); - } - } - return this; - } - - /** - * Subtracts the values of a vector from each column - * @param {Array|Matrix} vector - Array or vector - * @return {Matrix} this - */ - subColumnVector(vector) { - vector = checkColumnVector(this, vector); - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - this.set(i, j, this.get(i, j) - vector[i]); - } - } - return this; - } - - /** - * Multiplies the values of a vector with each column - * @param {Array|Matrix} vector - Array or vector - * @return {Matrix} this - */ - mulColumnVector(vector) { - vector = checkColumnVector(this, vector); - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - this.set(i, j, this.get(i, j) * vector[i]); - } - } - return this; - } - - /** - * Divides the values of each column by those of a vector - * @param {Array|Matrix} vector - Array or vector - * @return {Matrix} this - */ - divColumnVector(vector) { - vector = checkColumnVector(this, vector); - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - this.set(i, j, this.get(i, j) / vector[i]); - } - } - return this; - } - - /** - * Multiplies the values of a row with a scalar - * @param {number} index - Row index - * @param {number} value - * @return {Matrix} this - */ - mulRow(index, value) { - checkRowIndex(this, index); - for (var i = 0; i < this.columns; i++) { - this.set(index, i, this.get(index, i) * value); - } - return this; - } - - /** - * Multiplies the values of a column with a scalar - * @param 
{number} index - Column index - * @param {number} value - * @return {Matrix} this - */ - mulColumn(index, value) { - checkColumnIndex(this, index); - for (var i = 0; i < this.rows; i++) { - this.set(i, index, this.get(i, index) * value); - } - return this; - } - - /** - * Returns the maximum value of the matrix - * @return {number} - */ - max() { - var v = this.get(0, 0); - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - if (this.get(i, j) > v) { - v = this.get(i, j); - } - } - } - return v; - } - - /** - * Returns the index of the maximum value - * @return {Array} - */ - maxIndex() { - var v = this.get(0, 0); - var idx = [0, 0]; - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - if (this.get(i, j) > v) { - v = this.get(i, j); - idx[0] = i; - idx[1] = j; - } - } - } - return idx; - } - - /** - * Returns the minimum value of the matrix - * @return {number} - */ - min() { - var v = this.get(0, 0); - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - if (this.get(i, j) < v) { - v = this.get(i, j); - } - } - } - return v; - } - - /** - * Returns the index of the minimum value - * @return {Array} - */ - minIndex() { - var v = this.get(0, 0); - var idx = [0, 0]; - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - if (this.get(i, j) < v) { - v = this.get(i, j); - idx[0] = i; - idx[1] = j; - } - } - } - return idx; - } - - /** - * Returns the maximum value of one row - * @param {number} row - Row index - * @return {number} - */ - maxRow(row) { - checkRowIndex(this, row); - var v = this.get(row, 0); - for (var i = 1; i < this.columns; i++) { - if (this.get(row, i) > v) { - v = this.get(row, i); - } - } - return v; - } - - /** - * Returns the index of the maximum value of one row - * @param {number} row - Row index - * @return {Array} - */ - maxRowIndex(row) { - checkRowIndex(this, row); - var v = this.get(row, 0); - var idx = [row, 0]; - for (var i = 1; i < this.columns; i++) { - if (this.get(row, i) > v) { - v = this.get(row, i); - idx[1] = i; - } - } - return idx; - } - - /** - * Returns the minimum value of one row - * @param {number} row - Row index - * @return {number} - */ - minRow(row) { - checkRowIndex(this, row); - var v = this.get(row, 0); - for (var i = 1; i < this.columns; i++) { - if (this.get(row, i) < v) { - v = this.get(row, i); - } - } - return v; - } - - /** - * Returns the index of the maximum value of one row - * @param {number} row - Row index - * @return {Array} - */ - minRowIndex(row) { - checkRowIndex(this, row); - var v = this.get(row, 0); - var idx = [row, 0]; - for (var i = 1; i < this.columns; i++) { - if (this.get(row, i) < v) { - v = this.get(row, i); - idx[1] = i; - } - } - return idx; - } - - /** - * Returns the maximum value of one column - * @param {number} column - Column index - * @return {number} - */ - maxColumn(column) { - checkColumnIndex(this, column); - var v = this.get(0, column); - for (var i = 1; i < this.rows; i++) { - if (this.get(i, column) > v) { - v = this.get(i, column); - } - } - return v; - } - - /** - * Returns the index of the maximum value of one column - * @param {number} column - Column index - * @return {Array} - */ - maxColumnIndex(column) { - checkColumnIndex(this, column); - var v = this.get(0, column); - var idx = [0, column]; - for (var i = 1; i < this.rows; i++) { - if (this.get(i, column) > v) { - v = this.get(i, column); - idx[0] = i; - } - } - return idx; - } - - /** - * Returns the minimum value of 
one column - * @param {number} column - Column index - * @return {number} - */ - minColumn(column) { - checkColumnIndex(this, column); - var v = this.get(0, column); - for (var i = 1; i < this.rows; i++) { - if (this.get(i, column) < v) { - v = this.get(i, column); - } - } - return v; - } - - /** - * Returns the index of the minimum value of one column - * @param {number} column - Column index - * @return {Array} - */ - minColumnIndex(column) { - checkColumnIndex(this, column); - var v = this.get(0, column); - var idx = [0, column]; - for (var i = 1; i < this.rows; i++) { - if (this.get(i, column) < v) { - v = this.get(i, column); - idx[0] = i; - } - } - return idx; - } - - /** - * Returns an array containing the diagonal values of the matrix - * @return {Array} - */ - diag() { - var min = Math.min(this.rows, this.columns); - var diag = new Array(min); - for (var i = 0; i < min; i++) { - diag[i] = this.get(i, i); - } - return diag; - } - - /** - * Returns the sum by the argument given, if no argument given, - * it returns the sum of all elements of the matrix. - * @param {string} by - sum by 'row' or 'column'. - * @return {Matrix|number} - */ - sum(by) { - switch (by) { - case 'row': - return sumByRow(this); - case 'column': - return sumByColumn(this); - default: - return sumAll(this); - } - } - - /** - * Returns the mean of all elements of the matrix - * @return {number} - */ - mean() { - return this.sum() / this.size; - } - - /** - * Returns the product of all elements of the matrix - * @return {number} - */ - prod() { - var prod = 1; - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - prod *= this.get(i, j); - } - } - return prod; - } - - /** - * Returns the norm of a matrix. - * @param {string} type - "frobenius" (default) or "max" return resp. the Frobenius norm and the max norm. 
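- * @example
- * // usage sketch, example values assumed
- * var matrix = new Matrix([[3, 4]]);
- * matrix.norm(); // 5, the Frobenius norm: sqrt(3*3 + 4*4)
- * matrix.norm('max'); // 4, the largest element (delegates to this.max())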
- * @return {number} - */ - norm(type = 'frobenius') { - var result = 0; - if (type === 'max') { - return this.max(); - } else if (type === 'frobenius') { - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - result = result + this.get(i, j) * this.get(i, j); - } - } - return Math.sqrt(result); - } else { - throw new RangeError(`unknown norm type: ${type}`); - } - } - - /** - * Computes the cumulative sum of the matrix elements (in place, row by row) - * @return {Matrix} this - */ - cumulativeSum() { - var sum = 0; - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - sum += this.get(i, j); - this.set(i, j, sum); - } - } - return this; - } - - /** - * Computes the dot (scalar) product between the matrix and another - * @param {Matrix} vector2 vector - * @return {number} - */ - dot(vector2) { - if (Matrix.isMatrix(vector2)) vector2 = vector2.to1DArray(); - var vector1 = this.to1DArray(); - if (vector1.length !== vector2.length) { - throw new RangeError('vectors do not have the same size'); - } - var dot = 0; - for (var i = 0; i < vector1.length; i++) { - dot += vector1[i] * vector2[i]; - } - return dot; - } - - /** - * Returns the matrix product between this and other - * @param {Matrix} other - * @return {Matrix} - */ - mmul(other) { - other = this.constructor.checkMatrix(other); - if (this.columns !== other.rows) { - // eslint-disable-next-line no-console - console.warn('Number of columns of left matrix are not equal to number of rows of right matrix.'); - } - - var m = this.rows; - var n = this.columns; - var p = other.columns; - - var result = new this.constructor[Symbol.species](m, p); - - var Bcolj = new Array(n); - for (var j = 0; j < p; j++) { - for (var k = 0; k < n; k++) { - Bcolj[k] = other.get(k, j); - } - - for (var i = 0; i < m; i++) { - var s = 0; - for (k = 0; k < n; k++) { - s += this.get(i, k) * Bcolj[k]; - } - - result.set(i, j, s); - } - } - return result; - } - - strassen2x2(other) { - var result = new this.constructor[Symbol.species](2, 2); - const a11 = this.get(0, 0); - const b11 = other.get(0, 0); - const a12 = this.get(0, 1); - const b12 = other.get(0, 1); - const a21 = this.get(1, 0); - const b21 = other.get(1, 0); - const a22 = this.get(1, 1); - const b22 = other.get(1, 1); - - // Compute intermediate values. - const m1 = (a11 + a22) * (b11 + b22); - const m2 = (a21 + a22) * b11; - const m3 = a11 * (b12 - b22); - const m4 = a22 * (b21 - b11); - const m5 = (a11 + a12) * b22; - const m6 = (a21 - a11) * (b11 + b12); - const m7 = (a12 - a22) * (b21 + b22); - - // Combine intermediate values into the output. 
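- // c00..c11 are the four entries of the 2x2 product, recombined from the seven
- // Strassen products m1..m7 (7 multiplications instead of the naive 8).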
- const c00 = m1 + m4 - m5 + m7; - const c01 = m3 + m5; - const c10 = m2 + m4; - const c11 = m1 - m2 + m3 + m6; - - result.set(0, 0, c00); - result.set(0, 1, c01); - result.set(1, 0, c10); - result.set(1, 1, c11); - return result; - } - - strassen3x3(other) { - var result = new this.constructor[Symbol.species](3, 3); - - const a00 = this.get(0, 0); - const a01 = this.get(0, 1); - const a02 = this.get(0, 2); - const a10 = this.get(1, 0); - const a11 = this.get(1, 1); - const a12 = this.get(1, 2); - const a20 = this.get(2, 0); - const a21 = this.get(2, 1); - const a22 = this.get(2, 2); - - const b00 = other.get(0, 0); - const b01 = other.get(0, 1); - const b02 = other.get(0, 2); - const b10 = other.get(1, 0); - const b11 = other.get(1, 1); - const b12 = other.get(1, 2); - const b20 = other.get(2, 0); - const b21 = other.get(2, 1); - const b22 = other.get(2, 2); - - const m1 = (a00 + a01 + a02 - a10 - a11 - a21 - a22) * b11; - const m2 = (a00 - a10) * (-b01 + b11); - const m3 = a11 * (-b00 + b01 + b10 - b11 - b12 - b20 + b22); - const m4 = (-a00 + a10 + a11) * (b00 - b01 + b11); - const m5 = (a10 + a11) * (-b00 + b01); - const m6 = a00 * b00; - const m7 = (-a00 + a20 + a21) * (b00 - b02 + b12); - const m8 = (-a00 + a20) * (b02 - b12); - const m9 = (a20 + a21) * (-b00 + b02); - const m10 = (a00 + a01 + a02 - a11 - a12 - a20 - a21) * b12; - const m11 = a21 * (-b00 + b02 + b10 - b11 - b12 - b20 + b21); - const m12 = (-a02 + a21 + a22) * (b11 + b20 - b21); - const m13 = (a02 - a22) * (b11 - b21); - const m14 = a02 * b20; - const m15 = (a21 + a22) * (-b20 + b21); - const m16 = (-a02 + a11 + a12) * (b12 + b20 - b22); - const m17 = (a02 - a12) * (b12 - b22); - const m18 = (a11 + a12) * (-b20 + b22); - const m19 = a01 * b10; - const m20 = a12 * b21; - const m21 = a10 * b02; - const m22 = a20 * b01; - const m23 = a22 * b22; - - const c00 = m6 + m14 + m19; - const c01 = m1 + m4 + m5 + m6 + m12 + m14 + m15; - const c02 = m6 + m7 + m9 + m10 + m14 + m16 + m18; - const c10 = m2 + m3 + m4 + m6 + m14 + m16 + m17; - const c11 = m2 + m4 + m5 + m6 + m20; - const c12 = m14 + m16 + m17 + m18 + m21; - const c20 = m6 + m7 + m8 + m11 + m12 + m13 + m14; - const c21 = m12 + m13 + m14 + m15 + m22; - const c22 = m6 + m7 + m8 + m9 + m23; - - result.set(0, 0, c00); - result.set(0, 1, c01); - result.set(0, 2, c02); - result.set(1, 0, c10); - result.set(1, 1, c11); - result.set(1, 2, c12); - result.set(2, 0, c20); - result.set(2, 1, c21); - result.set(2, 2, c22); - return result; - } - - /** - * Returns the matrix product between x and y. More efficient than mmul(other) only when we multiply squared matrix and when the size of the matrix is > 1000. - * @param {Matrix} y - * @return {Matrix} - */ - mmulStrassen(y) { - var x = this.clone(); - var r1 = x.rows; - var c1 = x.columns; - var r2 = y.rows; - var c2 = y.columns; - if (c1 !== r2) { - // eslint-disable-next-line no-console - console.warn(`Multiplying ${r1} x ${c1} and ${r2} x ${c2} matrix: dimensions do not match.`); - } - - // Put a matrix into the top left of a matrix of zeros. - // `rows` and `cols` are the dimensions of the output matrix. - function embed(mat, rows, cols) { - var r = mat.rows; - var c = mat.columns; - if ((r === rows) && (c === cols)) { - return mat; - } else { - var resultat = Matrix.zeros(rows, cols); - resultat = resultat.setSubMatrix(mat, 0, 0); - return resultat; - } - } - - - // Make sure both matrices are the same size. - // This is exclusively for simplicity: - // this algorithm can be implemented with matrices of different sizes. 
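- // Zero-pad both operands (using the embed helper above) to a common r x c shape,
- // so the recursive blockMult below always sees two matrices of identical dimensions.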
- - var r = Math.max(r1, r2); - var c = Math.max(c1, c2); - x = embed(x, r, c); - y = embed(y, r, c); - - // Our recursive multiplication function. - function blockMult(a, b, rows, cols) { - // For small matrices, resort to naive multiplication. - if (rows <= 512 || cols <= 512) { - return a.mmul(b); // a is equivalent to this - } - - // Apply dynamic padding. - if ((rows % 2 === 1) && (cols % 2 === 1)) { - a = embed(a, rows + 1, cols + 1); - b = embed(b, rows + 1, cols + 1); - } else if (rows % 2 === 1) { - a = embed(a, rows + 1, cols); - b = embed(b, rows + 1, cols); - } else if (cols % 2 === 1) { - a = embed(a, rows, cols + 1); - b = embed(b, rows, cols + 1); - } - - var halfRows = parseInt(a.rows / 2, 10); - var halfCols = parseInt(a.columns / 2, 10); - // Subdivide input matrices. - var a11 = a.subMatrix(0, halfRows - 1, 0, halfCols - 1); - var b11 = b.subMatrix(0, halfRows - 1, 0, halfCols - 1); - - var a12 = a.subMatrix(0, halfRows - 1, halfCols, a.columns - 1); - var b12 = b.subMatrix(0, halfRows - 1, halfCols, b.columns - 1); - - var a21 = a.subMatrix(halfRows, a.rows - 1, 0, halfCols - 1); - var b21 = b.subMatrix(halfRows, b.rows - 1, 0, halfCols - 1); - - var a22 = a.subMatrix(halfRows, a.rows - 1, halfCols, a.columns - 1); - var b22 = b.subMatrix(halfRows, b.rows - 1, halfCols, b.columns - 1); - - // Compute intermediate values. - var m1 = blockMult(Matrix.add(a11, a22), Matrix.add(b11, b22), halfRows, halfCols); - var m2 = blockMult(Matrix.add(a21, a22), b11, halfRows, halfCols); - var m3 = blockMult(a11, Matrix.sub(b12, b22), halfRows, halfCols); - var m4 = blockMult(a22, Matrix.sub(b21, b11), halfRows, halfCols); - var m5 = blockMult(Matrix.add(a11, a12), b22, halfRows, halfCols); - var m6 = blockMult(Matrix.sub(a21, a11), Matrix.add(b11, b12), halfRows, halfCols); - var m7 = blockMult(Matrix.sub(a12, a22), Matrix.add(b21, b22), halfRows, halfCols); - - // Combine intermediate values into the output. - var c11 = Matrix.add(m1, m4); - c11.sub(m5); - c11.add(m7); - var c12 = Matrix.add(m3, m5); - var c21 = Matrix.add(m2, m4); - var c22 = Matrix.sub(m1, m2); - c22.add(m3); - c22.add(m6); - - // Crop output to the desired size (undo dynamic padding). - var resultat = Matrix.zeros(2 * c11.rows, 2 * c11.columns); - resultat = resultat.setSubMatrix(c11, 0, 0); - resultat = resultat.setSubMatrix(c12, c11.rows, 0); - resultat = resultat.setSubMatrix(c21, 0, c11.columns); - resultat = resultat.setSubMatrix(c22, c11.rows, c11.columns); - return resultat.subMatrix(0, rows - 1, 0, cols - 1); - } - return blockMult(x, y, r, c); - } - - /** - * Returns a row-by-row scaled matrix - * @param {number} [min=0] - Minimum scaled value - * @param {number} [max=1] - Maximum scaled value - * @return {Matrix} - The scaled matrix - */ - scaleRows(min, max) { - min = min === undefined ? 0 : min; - max = max === undefined ? 
1 : max; - if (min >= max) { - throw new RangeError('min should be strictly smaller than max'); - } - var newMatrix = this.constructor.empty(this.rows, this.columns); - for (var i = 0; i < this.rows; i++) { - var scaled = ml_array_rescale_lib_es6(this.getRow(i), { min, max }); - newMatrix.setRow(i, scaled); - } - return newMatrix; - } - - /** - * Returns a new column-by-column scaled matrix - * @param {number} [min=0] - Minimum scaled value - * @param {number} [max=1] - Maximum scaled value - * @return {Matrix} - The new scaled matrix - * @example - * var matrix = new Matrix([[1,2],[-1,0]]); - * var scaledMatrix = matrix.scaleColumns(); // [[1,1],[0,0]] - */ - scaleColumns(min, max) { - min = min === undefined ? 0 : min; - max = max === undefined ? 1 : max; - if (min >= max) { - throw new RangeError('min should be strictly smaller than max'); - } - var newMatrix = this.constructor.empty(this.rows, this.columns); - for (var i = 0; i < this.columns; i++) { - var scaled = ml_array_rescale_lib_es6(this.getColumn(i), { - min: min, - max: max - }); - newMatrix.setColumn(i, scaled); - } - return newMatrix; - } - - - /** - * Returns the Kronecker product (also known as tensor product) between this and other - * See https://en.wikipedia.org/wiki/Kronecker_product - * @param {Matrix} other - * @return {Matrix} - */ - kroneckerProduct(other) { - other = this.constructor.checkMatrix(other); - - var m = this.rows; - var n = this.columns; - var p = other.rows; - var q = other.columns; - - var result = new this.constructor[Symbol.species](m * p, n * q); - for (var i = 0; i < m; i++) { - for (var j = 0; j < n; j++) { - for (var k = 0; k < p; k++) { - for (var l = 0; l < q; l++) { - result[p * i + k][q * j + l] = this.get(i, j) * other.get(k, l); - } - } - } - } - return result; - } - - /** - * Transposes the matrix and returns a new one containing the result - * @return {Matrix} - */ - transpose() { - var result = new this.constructor[Symbol.species](this.columns, this.rows); - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - result.set(j, i, this.get(i, j)); - } - } - return result; - } - - /** - * Sorts the rows (in place) - * @param {function} compareFunction - usual Array.prototype.sort comparison function - * @return {Matrix} this - */ - sortRows(compareFunction) { - if (compareFunction === undefined) compareFunction = compareNumbers; - for (var i = 0; i < this.rows; i++) { - this.setRow(i, this.getRow(i).sort(compareFunction)); - } - return this; - } - - /** - * Sorts the columns (in place) - * @param {function} compareFunction - usual Array.prototype.sort comparison function - * @return {Matrix} this - */ - sortColumns(compareFunction) { - if (compareFunction === undefined) compareFunction = compareNumbers; - for (var i = 0; i < this.columns; i++) { - this.setColumn(i, this.getColumn(i).sort(compareFunction)); - } - return this; - } - - /** - * Returns a subset of the matrix - * @param {number} startRow - First row index - * @param {number} endRow - Last row index - * @param {number} startColumn - First column index - * @param {number} endColumn - Last column index - * @return {Matrix} - */ - subMatrix(startRow, endRow, startColumn, endColumn) { - checkRange(this, startRow, endRow, startColumn, endColumn); - var newMatrix = new this.constructor[Symbol.species](endRow - startRow + 1, endColumn - startColumn + 1); - for (var i = startRow; i <= endRow; i++) { - for (var j = startColumn; j <= endColumn; j++) { - newMatrix[i - startRow][j - startColumn] = this.get(i, 
j); - } - } - return newMatrix; - } - - /** - * Returns a subset of the matrix based on an array of row indices - * @param {Array} indices - Array containing the row indices - * @param {number} [startColumn = 0] - First column index - * @param {number} [endColumn = this.columns-1] - Last column index - * @return {Matrix} - */ - subMatrixRow(indices, startColumn, endColumn) { - if (startColumn === undefined) startColumn = 0; - if (endColumn === undefined) endColumn = this.columns - 1; - if ((startColumn > endColumn) || (startColumn < 0) || (startColumn >= this.columns) || (endColumn < 0) || (endColumn >= this.columns)) { - throw new RangeError('Argument out of range'); - } - - var newMatrix = new this.constructor[Symbol.species](indices.length, endColumn - startColumn + 1); - for (var i = 0; i < indices.length; i++) { - for (var j = startColumn; j <= endColumn; j++) { - if (indices[i] < 0 || indices[i] >= this.rows) { - throw new RangeError(`Row index out of range: ${indices[i]}`); - } - newMatrix.set(i, j - startColumn, this.get(indices[i], j)); - } - } - return newMatrix; - } - - /** - * Returns a subset of the matrix based on an array of column indices - * @param {Array} indices - Array containing the column indices - * @param {number} [startRow = 0] - First row index - * @param {number} [endRow = this.rows-1] - Last row index - * @return {Matrix} - */ - subMatrixColumn(indices, startRow, endRow) { - if (startRow === undefined) startRow = 0; - if (endRow === undefined) endRow = this.rows - 1; - if ((startRow > endRow) || (startRow < 0) || (startRow >= this.rows) || (endRow < 0) || (endRow >= this.rows)) { - throw new RangeError('Argument out of range'); - } - - var newMatrix = new this.constructor[Symbol.species](endRow - startRow + 1, indices.length); - for (var i = 0; i < indices.length; i++) { - for (var j = startRow; j <= endRow; j++) { - if (indices[i] < 0 || indices[i] >= this.columns) { - throw new RangeError(`Column index out of range: ${indices[i]}`); - } - newMatrix.set(j - startRow, i, this.get(j, indices[i])); - } - } - return newMatrix; - } - - /** - * Set a part of the matrix to the given sub-matrix - * @param {Matrix|Array< Array >} matrix - The source matrix from which to extract values. - * @param {number} startRow - The index of the first row to set - * @param {number} startColumn - The index of the first column to set - * @return {Matrix} - */ - setSubMatrix(matrix, startRow, startColumn) { - matrix = this.constructor.checkMatrix(matrix); - var endRow = startRow + matrix.rows - 1; - var endColumn = startColumn + matrix.columns - 1; - checkRange(this, startRow, endRow, startColumn, endColumn); - for (var i = 0; i < matrix.rows; i++) { - for (var j = 0; j < matrix.columns; j++) { - this[startRow + i][startColumn + j] = matrix.get(i, j); - } - } - return this; - } - - /** - * Return a new matrix based on a selection of rows and columns - * @param {Array} rowIndices - The row indices to select. Order matters and an index can be more than once. - * @param {Array} columnIndices - The column indices to select. Order matters and an index can be use more than once. 
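- * @example
- * // usage sketch, example values assumed; note that indices may repeat
- * var matrix = new Matrix([[1, 2, 3], [4, 5, 6]]);
- * matrix.selection([0, 0], [2, 0]); // [[3, 1], [3, 1]]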
- * @return {Matrix} The new matrix - */ - selection(rowIndices, columnIndices) { - var indices = checkIndices(this, rowIndices, columnIndices); - var newMatrix = new this.constructor[Symbol.species](rowIndices.length, columnIndices.length); - for (var i = 0; i < indices.row.length; i++) { - var rowIndex = indices.row[i]; - for (var j = 0; j < indices.column.length; j++) { - var columnIndex = indices.column[j]; - newMatrix[i][j] = this.get(rowIndex, columnIndex); - } - } - return newMatrix; - } - - /** - * Returns the trace of the matrix (sum of the diagonal elements) - * @return {number} - */ - trace() { - var min = Math.min(this.rows, this.columns); - var trace = 0; - for (var i = 0; i < min; i++) { - trace += this.get(i, i); - } - return trace; - } - - /* - Matrix views - */ - - /** - * Returns a view of the transposition of the matrix - * @return {MatrixTransposeView} - */ - transposeView() { - return new transpose_MatrixTransposeView(this); - } - - /** - * Returns a view of the row vector with the given index - * @param {number} row - row index of the vector - * @return {MatrixRowView} - */ - rowView(row) { - checkRowIndex(this, row); - return new row_MatrixRowView(this, row); - } - - /** - * Returns a view of the column vector with the given index - * @param {number} column - column index of the vector - * @return {MatrixColumnView} - */ - columnView(column) { - checkColumnIndex(this, column); - return new column_MatrixColumnView(this, column); - } - - /** - * Returns a view of the matrix flipped in the row axis - * @return {MatrixFlipRowView} - */ - flipRowView() { - return new flipRow_MatrixFlipRowView(this); - } - - /** - * Returns a view of the matrix flipped in the column axis - * @return {MatrixFlipColumnView} - */ - flipColumnView() { - return new flipColumn_MatrixFlipColumnView(this); - } - - /** - * Returns a view of a submatrix giving the index boundaries - * @param {number} startRow - first row index of the submatrix - * @param {number} endRow - last row index of the submatrix - * @param {number} startColumn - first column index of the submatrix - * @param {number} endColumn - last column index of the submatrix - * @return {MatrixSubView} - */ - subMatrixView(startRow, endRow, startColumn, endColumn) { - return new sub_MatrixSubView(this, startRow, endRow, startColumn, endColumn); - } - - /** - * Returns a view of the cross of the row indices and the column indices - * @example - * // resulting vector is [[2], [2]] - * var matrix = new Matrix([[1,2,3], [4,5,6]]).selectionView([0, 0], [1]) - * @param {Array} rowIndices - * @param {Array} columnIndices - * @return {MatrixSelectionView} - */ - selectionView(rowIndices, columnIndices) { - return new selection_MatrixSelectionView(this, rowIndices, columnIndices); - } - - /** - * Returns a view of the row indices - * @example - * // resulting vector is [[1,2,3], [1,2,3]] - * var matrix = new Matrix([[1,2,3], [4,5,6]]).rowSelectionView([0, 0]) - * @param {Array} rowIndices - * @return {MatrixRowSelectionView} - */ - rowSelectionView(rowIndices) { - return new rowSelection_MatrixRowSelectionView(this, rowIndices); - } - - /** - * Returns a view of the column indices - * @example - * // resulting vector is [[2, 2], [5, 5]] - * var matrix = new Matrix([[1,2,3], [4,5,6]]).columnSelectionView([1, 1]) - * @param {Array} columnIndices - * @return {MatrixColumnSelectionView} - */ - columnSelectionView(columnIndices) { - return new columnSelection_MatrixColumnSelectionView(this, columnIndices); - } - - - /** - * Calculates and returns the 
determinant of a matrix as a Number - * @example - * new Matrix([[1,2,3], [4,5,6]]).det() - * @return {number} - */ - det() { - if (this.isSquare()) { - var a, b, c, d; - if (this.columns === 2) { - // 2 x 2 matrix - a = this.get(0, 0); - b = this.get(0, 1); - c = this.get(1, 0); - d = this.get(1, 1); - - return a * d - (b * c); - } else if (this.columns === 3) { - // 3 x 3 matrix - var subMatrix0, subMatrix1, subMatrix2; - subMatrix0 = this.selectionView([1, 2], [1, 2]); - subMatrix1 = this.selectionView([1, 2], [0, 2]); - subMatrix2 = this.selectionView([1, 2], [0, 1]); - a = this.get(0, 0); - b = this.get(0, 1); - c = this.get(0, 2); - - return a * subMatrix0.det() - b * subMatrix1.det() + c * subMatrix2.det(); - } else { - // general purpose determinant using the LU decomposition - return new lu_LuDecomposition(this).determinant; - } - } else { - throw Error('Determinant can only be calculated for a square matrix.'); - } - } - - /** - * Returns inverse of a matrix if it exists or the pseudoinverse - * @param {number} threshold - threshold for taking inverse of singular values (default = 1e-15) - * @return {Matrix} the (pseudo)inverted matrix. - */ - pseudoInverse(threshold) { - if (threshold === undefined) threshold = Number.EPSILON; - var svdSolution = new svd_SingularValueDecomposition(this, { autoTranspose: true }); - - var U = svdSolution.leftSingularVectors; - var V = svdSolution.rightSingularVectors; - var s = svdSolution.diagonal; - - for (var i = 0; i < s.length; i++) { - if (Math.abs(s[i]) > threshold) { - s[i] = 1.0 / s[i]; - } else { - s[i] = 0.0; - } - } - - // convert list to diagonal - s = this.constructor[Symbol.species].diag(s); - return V.mmul(s.mmul(U.transposeView())); - } - - /** - * Creates an exact and independent copy of the matrix - * @return {Matrix} - */ - clone() { - var newMatrix = new this.constructor[Symbol.species](this.rows, this.columns); - for (var row = 0; row < this.rows; row++) { - for (var column = 0; column < this.columns; column++) { - newMatrix.set(row, column, this.get(row, column)); - } - } - return newMatrix; - } - } - - Matrix.prototype.klass = 'Matrix'; - - function compareNumbers(a, b) { - return a - b; - } - - /* - Synonyms - */ - - Matrix.random = Matrix.rand; - Matrix.diagonal = Matrix.diag; - Matrix.prototype.diagonal = Matrix.prototype.diag; - Matrix.identity = Matrix.eye; - Matrix.prototype.negate = Matrix.prototype.neg; - Matrix.prototype.tensorProduct = Matrix.prototype.kroneckerProduct; - Matrix.prototype.determinant = Matrix.prototype.det; - - /* - Add dynamically instance and static methods for mathematical operations - */ - - var inplaceOperator = ` -(function %name%(value) { - if (typeof value === 'number') return this.%name%S(value); - return this.%name%M(value); -}) -`; - - var inplaceOperatorScalar = ` -(function %name%S(value) { - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - this.set(i, j, this.get(i, j) %op% value); - } - } - return this; -}) -`; - - var inplaceOperatorMatrix = ` -(function %name%M(matrix) { - matrix = this.constructor.checkMatrix(matrix); - if (this.rows !== matrix.rows || - this.columns !== matrix.columns) { - throw new RangeError('Matrices dimensions must be equal'); - } - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - this.set(i, j, this.get(i, j) %op% matrix.get(i, j)); - } - } - return this; -}) -`; - - var staticOperator = ` -(function %name%(matrix, value) { - var newMatrix = new this[Symbol.species](matrix); - return 
newMatrix.%name%(value); -}) -`; - - var inplaceMethod = ` -(function %name%() { - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - this.set(i, j, %method%(this.get(i, j))); - } - } - return this; -}) -`; - - var staticMethod = ` -(function %name%(matrix) { - var newMatrix = new this[Symbol.species](matrix); - return newMatrix.%name%(); -}) -`; - - var inplaceMethodWithArgs = ` -(function %name%(%args%) { - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - this.set(i, j, %method%(this.get(i, j), %args%)); - } - } - return this; -}) -`; - - var staticMethodWithArgs = ` -(function %name%(matrix, %args%) { - var newMatrix = new this[Symbol.species](matrix); - return newMatrix.%name%(%args%); -}) -`; - - - var inplaceMethodWithOneArgScalar = ` -(function %name%S(value) { - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - this.set(i, j, %method%(this.get(i, j), value)); - } - } - return this; -}) -`; - var inplaceMethodWithOneArgMatrix = ` -(function %name%M(matrix) { - matrix = this.constructor.checkMatrix(matrix); - if (this.rows !== matrix.rows || - this.columns !== matrix.columns) { - throw new RangeError('Matrices dimensions must be equal'); - } - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - this.set(i, j, %method%(this.get(i, j), matrix.get(i, j))); - } - } - return this; -}) -`; - - var inplaceMethodWithOneArg = ` -(function %name%(value) { - if (typeof value === 'number') return this.%name%S(value); - return this.%name%M(value); -}) -`; - - var staticMethodWithOneArg = staticMethodWithArgs; - - var operators = [ - // Arithmetic operators - ['+', 'add'], - ['-', 'sub', 'subtract'], - ['*', 'mul', 'multiply'], - ['/', 'div', 'divide'], - ['%', 'mod', 'modulus'], - // Bitwise operators - ['&', 'and'], - ['|', 'or'], - ['^', 'xor'], - ['<<', 'leftShift'], - ['>>', 'signPropagatingRightShift'], - ['>>>', 'rightShift', 'zeroFillRightShift'] - ]; - - var i; - var eval2 = eval; // eslint-disable-line no-eval - for (var operator of operators) { - var inplaceOp = eval2(fillTemplateFunction(inplaceOperator, { name: operator[1], op: operator[0] })); - var inplaceOpS = eval2(fillTemplateFunction(inplaceOperatorScalar, { name: `${operator[1]}S`, op: operator[0] })); - var inplaceOpM = eval2(fillTemplateFunction(inplaceOperatorMatrix, { name: `${operator[1]}M`, op: operator[0] })); - var staticOp = eval2(fillTemplateFunction(staticOperator, { name: operator[1] })); - for (i = 1; i < operator.length; i++) { - Matrix.prototype[operator[i]] = inplaceOp; - Matrix.prototype[`${operator[i]}S`] = inplaceOpS; - Matrix.prototype[`${operator[i]}M`] = inplaceOpM; - Matrix[operator[i]] = staticOp; - } - } - - var methods = [['~', 'not']]; - - [ - 'abs', 'acos', 'acosh', 'asin', 'asinh', 'atan', 'atanh', 'cbrt', 'ceil', - 'clz32', 'cos', 'cosh', 'exp', 'expm1', 'floor', 'fround', 'log', 'log1p', - 'log10', 'log2', 'round', 'sign', 'sin', 'sinh', 'sqrt', 'tan', 'tanh', 'trunc' - ].forEach(function (mathMethod) { - methods.push([`Math.${mathMethod}`, mathMethod]); - }); - - for (var method of methods) { - var inplaceMeth = eval2(fillTemplateFunction(inplaceMethod, { name: method[1], method: method[0] })); - var staticMeth = eval2(fillTemplateFunction(staticMethod, { name: method[1] })); - for (i = 1; i < method.length; i++) { - Matrix.prototype[method[i]] = inplaceMeth; - Matrix[method[i]] = staticMeth; - } - } - - var methodsWithArgs = [['Math.pow', 1, 'pow']]; - - for (var 
methodWithArg of methodsWithArgs) { - var args = 'arg0'; - for (i = 1; i < methodWithArg[1]; i++) { - args += `, arg${i}`; - } - if (methodWithArg[1] !== 1) { - var inplaceMethWithArgs = eval2(fillTemplateFunction(inplaceMethodWithArgs, { - name: methodWithArg[2], - method: methodWithArg[0], - args: args - })); - var staticMethWithArgs = eval2(fillTemplateFunction(staticMethodWithArgs, { name: methodWithArg[2], args: args })); - for (i = 2; i < methodWithArg.length; i++) { - Matrix.prototype[methodWithArg[i]] = inplaceMethWithArgs; - Matrix[methodWithArg[i]] = staticMethWithArgs; - } - } else { - var tmplVar = { - name: methodWithArg[2], - args: args, - method: methodWithArg[0] - }; - var inplaceMethod2 = eval2(fillTemplateFunction(inplaceMethodWithOneArg, tmplVar)); - var inplaceMethodS = eval2(fillTemplateFunction(inplaceMethodWithOneArgScalar, tmplVar)); - var inplaceMethodM = eval2(fillTemplateFunction(inplaceMethodWithOneArgMatrix, tmplVar)); - var staticMethod2 = eval2(fillTemplateFunction(staticMethodWithOneArg, tmplVar)); - for (i = 2; i < methodWithArg.length; i++) { - Matrix.prototype[methodWithArg[i]] = inplaceMethod2; - Matrix.prototype[`${methodWithArg[i]}M`] = inplaceMethodM; - Matrix.prototype[`${methodWithArg[i]}S`] = inplaceMethodS; - Matrix[methodWithArg[i]] = staticMethod2; - } - } - } - - function fillTemplateFunction(template, values) { - for (var value in values) { - template = template.replace(new RegExp(`%${value}%`, 'g'), values[value]); - } - return template; - } - - return Matrix; -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/matrix.js - - - -class matrix_Matrix extends AbstractMatrix(Array) { - constructor(nRows, nColumns) { - var i; - if (arguments.length === 1 && typeof nRows === 'number') { - return new Array(nRows); - } - if (matrix_Matrix.isMatrix(nRows)) { - return nRows.clone(); - } else if (Number.isInteger(nRows) && nRows > 0) { - // Create an empty matrix - super(nRows); - if (Number.isInteger(nColumns) && nColumns > 0) { - for (i = 0; i < nRows; i++) { - this[i] = new Array(nColumns); - } - } else { - throw new TypeError('nColumns must be a positive integer'); - } - } else if (Array.isArray(nRows)) { - // Copy the values from the 2D array - const matrix = nRows; - nRows = matrix.length; - nColumns = matrix[0].length; - if (typeof nColumns !== 'number' || nColumns === 0) { - throw new TypeError( - 'Data must be a 2D array with at least one element' - ); - } - super(nRows); - for (i = 0; i < nRows; i++) { - if (matrix[i].length !== nColumns) { - throw new RangeError('Inconsistent array dimensions'); - } - this[i] = [].concat(matrix[i]); - } - } else { - throw new TypeError( - 'First argument must be a positive number or an array' - ); - } - this.rows = nRows; - this.columns = nColumns; - return this; - } - - set(rowIndex, columnIndex, value) { - this[rowIndex][columnIndex] = value; - return this; - } - - get(rowIndex, columnIndex) { - return this[rowIndex][columnIndex]; - } - - /** - * Removes a row from the given index - * @param {number} index - Row index - * @return {Matrix} this - */ - removeRow(index) { - checkRowIndex(this, index); - if (this.rows === 1) { - throw new RangeError('A matrix cannot have less than one row'); - } - this.splice(index, 1); - this.rows -= 1; - return this; - } - - /** - * Adds a row at the given index - * @param {number} [index = this.rows] - Row index - * @param {Array|Matrix} array - Array or vector - * @return {Matrix} this - */ - addRow(index, array) { - if (array === undefined) { - array = index; - index 
= this.rows; - } - checkRowIndex(this, index, true); - array = checkRowVector(this, array, true); - this.splice(index, 0, array); - this.rows += 1; - return this; - } - - /** - * Removes a column from the given index - * @param {number} index - Column index - * @return {Matrix} this - */ - removeColumn(index) { - checkColumnIndex(this, index); - if (this.columns === 1) { - throw new RangeError('A matrix cannot have less than one column'); - } - for (var i = 0; i < this.rows; i++) { - this[i].splice(index, 1); - } - this.columns -= 1; - return this; - } - - /** - * Adds a column at the given index - * @param {number} [index = this.columns] - Column index - * @param {Array|Matrix} array - Array or vector - * @return {Matrix} this - */ - addColumn(index, array) { - if (typeof array === 'undefined') { - array = index; - index = this.columns; - } - checkColumnIndex(this, index, true); - array = checkColumnVector(this, array); - for (var i = 0; i < this.rows; i++) { - this[i].splice(index, 0, array[i]); - } - this.columns += 1; - return this; - } -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/wrap/WrapperMatrix1D.js - - - -class WrapperMatrix1D_WrapperMatrix1D extends AbstractMatrix() { - /** - * @class WrapperMatrix1D - * @param {Array} data - * @param {object} [options] - * @param {object} [options.rows = 1] - */ - constructor(data, options = {}) { - const { rows = 1 } = options; - - if (data.length % rows !== 0) { - throw new Error('the data length is not divisible by the number of rows'); - } - super(); - this.rows = rows; - this.columns = data.length / rows; - this.data = data; - } - - set(rowIndex, columnIndex, value) { - var index = this._calculateIndex(rowIndex, columnIndex); - this.data[index] = value; - return this; - } - - get(rowIndex, columnIndex) { - var index = this._calculateIndex(rowIndex, columnIndex); - return this.data[index]; - } - - _calculateIndex(row, column) { - return row * this.columns + column; - } - - static get [Symbol.species]() { - return matrix_Matrix; - } -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/wrap/WrapperMatrix2D.js - - - -class WrapperMatrix2D_WrapperMatrix2D extends AbstractMatrix() { - /** - * @class WrapperMatrix2D - * @param {Array>} data - */ - constructor(data) { - super(); - this.data = data; - this.rows = data.length; - this.columns = data[0].length; - } - - set(rowIndex, columnIndex, value) { - this.data[rowIndex][columnIndex] = value; - return this; - } - - get(rowIndex, columnIndex) { - return this.data[rowIndex][columnIndex]; - } - - static get [Symbol.species]() { - return matrix_Matrix; - } -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/wrap/wrap.js - - - -/** - * @param {Array>|Array} array - * @param {object} [options] - * @param {object} [options.rows = 1] - * @return {WrapperMatrix1D|WrapperMatrix2D} - */ -function wrap(array, options) { - if (Array.isArray(array)) { - if (array[0] && Array.isArray(array[0])) { - return new WrapperMatrix2D_WrapperMatrix2D(array); - } else { - return new WrapperMatrix1D_WrapperMatrix1D(array, options); - } - } else { - throw new Error('the argument is not an array'); - } -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/dc/qr.js - - - - -/** - * @class QrDecomposition - * @link https://github.com/lutzroeder/Mapack/blob/master/Source/QrDecomposition.cs - * @param {Matrix} value - */ -class qr_QrDecomposition { - constructor(value) { - value = WrapperMatrix2D_WrapperMatrix2D.checkMatrix(value); - - var qr = value.clone(); - var m = value.rows; - var n = 
value.columns; - var rdiag = new Array(n); - var i, j, k, s; - - for (k = 0; k < n; k++) { - var nrm = 0; - for (i = k; i < m; i++) { - nrm = hypotenuse(nrm, qr.get(i, k)); - } - if (nrm !== 0) { - if (qr.get(k, k) < 0) { - nrm = -nrm; - } - for (i = k; i < m; i++) { - qr.set(i, k, qr.get(i, k) / nrm); - } - qr.set(k, k, qr.get(k, k) + 1); - for (j = k + 1; j < n; j++) { - s = 0; - for (i = k; i < m; i++) { - s += qr.get(i, k) * qr.get(i, j); - } - s = -s / qr.get(k, k); - for (i = k; i < m; i++) { - qr.set(i, j, qr.get(i, j) + s * qr.get(i, k)); - } - } - } - rdiag[k] = -nrm; - } - - this.QR = qr; - this.Rdiag = rdiag; - } - - /** - * Solve a problem of least square (Ax=b) by using the QR decomposition. Useful when A is rectangular, but not working when A is singular. - * Example : We search to approximate x, with A matrix shape m*n, x vector size n, b vector size m (m > n). We will use : - * var qr = QrDecomposition(A); - * var x = qr.solve(b); - * @param {Matrix} value - Matrix 1D which is the vector b (in the equation Ax = b) - * @return {Matrix} - The vector x - */ - solve(value) { - value = matrix_Matrix.checkMatrix(value); - - var qr = this.QR; - var m = qr.rows; - - if (value.rows !== m) { - throw new Error('Matrix row dimensions must agree'); - } - if (!this.isFullRank()) { - throw new Error('Matrix is rank deficient'); - } - - var count = value.columns; - var X = value.clone(); - var n = qr.columns; - var i, j, k, s; - - for (k = 0; k < n; k++) { - for (j = 0; j < count; j++) { - s = 0; - for (i = k; i < m; i++) { - s += qr[i][k] * X[i][j]; - } - s = -s / qr[k][k]; - for (i = k; i < m; i++) { - X[i][j] += s * qr[i][k]; - } - } - } - for (k = n - 1; k >= 0; k--) { - for (j = 0; j < count; j++) { - X[k][j] /= this.Rdiag[k]; - } - for (i = 0; i < k; i++) { - for (j = 0; j < count; j++) { - X[i][j] -= X[k][j] * qr[i][k]; - } - } - } - - return X.subMatrix(0, n - 1, 0, count - 1); - } - - /** - * - * @return {boolean} - */ - isFullRank() { - var columns = this.QR.columns; - for (var i = 0; i < columns; i++) { - if (this.Rdiag[i] === 0) { - return false; - } - } - return true; - } - - /** - * - * @return {Matrix} - */ - get upperTriangularMatrix() { - var qr = this.QR; - var n = qr.columns; - var X = new matrix_Matrix(n, n); - var i, j; - for (i = 0; i < n; i++) { - for (j = 0; j < n; j++) { - if (i < j) { - X[i][j] = qr[i][j]; - } else if (i === j) { - X[i][j] = this.Rdiag[i]; - } else { - X[i][j] = 0; - } - } - } - return X; - } - - /** - * - * @return {Matrix} - */ - get orthogonalMatrix() { - var qr = this.QR; - var rows = qr.rows; - var columns = qr.columns; - var X = new matrix_Matrix(rows, columns); - var i, j, k, s; - - for (k = columns - 1; k >= 0; k--) { - for (i = 0; i < rows; i++) { - X[i][k] = 0; - } - X[k][k] = 1; - for (j = k; j < columns; j++) { - if (qr[k][k] !== 0) { - s = 0; - for (i = k; i < rows; i++) { - s += qr[i][k] * X[i][j]; - } - - s = -s / qr[k][k]; - - for (i = k; i < rows; i++) { - X[i][j] += s * qr[i][k]; - } - } - } - } - return X; - } -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/decompositions.js - - - - - - -/** - * Computes the inverse of a Matrix - * @param {Matrix} matrix - * @param {boolean} [useSVD=false] - * @return {Matrix} - */ -function inverse(matrix, useSVD = false) { - matrix = WrapperMatrix2D_WrapperMatrix2D.checkMatrix(matrix); - if (useSVD) { - return new svd_SingularValueDecomposition(matrix).inverse(); - } else { - return solve(matrix, matrix_Matrix.eye(matrix.rows)); - } -} - -/** - * - * @param {Matrix} leftHandSide - * 
@param {Matrix} rightHandSide - * @param {boolean} [useSVD = false] - * @return {Matrix} - */ -function solve(leftHandSide, rightHandSide, useSVD = false) { - leftHandSide = WrapperMatrix2D_WrapperMatrix2D.checkMatrix(leftHandSide); - rightHandSide = WrapperMatrix2D_WrapperMatrix2D.checkMatrix(rightHandSide); - if (useSVD) { - return new svd_SingularValueDecomposition(leftHandSide).solve(rightHandSide); - } else { - return leftHandSide.isSquare() - ? new lu_LuDecomposition(leftHandSide).solve(rightHandSide) - : new qr_QrDecomposition(leftHandSide).solve(rightHandSide); - } -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/linearDependencies.js - - - - - -// function used by rowsDependencies -function xrange(n, exception) { - var range = []; - for (var i = 0; i < n; i++) { - if (i !== exception) { - range.push(i); - } - } - return range; -} - -// function used by rowsDependencies -function dependenciesOneRow( - error, - matrix, - index, - thresholdValue = 10e-10, - thresholdError = 10e-10 -) { - if (error > thresholdError) { - return new Array(matrix.rows + 1).fill(0); - } else { - var returnArray = matrix.addRow(index, [0]); - for (var i = 0; i < returnArray.rows; i++) { - if (Math.abs(returnArray.get(i, 0)) < thresholdValue) { - returnArray.set(i, 0, 0); - } - } - return returnArray.to1DArray(); - } -} - -/** - * Creates a matrix which represents the dependencies between rows. - * If a row is a linear combination of others rows, the result will be a row with the coefficients of this combination. - * For example : for A = [[2, 0, 0, 1], [0, 1, 6, 0], [0, 3, 0, 1], [0, 0, 1, 0], [0, 1, 2, 0]], the result will be [[0, 0, 0, 0, 0], [0, 0, 0, 4, 1], [0, 0, 0, 0, 0], [0, 0.25, 0, 0, -0.25], [0, 1, 0, -4, 0]] - * @param {Matrix} matrix - * @param {Object} [options] includes thresholdValue and thresholdError. - * @param {number} [options.thresholdValue = 10e-10] If an absolute value is inferior to this threshold, it will equals zero. - * @param {number} [options.thresholdError = 10e-10] If the error is inferior to that threshold, the linear combination found is accepted and the row is dependent from other rows. - * @return {Matrix} the matrix which represents the dependencies between rows. 
- */ - -function linearDependencies(matrix, options = {}) { - const { thresholdValue = 10e-10, thresholdError = 10e-10 } = options; - - var n = matrix.rows; - var results = new matrix_Matrix(n, n); - - for (var i = 0; i < n; i++) { - var b = matrix_Matrix.columnVector(matrix.getRow(i)); - var Abis = matrix.subMatrixRow(xrange(n, i)).transposeView(); - var svd = new svd_SingularValueDecomposition(Abis); - var x = svd.solve(b); - var error = lib_es6( - matrix_Matrix.sub(b, Abis.mmul(x)) - .abs() - .to1DArray() - ); - results.setRow( - i, - dependenciesOneRow(error, x, i, thresholdValue, thresholdError) - ); - } - return results; -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/dc/evd.js - - - - -/** - * @class EigenvalueDecomposition - * @link https://github.com/lutzroeder/Mapack/blob/master/Source/EigenvalueDecomposition.cs - * @param {Matrix} matrix - * @param {object} [options] - * @param {boolean} [options.assumeSymmetric=false] - */ -class evd_EigenvalueDecomposition { - constructor(matrix, options = {}) { - const { assumeSymmetric = false } = options; - - matrix = WrapperMatrix2D_WrapperMatrix2D.checkMatrix(matrix); - if (!matrix.isSquare()) { - throw new Error('Matrix is not a square matrix'); - } - - var n = matrix.columns; - var V = getFilled2DArray(n, n, 0); - var d = new Array(n); - var e = new Array(n); - var value = matrix; - var i, j; - - var isSymmetric = false; - if (assumeSymmetric) { - isSymmetric = true; - } else { - isSymmetric = matrix.isSymmetric(); - } - - if (isSymmetric) { - for (i = 0; i < n; i++) { - for (j = 0; j < n; j++) { - V[i][j] = value.get(i, j); - } - } - tred2(n, e, d, V); - tql2(n, e, d, V); - } else { - var H = getFilled2DArray(n, n, 0); - var ort = new Array(n); - for (j = 0; j < n; j++) { - for (i = 0; i < n; i++) { - H[i][j] = value.get(i, j); - } - } - orthes(n, H, ort, V); - hqr2(n, e, d, V, H); - } - - this.n = n; - this.e = e; - this.d = d; - this.V = V; - } - - /** - * - * @return {Array} - */ - get realEigenvalues() { - return this.d; - } - - /** - * - * @return {Array} - */ - get imaginaryEigenvalues() { - return this.e; - } - - /** - * - * @return {Matrix} - */ - get eigenvectorMatrix() { - if (!matrix_Matrix.isMatrix(this.V)) { - this.V = new matrix_Matrix(this.V); - } - return this.V; - } - - /** - * - * @return {Matrix} - */ - get diagonalMatrix() { - var n = this.n; - var e = this.e; - var d = this.d; - var X = new matrix_Matrix(n, n); - var i, j; - for (i = 0; i < n; i++) { - for (j = 0; j < n; j++) { - X[i][j] = 0; - } - X[i][i] = d[i]; - if (e[i] > 0) { - X[i][i + 1] = e[i]; - } else if (e[i] < 0) { - X[i][i - 1] = e[i]; - } - } - return X; - } -} - -function tred2(n, e, d, V) { - var f, g, h, i, j, k, hh, scale; - - for (j = 0; j < n; j++) { - d[j] = V[n - 1][j]; - } - - for (i = n - 1; i > 0; i--) { - scale = 0; - h = 0; - for (k = 0; k < i; k++) { - scale = scale + Math.abs(d[k]); - } - - if (scale === 0) { - e[i] = d[i - 1]; - for (j = 0; j < i; j++) { - d[j] = V[i - 1][j]; - V[i][j] = 0; - V[j][i] = 0; - } - } else { - for (k = 0; k < i; k++) { - d[k] /= scale; - h += d[k] * d[k]; - } - - f = d[i - 1]; - g = Math.sqrt(h); - if (f > 0) { - g = -g; - } - - e[i] = scale * g; - h = h - f * g; - d[i - 1] = f - g; - for (j = 0; j < i; j++) { - e[j] = 0; - } - - for (j = 0; j < i; j++) { - f = d[j]; - V[j][i] = f; - g = e[j] + V[j][j] * f; - for (k = j + 1; k <= i - 1; k++) { - g += V[k][j] * d[k]; - e[k] += V[k][j] * f; - } - e[j] = g; - } - - f = 0; - for (j = 0; j < i; j++) { - e[j] /= h; - f += e[j] * d[j]; - } - - hh = f / 
(h + h); - for (j = 0; j < i; j++) { - e[j] -= hh * d[j]; - } - - for (j = 0; j < i; j++) { - f = d[j]; - g = e[j]; - for (k = j; k <= i - 1; k++) { - V[k][j] -= f * e[k] + g * d[k]; - } - d[j] = V[i - 1][j]; - V[i][j] = 0; - } - } - d[i] = h; - } - - for (i = 0; i < n - 1; i++) { - V[n - 1][i] = V[i][i]; - V[i][i] = 1; - h = d[i + 1]; - if (h !== 0) { - for (k = 0; k <= i; k++) { - d[k] = V[k][i + 1] / h; - } - - for (j = 0; j <= i; j++) { - g = 0; - for (k = 0; k <= i; k++) { - g += V[k][i + 1] * V[k][j]; - } - for (k = 0; k <= i; k++) { - V[k][j] -= g * d[k]; - } - } - } - - for (k = 0; k <= i; k++) { - V[k][i + 1] = 0; - } - } - - for (j = 0; j < n; j++) { - d[j] = V[n - 1][j]; - V[n - 1][j] = 0; - } - - V[n - 1][n - 1] = 1; - e[0] = 0; -} - -function tql2(n, e, d, V) { - var g, h, i, j, k, l, m, p, r, dl1, c, c2, c3, el1, s, s2, iter; - - for (i = 1; i < n; i++) { - e[i - 1] = e[i]; - } - - e[n - 1] = 0; - - var f = 0; - var tst1 = 0; - var eps = Number.EPSILON; - - for (l = 0; l < n; l++) { - tst1 = Math.max(tst1, Math.abs(d[l]) + Math.abs(e[l])); - m = l; - while (m < n) { - if (Math.abs(e[m]) <= eps * tst1) { - break; - } - m++; - } - - if (m > l) { - iter = 0; - do { - iter = iter + 1; - - g = d[l]; - p = (d[l + 1] - g) / (2 * e[l]); - r = hypotenuse(p, 1); - if (p < 0) { - r = -r; - } - - d[l] = e[l] / (p + r); - d[l + 1] = e[l] * (p + r); - dl1 = d[l + 1]; - h = g - d[l]; - for (i = l + 2; i < n; i++) { - d[i] -= h; - } - - f = f + h; - - p = d[m]; - c = 1; - c2 = c; - c3 = c; - el1 = e[l + 1]; - s = 0; - s2 = 0; - for (i = m - 1; i >= l; i--) { - c3 = c2; - c2 = c; - s2 = s; - g = c * e[i]; - h = c * p; - r = hypotenuse(p, e[i]); - e[i + 1] = s * r; - s = e[i] / r; - c = p / r; - p = c * d[i] - s * g; - d[i + 1] = h + s * (c * g + s * d[i]); - - for (k = 0; k < n; k++) { - h = V[k][i + 1]; - V[k][i + 1] = s * V[k][i] + c * h; - V[k][i] = c * V[k][i] - s * h; - } - } - - p = -s * s2 * c3 * el1 * e[l] / dl1; - e[l] = s * p; - d[l] = c * p; - } while (Math.abs(e[l]) > eps * tst1); - } - d[l] = d[l] + f; - e[l] = 0; - } - - for (i = 0; i < n - 1; i++) { - k = i; - p = d[i]; - for (j = i + 1; j < n; j++) { - if (d[j] < p) { - k = j; - p = d[j]; - } - } - - if (k !== i) { - d[k] = d[i]; - d[i] = p; - for (j = 0; j < n; j++) { - p = V[j][i]; - V[j][i] = V[j][k]; - V[j][k] = p; - } - } - } -} - -function orthes(n, H, ort, V) { - var low = 0; - var high = n - 1; - var f, g, h, i, j, m; - var scale; - - for (m = low + 1; m <= high - 1; m++) { - scale = 0; - for (i = m; i <= high; i++) { - scale = scale + Math.abs(H[i][m - 1]); - } - - if (scale !== 0) { - h = 0; - for (i = high; i >= m; i--) { - ort[i] = H[i][m - 1] / scale; - h += ort[i] * ort[i]; - } - - g = Math.sqrt(h); - if (ort[m] > 0) { - g = -g; - } - - h = h - ort[m] * g; - ort[m] = ort[m] - g; - - for (j = m; j < n; j++) { - f = 0; - for (i = high; i >= m; i--) { - f += ort[i] * H[i][j]; - } - - f = f / h; - for (i = m; i <= high; i++) { - H[i][j] -= f * ort[i]; - } - } - - for (i = 0; i <= high; i++) { - f = 0; - for (j = high; j >= m; j--) { - f += ort[j] * H[i][j]; - } - - f = f / h; - for (j = m; j <= high; j++) { - H[i][j] -= f * ort[j]; - } - } - - ort[m] = scale * ort[m]; - H[m][m - 1] = scale * g; - } - } - - for (i = 0; i < n; i++) { - for (j = 0; j < n; j++) { - V[i][j] = i === j ? 
1 : 0; - } - } - - for (m = high - 1; m >= low + 1; m--) { - if (H[m][m - 1] !== 0) { - for (i = m + 1; i <= high; i++) { - ort[i] = H[i][m - 1]; - } - - for (j = m; j <= high; j++) { - g = 0; - for (i = m; i <= high; i++) { - g += ort[i] * V[i][j]; - } - - g = g / ort[m] / H[m][m - 1]; - for (i = m; i <= high; i++) { - V[i][j] += g * ort[i]; - } - } - } - } -} - -function hqr2(nn, e, d, V, H) { - var n = nn - 1; - var low = 0; - var high = nn - 1; - var eps = Number.EPSILON; - var exshift = 0; - var norm = 0; - var p = 0; - var q = 0; - var r = 0; - var s = 0; - var z = 0; - var iter = 0; - var i, j, k, l, m, t, w, x, y; - var ra, sa, vr, vi; - var notlast, cdivres; - - for (i = 0; i < nn; i++) { - if (i < low || i > high) { - d[i] = H[i][i]; - e[i] = 0; - } - - for (j = Math.max(i - 1, 0); j < nn; j++) { - norm = norm + Math.abs(H[i][j]); - } - } - - while (n >= low) { - l = n; - while (l > low) { - s = Math.abs(H[l - 1][l - 1]) + Math.abs(H[l][l]); - if (s === 0) { - s = norm; - } - if (Math.abs(H[l][l - 1]) < eps * s) { - break; - } - l--; - } - - if (l === n) { - H[n][n] = H[n][n] + exshift; - d[n] = H[n][n]; - e[n] = 0; - n--; - iter = 0; - } else if (l === n - 1) { - w = H[n][n - 1] * H[n - 1][n]; - p = (H[n - 1][n - 1] - H[n][n]) / 2; - q = p * p + w; - z = Math.sqrt(Math.abs(q)); - H[n][n] = H[n][n] + exshift; - H[n - 1][n - 1] = H[n - 1][n - 1] + exshift; - x = H[n][n]; - - if (q >= 0) { - z = p >= 0 ? p + z : p - z; - d[n - 1] = x + z; - d[n] = d[n - 1]; - if (z !== 0) { - d[n] = x - w / z; - } - e[n - 1] = 0; - e[n] = 0; - x = H[n][n - 1]; - s = Math.abs(x) + Math.abs(z); - p = x / s; - q = z / s; - r = Math.sqrt(p * p + q * q); - p = p / r; - q = q / r; - - for (j = n - 1; j < nn; j++) { - z = H[n - 1][j]; - H[n - 1][j] = q * z + p * H[n][j]; - H[n][j] = q * H[n][j] - p * z; - } - - for (i = 0; i <= n; i++) { - z = H[i][n - 1]; - H[i][n - 1] = q * z + p * H[i][n]; - H[i][n] = q * H[i][n] - p * z; - } - - for (i = low; i <= high; i++) { - z = V[i][n - 1]; - V[i][n - 1] = q * z + p * V[i][n]; - V[i][n] = q * V[i][n] - p * z; - } - } else { - d[n - 1] = x + p; - d[n] = x + p; - e[n - 1] = z; - e[n] = -z; - } - - n = n - 2; - iter = 0; - } else { - x = H[n][n]; - y = 0; - w = 0; - if (l < n) { - y = H[n - 1][n - 1]; - w = H[n][n - 1] * H[n - 1][n]; - } - - if (iter === 10) { - exshift += x; - for (i = low; i <= n; i++) { - H[i][i] -= x; - } - s = Math.abs(H[n][n - 1]) + Math.abs(H[n - 1][n - 2]); - x = y = 0.75 * s; - w = -0.4375 * s * s; - } - - if (iter === 30) { - s = (y - x) / 2; - s = s * s + w; - if (s > 0) { - s = Math.sqrt(s); - if (y < x) { - s = -s; - } - s = x - w / ((y - x) / 2 + s); - for (i = low; i <= n; i++) { - H[i][i] -= s; - } - exshift += s; - x = y = w = 0.964; - } - } - - iter = iter + 1; - - m = n - 2; - while (m >= l) { - z = H[m][m]; - r = x - z; - s = y - z; - p = (r * s - w) / H[m + 1][m] + H[m][m + 1]; - q = H[m + 1][m + 1] - z - r - s; - r = H[m + 2][m + 1]; - s = Math.abs(p) + Math.abs(q) + Math.abs(r); - p = p / s; - q = q / s; - r = r / s; - if (m === l) { - break; - } - if ( - Math.abs(H[m][m - 1]) * (Math.abs(q) + Math.abs(r)) < - eps * - (Math.abs(p) * - (Math.abs(H[m - 1][m - 1]) + - Math.abs(z) + - Math.abs(H[m + 1][m + 1]))) - ) { - break; - } - m--; - } - - for (i = m + 2; i <= n; i++) { - H[i][i - 2] = 0; - if (i > m + 2) { - H[i][i - 3] = 0; - } - } - - for (k = m; k <= n - 1; k++) { - notlast = k !== n - 1; - if (k !== m) { - p = H[k][k - 1]; - q = H[k + 1][k - 1]; - r = notlast ? 
H[k + 2][k - 1] : 0; - x = Math.abs(p) + Math.abs(q) + Math.abs(r); - if (x !== 0) { - p = p / x; - q = q / x; - r = r / x; - } - } - - if (x === 0) { - break; - } - - s = Math.sqrt(p * p + q * q + r * r); - if (p < 0) { - s = -s; - } - - if (s !== 0) { - if (k !== m) { - H[k][k - 1] = -s * x; - } else if (l !== m) { - H[k][k - 1] = -H[k][k - 1]; - } - - p = p + s; - x = p / s; - y = q / s; - z = r / s; - q = q / p; - r = r / p; - - for (j = k; j < nn; j++) { - p = H[k][j] + q * H[k + 1][j]; - if (notlast) { - p = p + r * H[k + 2][j]; - H[k + 2][j] = H[k + 2][j] - p * z; - } - - H[k][j] = H[k][j] - p * x; - H[k + 1][j] = H[k + 1][j] - p * y; - } - - for (i = 0; i <= Math.min(n, k + 3); i++) { - p = x * H[i][k] + y * H[i][k + 1]; - if (notlast) { - p = p + z * H[i][k + 2]; - H[i][k + 2] = H[i][k + 2] - p * r; - } - - H[i][k] = H[i][k] - p; - H[i][k + 1] = H[i][k + 1] - p * q; - } - - for (i = low; i <= high; i++) { - p = x * V[i][k] + y * V[i][k + 1]; - if (notlast) { - p = p + z * V[i][k + 2]; - V[i][k + 2] = V[i][k + 2] - p * r; - } - - V[i][k] = V[i][k] - p; - V[i][k + 1] = V[i][k + 1] - p * q; - } - } - } - } - } - - if (norm === 0) { - return; - } - - for (n = nn - 1; n >= 0; n--) { - p = d[n]; - q = e[n]; - - if (q === 0) { - l = n; - H[n][n] = 1; - for (i = n - 1; i >= 0; i--) { - w = H[i][i] - p; - r = 0; - for (j = l; j <= n; j++) { - r = r + H[i][j] * H[j][n]; - } - - if (e[i] < 0) { - z = w; - s = r; - } else { - l = i; - if (e[i] === 0) { - H[i][n] = w !== 0 ? -r / w : -r / (eps * norm); - } else { - x = H[i][i + 1]; - y = H[i + 1][i]; - q = (d[i] - p) * (d[i] - p) + e[i] * e[i]; - t = (x * s - z * r) / q; - H[i][n] = t; - H[i + 1][n] = - Math.abs(x) > Math.abs(z) ? (-r - w * t) / x : (-s - y * t) / z; - } - - t = Math.abs(H[i][n]); - if (eps * t * t > 1) { - for (j = i; j <= n; j++) { - H[j][n] = H[j][n] / t; - } - } - } - } - } else if (q < 0) { - l = n - 1; - - if (Math.abs(H[n][n - 1]) > Math.abs(H[n - 1][n])) { - H[n - 1][n - 1] = q / H[n][n - 1]; - H[n - 1][n] = -(H[n][n] - p) / H[n][n - 1]; - } else { - cdivres = cdiv(0, -H[n - 1][n], H[n - 1][n - 1] - p, q); - H[n - 1][n - 1] = cdivres[0]; - H[n - 1][n] = cdivres[1]; - } - - H[n][n - 1] = 0; - H[n][n] = 1; - for (i = n - 2; i >= 0; i--) { - ra = 0; - sa = 0; - for (j = l; j <= n; j++) { - ra = ra + H[i][j] * H[j][n - 1]; - sa = sa + H[i][j] * H[j][n]; - } - - w = H[i][i] - p; - - if (e[i] < 0) { - z = w; - r = ra; - s = sa; - } else { - l = i; - if (e[i] === 0) { - cdivres = cdiv(-ra, -sa, w, q); - H[i][n - 1] = cdivres[0]; - H[i][n] = cdivres[1]; - } else { - x = H[i][i + 1]; - y = H[i + 1][i]; - vr = (d[i] - p) * (d[i] - p) + e[i] * e[i] - q * q; - vi = (d[i] - p) * 2 * q; - if (vr === 0 && vi === 0) { - vr = - eps * - norm * - (Math.abs(w) + - Math.abs(q) + - Math.abs(x) + - Math.abs(y) + - Math.abs(z)); - } - cdivres = cdiv( - x * r - z * ra + q * sa, - x * s - z * sa - q * ra, - vr, - vi - ); - H[i][n - 1] = cdivres[0]; - H[i][n] = cdivres[1]; - if (Math.abs(x) > Math.abs(z) + Math.abs(q)) { - H[i + 1][n - 1] = (-ra - w * H[i][n - 1] + q * H[i][n]) / x; - H[i + 1][n] = (-sa - w * H[i][n] - q * H[i][n - 1]) / x; - } else { - cdivres = cdiv(-r - y * H[i][n - 1], -s - y * H[i][n], z, q); - H[i + 1][n - 1] = cdivres[0]; - H[i + 1][n] = cdivres[1]; - } - } - - t = Math.max(Math.abs(H[i][n - 1]), Math.abs(H[i][n])); - if (eps * t * t > 1) { - for (j = i; j <= n; j++) { - H[j][n - 1] = H[j][n - 1] / t; - H[j][n] = H[j][n] / t; - } - } - } - } - } - } - - for (i = 0; i < nn; i++) { - if (i < low || i > high) { - for (j = i; 
j < nn; j++) { - V[i][j] = H[i][j]; - } - } - } - - for (j = nn - 1; j >= low; j--) { - for (i = low; i <= high; i++) { - z = 0; - for (k = low; k <= Math.min(j, high); k++) { - z = z + V[i][k] * H[k][j]; - } - V[i][j] = z; - } - } -} - -function cdiv(xr, xi, yr, yi) { - var r, d; - if (Math.abs(yr) > Math.abs(yi)) { - r = yi / yr; - d = yr + r * yi; - return [(xr + r * xi) / d, (xi - r * xr) / d]; - } else { - r = yr / yi; - d = yi + r * yr; - return [(r * xr + xi) / d, (r * xi - xr) / d]; - } -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/dc/cholesky.js - - -/** - * @class CholeskyDecomposition - * @link https://github.com/lutzroeder/Mapack/blob/master/Source/CholeskyDecomposition.cs - * @param {Matrix} value - */ -class cholesky_CholeskyDecomposition { - constructor(value) { - value = WrapperMatrix2D_WrapperMatrix2D.checkMatrix(value); - if (!value.isSymmetric()) { - throw new Error('Matrix is not symmetric'); - } - - var a = value; - var dimension = a.rows; - var l = new matrix_Matrix(dimension, dimension); - var positiveDefinite = true; - var i, j, k; - - for (j = 0; j < dimension; j++) { - var Lrowj = l[j]; - var d = 0; - for (k = 0; k < j; k++) { - var Lrowk = l[k]; - var s = 0; - for (i = 0; i < k; i++) { - s += Lrowk[i] * Lrowj[i]; - } - Lrowj[k] = s = (a.get(j, k) - s) / l[k][k]; - d = d + s * s; - } - - d = a.get(j, j) - d; - - positiveDefinite &= d > 0; - l[j][j] = Math.sqrt(Math.max(d, 0)); - for (k = j + 1; k < dimension; k++) { - l[j][k] = 0; - } - } - - if (!positiveDefinite) { - throw new Error('Matrix is not positive definite'); - } - - this.L = l; - } - - /** - * - * @param {Matrix} value - * @return {Matrix} - */ - solve(value) { - value = WrapperMatrix2D_WrapperMatrix2D.checkMatrix(value); - - var l = this.L; - var dimension = l.rows; - - if (value.rows !== dimension) { - throw new Error('Matrix dimensions do not match'); - } - - var count = value.columns; - var B = value.clone(); - var i, j, k; - - for (k = 0; k < dimension; k++) { - for (j = 0; j < count; j++) { - for (i = 0; i < k; i++) { - B[k][j] -= B[i][j] * l[k][i]; - } - B[k][j] /= l[k][k]; - } - } - - for (k = dimension - 1; k >= 0; k--) { - for (j = 0; j < count; j++) { - for (i = k + 1; i < dimension; i++) { - B[k][j] -= B[i][j] * l[i][k]; - } - B[k][j] /= l[k][k]; - } - } - - return B; - } - - /** - * - * @return {Matrix} - */ - get lowerTriangularMatrix() { - return this.L; - } -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/index.js -/* concated harmony reexport default */__webpack_require__.d(__webpack_exports__, "default", function() { return matrix_Matrix; }); -/* concated harmony reexport Matrix */__webpack_require__.d(__webpack_exports__, "Matrix", function() { return matrix_Matrix; }); -/* concated harmony reexport abstractMatrix */__webpack_require__.d(__webpack_exports__, "abstractMatrix", function() { return AbstractMatrix; }); -/* concated harmony reexport wrap */__webpack_require__.d(__webpack_exports__, "wrap", function() { return wrap; }); -/* concated harmony reexport WrapperMatrix2D */__webpack_require__.d(__webpack_exports__, "WrapperMatrix2D", function() { return WrapperMatrix2D_WrapperMatrix2D; }); -/* concated harmony reexport WrapperMatrix1D */__webpack_require__.d(__webpack_exports__, "WrapperMatrix1D", function() { return WrapperMatrix1D_WrapperMatrix1D; }); -/* concated harmony reexport solve */__webpack_require__.d(__webpack_exports__, "solve", function() { return solve; }); -/* concated harmony reexport inverse */__webpack_require__.d(__webpack_exports__, 
"inverse", function() { return inverse; }); -/* concated harmony reexport linearDependencies */__webpack_require__.d(__webpack_exports__, "linearDependencies", function() { return linearDependencies; }); -/* concated harmony reexport SingularValueDecomposition */__webpack_require__.d(__webpack_exports__, "SingularValueDecomposition", function() { return svd_SingularValueDecomposition; }); -/* concated harmony reexport SVD */__webpack_require__.d(__webpack_exports__, "SVD", function() { return svd_SingularValueDecomposition; }); -/* concated harmony reexport EigenvalueDecomposition */__webpack_require__.d(__webpack_exports__, "EigenvalueDecomposition", function() { return evd_EigenvalueDecomposition; }); -/* concated harmony reexport EVD */__webpack_require__.d(__webpack_exports__, "EVD", function() { return evd_EigenvalueDecomposition; }); -/* concated harmony reexport CholeskyDecomposition */__webpack_require__.d(__webpack_exports__, "CholeskyDecomposition", function() { return cholesky_CholeskyDecomposition; }); -/* concated harmony reexport CHO */__webpack_require__.d(__webpack_exports__, "CHO", function() { return cholesky_CholeskyDecomposition; }); -/* concated harmony reexport LuDecomposition */__webpack_require__.d(__webpack_exports__, "LuDecomposition", function() { return lu_LuDecomposition; }); -/* concated harmony reexport LU */__webpack_require__.d(__webpack_exports__, "LU", function() { return lu_LuDecomposition; }); -/* concated harmony reexport QrDecomposition */__webpack_require__.d(__webpack_exports__, "QrDecomposition", function() { return qr_QrDecomposition; }); -/* concated harmony reexport QR */__webpack_require__.d(__webpack_exports__, "QR", function() { return qr_QrDecomposition; }); - - - - - - - - - - - - - - - - -/***/ }) -/******/ ]); -}); \ No newline at end of file diff --git a/spaces/merve/measuring-fairness/public/uncertainty-calibration/draw_weathergraph.js b/spaces/merve/measuring-fairness/public/uncertainty-calibration/draw_weathergraph.js deleted file mode 100644 index 068615fb14b8e5d27869a0d270d8f0c5580e4fcc..0000000000000000000000000000000000000000 --- a/spaces/merve/measuring-fairness/public/uncertainty-calibration/draw_weathergraph.js +++ /dev/null @@ -1,264 +0,0 @@ -window.drawWeatherGraph = function (graphSel, fig_height, fig_width){ - - var threshold = .4 - - var thresholds = [0, .2, .4, .6, .8, 1].map((val, i) => { - var isLocked = val == 0 || val == 1 - return {val, i, isLocked, origVal: val} - }) - - var c = d3.conventions({ - sel: graphSel.html('').append('div'), - height: fig_height, - totalWidth: fig_width, - margin: {top: 100, bottom: 100} - }); - - var {predictionSel, weatherGroupSel} = (function(){ - c.y.domain([0,9]).clamp(true); - - // x-Axis - c.xAxis.ticks(5).tickFormat(d3.format('.2f')) - c.yAxis.ticks(0) - d3.drawAxis(c) - c.svg.select('.x') - .translate(-40, 1) - .selectAll('line').translate(20, 1) - - // x-Axis label - c.svg.append('text.axis-label') - .translate([c.width/2, -50]) - .at({textAnchor: 'middle'}) - .at({fill: '#000', fontSize: 14}) - .text('Model Score'); - - // Weather icons - var weatherGroupSel = c.svg.appendMany('g.weatherdata', weatherdata) - .translate(d => [c.x(d.score), c.y(d.h)]) - //.call(d3.attachTooltip) - // .on("mouseover", function(d) { - // ttSel.html(""); - // var gtSel = ttSel.append("div").html(`ground truth: ${d.label}`); - // ttSel.classed("tt-text", true); - // }) - - weatherGroupSel.append('text.icon') - .text(function(d,i){return emojis[d.label];}) - .at({fontSize: 18, textAnchor: 'middle', dy: 
8}) - - // Add prediction circles - weatherGroupSel.append('circle.prediction') - .at({cx: 0, cy: 0, r: 14, opacity: 0, fillOpacity: 0, stroke: 'red'}); - weatherGroupSel.append('path.prediction') - .at({d: d => ['M', -10, 10, 'L', 10, -10].join(' '), stroke: 'red', opacity: 0}) - - var predictionSel = c.svg.selectAll('.prediction'); - - return {predictionSel, weatherGroupSel} - })() - - var {thresholdSel, messageSel, setThreshold} = (function(){ - var thresholdSel = c.svg.append('g.threshold') - - var thresholdGroupSel = thresholdSel.append('g') - .call(d3.drag().on('drag', - () => renderThreshold(c.x.invert(d3.clamp(0, d3.event.x, c.width)))) - ) - - var thesholdTextSel = thresholdGroupSel.append('g.axis').append('text') - .at({ - textAnchor: 'middle', - dy: '.33em', - y: c.height + 30 - }) - .text('Threshold') - - var rw = 16 - thresholdGroupSel.append('rect') - .at({ - width: rw, - x: -rw/2, - y: -10, - height: c.height + 30, - fillOpacity: .07, - }) - - var pathSel = thresholdGroupSel.append('path') - .at({ - stroke: '#000', - strokeDasharray: '2 2', - fill: 'none', - d: `M 0 -10 V ` + (c.height + 20), - }) - - - var accuracyValBox = thresholdSel.append('rect.val-box') - .at({width: 55, height: 20, x: c.width/2 + 32.5, y: c.height + 65, rx: 3, ry: 3}) - - var accuracySel = thresholdSel.append('text.big-text') - .at({x: c.width/2 - 10, y: c.height + 80, textAnchor: 'middle'}) - - var accuracyValSel = thresholdSel.append('text.val-text') - .at({x: c.width/2 + 60, y: c.height + 80, textAnchor: 'middle'}) - - - var messageSel = thresholdSel.append('text.tmessage') - .at({x: c.width/2, y: c.height + 120, textAnchor: 'middle'}) - - function renderThreshold(t){ - if (isNaN(t)) return // TODO debug this - - thresholdGroupSel.translate(c.x(t), 0) - - predictionSel.at({opacity: d => isClassifiedCorrectly(d, t) ? 0 : 1}) - - var acc = d3.mean( - weatherdata, - d => isClassifiedCorrectly(d, t) - ) - accuracySel.text('Accuracy: '); - accuracyValSel.text(d3.format('.1%')(acc)) - messageSel.text('Try dragging the threshold to find the highest accuracy.') - thesholdTextSel.text('Threshold: ' + d3.format('.2f')(t)) - - threshold = t - - function isClassifiedCorrectly(d,t) { - return d.score >= t ? d.label == 1 : d.label == 0; - }; - } - - renderThreshold(threshold) - - var timer = null - function setThreshold(newThreshold, duration){ - var interpolateFn = d3.interpolate(threshold, newThreshold) - - if (timer) timer.stop() - timer = d3.timer(ms => { - var t = Math.min(ms/duration, 1) - if (t == 1) timer.stop() - - renderThreshold(interpolateFn(t)) - }) - } - - return {thresholdSel, messageSel, setThreshold} - })() - - function drawTrueLegend(c){ - var truthAxis = c.svg.append('g').translate([fig_width + 40, 1]) - truthAxis.append('text.legend-title').text('Truth') // TODO: Maybe more of a label? "what actually happened?" 
or just remove this legend - .at({textAnchor: 'middle', fontWeight: 500, x: 20}) - - truthAxis.append('g').translate([20, 40]) - .append('text.legend-text').text('Sunny').parent() - .at({fontSize: 15}) - .append('text').text(emojis[0]) - .at({fontSize: 25, x: -30, y: 5}) - - truthAxis.append('g').translate([20, 80]) - .append('text.legend-text').text('Rainy').parent() - .at({fontSize: 15}) - .append('text').text(emojis[1]) - .at({fontSize: 25, x: -30, y: 5}) - } - drawTrueLegend(c); - - - var {thresholdsGroupSel, renderThresholds, setThresholds} = (function(){ - var valsCache = [] - var drag = d3.drag() - .on('drag', function(){ - var val = d3.clamp(0, c.x.invert(d3.mouse(c.svg.node())[0]), 1) - - // Force thresholds to stay sorted - valsCache[valsCache.activeIndex] = val - _.sortBy(valsCache).forEach((val, i) => thresholds[i].val = val) - - renderThresholds() - }) - .on('start', d => { - valsCache = thresholds.map(d => d.val) - valsCache.activeIndex = d.i - }) - - var thresholdsGroupSel = c.svg.append('g') - - thresholdsGroupSel.append('text.axis-label') - .text('Calibrated Model Score') - .translate([c.width/2, c.height + 50]) - .at({textAnchor: 'middle'}) - .at({fill: '#000', fontSize: 14}) - - thresholdsSel = thresholdsGroupSel.appendMany('g.thresholds', thresholds) - .call(drag) - .st({pointerEvents: d => d.isLocked ? 'none' : ''}) - - thresholdsSel.append('g.axis').append('text') - .at({ - textAnchor: 'middle', - dy: '.33em', - y: c.height + 20 - }) - .text(d => d3.format('.2f')(d.origVal)) - - var rw = 16 - thresholdsSel.append('rect') - .at({ - width: rw, - x: -rw/2, - height: c.height + 10, - fillOpacity: d => d.isLocked ? 0 : .07, - }) - - var pathSel = thresholdsSel.append('path') - .at({ - stroke: '#000', - strokeDasharray: '2 2', - fill: 'none', - }) - - function renderThresholds(){ - if (thresholds.some(d => isNaN(d.val))) return - - thresholdsSel - .translate(d => c.x(d.val) + .5, 0) - - pathSel.at({ - d: d => [ - 'M', 0, c.height + 10, - 'L', 0, 0, - 'L', c.x(d.origVal - d.val), -12, - ].join(' ') - }) - - if (window.calibrationCurve) calibrationCurve.renderBuckets() - } - - renderThresholds() - - var timer = null - function setThresholds(newThresholds, duration){ - var interpolateFns = thresholds - .map((d, i) => d3.interpolate(d.val, newThresholds[i])) - - if (timer) timer.stop() - timer = d3.timer(ms => { - var t = Math.min(ms/duration, 1) - if (t == 1) timer.stop() - - thresholds.forEach((d, i) => d.val = interpolateFns[i](t)) - - renderThresholds() - }) - } - - return {thresholdsGroupSel, renderThresholds, setThresholds} - })() - - return {c, thresholdSel, messageSel, setThreshold, predictionSel, thresholds, thresholdsGroupSel, renderThresholds, setThresholds, weatherGroupSel}; - -} - -if (window.init) window.init() \ No newline at end of file diff --git a/spaces/mfrashad/ClothingGAN/models/stylegan/stylegan_tf/generate_figures.py b/spaces/mfrashad/ClothingGAN/models/stylegan/stylegan_tf/generate_figures.py deleted file mode 100644 index 45b68b86146198c701a66fb8ba7a363d901d6951..0000000000000000000000000000000000000000 --- a/spaces/mfrashad/ClothingGAN/models/stylegan/stylegan_tf/generate_figures.py +++ /dev/null @@ -1,161 +0,0 @@ -# Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved. -# -# This work is licensed under the Creative Commons Attribution-NonCommercial -# 4.0 International License. 
To view a copy of this license, visit -# http://creativecommons.org/licenses/by-nc/4.0/ or send a letter to -# Creative Commons, PO Box 1866, Mountain View, CA 94042, USA. - -"""Minimal script for reproducing the figures of the StyleGAN paper using pre-trained generators.""" - -import os -import pickle -import numpy as np -import PIL.Image -import dnnlib -import dnnlib.tflib as tflib -import config - -#---------------------------------------------------------------------------- -# Helpers for loading and using pre-trained generators. - -url_ffhq = 'https://drive.google.com/uc?id=1MEGjdvVpUsu1jB4zrXZN7Y4kBBOzizDQ' # karras2019stylegan-ffhq-1024x1024.pkl -url_celebahq = 'https://drive.google.com/uc?id=1MGqJl28pN4t7SAtSrPdSRJSQJqahkzUf' # karras2019stylegan-celebahq-1024x1024.pkl -url_bedrooms = 'https://drive.google.com/uc?id=1MOSKeGF0FJcivpBI7s63V9YHloUTORiF' # karras2019stylegan-bedrooms-256x256.pkl -url_cars = 'https://drive.google.com/uc?id=1MJ6iCfNtMIRicihwRorsM3b7mmtmK9c3' # karras2019stylegan-cars-512x384.pkl -url_cats = 'https://drive.google.com/uc?id=1MQywl0FNt6lHu8E_EUqnRbviagS7fbiJ' # karras2019stylegan-cats-256x256.pkl - -synthesis_kwargs = dict(output_transform=dict(func=tflib.convert_images_to_uint8, nchw_to_nhwc=True), minibatch_size=8) - -_Gs_cache = dict() - -def load_Gs(url): - if url not in _Gs_cache: - with dnnlib.util.open_url(url, cache_dir=config.cache_dir) as f: - _G, _D, Gs = pickle.load(f) - _Gs_cache[url] = Gs - return _Gs_cache[url] - -#---------------------------------------------------------------------------- -# Figures 2, 3, 10, 11, 12: Multi-resolution grid of uncurated result images. - -def draw_uncurated_result_figure(png, Gs, cx, cy, cw, ch, rows, lods, seed): - print(png) - latents = np.random.RandomState(seed).randn(sum(rows * 2**lod for lod in lods), Gs.input_shape[1]) - images = Gs.run(latents, None, **synthesis_kwargs) # [seed, y, x, rgb] - - canvas = PIL.Image.new('RGB', (sum(cw // 2**lod for lod in lods), ch * rows), 'white') - image_iter = iter(list(images)) - for col, lod in enumerate(lods): - for row in range(rows * 2**lod): - image = PIL.Image.fromarray(next(image_iter), 'RGB') - image = image.crop((cx, cy, cx + cw, cy + ch)) - image = image.resize((cw // 2**lod, ch // 2**lod), PIL.Image.ANTIALIAS) - canvas.paste(image, (sum(cw // 2**lod for lod in lods[:col]), row * ch // 2**lod)) - canvas.save(png) - -#---------------------------------------------------------------------------- -# Figure 3: Style mixing. 
- -def draw_style_mixing_figure(png, Gs, w, h, src_seeds, dst_seeds, style_ranges): - print(png) - src_latents = np.stack(np.random.RandomState(seed).randn(Gs.input_shape[1]) for seed in src_seeds) - dst_latents = np.stack(np.random.RandomState(seed).randn(Gs.input_shape[1]) for seed in dst_seeds) - src_dlatents = Gs.components.mapping.run(src_latents, None) # [seed, layer, component] - dst_dlatents = Gs.components.mapping.run(dst_latents, None) # [seed, layer, component] - src_images = Gs.components.synthesis.run(src_dlatents, randomize_noise=False, **synthesis_kwargs) - dst_images = Gs.components.synthesis.run(dst_dlatents, randomize_noise=False, **synthesis_kwargs) - - canvas = PIL.Image.new('RGB', (w * (len(src_seeds) + 1), h * (len(dst_seeds) + 1)), 'white') - for col, src_image in enumerate(list(src_images)): - canvas.paste(PIL.Image.fromarray(src_image, 'RGB'), ((col + 1) * w, 0)) - for row, dst_image in enumerate(list(dst_images)): - canvas.paste(PIL.Image.fromarray(dst_image, 'RGB'), (0, (row + 1) * h)) - row_dlatents = np.stack([dst_dlatents[row]] * len(src_seeds)) - row_dlatents[:, style_ranges[row]] = src_dlatents[:, style_ranges[row]] - row_images = Gs.components.synthesis.run(row_dlatents, randomize_noise=False, **synthesis_kwargs) - for col, image in enumerate(list(row_images)): - canvas.paste(PIL.Image.fromarray(image, 'RGB'), ((col + 1) * w, (row + 1) * h)) - canvas.save(png) - -#---------------------------------------------------------------------------- -# Figure 4: Noise detail. - -def draw_noise_detail_figure(png, Gs, w, h, num_samples, seeds): - print(png) - canvas = PIL.Image.new('RGB', (w * 3, h * len(seeds)), 'white') - for row, seed in enumerate(seeds): - latents = np.stack([np.random.RandomState(seed).randn(Gs.input_shape[1])] * num_samples) - images = Gs.run(latents, None, truncation_psi=1, **synthesis_kwargs) - canvas.paste(PIL.Image.fromarray(images[0], 'RGB'), (0, row * h)) - for i in range(4): - crop = PIL.Image.fromarray(images[i + 1], 'RGB') - crop = crop.crop((650, 180, 906, 436)) - crop = crop.resize((w//2, h//2), PIL.Image.NEAREST) - canvas.paste(crop, (w + (i%2) * w//2, row * h + (i//2) * h//2)) - diff = np.std(np.mean(images, axis=3), axis=0) * 4 - diff = np.clip(diff + 0.5, 0, 255).astype(np.uint8) - canvas.paste(PIL.Image.fromarray(diff, 'L'), (w * 2, row * h)) - canvas.save(png) - -#---------------------------------------------------------------------------- -# Figure 5: Noise components. - -def draw_noise_components_figure(png, Gs, w, h, seeds, noise_ranges, flips): - print(png) - Gsc = Gs.clone() - noise_vars = [var for name, var in Gsc.components.synthesis.vars.items() if name.startswith('noise')] - noise_pairs = list(zip(noise_vars, tflib.run(noise_vars))) # [(var, val), ...] 
- latents = np.stack(np.random.RandomState(seed).randn(Gs.input_shape[1]) for seed in seeds) - all_images = [] - for noise_range in noise_ranges: - tflib.set_vars({var: val * (1 if i in noise_range else 0) for i, (var, val) in enumerate(noise_pairs)}) - range_images = Gsc.run(latents, None, truncation_psi=1, randomize_noise=False, **synthesis_kwargs) - range_images[flips, :, :] = range_images[flips, :, ::-1] - all_images.append(list(range_images)) - - canvas = PIL.Image.new('RGB', (w * 2, h * 2), 'white') - for col, col_images in enumerate(zip(*all_images)): - canvas.paste(PIL.Image.fromarray(col_images[0], 'RGB').crop((0, 0, w//2, h)), (col * w, 0)) - canvas.paste(PIL.Image.fromarray(col_images[1], 'RGB').crop((w//2, 0, w, h)), (col * w + w//2, 0)) - canvas.paste(PIL.Image.fromarray(col_images[2], 'RGB').crop((0, 0, w//2, h)), (col * w, h)) - canvas.paste(PIL.Image.fromarray(col_images[3], 'RGB').crop((w//2, 0, w, h)), (col * w + w//2, h)) - canvas.save(png) - -#---------------------------------------------------------------------------- -# Figure 8: Truncation trick. - -def draw_truncation_trick_figure(png, Gs, w, h, seeds, psis): - print(png) - latents = np.stack(np.random.RandomState(seed).randn(Gs.input_shape[1]) for seed in seeds) - dlatents = Gs.components.mapping.run(latents, None) # [seed, layer, component] - dlatent_avg = Gs.get_var('dlatent_avg') # [component] - - canvas = PIL.Image.new('RGB', (w * len(psis), h * len(seeds)), 'white') - for row, dlatent in enumerate(list(dlatents)): - row_dlatents = (dlatent[np.newaxis] - dlatent_avg) * np.reshape(psis, [-1, 1, 1]) + dlatent_avg - row_images = Gs.components.synthesis.run(row_dlatents, randomize_noise=False, **synthesis_kwargs) - for col, image in enumerate(list(row_images)): - canvas.paste(PIL.Image.fromarray(image, 'RGB'), (col * w, row * h)) - canvas.save(png) - -#---------------------------------------------------------------------------- -# Main program. 
- -def main(): - tflib.init_tf() - os.makedirs(config.result_dir, exist_ok=True) - draw_uncurated_result_figure(os.path.join(config.result_dir, 'figure02-uncurated-ffhq.png'), load_Gs(url_ffhq), cx=0, cy=0, cw=1024, ch=1024, rows=3, lods=[0,1,2,2,3,3], seed=5) - draw_style_mixing_figure(os.path.join(config.result_dir, 'figure03-style-mixing.png'), load_Gs(url_ffhq), w=1024, h=1024, src_seeds=[639,701,687,615,2268], dst_seeds=[888,829,1898,1733,1614,845], style_ranges=[range(0,4)]*3+[range(4,8)]*2+[range(8,18)]) - draw_noise_detail_figure(os.path.join(config.result_dir, 'figure04-noise-detail.png'), load_Gs(url_ffhq), w=1024, h=1024, num_samples=100, seeds=[1157,1012]) - draw_noise_components_figure(os.path.join(config.result_dir, 'figure05-noise-components.png'), load_Gs(url_ffhq), w=1024, h=1024, seeds=[1967,1555], noise_ranges=[range(0, 18), range(0, 0), range(8, 18), range(0, 8)], flips=[1]) - draw_truncation_trick_figure(os.path.join(config.result_dir, 'figure08-truncation-trick.png'), load_Gs(url_ffhq), w=1024, h=1024, seeds=[91,388], psis=[1, 0.7, 0.5, 0, -0.5, -1]) - draw_uncurated_result_figure(os.path.join(config.result_dir, 'figure10-uncurated-bedrooms.png'), load_Gs(url_bedrooms), cx=0, cy=0, cw=256, ch=256, rows=5, lods=[0,0,1,1,2,2,2], seed=0) - draw_uncurated_result_figure(os.path.join(config.result_dir, 'figure11-uncurated-cars.png'), load_Gs(url_cars), cx=0, cy=64, cw=512, ch=384, rows=4, lods=[0,1,2,2,3,3], seed=2) - draw_uncurated_result_figure(os.path.join(config.result_dir, 'figure12-uncurated-cats.png'), load_Gs(url_cats), cx=0, cy=0, cw=256, ch=256, rows=5, lods=[0,0,1,1,2,2,2], seed=1) - -#---------------------------------------------------------------------------- - -if __name__ == "__main__": - main() - -#---------------------------------------------------------------------------- diff --git a/spaces/mikeee/convbot/push-hf.bat b/spaces/mikeee/convbot/push-hf.bat deleted file mode 100644 index 253049878f1e685d5f6d5a94552c416baad73f09..0000000000000000000000000000000000000000 --- a/spaces/mikeee/convbot/push-hf.bat +++ /dev/null @@ -1,2 +0,0 @@ -REM git push hf master:main -git push hf diff --git a/spaces/mikeee/radiobee-aligner/radiobee/align_texts.py b/spaces/mikeee/radiobee-aligner/radiobee/align_texts.py deleted file mode 100644 index ff27e93c1cc0fca2fb86bb140e71cfb66c637d7e..0000000000000000000000000000000000000000 --- a/spaces/mikeee/radiobee-aligner/radiobee/align_texts.py +++ /dev/null @@ -1,59 +0,0 @@ -"""Align texts based on aset, src_text, tgt_text.""" -# pylint: disable=unused-variable - -from typing import List, Tuple, Union -from logzero import logger - - -# fmt: off -def align_texts( - aset: List[Tuple[Union[str, float], Union[str, float], Union[str, float]]], - src_text: List[str], - tgt_text: List[str], -) -> List[Tuple[Union[str], Union[str], Union[str, float]]]: - # fmt: on - """Align texts (paras/sents) based on aset, src_text, tgt_text. 
- - Args: - aset: align set - src_text: source text - tgt_text: target text - - Returns: - aligned texts with possible mertics - """ - xset, yset, metrics = zip(*aset) # unzip aset - xset = [elm for elm in xset if elm != ""] - yset = [elm for elm in yset if elm != ""] - - if (len(xset), len(yset)) != (len(tgt_text), len(src_text)): - logger.warning( - " (%s, %s) != (%s, %s) ", len(xset), len(yset), len(tgt_text), len(src_text) - ) - # raise Exception(" See previous message") - - texts = [] - for elm in aset: - elm0, elm1, elm2 = elm - _ = [] - - # src_text first - if isinstance(elm1, str): - _.append("") - else: - _.append(src_text[int(elm1)]) - - if isinstance(elm0, str): - _.append("") - else: - _.append(tgt_text[int(elm0)]) - - if isinstance(elm2, str): - _.append("") - else: - _.append(round(elm2, 2)) - - texts.append(tuple(_)) - - # return [("", "", 0.)] - return texts diff --git a/spaces/mixcard/image-2-details/app.py b/spaces/mixcard/image-2-details/app.py deleted file mode 100644 index feaf0922e55ec8938c81c04e3ce2c679743c508e..0000000000000000000000000000000000000000 --- a/spaces/mixcard/image-2-details/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/microsoft/resnet-50").launch() \ No newline at end of file diff --git a/spaces/miyu0609/gsdf-Counterfeit-V2.0/app.py b/spaces/miyu0609/gsdf-Counterfeit-V2.0/app.py deleted file mode 100644 index 9f9e01c4a704f4bf5f8569b778c62b55160fa270..0000000000000000000000000000000000000000 --- a/spaces/miyu0609/gsdf-Counterfeit-V2.0/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/gsdf/Counterfeit-V2.0").launch() \ No newline at end of file diff --git a/spaces/mlpc-lab/BLIVA/bliva/models/vit.py b/spaces/mlpc-lab/BLIVA/bliva/models/vit.py deleted file mode 100644 index fae84aa2eec3b26b97b0574575c3717780db96ec..0000000000000000000000000000000000000000 --- a/spaces/mlpc-lab/BLIVA/bliva/models/vit.py +++ /dev/null @@ -1,527 +0,0 @@ -""" - Copyright (c) 2022, salesforce.com, inc. - All rights reserved. 
- SPDX-License-Identifier: BSD-3-Clause - For full license text, see the LICENSE file in the repo root or https://opensource.org/licenses/BSD-3-Clause - - Based on timm code base - https://github.com/rwightman/pytorch-image-models/tree/master/timm -""" - -import math -import torch -import torch.nn as nn -import torch.nn.functional as F -from functools import partial - -from timm.models.vision_transformer import _cfg, PatchEmbed -from timm.models.registry import register_model -from timm.models.layers import trunc_normal_, DropPath -from timm.models.helpers import named_apply, adapt_input_conv - -from fairscale.nn.checkpoint.checkpoint_activations import checkpoint_wrapper -from bliva.models.base_model import BaseEncoder - - -class Mlp(nn.Module): - """MLP as used in Vision Transformer, MLP-Mixer and related networks""" - - def __init__( - self, - in_features, - hidden_features=None, - out_features=None, - act_layer=nn.GELU, - drop=0.0, - ): - super().__init__() - out_features = out_features or in_features - hidden_features = hidden_features or in_features - self.fc1 = nn.Linear(in_features, hidden_features) - self.act = act_layer() - self.fc2 = nn.Linear(hidden_features, out_features) - self.drop = nn.Dropout(drop) - - def forward(self, x): - x = self.fc1(x) - x = self.act(x) - x = self.drop(x) - x = self.fc2(x) - x = self.drop(x) - return x - - -class Attention(nn.Module): - def __init__( - self, - dim, - num_heads=8, - qkv_bias=False, - qk_scale=None, - attn_drop=0.0, - proj_drop=0.0, - ): - super().__init__() - self.num_heads = num_heads - head_dim = dim // num_heads - # NOTE scale factor was wrong in my original version, can set manually to be compat with prev weights - self.scale = qk_scale or head_dim**-0.5 - self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias) - self.attn_drop = nn.Dropout(attn_drop) - self.proj = nn.Linear(dim, dim) - self.proj_drop = nn.Dropout(proj_drop) - self.attn_gradients = None - self.attention_map = None - - def save_attn_gradients(self, attn_gradients): - self.attn_gradients = attn_gradients - - def get_attn_gradients(self): - return self.attn_gradients - - def save_attention_map(self, attention_map): - self.attention_map = attention_map - - def get_attention_map(self): - return self.attention_map - - def forward(self, x, register_hook=False): - B, N, C = x.shape - qkv = ( - self.qkv(x) - .reshape(B, N, 3, self.num_heads, C // self.num_heads) - .permute(2, 0, 3, 1, 4) - ) - q, k, v = ( - qkv[0], - qkv[1], - qkv[2], - ) # make torchscript happy (cannot use tensor as tuple) - - attn = (q @ k.transpose(-2, -1)) * self.scale - attn = attn.softmax(dim=-1) - attn = self.attn_drop(attn) - - if register_hook: - self.save_attention_map(attn) - attn.register_hook(self.save_attn_gradients) - - x = (attn @ v).transpose(1, 2).reshape(B, N, C) - x = self.proj(x) - x = self.proj_drop(x) - return x - - -class Block(nn.Module): - def __init__( - self, - dim, - num_heads, - mlp_ratio=4.0, - qkv_bias=False, - qk_scale=None, - drop=0.0, - attn_drop=0.0, - drop_path=0.0, - act_layer=nn.GELU, - norm_layer=nn.LayerNorm, - use_grad_checkpointing=False, - ): - super().__init__() - self.norm1 = norm_layer(dim) - self.attn = Attention( - dim, - num_heads=num_heads, - qkv_bias=qkv_bias, - qk_scale=qk_scale, - attn_drop=attn_drop, - proj_drop=drop, - ) - # NOTE: drop path for stochastic depth, we shall see if this is better than dropout here - self.drop_path = DropPath(drop_path) if drop_path > 0.0 else nn.Identity() - self.norm2 = norm_layer(dim) - mlp_hidden_dim = int(dim * mlp_ratio) 
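        # e.g. mlp_ratio=4.0 with embed_dim=768 gives a 3072-wide hidden
        # layer, the standard feed-forward width for ViT-Base.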
- self.mlp = Mlp( - in_features=dim, - hidden_features=mlp_hidden_dim, - act_layer=act_layer, - drop=drop, - ) - - if use_grad_checkpointing: - self.attn = checkpoint_wrapper(self.attn) - self.mlp = checkpoint_wrapper(self.mlp) - - def forward(self, x, register_hook=False): - x = x + self.drop_path(self.attn(self.norm1(x), register_hook=register_hook)) - x = x + self.drop_path(self.mlp(self.norm2(x))) - return x - - -class VisionTransformer(nn.Module): - """Vision Transformer - A PyTorch impl of : `An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale` - - https://arxiv.org/abs/2010.11929 - """ - - def __init__( - self, - img_size=224, - patch_size=16, - in_chans=3, - num_classes=1000, - embed_dim=768, - depth=12, - num_heads=12, - mlp_ratio=4.0, - qkv_bias=True, - qk_scale=None, - representation_size=None, - drop_rate=0.0, - attn_drop_rate=0.0, - drop_path_rate=0.0, - norm_layer=None, - use_grad_checkpointing=False, - ckpt_layer=0, - ): - """ - Args: - img_size (int, tuple): input image size - patch_size (int, tuple): patch size - in_chans (int): number of input channels - num_classes (int): number of classes for classification head - embed_dim (int): embedding dimension - depth (int): depth of transformer - num_heads (int): number of attention heads - mlp_ratio (int): ratio of mlp hidden dim to embedding dim - qkv_bias (bool): enable bias for qkv if True - qk_scale (float): override default qk scale of head_dim ** -0.5 if set - representation_size (Optional[int]): enable and set representation layer (pre-logits) to this value if set - drop_rate (float): dropout rate - attn_drop_rate (float): attention dropout rate - drop_path_rate (float): stochastic depth rate - norm_layer: (nn.Module): normalization layer - """ - super().__init__() - self.num_features = ( - self.embed_dim - ) = embed_dim # num_features for consistency with other models - norm_layer = norm_layer or partial(nn.LayerNorm, eps=1e-6) - - self.patch_embed = PatchEmbed( - img_size=img_size, - patch_size=patch_size, - in_chans=in_chans, - embed_dim=embed_dim, - ) - - num_patches = self.patch_embed.num_patches - - self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim)) - self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, embed_dim)) - self.pos_drop = nn.Dropout(p=drop_rate) - - dpr = [ - x.item() for x in torch.linspace(0, drop_path_rate, depth) - ] # stochastic depth decay rule - self.blocks = nn.ModuleList( - [ - Block( - dim=embed_dim, - num_heads=num_heads, - mlp_ratio=mlp_ratio, - qkv_bias=qkv_bias, - qk_scale=qk_scale, - drop=drop_rate, - attn_drop=attn_drop_rate, - drop_path=dpr[i], - norm_layer=norm_layer, - use_grad_checkpointing=( - use_grad_checkpointing and i >= depth - ckpt_layer - ), - ) - for i in range(depth) - ] - ) - self.norm = norm_layer(embed_dim) - - trunc_normal_(self.pos_embed, std=0.02) - trunc_normal_(self.cls_token, std=0.02) - self.apply(self._init_weights) - - def _init_weights(self, m): - if isinstance(m, nn.Linear): - trunc_normal_(m.weight, std=0.02) - if isinstance(m, nn.Linear) and m.bias is not None: - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.LayerNorm): - nn.init.constant_(m.bias, 0) - nn.init.constant_(m.weight, 1.0) - - @torch.jit.ignore - def no_weight_decay(self): - return {"pos_embed", "cls_token"} - - def forward(self, x, register_blk=-1): - B = x.shape[0] - x = self.patch_embed(x) - - cls_tokens = self.cls_token.expand( - B, -1, -1 - ) # stole cls_tokens impl from Phil Wang, thanks - x = torch.cat((cls_tokens, x), dim=1) - - x = x + 
self.pos_embed[:, : x.size(1), :] - x = self.pos_drop(x) - - for i, blk in enumerate(self.blocks): - x = blk(x, register_blk == i) - x = self.norm(x) - - return x - - @torch.jit.ignore() - def load_pretrained(self, checkpoint_path, prefix=""): - _load_weights(self, checkpoint_path, prefix) - - -@torch.no_grad() -def _load_weights(model: VisionTransformer, checkpoint_path: str, prefix: str = ""): - """Load weights from .npz checkpoints for official Google Brain Flax implementation""" - import numpy as np - - def _n2p(w, t=True): - if w.ndim == 4 and w.shape[0] == w.shape[1] == w.shape[2] == 1: - w = w.flatten() - if t: - if w.ndim == 4: - w = w.transpose([3, 2, 0, 1]) - elif w.ndim == 3: - w = w.transpose([2, 0, 1]) - elif w.ndim == 2: - w = w.transpose([1, 0]) - return torch.from_numpy(w) - - w = np.load(checkpoint_path) - if not prefix and "opt/target/embedding/kernel" in w: - prefix = "opt/target/" - - if hasattr(model.patch_embed, "backbone"): - # hybrid - backbone = model.patch_embed.backbone - stem_only = not hasattr(backbone, "stem") - stem = backbone if stem_only else backbone.stem - stem.conv.weight.copy_( - adapt_input_conv( - stem.conv.weight.shape[1], _n2p(w[f"{prefix}conv_root/kernel"]) - ) - ) - stem.norm.weight.copy_(_n2p(w[f"{prefix}gn_root/scale"])) - stem.norm.bias.copy_(_n2p(w[f"{prefix}gn_root/bias"])) - if not stem_only: - for i, stage in enumerate(backbone.stages): - for j, block in enumerate(stage.blocks): - bp = f"{prefix}block{i + 1}/unit{j + 1}/" - for r in range(3): - getattr(block, f"conv{r + 1}").weight.copy_( - _n2p(w[f"{bp}conv{r + 1}/kernel"]) - ) - getattr(block, f"norm{r + 1}").weight.copy_( - _n2p(w[f"{bp}gn{r + 1}/scale"]) - ) - getattr(block, f"norm{r + 1}").bias.copy_( - _n2p(w[f"{bp}gn{r + 1}/bias"]) - ) - if block.downsample is not None: - block.downsample.conv.weight.copy_( - _n2p(w[f"{bp}conv_proj/kernel"]) - ) - block.downsample.norm.weight.copy_( - _n2p(w[f"{bp}gn_proj/scale"]) - ) - block.downsample.norm.bias.copy_(_n2p(w[f"{bp}gn_proj/bias"])) - embed_conv_w = _n2p(w[f"{prefix}embedding/kernel"]) - else: - embed_conv_w = adapt_input_conv( - model.patch_embed.proj.weight.shape[1], _n2p(w[f"{prefix}embedding/kernel"]) - ) - model.patch_embed.proj.weight.copy_(embed_conv_w) - model.patch_embed.proj.bias.copy_(_n2p(w[f"{prefix}embedding/bias"])) - model.cls_token.copy_(_n2p(w[f"{prefix}cls"], t=False)) - pos_embed_w = _n2p(w[f"{prefix}Transformer/posembed_input/pos_embedding"], t=False) - if pos_embed_w.shape != model.pos_embed.shape: - pos_embed_w = resize_pos_embed( # resize pos embedding when different size from pretrained weights - pos_embed_w, - model.pos_embed, - getattr(model, "num_tokens", 1), - model.patch_embed.grid_size, - ) - model.pos_embed.copy_(pos_embed_w) - model.norm.weight.copy_(_n2p(w[f"{prefix}Transformer/encoder_norm/scale"])) - model.norm.bias.copy_(_n2p(w[f"{prefix}Transformer/encoder_norm/bias"])) - # if isinstance(model.head, nn.Linear) and model.head.bias.shape[0] == w[f'{prefix}head/bias'].shape[-1]: - # model.head.weight.copy_(_n2p(w[f'{prefix}head/kernel'])) - # model.head.bias.copy_(_n2p(w[f'{prefix}head/bias'])) - # if isinstance(getattr(model.pre_logits, 'fc', None), nn.Linear) and f'{prefix}pre_logits/bias' in w: - # model.pre_logits.fc.weight.copy_(_n2p(w[f'{prefix}pre_logits/kernel'])) - # model.pre_logits.fc.bias.copy_(_n2p(w[f'{prefix}pre_logits/bias'])) - for i, block in enumerate(model.blocks.children()): - block_prefix = f"{prefix}Transformer/encoderblock_{i}/" - mha_prefix = block_prefix + 
"MultiHeadDotProductAttention_1/" - block.norm1.weight.copy_(_n2p(w[f"{block_prefix}LayerNorm_0/scale"])) - block.norm1.bias.copy_(_n2p(w[f"{block_prefix}LayerNorm_0/bias"])) - block.attn.qkv.weight.copy_( - torch.cat( - [ - _n2p(w[f"{mha_prefix}{n}/kernel"], t=False).flatten(1).T - for n in ("query", "key", "value") - ] - ) - ) - block.attn.qkv.bias.copy_( - torch.cat( - [ - _n2p(w[f"{mha_prefix}{n}/bias"], t=False).reshape(-1) - for n in ("query", "key", "value") - ] - ) - ) - block.attn.proj.weight.copy_(_n2p(w[f"{mha_prefix}out/kernel"]).flatten(1)) - block.attn.proj.bias.copy_(_n2p(w[f"{mha_prefix}out/bias"])) - for r in range(2): - getattr(block.mlp, f"fc{r + 1}").weight.copy_( - _n2p(w[f"{block_prefix}MlpBlock_3/Dense_{r}/kernel"]) - ) - getattr(block.mlp, f"fc{r + 1}").bias.copy_( - _n2p(w[f"{block_prefix}MlpBlock_3/Dense_{r}/bias"]) - ) - block.norm2.weight.copy_(_n2p(w[f"{block_prefix}LayerNorm_2/scale"])) - block.norm2.bias.copy_(_n2p(w[f"{block_prefix}LayerNorm_2/bias"])) - - -def resize_pos_embed(posemb, posemb_new, num_tokens=1, gs_new=()): - # Rescale the grid of position embeddings when loading from state_dict. Adapted from - # https://github.com/google-research/vision_transformer/blob/00883dd691c63a6830751563748663526e811cee/vit_jax/checkpoint.py#L224 - print("Resized position embedding: %s to %s", posemb.shape, posemb_new.shape) - ntok_new = posemb_new.shape[1] - if num_tokens: - posemb_tok, posemb_grid = posemb[:, :num_tokens], posemb[0, num_tokens:] - ntok_new -= num_tokens - else: - posemb_tok, posemb_grid = posemb[:, :0], posemb[0] - gs_old = int(math.sqrt(len(posemb_grid))) - if not len(gs_new): # backwards compatibility - gs_new = [int(math.sqrt(ntok_new))] * 2 - assert len(gs_new) >= 2 - print("Position embedding grid-size from %s to %s", [gs_old, gs_old], gs_new) - posemb_grid = posemb_grid.reshape(1, gs_old, gs_old, -1).permute(0, 3, 1, 2) - posemb_grid = F.interpolate( - posemb_grid, size=gs_new, mode="bicubic", align_corners=False - ) - posemb_grid = posemb_grid.permute(0, 2, 3, 1).reshape(1, gs_new[0] * gs_new[1], -1) - posemb = torch.cat([posemb_tok, posemb_grid], dim=1) - return - - -def interpolate_pos_embed(pos_embed_checkpoint, visual_encoder): - # interpolate position embedding - embedding_size = pos_embed_checkpoint.shape[-1] - num_patches = visual_encoder.patch_embed.num_patches - num_extra_tokens = visual_encoder.pos_embed.shape[-2] - num_patches - # height (== width) for the checkpoint position embedding - orig_size = int((pos_embed_checkpoint.shape[-2] - num_extra_tokens) ** 0.5) - # height (== width) for the new position embedding - new_size = int(num_patches**0.5) - - if orig_size != new_size: - # class_token and dist_token are kept unchanged - extra_tokens = pos_embed_checkpoint[:, :num_extra_tokens] - # only the position tokens are interpolated - pos_tokens = pos_embed_checkpoint[:, num_extra_tokens:] - pos_tokens = pos_tokens.reshape( - -1, orig_size, orig_size, embedding_size - ).permute(0, 3, 1, 2) - pos_tokens = torch.nn.functional.interpolate( - pos_tokens, size=(new_size, new_size), mode="bicubic", align_corners=False - ) - pos_tokens = pos_tokens.permute(0, 2, 3, 1).flatten(1, 2) - new_pos_embed = torch.cat((extra_tokens, pos_tokens), dim=1) - print( - "reshape position embedding from %d to %d" % (orig_size**2, new_size**2) - ) - - return new_pos_embed - else: - return pos_embed_checkpoint - - -class VisionTransformerEncoder(VisionTransformer, BaseEncoder): - @classmethod - def from_config(cls, cfg, from_pretrained=False): - - vit_type = 
cfg.get("vit_type", "base") - image_size = cfg.get("image_size", 384) - ckpt_layer = cfg.get("vit_ckpt_layer", 0) - drop_path_rate = cfg.get("vit_drop_path_rate", 0) - norm_layer_eps = cfg.get("vit_layer_norm_epsilon", -1) - use_grad_checkpointing = cfg.get("vit_grad_ckpt", False) - - if norm_layer_eps == -1: - norm_layer = None - else: - norm_layer = partial(nn.LayerNorm, eps=norm_layer_eps) - - # norm_layer=partial(nn.LayerNorm, eps=1e-6), - assert vit_type in ["base", "large"], "vit parameter must be base or large" - if vit_type == "base": - vision_width = 768 - visual_encoder = cls( - img_size=image_size, - patch_size=16, - embed_dim=vision_width, - depth=12, - num_heads=12, - use_grad_checkpointing=use_grad_checkpointing, - ckpt_layer=ckpt_layer, - drop_path_rate=0 or drop_path_rate, - norm_layer=norm_layer, - ) - - if from_pretrained: - checkpoint = torch.hub.load_state_dict_from_url( - url="https://dl.fbaipublicfiles.com/deit/deit_base_patch16_224-b5f2ef4d.pth", - map_location="cpu", - check_hash=True, - ) - state_dict = checkpoint["model"] - state_dict["pos_embed"] = interpolate_pos_embed( - state_dict["pos_embed"], visual_encoder - ) - msg = visual_encoder.load_state_dict(state_dict, strict=False) - - elif vit_type == "large": - vision_width = 1024 - visual_encoder = cls( - img_size=image_size, - patch_size=16, - embed_dim=vision_width, - depth=24, - num_heads=16, - use_grad_checkpointing=use_grad_checkpointing, - ckpt_layer=ckpt_layer, - drop_path_rate=0.1 or drop_path_rate, - norm_layer=norm_layer, - ) - if from_pretrained: - from timm.models.helpers import load_custom_pretrained - from timm.models.vision_transformer import default_cfgs - - load_custom_pretrained( - visual_encoder, default_cfgs["vit_large_patch16_224_in21k"] - ) - - visual_encoder.vision_width = vision_width - return visual_encoder - - def forward_features(self, x, register_blk=-1): - return super().forward(x, register_blk) diff --git a/spaces/mnauf/detect-bees/utils/loggers/wandb/log_dataset.py b/spaces/mnauf/detect-bees/utils/loggers/wandb/log_dataset.py deleted file mode 100644 index 06e81fb693072c99703e5c52b169892b7fd9a8cc..0000000000000000000000000000000000000000 --- a/spaces/mnauf/detect-bees/utils/loggers/wandb/log_dataset.py +++ /dev/null @@ -1,27 +0,0 @@ -import argparse - -from wandb_utils import WandbLogger - -from utils.general import LOGGER - -WANDB_ARTIFACT_PREFIX = 'wandb-artifact://' - - -def create_dataset_artifact(opt): - logger = WandbLogger(opt, None, job_type='Dataset Creation') # TODO: return value unused - if not logger.wandb: - LOGGER.info("install wandb using `pip install wandb` to log the dataset") - - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--data', type=str, default='data/coco128.yaml', help='data.yaml path') - parser.add_argument('--single-cls', action='store_true', help='train as single-class dataset') - parser.add_argument('--project', type=str, default='YOLOv5', help='name of W&B Project') - parser.add_argument('--entity', default=None, help='W&B entity') - parser.add_argument('--name', type=str, default='log dataset', help='name of W&B run') - - opt = parser.parse_args() - opt.resume = False # Explicitly disallow resume check for dataset upload job - - create_dataset_artifact(opt) diff --git a/spaces/mshukor/UnIVAL/fairseq/examples/adaptive_span/adaptive_span_attention.py b/spaces/mshukor/UnIVAL/fairseq/examples/adaptive_span/adaptive_span_attention.py deleted file mode 100644 index 
07f757bb8e1a8a67b1124175ee338c8735aa8d65..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/examples/adaptive_span/adaptive_span_attention.py +++ /dev/null @@ -1,160 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -import math - -import torch -import torch.nn as nn -import torch.nn.functional as F - - -class AdaptiveMask(nn.Module): - """Soft masking function for adaptive size. - It masks out the last K values of an input. The masking value - goes from 1 to 0 gradually, so K can be learned with - back-propagation. - Args: - max_size: maximum size (i.e. input dimension) - ramp_size: size of the ramp going from 0 to 1 - init_val: initial size proportion not to be masked out - shape: learn multiple sizes independent of each other - """ - - def __init__(self, max_size, ramp_size, init_val=0, shape=(1,)): - nn.Module.__init__(self) - self._max_size = max_size - self._ramp_size = ramp_size - self.current_val = nn.Parameter(torch.zeros(*shape) + init_val) - mask_template = torch.linspace(1 - max_size, 0, steps=max_size) - self.register_buffer("mask_template", mask_template) - - def forward(self, x): - mask = self.mask_template.float() + self.current_val.float() * self._max_size - mask = mask / self._ramp_size + 1 - mask = mask.clamp(0, 1) - if x.size(-1) < self._max_size: - # the input could have been trimmed beforehand to save computation - mask = mask.narrow(-1, self._max_size - x.size(-1), x.size(-1)) - x = (x * mask).type_as(x) - return x - - def get_current_max_size(self, include_ramp=True): - current_size = math.ceil(self.current_val.max().item() * self._max_size) - if include_ramp: - current_size += self._ramp_size - current_size = max(0, min(self._max_size, current_size)) - return current_size - - def get_current_avg_size(self, include_ramp=True): - current_size = math.ceil( - self.current_val.float().mean().item() * self._max_size - ) - if include_ramp: - current_size += self._ramp_size - current_size = max(0, min(self._max_size, current_size)) - return current_size - - def clamp_param(self): - """this need to be called after each update""" - self.current_val.data.clamp_(0, 1) - - -class AdaptiveSpan(nn.Module): - """Adaptive attention span for Transformerself. - This module learns an attention span length from data for each - self-attention head. 
- Args: - attn_span: maximum attention span - adapt_span_loss: loss coefficient for the span length - adapt_span_ramp: length of the masking ramp - adapt_span_init: initial size ratio - adapt_span_cache: adapt cache size to reduce memory usage - """ - - def __init__( - self, - attn_span, - adapt_span_ramp, - adapt_span_init, - n_head, - adapt_span_layer, - **kargs - ): - nn.Module.__init__(self) - self._max_span = attn_span - self._n_head = n_head - self._adapt_span_layer = adapt_span_layer - if self._adapt_span_layer: - self._mask = AdaptiveMask( - max_size=self._max_span, - ramp_size=adapt_span_ramp, - init_val=adapt_span_init, - ) - else: - self._mask = AdaptiveMask( - max_size=self._max_span, - ramp_size=adapt_span_ramp, - init_val=adapt_span_init, - shape=(n_head, 1, 1), - ) - - def forward(self, attn, normalize=True): - """mask attention with the right span""" - # batch and head dimensions are merged together, so separate them first - self.clamp_param() - if self._adapt_span_layer: - attn = self._mask(attn) - else: - B = attn.size(0) # batch size - M = attn.size(1) # block size - attn = attn.reshape(B // self._n_head, self._n_head, M, -1) - attn = self._mask(attn) - attn = attn.view(B, M, -1) - return attn - - def get_trim_len(self): - """how much of memory can be trimmed to reduce computation""" - L = self._max_span - trim_len = min(L - 1, L - self._mask.get_current_max_size()) - # too fine granularity might be bad for the memory management - trim_len = math.floor(trim_len / 64) * 64 - return trim_len - - def trim_memory(self, query, key, value, key_pe): - """trim out unnecessary memory beforehand to reduce computation""" - trim_len = self.get_trim_len() - cache_size = key.size(1) - query.size(1) - trim_len_cache = trim_len - (self._max_span - cache_size) - if trim_len_cache > 0: - key = key[:, trim_len_cache:, :] - value = value[:, trim_len_cache:, :] - elif trim_len_cache < 0: - # cache is too short! this happens when validation resumes - # after a lot of updates. - key = F.pad(key, [0, 0, -trim_len_cache, 0]) - value = F.pad(value, [0, 0, -trim_len_cache, 0]) - if trim_len > 0: - if key_pe is not None: - key_pe = key_pe[:, :, trim_len:] - return key, value, key_pe - - def get_cache_size(self): - """determine how long the cache should be""" - trim_len = self.get_trim_len() - # give a buffer of 64 steps since a span might increase - # in future updates - return min(self._max_span, self._max_span - trim_len + 64) - - def get_loss(self): - """a loss term for regularizing the span length""" - return self._max_span * self._mask.current_val.float().mean() - - def get_current_max_span(self): - return self._mask.get_current_max_size() - - def get_current_avg_span(self): - return self._mask.get_current_avg_size() - - def clamp_param(self): - self._mask.clamp_param() diff --git a/spaces/mshukor/UnIVAL/fairseq/examples/simultaneous_translation/modules/fixed_pre_decision.py b/spaces/mshukor/UnIVAL/fairseq/examples/simultaneous_translation/modules/fixed_pre_decision.py deleted file mode 100644 index 3991414aed3800f301e4097e819d3064bb549c37..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/examples/simultaneous_translation/modules/fixed_pre_decision.py +++ /dev/null @@ -1,190 +0,0 @@ -from functools import partial - -import torch -from torch import Tensor -import math -import torch.nn.functional as F - -from . 
import register_monotonic_attention -from .monotonic_multihead_attention import ( - MonotonicAttention, - MonotonicInfiniteLookbackAttention, - WaitKAttention -) -from typing import Dict, Optional - - -def fixed_pooling_monotonic_attention(monotonic_attention): - def create_model(monotonic_attention, klass): - class FixedStrideMonotonicAttention(monotonic_attention): - def __init__(self, args): - self.waitk_lagging = 0 - self.num_heads = 0 - self.noise_mean = 0.0 - self.noise_var = 0.0 - super().__init__(args) - self.pre_decision_type = args.fixed_pre_decision_type - self.pre_decision_ratio = args.fixed_pre_decision_ratio - self.pre_decision_pad_threshold = args.fixed_pre_decision_pad_threshold - assert self.pre_decision_ratio > 1 - - if args.fixed_pre_decision_type == "average": - self.pooling_layer = torch.nn.AvgPool1d( - kernel_size=self.pre_decision_ratio, - stride=self.pre_decision_ratio, - ceil_mode=True, - ) - elif args.fixed_pre_decision_type == "last": - - def last(key): - if key.size(2) < self.pre_decision_ratio: - return key - else: - k = key[ - :, - :, - self.pre_decision_ratio - 1:: self.pre_decision_ratio, - ].contiguous() - if key.size(-1) % self.pre_decision_ratio != 0: - k = torch.cat([k, key[:, :, -1:]], dim=-1).contiguous() - return k - - self.pooling_layer = last - else: - raise NotImplementedError - - @staticmethod - def add_args(parser): - super( - FixedStrideMonotonicAttention, FixedStrideMonotonicAttention - ).add_args(parser) - parser.add_argument( - "--fixed-pre-decision-ratio", - type=int, - required=True, - help=( - "Ratio for the fixed pre-decision," - "indicating how many encoder steps will start" - "simultaneous decision making process." - ), - ) - parser.add_argument( - "--fixed-pre-decision-type", - default="average", - choices=["average", "last"], - help="Pooling type", - ) - parser.add_argument( - "--fixed-pre-decision-pad-threshold", - type=float, - default=0.3, - help="If a part of the sequence has pad" - ",the threshold the pooled part is a pad.", - ) - - def insert_zeros(self, x): - bsz_num_heads, tgt_len, src_len = x.size() - stride = self.pre_decision_ratio - weight = F.pad(torch.ones(1, 1, 1).to(x), (stride - 1, 0)) - x_upsample = F.conv_transpose1d( - x.view(-1, src_len).unsqueeze(1), - weight, - stride=stride, - padding=0, - ) - return x_upsample.squeeze(1).view(bsz_num_heads, tgt_len, -1) - - def p_choose( - self, - query: Optional[Tensor], - key: Optional[Tensor], - key_padding_mask: Optional[Tensor] = None, - incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None, - ): - assert key is not None - assert query is not None - src_len = key.size(0) - tgt_len = query.size(0) - batch_size = query.size(1) - - key_pool = self.pooling_layer(key.transpose(0, 2)).transpose(0, 2) - - if key_padding_mask is not None: - key_padding_mask_pool = ( - self.pooling_layer(key_padding_mask.unsqueeze(0).float()) - .squeeze(0) - .gt(self.pre_decision_pad_threshold) - ) - # Make sure at least one element is not pad - key_padding_mask_pool[:, 0] = 0 - else: - key_padding_mask_pool = None - - if incremental_state is not None: - # The floor instead of ceil is used for inference - # But make sure the length key_pool at least 1 - if ( - max(1, math.floor(key.size(0) / self.pre_decision_ratio)) - ) < key_pool.size(0): - key_pool = key_pool[:-1] - if key_padding_mask_pool is not None: - key_padding_mask_pool = key_padding_mask_pool[:-1] - - p_choose_pooled = self.p_choose_from_qk( - query, - key_pool, - key_padding_mask_pool, - 
incremental_state=incremental_state, - ) - - # Upsample, interpolate zeros - p_choose = self.insert_zeros(p_choose_pooled) - - if p_choose.size(-1) < src_len: - # Append zeros if the upsampled p_choose is shorter than src_len - p_choose = torch.cat( - [ - p_choose, - torch.zeros( - p_choose.size(0), - tgt_len, - src_len - p_choose.size(-1) - ).to(p_choose) - ], - dim=2 - ) - else: - # can be larger than src_len because we used ceil before - p_choose = p_choose[:, :, :src_len] - p_choose[:, :, -1] = p_choose_pooled[:, :, -1] - - assert list(p_choose.size()) == [ - batch_size * self.num_heads, - tgt_len, - src_len, - ] - - return p_choose - - FixedStrideMonotonicAttention.__name__ = klass.__name__ - return FixedStrideMonotonicAttention - - return partial(create_model, monotonic_attention) - - -@register_monotonic_attention("waitk_fixed_pre_decision") -@fixed_pooling_monotonic_attention(WaitKAttention) -class WaitKAttentionFixedStride: - pass - - -@register_monotonic_attention("hard_aligned_fixed_pre_decision") -@fixed_pooling_monotonic_attention(MonotonicAttention) -class MonotonicAttentionFixedStride: - pass - - -@register_monotonic_attention("infinite_lookback_fixed_pre_decision") -@fixed_pooling_monotonic_attention(MonotonicInfiniteLookbackAttention) -class MonotonicInfiniteLookbackAttentionFixedStride: - pass diff --git a/spaces/mshukor/UnIVAL/fairseq/fairseq/models/roberta/model_gottbert.py b/spaces/mshukor/UnIVAL/fairseq/fairseq/models/roberta/model_gottbert.py deleted file mode 100644 index 2e8c66354ac7ce7309226bb091a7baa4776fbfdc..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/fairseq/models/roberta/model_gottbert.py +++ /dev/null @@ -1,49 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -""" -GottBERT: a pure German Language Model -""" - -from fairseq.models import register_model - -from .hub_interface import RobertaHubInterface -from .model import RobertaModel - - -@register_model('gottbert') -class GottbertModel(RobertaModel): - - @classmethod - def hub_models(cls): - return { - 'gottbert-base': 'https://dl.gottbert.de/fairseq/models/gottbert-base.tar.gz', - } - - @classmethod - def from_pretrained(cls, - model_name_or_path, - checkpoint_file='model.pt', - data_name_or_path='.', - bpe='hf_byte_bpe', - bpe_vocab='vocab.json', - bpe_merges='merges.txt', - bpe_add_prefix_space=False, - **kwargs - ): - from fairseq import hub_utils - - x = hub_utils.from_pretrained( - model_name_or_path, - checkpoint_file, - data_name_or_path, - archive_map=cls.hub_models(), - bpe=bpe, - load_checkpoint_heads=True, - bpe_vocab=bpe_vocab, - bpe_merges=bpe_merges, - bpe_add_prefix_space=bpe_add_prefix_space, - **kwargs, - ) - return RobertaHubInterface(x['args'], x['task'], x['models'][0]) diff --git a/spaces/mshukor/UnIVAL/fairseq/tests/test_multihead_attention.py b/spaces/mshukor/UnIVAL/fairseq/tests/test_multihead_attention.py deleted file mode 100644 index 620a2d679147bbbb8d15f3323374a39939686ec2..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/tests/test_multihead_attention.py +++ /dev/null @@ -1,73 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
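# The test below exercises MultiheadAttention._append_prev_key_padding_mask,
# which merges the padding mask of the current decoding step with the mask
# cached from previous incremental steps, zero-padding whichever side is
# missing so the combined mask always spans src_len positions.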
- -import unittest - -import torch -from fairseq.modules.multihead_attention import MultiheadAttention - - -class TestMultiheadAttention(unittest.TestCase): - def test_append_prev_key_padding_mask(self): - bsz = 1 - src_len = 4 - - cases = [ - # no padding mask - (None, None, None), - # current padding mask only - ( - torch.tensor([[1]]).bool(), - None, - torch.tensor([[0, 0, 0, 1]]).bool(), - ), - # previous padding mask only - ( - None, - torch.tensor([[0, 1, 0]]).bool(), - torch.tensor([[0, 1, 0, 0]]).bool(), - ), - # both padding masks - ( - torch.tensor([[1]]).bool(), - torch.tensor([[0, 1, 0]]).bool(), - torch.tensor([[0, 1, 0, 1]]).bool(), - ), - # prev_key_padding_mask already full - ( - torch.tensor([[0, 1, 0, 1]]).bool(), - None, - torch.tensor([[0, 1, 0, 1]]).bool(), - ), - # key_padding_mask already full - ( - None, - torch.tensor([[0, 1, 0, 1]]).bool(), - torch.tensor([[0, 1, 0, 1]]).bool(), - ), - ] - for c in cases: - key_padding_mask = MultiheadAttention._append_prev_key_padding_mask( - c[0], - c[1], - batch_size=bsz, - src_len=src_len, - static_kv=False, - ) - - if key_padding_mask is not None: - self.assertTrue( - torch.all(torch.eq(key_padding_mask, c[2])), - f"Unexpected resultant key padding mask: {key_padding_mask}" - f" given current: {c[0]} and previous: {c[1]}", - ) - self.assertEqual(key_padding_mask.size(0), bsz) - self.assertEqual(key_padding_mask.size(1), src_len) - else: - self.assertIsNone(c[2]) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/mshukor/UnIVAL/slurm_adastra/averaging/ratatouille/scaling_best/vqa/video/video_vqa_ofaplus_base_pretrain_s2_hsep1_shuf_el_db_initvideocaption.sh b/spaces/mshukor/UnIVAL/slurm_adastra/averaging/ratatouille/scaling_best/vqa/video/video_vqa_ofaplus_base_pretrain_s2_hsep1_shuf_el_db_initvideocaption.sh deleted file mode 100644 index ee244d4c83a967f922d1b3cc96106feff9c33140..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/slurm_adastra/averaging/ratatouille/scaling_best/vqa/video/video_vqa_ofaplus_base_pretrain_s2_hsep1_shuf_el_db_initvideocaption.sh +++ /dev/null @@ -1,30 +0,0 @@ -#!/bin/bash - -#SBATCH --job-name=video_vqa_ofaplus_base_pretrain_s2_hsep1_shuf_el_db_initvideocaption -#SBATCH --nodes=2 -#SBATCH --ntasks=2 -#SBATCH --gpus=16 -#SBATCH --threads-per-core=2 -#SBATCH --gpu-bind=closest -####SBATCH --nodelist=x1004c4s1b0n0,x1004c4s1b1n0 -#SBATCH --time=24:00:00 -#SBATCH -C MI250 -#SBATCH -A gda2204 -#SBATCH --mail-type=END,FAIL -#SBATCH --output=/lus/home/NAT/gda2204/mshukor/logs/slurm/video_vqa_ofaplus_base_pretrain_s2_hsep1_shuf_el_db_initvideocaption.out -#SBATCH --exclusive -#SBATCH --mail-user=mustafa.shukor@isir.upmc.fr - - -cd /lus/home/NAT/gda2204/mshukor/code/ofa_ours/run_scripts -source /lus/home/NAT/gda2204/mshukor/.bashrc - -conda activate main - - -rm core-python3* - - -srun -l -N 2 -n 2 -c 128 --gpus=16 --gpu-bind=closest bash averaging/ratatouille/scaling_best/vqa/video/video_vqa_ofaplus_base_pretrain_s2_hsep1_shuf_el_db_initvideocaption.sh - - diff --git a/spaces/mueller-franzes/medfusion-app/scripts/helpers/export_example_gifs.py b/spaces/mueller-franzes/medfusion-app/scripts/helpers/export_example_gifs.py deleted file mode 100644 index 0c752556ede085c0c017c3a18e628114cc4e9932..0000000000000000000000000000000000000000 --- a/spaces/mueller-franzes/medfusion-app/scripts/helpers/export_example_gifs.py +++ /dev/null @@ -1,34 +0,0 @@ - -from pathlib import Path -from PIL import Image -import numpy as np - - - -if __name__ == "__main__": - path_out = 
Path.cwd()/'media/' - path_out.mkdir(parents=True, exist_ok=True) - - # imgs = [] - # for img_i in range(50): - # for label_a, label_b, label_c in [('NRG', 'No_Cardiomegaly', 'nonMSIH'), ('RG', 'Cardiomegaly', 'MSIH')]: - # img_a = Image.open(f'/mnt/hdd/datasets/eye/AIROGS/data_generated_diffusion/{label_a}/fake_{img_i}.png').quantize(200, 0).convert('RGB') - # img_b = Image.open(f'/mnt/hdd/datasets/chest/CheXpert/ChecXpert-v10/generated_diffusion2_150/{label_b}/fake_{img_i}.png').quantize(50, 0).convert('RGB') - # img_c = Image.open(f'/mnt/hdd/datasets/pathology/kather_msi_mss_2/synthetic_data/diffusion2_150/{label_c}/fake_{img_i}.png').resize((256, 256)).quantize(10, 0).convert('RGB') - - # img = Image.fromarray(np.concatenate([np.array(img_a), np.array(img_b), np.array(img_c)], axis=1), 'RGB').quantize(256, 1) - # imgs.append(img) - - # imgs[0].save(fp=path_out/f'animation.gif', format='GIF', append_images=imgs[1:], optimize=False, save_all=True, duration=500, loop=0) - - imgs = [] - path_root = Path('/mnt/hdd/datasets/pathology/kather_msi_mss_2/synthetic_data/diffusion2_150') - for img_i in range(50): - for path_label in path_root.iterdir(): - img = Image.open(path_label/f'fake_{img_i}.png').resize((256, 256)) - imgs.append(img) - - imgs[0].save(fp=path_out/f'animation_histo.gif', format='GIF', append_images=imgs[1:], optimize=False, save_all=True, duration=500, loop=0) - - - \ No newline at end of file diff --git a/spaces/mustapha/ACSR/app.py b/spaces/mustapha/ACSR/app.py deleted file mode 100644 index 20f5e098520d4dd37ead4039b9611aeedc8febff..0000000000000000000000000000000000000000 --- a/spaces/mustapha/ACSR/app.py +++ /dev/null @@ -1,122 +0,0 @@ - -# %% -from cProfile import label -import gradio as gr -import numpy as np -# import random as rn -# import os -import tensorflow as tf -import cv2 - -tf.config.experimental.set_visible_devices([], 'GPU') - -#%% constantes -COLOR = np.array([163, 23, 252])/255.0 -ALPHA = 0.8 - -#%% -def parse_image(image): - image = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY) - image = cv2.resize(image, (100, 100)) - image = image.astype(np.float32) - image = image / 255.0 - image = np.expand_dims(image, axis=0) - image = np.expand_dims(image, axis=-1) - return image - -#%% - -def cnn(input_shape, output_shape): - num_classes = output_shape[0] - dropout_seed = 708090 - kernel_seed = 42 - - model = tf.keras.models.Sequential([ - tf.keras.layers.Conv2D(16, 3, activation='relu', input_shape=input_shape, kernel_initializer=tf.keras.initializers.GlorotUniform(seed=kernel_seed)), - tf.keras.layers.MaxPooling2D(), - tf.keras.layers.Dropout(0.1, seed=dropout_seed), - tf.keras.layers.Conv2D(32, 5, activation='relu', kernel_initializer=tf.keras.initializers.GlorotUniform(seed=kernel_seed)), - tf.keras.layers.MaxPooling2D(), - tf.keras.layers.Dropout(0.1, seed=dropout_seed), - tf.keras.layers.Conv2D(64, 10, activation='relu', kernel_initializer=tf.keras.initializers.GlorotUniform(seed=kernel_seed)), - tf.keras.layers.MaxPooling2D(), - tf.keras.layers.Dropout(0.1, seed=dropout_seed), - tf.keras.layers.Flatten(), - tf.keras.layers.Dense(128, activation='relu', kernel_regularizer='l2', kernel_initializer=tf.keras.initializers.GlorotUniform(seed=kernel_seed)), - tf.keras.layers.Dropout(0.2, seed=dropout_seed), - tf.keras.layers.Dense(16, activation='relu', kernel_regularizer='l2', kernel_initializer=tf.keras.initializers.GlorotUniform(seed=kernel_seed)), - tf.keras.layers.Dropout(0.2, seed=dropout_seed), - tf.keras.layers.Dense(num_classes, activation='sigmoid', 
kernel_initializer=tf.keras.initializers.GlorotUniform(seed=kernel_seed)) - ]) - - return model - -#%% -model = cnn((100, 100, 1), (1,)) -model.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=False), optimizer='Adam', metrics='accuracy') - -model.load_weights('weights.h5') - -#%% - -def saliency_map(img): - """ - return the normalized gradients overs the image, and also the prediction of the model - """ - inp = tf.convert_to_tensor( - img[None, :, :, None], - dtype = tf.float32 - ) - inp_var = tf.Variable(inp) - - with tf.GradientTape() as tape: - pred = model(inp_var, training=False) - loss = pred[0][0] - grads = tape.gradient(loss, inp_var) - grads = tf.math.abs(grads) / (tf.math.reduce_max(tf.math.abs(grads))+1e-14) - return grads, round(float(model(inp_var, training = False))) - -#%% -def segment(image): - # c = image - print(image.shape) - image = parse_image(image) - print(image.shape) - output = model.predict(image) - # print(output) - labels = { - "Farsi" : 1-float(output), - "Ruqaa" : float(output) - } - grads, _ = saliency_map(image[0, :, :, 0]) - s_map = grads.numpy()[0, :, :, 0] - reconstructed_image = cv2.cvtColor(image.squeeze(0), cv2.COLOR_GRAY2RGB) - for i in range(reconstructed_image.shape[0]): - for j in range(reconstructed_image.shape[1]): - reconstructed_image[i, j, :] = reconstructed_image[i, j, :] * (1-ALPHA) + s_map[i, j]* COLOR * ALPHA - # reconstructed_image = reconstructed_image.astype(np.uint8) - V = reconstructed_image - # print("i shape:", i.shape) - # print("type(i):", type(i)) - return labels, reconstructed_image - -iface = gr.Interface(fn=segment, - description=""" - This is an Arab Calligraphy Style Recognition. - This model predicts the style (binary classification) of the image. - The model also outputs the Saliency map. - """, - inputs="image", - outputs=[ - gr.outputs.Label(num_top_classes=2, label="Style"), - gr.outputs.Image(label = "Saliency map") - ], - examples=[["images/Farsi_1.jpg"], - ["images/Farsi_2.jpg"], - ["images/real_Farsi.jpg"], - ["images/Ruqaa_1.jpg"], - ["images/Ruqaa_2.jpg"], - ["images/Ruqaa_3.jpg"], - ["images/real_Ruqaa.jpg"], - ]).launch() -# %% diff --git a/spaces/mylesai/mylesAI_test/README.md b/spaces/mylesai/mylesAI_test/README.md deleted file mode 100644 index 81be741fe90d953772a98e4baea5275fa5b9a8be..0000000000000000000000000000000000000000 --- a/spaces/mylesai/mylesAI_test/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Dropshipt Test -emoji: 🏢 -colorFrom: red -colorTo: pink -sdk: gradio -sdk_version: 3.43.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/nakas/MusicGenDemucs/audiocraft/modules/transformer.py b/spaces/nakas/MusicGenDemucs/audiocraft/modules/transformer.py deleted file mode 100644 index e69cca829d774d0b8b36c0de9b7924373da81b43..0000000000000000000000000000000000000000 --- a/spaces/nakas/MusicGenDemucs/audiocraft/modules/transformer.py +++ /dev/null @@ -1,747 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Transformer model, with streaming support, xformer attention support -and easy causal attention with a potentially finite receptive field. - -See `StreamingTransformer` for more information. - -Unlike regular PyTorch Transformer, we make the hard choice that batches are first. 
-""" - -import typing as tp - -from einops import rearrange -import torch -import torch.nn as nn -from torch.nn import functional as F -from torch.utils.checkpoint import checkpoint as torch_checkpoint -from xformers import ops - -from .rope import RotaryEmbedding -from .streaming import StreamingModule - -_efficient_attention_backend: str = 'torch' - - -def set_efficient_attention_backend(backend: str = 'torch'): - # Using torch by default, it seems a bit faster on older P100 GPUs (~20% faster). - global _efficient_attention_backend - assert _efficient_attention_backend in ['xformers', 'torch'] - _efficient_attention_backend = backend - - -def _get_attention_time_dimension() -> int: - if _efficient_attention_backend == 'torch': - return 2 - else: - return 1 - - -def _is_profiled() -> bool: - # Return true if we are currently running with a xformers profiler activated. - try: - from xformers.profiler import profiler - except ImportError: - return False - return profiler._Profiler._CURRENT_PROFILER is not None - - -def create_norm_fn(norm_type: str, dim: int, **kwargs) -> nn.Module: - """Create normalization module for transformer encoder layer. - - Args: - norm_type (str): Normalization method. - dim (int): Dimension of the normalized layer. - **kwargs (dict): Additional parameters for normalization layer. - Returns: - nn.Module: Normalization module. - """ - if norm_type == 'layer_norm': - return nn.LayerNorm(dim, eps=1e-5, **kwargs) - else: - raise ValueError(f"Unknown norm type: {norm_type}") - - -def create_sin_embedding(positions: torch.Tensor, dim: int, max_period: float = 10000, - dtype: torch.dtype = torch.float32) -> torch.Tensor: - """Create sinusoidal positional embedding, with shape `[B, T, C]`. - - Args: - positions (torch.Tensor): LongTensor of positions. - dim (int): Dimension of the embedding. - max_period (float): Maximum period of the cosine/sine functions. - dtype (torch.dtype or str): dtype to use to generate the embedding. - Returns: - torch.Tensor: Sinusoidal positional embedding. - """ - # We aim for BTC format - assert dim % 2 == 0 - half_dim = dim // 2 - positions = positions.to(dtype) - adim = torch.arange(half_dim, device=positions.device, dtype=dtype).view(1, 1, -1) - max_period_tensor = torch.full([], max_period, device=positions.device, dtype=dtype) # avoid sync point - phase = positions / (max_period_tensor ** (adim / (half_dim - 1))) - return torch.cat([torch.cos(phase), torch.sin(phase)], dim=-1) - - -def expand_repeated_kv(x: torch.Tensor, n_rep: int) -> torch.Tensor: - """torch.repeat_interleave(x, dim=2, repeats=n_rep) from xlformers""" - if n_rep == 1: - return x - if _efficient_attention_backend == 'torch': - bs, n_kv_heads, slen, head_dim = x.shape - return ( - x[:, :, None, :, :] - .expand(bs, n_kv_heads, n_rep, slen, head_dim) - .reshape(bs, n_kv_heads * n_rep, slen, head_dim) - ) - else: - bs, slen, n_kv_heads, head_dim = x.shape - return ( - x[:, :, :, None, :] - .expand(bs, slen, n_kv_heads, n_rep, head_dim) - .reshape(bs, slen, n_kv_heads * n_rep, head_dim) - ) - - -class LayerScale(nn.Module): - """Layer scale from [Touvron et al 2021] (https://arxiv.org/pdf/2103.17239.pdf). - This rescales diagonaly the residual outputs close to 0, with a learnt scale. - - Args: - channels (int): Number of channels. - init (float): Initial scale. - channel_last (bool): If True, expect `[*, C]` shaped tensors, otherwise, `[*, C, T]`. - device (torch.device or None): Device on which to initialize the module. 
- dtype (torch.dtype or None): dtype to use to initialize the module. - """ - def __init__(self, channels: int, init: float = 1e-4, channel_last: bool = True, - device=None, dtype=None): - super().__init__() - self.channel_last = channel_last - self.scale = nn.Parameter( - torch.full((channels,), init, - requires_grad=True, device=device, dtype=dtype)) - - def forward(self, x: torch.Tensor): - if self.channel_last: - return self.scale * x - else: - return self.scale[:, None] * x - - -class StreamingMultiheadAttention(StreamingModule): - """Similar to `nn.MultiheadAttention` but with support for streaming, causal evaluation. - - Args: - embed_dim (int): Dimension to project to. - num_heads (int): Number of heads. - dropout (float): Dropout level. - bias (bool): Use bias in projections. - causal (bool): Causal mask applied automatically. - past_context (int or None): Receptive field for the causal mask, infinite if None. - custom (bool): Use custom MHA implementation, for testing / benchmarking. - memory_efficient (bool): Use xformers based memory efficient attention. - attention_as_float32 (bool): Perform the attention as float32 - (especially important with memory_efficient as autocast won't do this automatically). - rope (`RotaryEmbedding` or None): Rope embedding to use. - cross_attention: Should be true when used as a cross attention. - All keys and values must be available at once, streaming is only for the queries. - Cannot be used with `causal` or `rope` (as it wouldn't make sens to - intepret the time steps in the keys relative to those in the queries). - safe_streaming (bool): Bug fix, will go away with xformers update. - qk_layer_norm (bool): Layer normalization applied to queries and keys before dot product. - kv_repeat (int): If > 1, will repeat keys and queries multiple times (need to divide num_heads). - This will lead to faster decoding time on A100 or other GPUs with tensorcore. - device (torch.device or None): Sevice on which to initialize. - dtype (torch.dtype or None): dtype to use. - """ - def __init__(self, embed_dim: int, num_heads: int, dropout: float = 0.0, bias: bool = True, - causal: bool = False, past_context: tp.Optional[int] = None, custom: bool = False, - memory_efficient: bool = False, attention_as_float32: bool = False, - rope: tp.Optional[RotaryEmbedding] = None, cross_attention: bool = False, - safe_streaming: bool = True, qk_layer_norm: bool = False, kv_repeat: int = 1, - device=None, dtype=None): - super().__init__() - factory_kwargs = {'device': device, 'dtype': dtype} - if past_context is not None: - assert causal - - self.embed_dim = embed_dim - self.causal = causal - self.past_context = past_context - self.memory_efficient = memory_efficient - self.attention_as_float32 = attention_as_float32 - self.rope = rope - self.cross_attention = cross_attention - self.safe_streaming = safe_streaming - self.num_heads = num_heads - self.dropout = dropout - self.kv_repeat = kv_repeat - if cross_attention: - assert not causal, "Causal cannot work with cross attention." - assert rope is None, "Rope cannot work with cross attention." 
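        # Note on the custom path configured below: instead of
        # nn.MultiheadAttention it builds a single fused in-projection holding
        # embed_dim query features plus 2 * (embed_dim // num_heads) *
        # (num_heads // kv_repeat) key/value features; with kv_repeat > 1 the
        # reduced key/value heads are re-expanded by expand_repeated_kv()
        # inside forward().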
- - if memory_efficient: - _verify_xformers_memory_efficient_compat() - - self.custom = _is_custom(custom, memory_efficient) - if self.custom: - out_dim = embed_dim - assert num_heads % kv_repeat == 0 - assert not cross_attention or kv_repeat == 1 - num_kv = num_heads // kv_repeat - kv_dim = (embed_dim // num_heads) * num_kv - out_dim += 2 * kv_dim - in_proj = nn.Linear(embed_dim, out_dim, bias=bias, **factory_kwargs) - # We try to follow the default PyTorch MHA convention, to easily compare results. - self.in_proj_weight = in_proj.weight - self.in_proj_bias = in_proj.bias - if bias: - self.in_proj_bias.data.zero_() # Following Pytorch convention - self.out_proj = nn.Linear(embed_dim, embed_dim, bias=bias, **factory_kwargs) - if bias: - self.out_proj.bias.data.zero_() - else: - assert not qk_layer_norm - assert kv_repeat == 1 - self.mha = nn.MultiheadAttention( - embed_dim, num_heads, dropout=dropout, bias=bias, batch_first=True, - **factory_kwargs) - self.qk_layer_norm = qk_layer_norm - if qk_layer_norm: - assert self.custom - assert kv_repeat == 1 - ln_dim = embed_dim - self.q_layer_norm = nn.LayerNorm(ln_dim) - self.k_layer_norm = nn.LayerNorm(ln_dim) - - def _load_from_state_dict(self, state_dict, prefix, *args, **kwargs): - if not self.custom: - # Support compat with regular MHA - keys = [n for n, _ in self.mha.named_parameters()] - for key in keys: - if prefix + key in state_dict: - state_dict[prefix + "mha." + key] = state_dict.pop(prefix + key) - super()._load_from_state_dict(state_dict, prefix, *args, **kwargs) - - def _get_mask(self, current_steps: int, device: torch.device, dtype: torch.dtype): - # Return a causal mask, accounting for potentially stored past keys/values - # We actually return a bias for the attention score, as this has the same - # convention both in the builtin MHA in Pytorch, and Xformers functions. - time_dim = _get_attention_time_dimension() - if self.memory_efficient: - from xformers.ops import LowerTriangularMask - if current_steps == 1: - # If we only have one step, then we do not need a mask. - return None - elif 'past_keys' in self._streaming_state: - raise RuntimeError('Not supported at the moment') - else: - # Then we can safely use a lower triangular mask - return LowerTriangularMask() - if self._streaming_state: - past_keys = self._streaming_state['past_keys'] - past_steps = past_keys.shape[time_dim] - else: - past_steps = 0 - - queries_pos = torch.arange( - past_steps, current_steps + past_steps, device=device).view(-1, 1) - keys_pos = torch.arange(past_steps + current_steps, device=device).view(1, -1) - delta = queries_pos - keys_pos - valid = delta >= 0 - if self.past_context is not None: - valid &= (delta <= self.past_context) - return torch.where( - valid, - torch.zeros([], device=device, dtype=dtype), - torch.full([], float('-inf'), device=device, dtype=dtype)) - - def _complete_kv(self, k, v): - time_dim = _get_attention_time_dimension() - if self.cross_attention: - # With cross attention we assume all keys and values - # are already available, and streaming is with respect - # to the queries only. - return k, v - # Complete the key/value pair using the streaming state. 
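        # New keys/values are appended to the cached ones along the time
        # dimension; when past_context is set, only the most recent
        # past_context steps are kept and `offset` accumulates how many
        # steps have been dropped from the front of the cache.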
- if self._streaming_state: - pk = self._streaming_state['past_keys'] - nk = torch.cat([pk, k], dim=time_dim) - if v is k: - nv = nk - else: - pv = self._streaming_state['past_values'] - nv = torch.cat([pv, v], dim=time_dim) - else: - nk = k - nv = v - - assert nk.shape[time_dim] == nv.shape[time_dim] - offset = 0 - if self.past_context is not None: - offset = max(0, nk.shape[time_dim] - self.past_context) - if self._is_streaming: - self._streaming_state['past_keys'] = nk[:, offset:] - if v is not k: - self._streaming_state['past_values'] = nv[:, offset:] - if 'offset' in self._streaming_state: - self._streaming_state['offset'] += offset - else: - self._streaming_state['offset'] = torch.tensor(0) - return nk, nv - - def _apply_rope(self, query: torch.Tensor, key: torch.Tensor): - # TODO: fix and verify layout. - assert _efficient_attention_backend == 'xformers', 'Rope not supported with torch attn.' - # Apply rope embeddings to query and key tensors. - assert self.rope is not None - if 'past_keys' in self._streaming_state: - past_keys_offset = self._streaming_state['past_keys'].shape[1] - else: - past_keys_offset = 0 - if 'offset' in self._streaming_state: - past_context_offset = int(self._streaming_state['offset'].item()) - else: - past_context_offset = 0 - streaming_offset = past_context_offset + past_keys_offset - return self.rope.rotate_qk(query, key, start=streaming_offset) - - def forward(self, query: torch.Tensor, key: torch.Tensor, value: torch.Tensor, - key_padding_mask=None, need_weights=False, attn_mask=None, - average_attn_weights=True, is_causal=False): - assert attn_mask is None - assert not is_causal, ("new param added in torch 2.0.1 not supported, " - "use the causal args in the constructor.") - - time_dim = _get_attention_time_dimension() - if time_dim == 2: - layout = "b h t d" - else: - layout = "b t h d" - dtype = query.dtype - if self._is_streaming: - assert self.causal or self.cross_attention, \ - "Streaming only available for causal or cross attention" - - if self.causal: - # At the moment we specialize only for the self-attention case. - assert query.shape[1] == key.shape[1], "Causal only for same length query / key / value" - assert value.shape[1] == key.shape[1], "Causal only for same length query / key / value" - attn_mask = self._get_mask(query.shape[1], query.device, query.dtype) - - if self.custom: - # custom implementation - assert need_weights is False - assert key_padding_mask is None - if self.cross_attention: - # Different queries, keys, values, we have to spit manually the weights - # before applying the linear. - dim = self.in_proj_weight.shape[0] // 3 - if self.in_proj_bias is None: - bias_q, bias_k, bias_v = None, None, None - else: - bias_q = self.in_proj_bias[:dim] - bias_k = self.in_proj_bias[dim: 2 * dim] - bias_v = self.in_proj_bias[2 * dim:] - q = nn.functional.linear(query, self.in_proj_weight[:dim], bias_q) - # todo: when streaming, we could actually save k, v and check the shape actually match. - k = nn.functional.linear(key, self.in_proj_weight[dim: 2 * dim], bias_k) - v = nn.functional.linear(value, self.in_proj_weight[2 * dim:], bias_v) - if self.qk_layer_norm is True: - q = self.q_layer_norm(q) - k = self.k_layer_norm(k) - q, k, v = [rearrange(x, f"b t (h d) -> {layout}", h=self.num_heads) for x in [q, k, v]] - else: - if not _is_profiled(): - # profiling breaks that propertysomehow. 
- assert query is key, "specialized implementation" - assert value is key, "specialized implementation" - projected = nn.functional.linear(query, self.in_proj_weight, self.in_proj_bias) - if self.kv_repeat == 1: - if time_dim == 2: - bound_layout = "b h p t d" - else: - bound_layout = "b t p h d" - packed = rearrange(projected, f"b t (p h d) -> {bound_layout}", p=3, h=self.num_heads) - q, k, v = ops.unbind(packed, dim=2) - else: - embed_dim = self.embed_dim - per_head_dim = (embed_dim // self.num_heads) - kv_heads = self.num_heads // self.kv_repeat - q = projected[:, :, :embed_dim] - start = embed_dim - end = start + per_head_dim * kv_heads - k = projected[:, :, start: end] - v = projected[:, :, end:] - q = rearrange(q, f"b t (h d) -> {layout}", h=self.num_heads) - k = rearrange(k, f"b t (h d) -> {layout}", h=kv_heads) - v = rearrange(v, f"b t (h d) -> {layout}", h=kv_heads) - - if self.qk_layer_norm is True: - assert self.kv_repeat == 1 - q, k = [rearrange(x, f"{layout} -> b t (h d)") for x in [q, k]] - q = self.q_layer_norm(q) - k = self.k_layer_norm(k) - q, k = [rearrange(x, f"b t (h d) -> {layout}", h=self.num_heads) for x in [q, k]] - if self.rope: - q, k = self._apply_rope(q, k) - k, v = self._complete_kv(k, v) - if self.kv_repeat > 1: - k = expand_repeated_kv(k, self.kv_repeat) - v = expand_repeated_kv(v, self.kv_repeat) - if self.attention_as_float32: - q, k, v = [x.float() for x in [q, k, v]] - if self.memory_efficient: - p = self.dropout if self.training else 0 - if _efficient_attention_backend == 'torch': - x = torch.nn.functional.scaled_dot_product_attention( - q, k, v, is_causal=attn_mask is not None, dropout_p=p) - else: - x = ops.memory_efficient_attention(q, k, v, attn_mask, p=p) - else: - # We include the dot product as float32, for consistency - # with the other implementations that include that step - # as part of the attention. Note that when using `autocast`, - # the einsums would be done as bfloat16, but the softmax - # would be done as bfloat16, so `attention_as_float32` will - # extend a bit the range of operations done in float32, - # although this should make no difference. - q = q / q.shape[-1] ** 0.5 - key_layout = layout.replace('t', 'k') - query_layout = layout - if self._is_streaming and self.safe_streaming and q.device.type == 'cuda': - with torch.autocast(device_type=q.device.type, dtype=torch.float32): - pre_w = torch.einsum(f"{query_layout},{key_layout}-> b h t k", q, k) - else: - pre_w = torch.einsum(f"{query_layout},{key_layout}-> b h t k", q, k) - if attn_mask is not None: - pre_w = pre_w + attn_mask - w = torch.softmax(pre_w, dim=-1) - w = F.dropout(w, self.dropout, training=self.training).to(v) - # Key and value have the same format. - x = torch.einsum(f"b h t k, {key_layout} -> {layout}", w, v) - x = x.to(dtype) - x = rearrange(x, f"{layout} -> b t (h d)", h=self.num_heads) - x = self.out_proj(x) - else: - key, value = self._complete_kv(key, value) - if self.attention_as_float32: - query, key, value = [x.float() for x in [query, key, value]] - x, _ = self.mha( - query, key, value, key_padding_mask, - need_weights, attn_mask, average_attn_weights) - x = x.to(dtype) - - return x, None - - -class StreamingTransformerLayer(nn.TransformerEncoderLayer): - """TransformerLayer with Streaming / Causal support. - This also integrates cross_attention, when passing `cross_attention=True`, - rather than having two separate classes like in PyTorch. - - Args: - d_model (int): Dimension of the data. - num_heads (int): Number of heads. 
- dim_feedforward (int): Intermediate dimension of FF module. - dropout (float): Dropout both for MHA and FF. - bias_ff (bool): Use bias for FF. - bias_attn (bool): Use bias for MHA. - causal (bool): Causal mask applied automatically. - past_context (int or None): Receptive field for the causal mask, infinite if None. - custom (bool): Use custom MHA implementation, for testing / benchmarking. - memory_efficient (bool): Use xformers based memory efficient attention. - attention_as_float32 (bool): Perform the attention as float32 - (especially important with memory_efficient as autocast won't do this automatically). - qk_layer_norm (bool): Layer normalization applied to queries and keys before dot product in attention. - qk_layer_norm_cross (bool): Same for the cross attention. - cross_attention (bool): If True, expect to get secondary input for cross-attention. - Cross attention will use the default MHA, as it typically won't require - special treatment. - layer_scale (float or None): If not None, LayerScale will be used with - the given value as initial scale. - rope (`RotaryEmbedding` or None): Rope embedding to use. - attention_dropout (float or None): If not None, separate the value of the dimension dropout - in FFN and of the attention dropout. - kv_repeat (int): If > 1, will repeat keys and queries multiple times (need to divide num_heads). - This will lead to faster decoding time on A100 or other GPUs with tensorcore. - device (torch.device or None): Device on which to initialize. - dtype (torch.dtype or None): dtype to use. - **kwargs: See `nn.TransformerEncoderLayer`. - """ - def __init__(self, d_model: int, num_heads: int, dim_feedforward: int = 2048, dropout: float = 0.1, - bias_ff: bool = True, bias_attn: bool = True, causal: bool = False, - past_context: tp.Optional[int] = None, custom: bool = False, - memory_efficient: bool = False, attention_as_float32: bool = False, - qk_layer_norm: bool = False, qk_layer_norm_cross: bool = False, - cross_attention: bool = False, layer_scale: tp.Optional[float] = None, - rope: tp.Optional[RotaryEmbedding] = None, attention_dropout: tp.Optional[float] = None, - kv_repeat: int = 1, norm: str = 'layer_norm', device=None, dtype=None, **kwargs): - super().__init__(d_model, num_heads, dim_feedforward, dropout, - device=device, dtype=dtype, batch_first=True, **kwargs) - factory_kwargs = {'device': device, 'dtype': dtype} - # Redefine self_attn to our streaming multi-head attention - attn_kwargs: tp.Dict[str, tp.Any] = { - 'embed_dim': d_model, - 'num_heads': num_heads, - 'dropout': dropout if attention_dropout is None else attention_dropout, - 'bias': bias_attn, - 'custom': custom, - 'memory_efficient': memory_efficient, - 'attention_as_float32': attention_as_float32, - } - self.self_attn: StreamingMultiheadAttention = StreamingMultiheadAttention( - causal=causal, past_context=past_context, rope=rope, qk_layer_norm=qk_layer_norm, - kv_repeat=kv_repeat, **attn_kwargs, **factory_kwargs) # type: ignore - # Redefine feedforward layers to expose bias parameter - self.linear1 = nn.Linear(d_model, dim_feedforward, bias=bias_ff, **factory_kwargs) - self.linear2 = nn.Linear(dim_feedforward, d_model, bias=bias_ff, **factory_kwargs) - - self.layer_scale_1: nn.Module - self.layer_scale_2: nn.Module - if layer_scale is None: - self.layer_scale_1 = nn.Identity() - self.layer_scale_2 = nn.Identity() - else: - self.layer_scale_1 = LayerScale(d_model, layer_scale, **factory_kwargs) - self.layer_scale_2 = LayerScale(d_model, layer_scale, **factory_kwargs) - - 
self.cross_attention: tp.Optional[nn.Module] = None - if cross_attention: - self.cross_attention = StreamingMultiheadAttention( - cross_attention=True, qk_layer_norm=qk_layer_norm_cross, - **attn_kwargs, **factory_kwargs) - # Norm and dropout - self.dropout_cross = nn.Dropout(dropout) - # eps value matching that used in PyTorch reference implementation. - self.norm_cross = nn.LayerNorm(d_model, eps=1e-5, **factory_kwargs) - self.layer_scale_cross: nn.Module - if layer_scale is None: - self.layer_scale_cross = nn.Identity() - else: - self.layer_scale_cross = LayerScale(d_model, layer_scale, **factory_kwargs) - self.norm1 = create_norm_fn(norm, d_model, **factory_kwargs) # type: ignore - self.norm2 = create_norm_fn(norm, d_model, **factory_kwargs) # type: ignore - - def _cross_attention_block(self, src: torch.Tensor, - cross_attention_src: torch.Tensor) -> torch.Tensor: - assert self.cross_attention is not None - # queries are from src, keys and values from cross_attention_src. - x = self.cross_attention( - src, cross_attention_src, cross_attention_src, need_weights=False)[0] - return self.dropout_cross(x) # type: ignore - - def forward(self, src: torch.Tensor, src_mask: tp.Optional[torch.Tensor] = None, # type: ignore - src_key_padding_mask: tp.Optional[torch.Tensor] = None, - cross_attention_src: tp.Optional[torch.Tensor] = None): - if self.cross_attention is None: - assert cross_attention_src is None - else: - assert cross_attention_src is not None - x = src - if self.norm_first: - x = x + self.layer_scale_1( - self._sa_block(self.norm1(x), src_mask, src_key_padding_mask)) - if cross_attention_src is not None: - x = x + self.layer_scale_cross( - self._cross_attention_block( - self.norm_cross(x), cross_attention_src)) - x = x + self.layer_scale_2(self._ff_block(self.norm2(x))) - else: - x = self.norm1(x + self.layer_scale_1( - self._sa_block(x, src_mask, src_key_padding_mask))) - if cross_attention_src is not None: - x = self.norm_cross( - x + self.layer_scale_cross( - self._cross_attention_block(src, cross_attention_src))) - x = self.norm2(x + self.layer_scale_2(self._ff_block(x))) - return x - - -class StreamingTransformer(StreamingModule): - """Transformer with Streaming / Causal support. - - Args: - d_model (int): Dimension of the data. - num_heads (int): Number of heads. - dim_feedforward (int): Intermediate dimension of FF module. - dropout (float): Dropout both for MHA and FF. - bias_ff (bool): Use bias for FF. - bias_attn (bool): Use bias for MHA. - causal (bool): Causal mask applied automatically. - past_context (int or None): Receptive field for the causal mask, infinite if None. - custom (bool): Use custom MHA implementation, for testing / benchmarking. - memory_efficient (bool): Use xformers based memory efficient attention. - attention_as_float32 (bool): Perform the attention as float32 - (especially important with memory_efficient as autocast won't do this automatically). - cross_attention (bool): If True, expect to get secondary input for cross-attention. - layer_scale (float or None): If not None, LayerScale will be used - with the given value as initial scale. - positional_embedding (str): Positional embedding strategy (sin, rope, or sin_rope). - max_period (float): Maximum period of the time embedding. - positional_scale (float): Scale of positional embedding, set to 0 to deactivate. - xpos (bool): Apply xpos exponential decay to positional embedding (rope only). - lr (float or None): learning rate override through the `make_optim_group` API. 
- weight_decay (float or None): Weight_decay override through the `make_optim_group` API. - layer_class: (subclass of `StreamingTransformerLayer): class to use - to initialize the layers, allowing further customization outside of Audiocraft. - checkpointing (str): Checkpointing strategy to reduce memory usage. - No checkpointing if set to 'none'. Per layer checkpointing using PyTorch - if set to 'torch' (entire layer checkpointed, i.e. linears are evaluated twice, - minimal memory usage, but maximal runtime). Finally, `xformers_default` provide - a policy for opting-out some operations of the checkpointing like - linear layers and attention, providing a middle ground between speed and memory. - device (torch.device or None): Device on which to initialize. - dtype (torch.dtype or None): dtype to use. - **kwargs: See `nn.TransformerEncoderLayer`. - """ - def __init__(self, d_model: int, num_heads: int, num_layers: int, dim_feedforward: int = 2048, - dropout: float = 0.1, bias_ff: bool = True, bias_attn: bool = True, - causal: bool = False, past_context: tp.Optional[int] = None, - custom: bool = False, memory_efficient: bool = False, attention_as_float32: bool = False, - cross_attention: bool = False, layer_scale: tp.Optional[float] = None, - positional_embedding: str = 'sin', max_period: float = 10_000, positional_scale: float = 1., - xpos: bool = False, lr: tp.Optional[float] = None, weight_decay: tp.Optional[float] = None, - layer_class: tp.Type[StreamingTransformerLayer] = StreamingTransformerLayer, - checkpointing: str = 'none', device=None, dtype=None, **kwargs): - super().__init__() - assert d_model % num_heads == 0 - - self.positional_embedding = positional_embedding - self.max_period = max_period - self.positional_scale = positional_scale - self.weight_decay = weight_decay - self.lr = lr - - assert positional_embedding in ['sin', 'rope', 'sin_rope'] - self.rope: tp.Optional[RotaryEmbedding] = None - if self.positional_embedding in ['rope', 'sin_rope']: - assert _is_custom(custom, memory_efficient) - self.rope = RotaryEmbedding(d_model // num_heads, max_period=max_period, - xpos=xpos, scale=positional_scale, device=device) - - self.checkpointing = checkpointing - - assert checkpointing in ['none', 'torch', 'xformers_default', 'xformers_mm'] - if self.checkpointing.startswith('xformers'): - _verify_xformers_internal_compat() - - self.layers = nn.ModuleList() - for idx in range(num_layers): - self.layers.append( - layer_class( - d_model=d_model, num_heads=num_heads, dim_feedforward=dim_feedforward, - dropout=dropout, bias_ff=bias_ff, bias_attn=bias_attn, - causal=causal, past_context=past_context, custom=custom, - memory_efficient=memory_efficient, attention_as_float32=attention_as_float32, - cross_attention=cross_attention, layer_scale=layer_scale, rope=self.rope, - device=device, dtype=dtype, **kwargs)) - - if self.checkpointing != 'none': - for layer in self.layers: - # see audiocraft/optim/fsdp.py, magic signal to indicate this requires fixing the - # backward hook inside of FSDP... 
- layer._magma_checkpointed = True # type: ignore - assert layer.layer_drop == 0., "Need further checking" # type: ignore - - def _apply_layer(self, layer, *args, **kwargs): - method = self.checkpointing - if method == 'none': - return layer(*args, **kwargs) - elif method == 'torch': - return torch_checkpoint(layer, *args, use_reentrant=False, **kwargs) - elif method.startswith('xformers'): - from xformers.checkpoint_fairinternal import checkpoint, _get_default_policy - if method == 'xformers_default': - # those operations will be saved, and not recomputed. - # According to Francisco we can get smarter policies but this is a good start. - allow_list = [ - "xformers.efficient_attention_forward_cutlass.default", - "xformers_flash.flash_fwd.default", - "aten.addmm.default", - "aten.mm.default", - ] - elif method == 'xformers_mm': - # those operations will be saved, and not recomputed. - # According to Francisco we can get smarter policies but this is a good start. - allow_list = [ - "aten.addmm.default", - "aten.mm.default", - ] - else: - raise ValueError(f"xformers checkpointing xformers policy {method} is not known.") - policy_fn = _get_default_policy(allow_list) - return checkpoint(layer, *args, policy_fn=policy_fn, **kwargs) - else: - raise ValueError(f"Checkpointing method {method} is unknown.") - - def forward(self, x: torch.Tensor, *args, **kwargs): - B, T, C = x.shape - - if 'offsets' in self._streaming_state: - offsets = self._streaming_state['offsets'] - else: - offsets = torch.zeros(B, dtype=torch.long, device=x.device) - - if self.positional_embedding in ['sin', 'sin_rope']: - positions = torch.arange(T, device=x.device).view(1, -1, 1) - positions = positions + offsets.view(-1, 1, 1) - pos_emb = create_sin_embedding(positions, C, max_period=self.max_period, dtype=x.dtype) - x = x + self.positional_scale * pos_emb - - for layer in self.layers: - x = self._apply_layer(layer, x, *args, **kwargs) - - if self._is_streaming: - self._streaming_state['offsets'] = offsets + T - - return x - - def make_optim_group(self): - group = {"params": list(self.parameters())} - if self.lr is not None: - group["lr"] = self.lr - if self.weight_decay is not None: - group["weight_decay"] = self.weight_decay - return group - - -# special attention attention related function - -def _verify_xformers_memory_efficient_compat(): - try: - from xformers.ops import memory_efficient_attention, LowerTriangularMask # noqa - except ImportError: - raise ImportError( - "xformers is not installed. Please install it and try again.\n" - "To install on AWS and Azure, run \n" - "FORCE_CUDA=1 TORCH_CUDA_ARCH_LIST='8.0'\\\n" - "pip install -U git+https://git@github.com/fairinternal/xformers.git#egg=xformers\n" - "To install on FAIR Cluster, run \n" - "FORCE_CUDA=1 TORCH_CUDA_ARCH_LIST='6.0;7.0'\\\n" - "pip install -U git+https://git@github.com/fairinternal/xformers.git#egg=xformers\n") - - -def _verify_xformers_internal_compat(): - try: - from xformers.checkpoint_fairinternal import checkpoint, _get_default_policy # noqa - except ImportError: - raise ImportError( - "Francisco's fairinternal xformers is not installed. 
Please install it and try again.\n" - "To install on AWS and Azure, run \n" - "FORCE_CUDA=1 TORCH_CUDA_ARCH_LIST='8.0'\\\n" - "pip install -U git+https://git@github.com/fairinternal/xformers.git#egg=xformers\n" - "To install on FAIR Cluster, run \n" - "FORCE_CUDA=1 TORCH_CUDA_ARCH_LIST='6.0;7.0'\\\n" - "pip install -U git+https://git@github.com/fairinternal/xformers.git#egg=xformers\n") - - -def _is_custom(custom: bool, memory_efficient: bool): - return custom or memory_efficient diff --git a/spaces/nateraw/deepafx-st/deepafx_st/processors/spsa/eps_scheduler.py b/spaces/nateraw/deepafx-st/deepafx_st/processors/spsa/eps_scheduler.py deleted file mode 100644 index abcee2274d86b146726f413bb6fcd5980863f109..0000000000000000000000000000000000000000 --- a/spaces/nateraw/deepafx-st/deepafx_st/processors/spsa/eps_scheduler.py +++ /dev/null @@ -1,32 +0,0 @@ -import torch - - -class EpsilonScheduler: - def __init__( - self, - epsilon: float = 0.001, - patience: int = 10, - factor: float = 0.5, - verbose: bool = False, - ): - self.epsilon = epsilon - self.patience = patience - self.factor = factor - self.best = 1e16 - self.count = 0 - self.verbose = verbose - - def step(self, metric: float): - - if metric < self.best: - self.best = metric - self.count = 0 - else: - self.count += 1 - if self.verbose: - print(f"Train loss has not improved for {self.count} epochs.") - if self.count >= self.patience: - self.count = 0 - self.epsilon *= self.factor - if self.verbose: - print(f"Reducing epsilon to {self.epsilon:0.2e}...") diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Asoosama Afaan Oromoo Pdf Download.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Asoosama Afaan Oromoo Pdf Download.md deleted file mode 100644 index 4f48131587f11150ca4e2550711b1fcaea79ce22..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Asoosama Afaan Oromoo Pdf Download.md +++ /dev/null @@ -1,24 +0,0 @@ - -Here is a possible title and article with html formatting for the keyword "Asoosama Afaan Oromoo Pdf Download": - -

    Asoosama Afaan Oromoo Pdf Download: How to Find and Read Oromo Novels Online

    -

    Asoosama Afaan Oromoo, or Oromo novels, are a form of literature that expresses the culture, history, and identity of the Oromo people. Oromo novels are written in Afaan Oromoo, the native language of the Oromo people, which is spoken by about 40 million people in Ethiopia, Kenya, Somalia, and other countries. Oromo novels cover various genres, such as historical fiction, romance, mystery, fantasy, and more.

    -

    However, finding and reading Oromo novels online can be challenging. Many Oromo novels are not widely available or accessible on the internet, and some require a fee or a subscription to download. Moreover, some readers may not have the software or devices needed to read PDF files or other ebook formats. This article therefore provides some tips and resources on how to find and read Oromo novels online.

    -

    Asoosama Afaan Oromoo Pdf Download


    Download Zip ::: https://urlcod.com/2uIb1c



    -

    How to Find Oromo Novels Online

    -

    One of the easiest ways to find Oromo novels online is to use a search engine like Google or Bing. By typing in keywords such as "Asoosama Afaan Oromoo Pdf Download" or "Oromo Novels Pdf", you can get a list of websites that offer Oromo novels for download or online reading. Some of these websites are:

    -
      -
    • Godaannisa: asoosama by Dhaabaa Wayyeessaa: This is a novel by a renowned Oromo writer that tells the story of a young man who struggles with his identity and his love for a woman from a different ethnic group. The novel is available on Google Books for preview and purchase.
    • Barreessaan asoosama Afaan Oromoo 'jalqabaa' Kuusaa Gadoo Gaaddisaa Birruu dhukkubsataniiru by Gaaddisaa Birruu: This is an article by BBC Afaan Oromoo that features an interview with an Oromo novelist who has written several books in Afaan Oromoo. The article also provides links to some of his books that can be downloaded for free.
    • ABJUU Asoosama Afaan Oromoo. Afan Oromo Fiction by Motuma Kaba: This is a short story collection by an Oromo writer that explores various themes and issues related to the Oromo people. The book is available on Goodreads for purchase and review.
    -
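    Once you have found a direct link to a freely downloadable PDF on one of these sites, you can save a copy for offline reading with a few lines of Python. The snippet below is only a minimal sketch: the URL is a placeholder that you should replace with a link you are actually permitted to download, and it assumes the requests library is installed.

```python
# Minimal sketch: save a PDF from a direct download link for offline reading.
# The URL below is a placeholder, not a real Oromo novel link.
import requests

url = "https://example.org/asoosama.pdf"  # hypothetical direct PDF link
response = requests.get(url, timeout=30)
response.raise_for_status()  # stop here if the download failed

with open("asoosama.pdf", "wb") as f:
    f.write(response.content)

print(f"Saved asoosama.pdf ({len(response.content)} bytes)")
```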

    Another way to find Oromo novels online is to use social media platforms like Facebook or Twitter. By following pages or accounts that share or promote Oromo literature, you can get updates on new releases, recommendations, reviews, and discussions. Some of these pages or accounts are:

    -
      -
    • Oromia Media Network: This is a media organization that broadcasts news and programs in Afaan Oromoo and other languages. It also posts links to Oromo novels and other cultural products on its Facebook page.
    • Oromia Art Institute: This is an institute that aims to preserve and promote the art and culture of the Oromo people. It also tweets about Oromo novels and other artistic works on its Twitter account.
    • Oromian Writers Association: This is an association that supports and encourages Oromo writers and poets. It also shares information and resources on Oromo novels and other literary works on its Facebook page.
    -

    How to Read Oromo Novels Online

    -

    -
    -
    \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Cinema 4d R13 Cloth Tag Plugin Torrent.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Cinema 4d R13 Cloth Tag Plugin Torrent.md deleted file mode 100644 index 61ae5c043c90dc248ef26e07e90cfa8177593664..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Cinema 4d R13 Cloth Tag Plugin Torrent.md +++ /dev/null @@ -1,28 +0,0 @@ - -

    How to Use Cloth Tag Plugin in Cinema 4D R13

    -

    Cinema 4D R13 is a powerful 3D application that can create stunning animations, models, and effects. One of the features that makes Cinema 4D stand out is its plugin ecosystem, which offers a variety of extensions and tools to enhance the workflow and capabilities of the software. In this article, we will introduce one of the most useful plugins for Cinema 4D R13: the Cloth Tag plugin.

    -

    The Cloth Tag plugin is a modifier that allows you to simulate realistic cloth behavior on any polygon object. You can use it to create flags, curtains, clothes, banners, and more. The plugin works by adding a tag to the object that you want to turn into cloth, and then adjusting the parameters and settings to achieve the desired effect. You can also use forces, collisions, belts, pins, and other features to control the cloth simulation.

    -

    Cinema 4d R13 Cloth Tag Plugin Torrent


    Download File ○○○ https://urlcod.com/2uI9Qy



    -

    To use the Cloth Tag plugin in Cinema 4D R13, you need to follow these steps:

    -
      -
    1. Download the plugin from this link [^1^] and install it in your Cinema 4D plugins folder.
    2. Open Cinema 4D and create a new project.
    3. Create a polygon object that you want to turn into cloth. For example, you can use a plane object and subdivide it to increase the resolution.
    4. Select the object and go to Tags > Simulation Tags > Cloth.
    5. A new tag will appear on the object. Click on it to open the Cloth Tag settings in the Attributes Manager.
    6. Here you can adjust various parameters such as stiffness, flexion, mass, damping, friction, etc. to change the properties of the cloth. You can also enable or disable self-collisions, gravity, wind, etc.
    7. To animate the cloth, you can either use keyframes or dynamics. For keyframes, you can use the Point tool or the Magnet tool to move or deform the vertices of the object. For dynamics, you can use forces such as turbulence or attractor to affect the cloth movement.
    8. To make the cloth interact with other objects, you need to add Collision tags to them. Go to Tags > Simulation Tags > Collision and adjust the settings such as bounce, friction, etc.
    9. To fix some parts of the cloth to other objects or points, you can use Belts or Pins. Belts are used to attach edges of the cloth to other objects using splines. Pins are used to fix individual points of the cloth using nulls or other objects.
    10. To render the cloth simulation, you can either use the standard renderer or any third-party renderer that supports Cinema 4D plugins. For example, you can use Octane, Arnold or Redshift [^1^].
    -

    That's it! You have successfully used the Cloth Tag plugin in Cinema 4D R13. You can experiment with different settings and objects to create amazing cloth effects. Have fun!
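    If you prefer to script this setup rather than click through the interface, Cinema 4D also ships with a Python Script Manager. The sketch below is only a rough illustration of the same idea, not official sample code: the tag symbol written here as c4d.Tcloth is an assumption, so check it against the Python SDK documentation for your version before relying on it.

```python
# Rough sketch for the Cinema 4D Script Manager: create a subdivided plane and
# attach a cloth simulation tag to it. The c4d.Tcloth symbol is an assumption --
# verify the exact tag ID in your SDK documentation.
import c4d

def main():
    doc = c4d.documents.GetActiveDocument()

    # Create a plane primitive with extra subdivisions so the cloth can deform.
    plane = c4d.BaseObject(c4d.Oplane)
    plane[c4d.PRIM_PLANE_SUBW] = 30
    plane[c4d.PRIM_PLANE_SUBH] = 30
    doc.InsertObject(plane)

    # Attach the cloth simulation tag (assumed symbol: c4d.Tcloth). In practice
    # the primitive usually has to be made editable first, because the cloth
    # simulation works on polygon objects.
    plane.MakeTag(c4d.Tcloth)

    c4d.EventAdd()  # refresh the viewport and object manager

if __name__ == '__main__':
    main()
```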

    If you want to see some examples of cloth simulation in Cinema 4D, you can check out these links:

    -
      -
    • Maxon - Cloth Simulation [^1^]: This page showcases some of the new features and improvements of the cloth and rope dynamics in Cinema 4D S26. You can see how the cloth simulation can be used for different scenarios such as flags, curtains, balloons, etc.
    • Tutorial | How to Set Up Dynamic Cloth In Cinema 4D | Greyscalegorilla [^2^]: This video tutorial by Nick Campbell shows you how to set up a simple cloth animation in Cinema 4D using the latest cloth simulation dynamics. You can learn about the basic settings and how to art direct the cloth simulation with fields.
    • Cinema 4D Tutorial - Cloth Simulation Using Soft Body Dynamics [^3^]: This video tutorial by EJ Hassenfratz shows you how to create cloth-like simulations using soft body dynamics and dynamic connectors in Cinema 4D. You can learn how to make a realistic flag animation with wind and gravity.
    -

    These are just some of the many examples of cloth simulation in Cinema 4D. You can find more tutorials and resources online or experiment with your own ideas. Cloth simulation is a fun and creative way to add realism and motion to your 3D projects.

    -

    -
    -
    \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Flame Assist 2018 Scaricare Key Generator 64 Bits Italiano EXCLUSIVE.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Flame Assist 2018 Scaricare Key Generator 64 Bits Italiano EXCLUSIVE.md deleted file mode 100644 index 6a9ee4e605dbdf19faf18c1358e9c7ca3a06ca67..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Flame Assist 2018 Scaricare Key Generator 64 Bits Italiano EXCLUSIVE.md +++ /dev/null @@ -1,35 +0,0 @@ -
    -

    Flame Assist 2018: How to Download and Activate the 64-bit Key Generator

    -

    Flame Assist 2018 is a video post-production application that offers advanced editing, compositing, color grading, and audio features. To use the software you need a valid license, which can be obtained through the 64-bit key generator available on the official Autodesk website.

    -

    Flame Assist 2018 scaricare key generator 64 bits Italiano


    DOWNLOADhttps://urlcod.com/2uIbZn



    -

    The key generator is a program that generates a unique activation code for the software, based on the serial number and the request code provided by the installer. To download and activate the 64-bit key generator for Flame Assist 2018, follow these steps:

    -
      -
    1. Download the 64-bit key generator .zip file from the official Autodesk website, selecting the Italian language.
    2. Extract the .zip file into a folder on your computer.
    3. Start the Flame Assist 2018 installer and follow the on-screen instructions.
    4. When prompted, enter the serial number and product key provided by Autodesk when you purchased or subscribed to the software.
    5. Select the "Activate" option and click "Next".
    6. Copy the request code shown in the activation window.
    7. Start the 64-bit key generator from the folder where it was extracted.
    8. Paste the request code into the corresponding field and click "Generate".
    9. Copy the activation code produced by the key generator.
    10. Return to the Flame Assist 2018 activation window and paste the activation code into the corresponding field.
    11. Click "Next" and then "Finish" to complete the activation of the software.
    -

    You can now use Flame Assist 2018 with all of its features. It is recommended that you do not share the activation code with other people, as this could violate the software's license terms.

    - -

    Flame Assist 2018: The Main Features

    -

    Flame Assist 2018 is a video post-production application that offers a range of features to improve the quality and creativity of your projects. The main features include:

    -
      -
    • An intuitive, customizable interface that gives you easy access to the tools and panels you need.
    • A non-linear editing system that supports a wide range of video, audio, and image formats and resolutions.
    • An advanced compositing engine that lets you create complex, realistic visual effects using nodes, masks, keys, trackers, and other tools.
    • A professional color grading module that offers a range of options for correcting and optimizing the colors of your footage, with curves, wheels, vectorscopes, and other tools.
    • An integrated audio editing module that lets you record, edit, and mix the audio of your videos, with effects, transitions, and other tools.
    • A background rendering feature that lets you export videos in various formats and codecs without interrupting your workflow.
    • An online sharing and collaboration feature that lets you send projects to Flame 2018 or other compatible software for further editing or review.
    -

    Flame Assist 2018 is an ideal tool for video post-production professionals who want to create high-quality, original projects. For more information about the software and its features, visit the official Autodesk website.

    -

    -
    -
    \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/How To Download Final Cut Pro Free For Mac EXCLUSIVE.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/How To Download Final Cut Pro Free For Mac EXCLUSIVE.md deleted file mode 100644 index f4fe3bd97fec1cf72a06efe4997a65dc42b7a641..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/How To Download Final Cut Pro Free For Mac EXCLUSIVE.md +++ /dev/null @@ -1,39 +0,0 @@ - -

    How To Download Final Cut Pro Free For Mac: A Step-By-Step Guide

    -

    Final Cut Pro is one of the most popular and powerful video editing applications for Mac users. It offers a range of features and tools to help you create stunning videos with ease. Whether you want to edit 4K footage, add effects and transitions, or color grade your clips, Final Cut Pro can handle it all.

    -

    But how can you get Final Cut Pro for free on your Mac? Is there a way to download and install it without paying anything? In this article, we will show you how to download Final Cut Pro free for Mac in a few simple steps. We will also cover some of the benefits and drawbacks of using the free trial version of Final Cut Pro.

    -

    How To Download Final Cut Pro Free For Mac


    Download File 🆗 https://urlcod.com/2uIbhz



    -

    How To Download Final Cut Pro Free For Mac

    -

    The official way to download Final Cut Pro free for Mac is to use the 90-day trial version offered by Apple. This is a fully functional version of the software that you can use for three months without any limitations. You can access all the features and tools of Final Cut Pro, as well as work with any format and resolution supported by the software.

    -

    To download Final Cut Pro free for Mac, follow these steps:

    -
      -
    1. Go to the official Final Cut Pro trial page on Apple's website[^1^].
    2. Click on the "Download Now" button to get the .dmg file to set up the application.
    3. Open the disk image and double-click on the installer package. You will be guided through the installation process.
    4. Launch Final Cut Pro and enjoy your free trial for 90 days.
    -

    Note that you need to have a Mac computer with macOS 10.15.6 or later, 4GB of RAM (8GB recommended), 3.8GB of disk space, and a graphics card that supports Metal to run Final Cut Pro. You can check the minimum system requirements for Final Cut Pro on Apple's website[^1^].
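    If you want to check those requirements quickly before downloading the trial, a short Python script run from Terminal can report them. This is only a rough sketch that mirrors the figures quoted above; the exact wording of the Metal line in the system_profiler output can vary between Macs.

```python
# Rough sketch: report macOS version, RAM, free disk space and Metal support,
# so you can compare them against the Final Cut Pro trial requirements above.
import platform
import shutil
import subprocess

macos_version = platform.mac_ver()[0]
ram_gb = int(subprocess.check_output(["sysctl", "-n", "hw.memsize"])) / (1024 ** 3)
free_gb = shutil.disk_usage("/").free / (1024 ** 3)
gpu_report = subprocess.check_output(["system_profiler", "SPDisplaysDataType"], text=True)

print(f"macOS version : {macos_version} (need 10.15.6 or later)")
print(f"RAM           : {ram_gb:.1f} GB (need 4 GB, 8 GB recommended)")
print(f"Free disk     : {free_gb:.1f} GB (need 3.8 GB)")
print(f"Metal support : {'reported' if 'Metal' in gpu_report else 'not reported'}")
```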

    -

    The Pros And Cons Of Using Final Cut Pro Free For Mac

    -

    Using the free trial version of Final Cut Pro has some advantages and disadvantages that you should be aware of before you download it. Here are some of them:

    -

    The Pros

    -
      -
    • You can use Final Cut Pro free for Mac for 90 days, which is longer than most other video editing software trials.
    • You can access all the features and tools of Final Cut Pro, including advanced editing, motion graphics, color correction, audio editing, and more.
    • You can work with any format and resolution supported by Final Cut Pro, including 4K, 8K, HDR, 360°, and more.
    • You can export your projects in various formats and upload them to websites such as YouTube and Vimeo.
    • You can extend the capabilities of Final Cut Pro with third-party workflow extensions and plug-ins.
    -

    The Cons

    -
      -
    • You need to have a compatible Mac computer with enough disk space and RAM to run Final Cut Pro smoothly.
    • You need to have an internet connection to download and install Final Cut Pro, as well as to access some online features and resources.
    • You cannot use Final Cut Pro on multiple devices with the same Apple ID. You need to sign in with a different Apple ID on each device you want to use it on.
    • You cannot update Final Cut Pro to newer versions during the trial period. You need to purchase the full version of the software to get updates and bug fixes.
    • You cannot get technical support or customer service from Apple during the trial period. You need to rely on online forums and tutorials for help.
    -

    Conclusion

    -

    Final Cut Pro is a great video editing application for Mac users who want to create professional-looking videos with ease. It offers a range of features and tools that can help you edit, enhance, and share your videos with your audience.

    -

    -

    If you want to try it before committing to a purchase, download the free 90-day trial from Apple's website and see for yourself whether it fits your workflow.

    -
    -
    \ No newline at end of file diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/projects/DensePose/doc/GETTING_STARTED.md b/spaces/nikitaPDL2023/assignment4/detectron2/projects/DensePose/doc/GETTING_STARTED.md deleted file mode 100644 index a5c86f3ab5e66dc3dee4f7836aa79bd5d41b68f2..0000000000000000000000000000000000000000 --- a/spaces/nikitaPDL2023/assignment4/detectron2/projects/DensePose/doc/GETTING_STARTED.md +++ /dev/null @@ -1,76 +0,0 @@ -# Getting Started with DensePose - -## Inference with Pre-trained Models - -1. Pick a model and its config file from [Model Zoo(IUV)](DENSEPOSE_IUV.md#ModelZoo), [Model Zoo(CSE)](DENSEPOSE_CSE.md#ModelZoo), for example [densepose_rcnn_R_50_FPN_s1x.yaml](../configs/densepose_rcnn_R_50_FPN_s1x.yaml) -2. Run the [Apply Net](TOOL_APPLY_NET.md) tool to visualize the results or save the to disk. For example, to use contour visualization for DensePose, one can run: -```bash -python apply_net.py show configs/densepose_rcnn_R_50_FPN_s1x.yaml densepose_rcnn_R_50_FPN_s1x.pkl image.jpg dp_contour,bbox --output image_densepose_contour.png -``` -Please see [Apply Net](TOOL_APPLY_NET.md) for more details on the tool. - -## Training - -First, prepare the [dataset](http://densepose.org/#dataset) into the following structure under the directory you'll run training scripts: -
    -datasets/coco/
    -  annotations/
    -    densepose_{train,minival,valminusminival}2014.json
    -    densepose_minival2014_100.json   (optional, for testing only)
    -  {train,val}2014/
    -    # image files that are mentioned in the corresponding json
    -
    - -To train a model one can use the [train_net.py](../train_net.py) script. -This script was used to train all DensePose models in [Model Zoo(IUV)](DENSEPOSE_IUV.md#ModelZoo), [Model Zoo(CSE)](DENSEPOSE_CSE.md#ModelZoo). -For example, to launch end-to-end DensePose-RCNN training with ResNet-50 FPN backbone -on 8 GPUs following the s1x schedule, one can run -```bash -python train_net.py --config-file configs/densepose_rcnn_R_50_FPN_s1x.yaml --num-gpus 8 -``` -The configs are made for 8-GPU training. To train on 1 GPU, one can apply the -[linear learning rate scaling rule](https://arxiv.org/abs/1706.02677): -```bash -python train_net.py --config-file configs/densepose_rcnn_R_50_FPN_s1x.yaml \ - SOLVER.IMS_PER_BATCH 2 SOLVER.BASE_LR 0.0025 -``` - -## Evaluation - -Model testing can be done in the same way as training, except for an additional flag `--eval-only` and -model location specification through `MODEL.WEIGHTS model.pth` in the command line -```bash -python train_net.py --config-file configs/densepose_rcnn_R_50_FPN_s1x.yaml \ - --eval-only MODEL.WEIGHTS model.pth -``` - -## Tools - -We provide tools which allow one to: - - easily view DensePose annotated data in a dataset; - - perform DensePose inference on a set of images; - - visualize DensePose model results; - -`query_db` is a tool to print or visualize DensePose data in a dataset. -Please refer to [Query DB](TOOL_QUERY_DB.md) for more details on this tool - -`apply_net` is a tool to print or visualize DensePose results. -Please refer to [Apply Net](TOOL_APPLY_NET.md) for more details on this tool - - -## Installation as a package - -DensePose can also be installed as a Python package for integration with other software. - -The following dependencies are needed: -- Python >= 3.7 -- [PyTorch](https://pytorch.org/get-started/locally/#start-locally) >= 1.7 (to match [detectron2 requirements](https://detectron2.readthedocs.io/en/latest/tutorials/install.html#requirements)) -- [torchvision](https://pytorch.org/vision/stable/) version [compatible with your version of PyTorch](https://github.com/pytorch/vision#installation) - -DensePose can then be installed from this repository with: - -``` -pip install git+https://github.com/facebookresearch/detectron2@main#subdirectory=projects/DensePose -``` - -After installation, the package will be importable as `densepose`. 
diff --git a/spaces/nosdigitalmedia/dutch-youth-comment-classifier/src/start_up/start_up_gibberish.py b/spaces/nosdigitalmedia/dutch-youth-comment-classifier/src/start_up/start_up_gibberish.py deleted file mode 100644 index 99bb97042e8f9cbdc2213b2d980ffea63bc776a8..0000000000000000000000000000000000000000 --- a/spaces/nosdigitalmedia/dutch-youth-comment-classifier/src/start_up/start_up_gibberish.py +++ /dev/null @@ -1,9 +0,0 @@ -from src.gibberish_detection.GibberishDetector import GibberishDetector -from gibberish_detector import detector -from src.config import config - - -def create_gibberish_detector(): - model = detector.create_from_model(config['gibberish_model']) - - return GibberishDetector(model) diff --git a/spaces/nupurkmr9/concept-ablation/README.md b/spaces/nupurkmr9/concept-ablation/README.md deleted file mode 100644 index 4486cdbdd621123dac8cdb8b95a917327713400d..0000000000000000000000000000000000000000 --- a/spaces/nupurkmr9/concept-ablation/README.md +++ /dev/null @@ -1,50 +0,0 @@ ---- -title: Ablating Concepts in Text-to-Image Diffusion Models -emoji: 💡 -colorFrom: indigo -colorTo: gray -sdk: gradio -sdk_version: 3.21.0 -app_file: app.py -pinned: false -license: mit ---- - - - -# Ablating Concepts in Text-to-Image Diffusion Models - - Project Website [https://www.cs.cmu.edu/~concept-ablation/](https://www.cs.cmu.edu/~concept-ablation/)
    - Arxiv Preprint [https://arxiv.org/abs/2303.13516](https://arxiv.org/abs/2303.13516)
    -
    - -
    - -Large-scale text-to-image diffusion models can generate high-fidelity images with powerful compositional ability. However, these models are typically trained on an enormous amount of Internet data, often containing copyrighted material, licensed images, and personal photos. Furthermore, they have been found to replicate the style of various living artists or memorize exact training samples. How can we remove such copyrighted concepts or images without retraining the model from scratch? - -We propose an efficient method of ablating concepts in the pretrained model, i.e., preventing the generation of a target concept. Our algorithm learns to match the image distribution for a given target style, instance, or text prompt we wish to ablate to the distribution corresponding to an anchor concept, e.g., Grumpy Cat to Cats. - -## Demo vs github - -This demo uses different hyper-parameters than the github version for faster training. - -## Running locally - -1.) Create an environment using the packages included in the requirements.txt file - -2.) Run `python app.py` - -3.) Open the application in browser at `http://127.0.0.1:7860/` - -4.) Train, evaluate, and save models - -## Citing our work -The preprint can be cited as follows -``` -@inproceedings{kumari2023conceptablation, - author = {Kumari, Nupur and Zhang, Bingliang and Wang, Sheng-Yu and Shechtman, Eli and Zhang, Richard and Zhu, Jun-Yan}, - title = {Ablating Concepts in Text-to-Image Diffusion Models}, - booktitle = ICCV, - year = {2023}, -} -``` \ No newline at end of file diff --git a/spaces/oliver2023/chatgpt-on-wechat/voice/voice.py b/spaces/oliver2023/chatgpt-on-wechat/voice/voice.py deleted file mode 100644 index 52d8aaa5262468099cb371e5097fa189075bd95f..0000000000000000000000000000000000000000 --- a/spaces/oliver2023/chatgpt-on-wechat/voice/voice.py +++ /dev/null @@ -1,16 +0,0 @@ -""" -Voice service abstract class -""" - -class Voice(object): - def voiceToText(self, voice_file): - """ - Send voice to voice service and get text - """ - raise NotImplementedError - - def textToVoice(self, text): - """ - Send text to voice service and get voice - """ - raise NotImplementedError \ No newline at end of file diff --git a/spaces/osanseviero/esmfold/README.md b/spaces/osanseviero/esmfold/README.md deleted file mode 100644 index e5d190156f1d8f56b8418ee9c6505cec0a977677..0000000000000000000000000000000000000000 --- a/spaces/osanseviero/esmfold/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Esmfold -emoji: 👀 -colorFrom: green -colorTo: blue -sdk: gradio -sdk_version: 3.8.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/pablo1n7/iberianGAN/README.md b/spaces/pablo1n7/iberianGAN/README.md deleted file mode 100644 index 65e52a1dceaf336518b82f5539d407605ed36324..0000000000000000000000000000000000000000 --- a/spaces/pablo1n7/iberianGAN/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: IberianGAN -emoji: 🏺 -colorFrom: blue -colorTo: pink -sdk: gradio -sdk_version: 3.3 -app_file: app.py -pinned: false -license: other ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/scripts/convert_vae_diff_to_onnx.py b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/scripts/convert_vae_diff_to_onnx.py deleted file mode 100644 index e023e04b94973f26ff6a93b6fa3e2b7b3661b829..0000000000000000000000000000000000000000 --- 
a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/scripts/convert_vae_diff_to_onnx.py +++ /dev/null @@ -1,122 +0,0 @@ -# Copyright 2022 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import argparse -from pathlib import Path - -import torch -from packaging import version -from torch.onnx import export - -from diffusers import AutoencoderKL - - -is_torch_less_than_1_11 = version.parse(version.parse(torch.__version__).base_version) < version.parse("1.11") - - -def onnx_export( - model, - model_args: tuple, - output_path: Path, - ordered_input_names, - output_names, - dynamic_axes, - opset, - use_external_data_format=False, -): - output_path.parent.mkdir(parents=True, exist_ok=True) - # PyTorch deprecated the `enable_onnx_checker` and `use_external_data_format` arguments in v1.11, - # so we check the torch version for backwards compatibility - if is_torch_less_than_1_11: - export( - model, - model_args, - f=output_path.as_posix(), - input_names=ordered_input_names, - output_names=output_names, - dynamic_axes=dynamic_axes, - do_constant_folding=True, - use_external_data_format=use_external_data_format, - enable_onnx_checker=True, - opset_version=opset, - ) - else: - export( - model, - model_args, - f=output_path.as_posix(), - input_names=ordered_input_names, - output_names=output_names, - dynamic_axes=dynamic_axes, - do_constant_folding=True, - opset_version=opset, - ) - - -@torch.no_grad() -def convert_models(model_path: str, output_path: str, opset: int, fp16: bool = False): - dtype = torch.float16 if fp16 else torch.float32 - if fp16 and torch.cuda.is_available(): - device = "cuda" - elif fp16 and not torch.cuda.is_available(): - raise ValueError("`float16` model export is only supported on GPUs with CUDA") - else: - device = "cpu" - output_path = Path(output_path) - - # VAE DECODER - vae_decoder = AutoencoderKL.from_pretrained(model_path + "/vae") - vae_latent_channels = vae_decoder.config.latent_channels - # forward only through the decoder part - vae_decoder.forward = vae_decoder.decode - onnx_export( - vae_decoder, - model_args=( - torch.randn(1, vae_latent_channels, 25, 25).to(device=device, dtype=dtype), - False, - ), - output_path=output_path / "vae_decoder" / "model.onnx", - ordered_input_names=["latent_sample", "return_dict"], - output_names=["sample"], - dynamic_axes={ - "latent_sample": {0: "batch", 1: "channels", 2: "height", 3: "width"}, - }, - opset=opset, - ) - del vae_decoder - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - - parser.add_argument( - "--model_path", - type=str, - required=True, - help="Path to the `diffusers` checkpoint to convert (either a local directory or on the Hub).", - ) - - parser.add_argument("--output_path", type=str, required=True, help="Path to the output model.") - parser.add_argument( - "--opset", - default=14, - type=int, - help="The version of the ONNX operator set to use.", - ) - parser.add_argument("--fp16", action="store_true", default=False, help="Export the models in 
`float16` mode") - - args = parser.parse_args() - print(args.output_path) - convert_models(args.model_path, args.output_path, args.opset, args.fp16) - print("SD: Done: ONNX") diff --git a/spaces/pdehaye/EleutherAI-llemma_34b/app.py b/spaces/pdehaye/EleutherAI-llemma_34b/app.py deleted file mode 100644 index 32aed1044b2a037a19a7713f44d6471036032c3f..0000000000000000000000000000000000000000 --- a/spaces/pdehaye/EleutherAI-llemma_34b/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/EleutherAI/llemma_34b").launch() \ No newline at end of file diff --git a/spaces/peteralexandercharles/automatic-speech-recognition-with-next-gen-kaldi/decode.py b/spaces/peteralexandercharles/automatic-speech-recognition-with-next-gen-kaldi/decode.py deleted file mode 100644 index 9e593d57457b10dd47bac4c2747811eb7a64d243..0000000000000000000000000000000000000000 --- a/spaces/peteralexandercharles/automatic-speech-recognition-with-next-gen-kaldi/decode.py +++ /dev/null @@ -1,121 +0,0 @@ -# Copyright 2022 Xiaomi Corp. (authors: Fangjun Kuang) -# -# Copied from https://github.com/k2-fsa/sherpa/blob/master/sherpa/bin/conformer_rnnt/decode.py -# -# See LICENSE for clarification regarding multiple authors -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import math -from typing import List - -import torch -from sherpa import RnntConformerModel, greedy_search, modified_beam_search -from torch.nn.utils.rnn import pad_sequence - -LOG_EPS = math.log(1e-10) - - -@torch.no_grad() -def run_model_and_do_greedy_search( - model: RnntConformerModel, - features: List[torch.Tensor], -) -> List[List[int]]: - """Run RNN-T model with the given features and use greedy search - to decode the output of the model. - - Args: - model: - The RNN-T model. - features: - A list of 2-D tensors. Each entry is of shape - (num_frames, feature_dim). - Returns: - Return a list-of-list containing the decoding token IDs. - """ - features_length = torch.tensor( - [f.size(0) for f in features], - dtype=torch.int64, - ) - features = pad_sequence( - features, - batch_first=True, - padding_value=LOG_EPS, - ) - - device = model.device - features = features.to(device) - features_length = features_length.to(device) - - encoder_out, encoder_out_length = model.encoder( - features=features, - features_length=features_length, - ) - - hyp_tokens = greedy_search( - model=model, - encoder_out=encoder_out, - encoder_out_length=encoder_out_length.cpu(), - ) - return hyp_tokens - - -@torch.no_grad() -def run_model_and_do_modified_beam_search( - model: RnntConformerModel, - features: List[torch.Tensor], - num_active_paths: int, -) -> List[List[int]]: - """Run RNN-T model with the given features and use greedy search - to decode the output of the model. - - Args: - model: - The RNN-T model. - features: - A list of 2-D tensors. Each entry is of shape - (num_frames, feature_dim). - num_active_paths: - Used only when decoding_method is modified_beam_search. - It specifies number of active paths for each utterance. 
Due to - merging paths with identical token sequences, the actual number - may be less than "num_active_paths". - Returns: - Return a list-of-list containing the decoding token IDs. - """ - features_length = torch.tensor( - [f.size(0) for f in features], - dtype=torch.int64, - ) - features = pad_sequence( - features, - batch_first=True, - padding_value=LOG_EPS, - ) - - device = model.device - features = features.to(device) - features_length = features_length.to(device) - - encoder_out, encoder_out_length = model.encoder( - features=features, - features_length=features_length, - ) - - hyp_tokens = modified_beam_search( - model=model, - encoder_out=encoder_out, - encoder_out_length=encoder_out_length.cpu(), - num_active_paths=num_active_paths, - ) - return hyp_tokens diff --git a/spaces/phyloforfun/VoucherVision/vouchervision/LLM_PaLM.py b/spaces/phyloforfun/VoucherVision/vouchervision/LLM_PaLM.py deleted file mode 100644 index c2e32fcdb7feb461b080ac7d24abfe482b2c57aa..0000000000000000000000000000000000000000 --- a/spaces/phyloforfun/VoucherVision/vouchervision/LLM_PaLM.py +++ /dev/null @@ -1,209 +0,0 @@ -import os -import sys -import inspect -import json -from json import JSONDecodeError -import tiktoken -import random -import google.generativeai as palm - -currentdir = os.path.dirname(os.path.abspath( - inspect.getfile(inspect.currentframe()))) -parentdir = os.path.dirname(currentdir) -sys.path.append(parentdir) - -from prompt_catalog import PromptCatalog -from general_utils import num_tokens_from_string - -""" -DEPRECATED: - Safety setting regularly block a response, so set to 4 to disable - - class HarmBlockThreshold(Enum): - HARM_BLOCK_THRESHOLD_UNSPECIFIED = 0 - BLOCK_LOW_AND_ABOVE = 1 - BLOCK_MEDIUM_AND_ABOVE = 2 - BLOCK_ONLY_HIGH = 3 - BLOCK_NONE = 4 -""" - -SAFETY_SETTINGS = [ - { - "category": "HARM_CATEGORY_DEROGATORY", - "threshold": "BLOCK_NONE", - }, - { - "category": "HARM_CATEGORY_TOXICITY", - "threshold": "BLOCK_NONE", - }, - { - "category": "HARM_CATEGORY_VIOLENCE", - "threshold": "BLOCK_NONE", - }, - { - "category": "HARM_CATEGORY_SEXUAL", - "threshold": "BLOCK_NONE", - }, - { - "category": "HARM_CATEGORY_MEDICAL", - "threshold": "BLOCK_NONE", - }, - { - "category": "HARM_CATEGORY_DANGEROUS", - "threshold": "BLOCK_NONE", - }, -] - -PALM_SETTINGS = { - 'model': 'models/text-bison-001', - 'temperature': 0, - 'candidate_count': 1, - 'top_k': 40, - 'top_p': 0.95, - 'max_output_tokens': 8000, - 'stop_sequences': [], - 'safety_settings': SAFETY_SETTINGS, -} - -PALM_SETTINGS_REDO = { - 'model': 'models/text-bison-001', - 'temperature': 0.05, - 'candidate_count': 1, - 'top_k': 40, - 'top_p': 0.95, - 'max_output_tokens': 8000, - 'stop_sequences': [], - 'safety_settings': SAFETY_SETTINGS, -} - -def OCR_to_dict_PaLM(logger, OCR, prompt_version, VVE): - try: - logger.info(f'Length of OCR raw -- {len(OCR)}') - except: - print(f'Length of OCR raw -- {len(OCR)}') - - # prompt = PROMPT_PaLM_UMICH_skeleton_all_asia(OCR, in_list, out_list) # must provide examples to PaLM differently than for chatGPT, at least 2 examples - Prompt = PromptCatalog(OCR) - if prompt_version in ['prompt_v2_palm2']: - version = 'v2' - prompt = Prompt.prompt_v2_palm2(OCR) - - elif prompt_version in ['prompt_v1_palm2',]: - version = 'v1' - # create input: output: for PaLM - # Find a similar example from the domain knowledge - domain_knowledge_example = VVE.query_db(OCR, 4) - similarity= VVE.get_similarity() - domain_knowledge_example_string = json.dumps(domain_knowledge_example) - in_list, out_list = 
create_OCR_analog_for_input(domain_knowledge_example) - prompt = Prompt.prompt_v1_palm2(in_list, out_list, OCR) - - elif prompt_version in ['prompt_v1_palm2_noDomainKnowledge',]: - version = 'v1' - prompt = Prompt.prompt_v1_palm2_noDomainKnowledge(OCR) - else: - version = 'custom' - prompt, n_fields, xlsx_headers = Prompt.prompt_v2_custom(prompt_version, OCR=OCR, is_palm=True) - # raise - - nt = num_tokens_from_string(prompt, "cl100k_base") - # try: - logger.info(f'Prompt token length --- {nt}') - # except: - # print(f'Prompt token length --- {nt}') - - do_use_SOP = False ######## - - if do_use_SOP: - '''TODO: Check back later to see if LangChain will support PaLM''' - # logger.info(f'Waiting for PaLM API call --- Using StructuredOutputParser') - # response = structured_output_parser(OCR, prompt, logger) - # return response['Dictionary'] - pass - - else: - # try: - logger.info(f'Waiting for PaLM 2 API call') - # except: - # print(f'Waiting for PaLM 2 API call --- Content') - - # safety_thresh = 4 - # PaLM_settings = {'model': 'models/text-bison-001','temperature': 0,'candidate_count': 1,'top_k': 40,'top_p': 0.95,'max_output_tokens': 8000,'stop_sequences': [], - # 'safety_settings': [{"category":"HARM_CATEGORY_DEROGATORY","threshold":safety_thresh},{"category":"HARM_CATEGORY_TOXICITY","threshold":safety_thresh},{"category":"HARM_CATEGORY_VIOLENCE","threshold":safety_thresh},{"category":"HARM_CATEGORY_SEXUAL","threshold":safety_thresh},{"category":"HARM_CATEGORY_MEDICAL","threshold":safety_thresh},{"category":"HARM_CATEGORY_DANGEROUS","threshold":safety_thresh}],} - response = palm.generate_text(prompt=prompt, **PALM_SETTINGS) - - - if response and response.result: - if isinstance(response.result, (str, bytes)): - response_valid = check_and_redo_JSON(response, logger, version) - else: - response_valid = {} - else: - response_valid = {} - - logger.info(f'Candidate JSON\n{response.result}') - return response_valid, nt - -def check_and_redo_JSON(response, logger, version): - try: - response_valid = json.loads(response.result) - logger.info(f'Response --- First call passed') - return response_valid - except JSONDecodeError: - - try: - response_valid = json.loads(response.result.strip('```').replace('json\n', '', 1).replace('json', '', 1)) - logger.info(f'Response --- Manual removal of ```json succeeded') - return response_valid - except: - logger.info(f'Response --- First call failed. Redo...') - Prompt = PromptCatalog() - if version == 'v1': - prompt_redo = Prompt.prompt_palm_redo_v1(response.result) - elif version == 'v2': - prompt_redo = Prompt.prompt_palm_redo_v2(response.result) - elif version == 'custom': - prompt_redo = Prompt.prompt_v2_custom_redo(response.result, is_palm=True) - - - # prompt_redo = PROMPT_PaLM_Redo(response.result) - try: - response = palm.generate_text(prompt=prompt_redo, **PALM_SETTINGS) - response_valid = json.loads(response.result) - logger.info(f'Response --- Second call passed') - return response_valid - except JSONDecodeError: - logger.info(f'Response --- Second call failed. Final redo. 
Temperature changed to 0.05') - try: - response = palm.generate_text(prompt=prompt_redo, **PALM_SETTINGS_REDO) - response_valid = json.loads(response.result) - logger.info(f'Response --- Third call passed') - return response_valid - except JSONDecodeError: - return None - - -def create_OCR_analog_for_input(domain_knowledge_example): - in_list = [] - out_list = [] - # Iterate over the domain_knowledge_example (list of dictionaries) - for row_dict in domain_knowledge_example: - # Convert the dictionary to a JSON string and add it to the out_list - domain_knowledge_example_string = json.dumps(row_dict) - out_list.append(domain_knowledge_example_string) - - # Create a single string from all values in the row_dict - row_text = '||'.join(str(v) for v in row_dict.values()) - - # Split the row text by '||', shuffle the parts, and then re-join with a single space - parts = row_text.split('||') - random.shuffle(parts) - shuffled_text = ' '.join(parts) - - # Add the shuffled_text to the in_list - in_list.append(shuffled_text) - return in_list, out_list - - -def strip_problematic_chars(s): - return ''.join(c for c in s if c.isprintable()) diff --git a/spaces/pknez/face-swap-docker/installer/windows_run.bat b/spaces/pknez/face-swap-docker/installer/windows_run.bat deleted file mode 100644 index fa45b8ba8f47cf7645e35a57b5c829312be38c47..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/installer/windows_run.bat +++ /dev/null @@ -1,80 +0,0 @@ -@echo off -REM Please set the following commandline arguments to your prefered settings -set COMMANDLINE_ARGS=--execution-provider cuda --frame-processor face_swapper face_enhancer --video-encoder libvpx-vp9 - -cd /D "%~dp0" - -echo "%CD%"| findstr /C:" " >nul && echo This script relies on Miniconda which can not be silently installed under a path with spaces. && goto end - -set PATH=%PATH%;%SystemRoot%\system32 - -@rem config -set INSTALL_DIR=%cd%\installer_files -set CONDA_ROOT_PREFIX=%cd%\installer_files\conda -set INSTALL_ENV_DIR=%cd%\installer_files\env -set MINICONDA_DOWNLOAD_URL=https://repo.anaconda.com/miniconda/Miniconda3-latest-Windows-x86_64.exe -set FFMPEG_DOWNLOAD_URL=https://github.com/GyanD/codexffmpeg/releases/download/2023-06-21-git-1bcb8a7338/ffmpeg-2023-06-21-git-1bcb8a7338-essentials_build.zip -set INSTALL_FFMPEG_DIR=%cd%\installer_files\ffmpeg -set conda_exists=F - -@rem figure out whether git and conda needs to be installed -call "%CONDA_ROOT_PREFIX%\_conda.exe" --version >nul 2>&1 -if "%ERRORLEVEL%" EQU "0" set conda_exists=T - -@rem (if necessary) install git and conda into a contained environment -@rem download conda -if "%conda_exists%" == "F" ( - echo Downloading Miniconda from %MINICONDA_DOWNLOAD_URL% to %INSTALL_DIR%\miniconda_installer.exe - - mkdir "%INSTALL_DIR%" - call curl -Lk "%MINICONDA_DOWNLOAD_URL%" > "%INSTALL_DIR%\miniconda_installer.exe" || ( echo. && echo Miniconda failed to download. && goto end ) - - echo Installing Miniconda to %CONDA_ROOT_PREFIX% - start /wait "" "%INSTALL_DIR%\miniconda_installer.exe" /InstallationType=JustMe /NoShortcuts=1 /AddToPath=0 /RegisterPython=0 /NoRegistry=1 /S /D=%CONDA_ROOT_PREFIX% - - @rem test the conda binary - echo Miniconda version: - call "%CONDA_ROOT_PREFIX%\_conda.exe" --version || ( echo. && echo Miniconda not found. 
&& goto end ) -) - -@rem create the installer env -if not exist "%INSTALL_ENV_DIR%" ( - echo Packages to install: %PACKAGES_TO_INSTALL% - call "%CONDA_ROOT_PREFIX%\_conda.exe" create --no-shortcuts -y -k --prefix "%INSTALL_ENV_DIR%" python=3.10 || ( echo. && echo Conda environment creation failed. && goto end ) -) - -if not exist "%INSTALL_FFMPEG_DIR%" ( - echo Downloading ffmpeg from %FFMPEG_DOWNLOAD_URL% to %INSTALL_DIR% - call curl -Lk "%FFMPEG_DOWNLOAD_URL%" > "%INSTALL_DIR%\ffmpeg.zip" || ( echo. && echo ffmpeg failed to download. && goto end ) - call powershell -command "Expand-Archive -Force '%INSTALL_DIR%\ffmpeg.zip' '%INSTALL_DIR%\'" - - cd "installer_files" - setlocal EnableExtensions EnableDelayedExpansion - - for /f "tokens=*" %%f in ('dir /s /b /ad "ffmpeg*"') do ( - ren "%%f" "ffmpeg" - ) - endlocal - setx PATH "%INSTALL_FFMPEG_DIR%\bin\;%PATH%" - echo To use videos, you need to restart roop after this installation. - cd .. -) - -@rem check if conda environment was actually created -if not exist "%INSTALL_ENV_DIR%\python.exe" ( echo. && echo ERROR: Conda environment is empty. && goto end ) - -@rem activate installer env -call "%CONDA_ROOT_PREFIX%\condabin\conda.bat" activate "%INSTALL_ENV_DIR%" || ( echo. && echo Miniconda hook not found. && goto end ) - -@rem setup installer env -echo Launching roop unleashed - please edit windows_run.bat to customize commandline arguments -call python installer.py %COMMANDLINE_ARGS% - -echo. -echo Done! - -:end -pause - - - diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/cachecontrol/cache.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/cachecontrol/cache.py deleted file mode 100644 index 2a965f595ff0756002e2a2c79da551fa8c8fff25..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/cachecontrol/cache.py +++ /dev/null @@ -1,65 +0,0 @@ -# SPDX-FileCopyrightText: 2015 Eric Larson -# -# SPDX-License-Identifier: Apache-2.0 - -""" -The cache object API for implementing caches. The default is a thread -safe in-memory dictionary. -""" -from threading import Lock - - -class BaseCache(object): - - def get(self, key): - raise NotImplementedError() - - def set(self, key, value, expires=None): - raise NotImplementedError() - - def delete(self, key): - raise NotImplementedError() - - def close(self): - pass - - -class DictCache(BaseCache): - - def __init__(self, init_dict=None): - self.lock = Lock() - self.data = init_dict or {} - - def get(self, key): - return self.data.get(key, None) - - def set(self, key, value, expires=None): - with self.lock: - self.data.update({key: value}) - - def delete(self, key): - with self.lock: - if key in self.data: - self.data.pop(key) - - -class SeparateBodyBaseCache(BaseCache): - """ - In this variant, the body is not stored mixed in with the metadata, but is - passed in (as a bytes-like object) in a separate call to ``set_body()``. - - That is, the expected interaction pattern is:: - - cache.set(key, serialized_metadata) - cache.set_body(key) - - Similarly, the body should be loaded separately via ``get_body()``. - """ - def set_body(self, key, body): - raise NotImplementedError() - - def get_body(self, key): - """ - Return the body as file-like object. 
- """ - raise NotImplementedError() diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_vendor/packaging/_elffile.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_vendor/packaging/_elffile.py deleted file mode 100644 index 6fb19b30bb53c18f38a9ef02dd7c4478670fb962..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_vendor/packaging/_elffile.py +++ /dev/null @@ -1,108 +0,0 @@ -""" -ELF file parser. - -This provides a class ``ELFFile`` that parses an ELF executable in a similar -interface to ``ZipFile``. Only the read interface is implemented. - -Based on: https://gist.github.com/lyssdod/f51579ae8d93c8657a5564aefc2ffbca -ELF header: https://refspecs.linuxfoundation.org/elf/gabi4+/ch4.eheader.html -""" - -import enum -import os -import struct -from typing import IO, Optional, Tuple - - -class ELFInvalid(ValueError): - pass - - -class EIClass(enum.IntEnum): - C32 = 1 - C64 = 2 - - -class EIData(enum.IntEnum): - Lsb = 1 - Msb = 2 - - -class EMachine(enum.IntEnum): - I386 = 3 - S390 = 22 - Arm = 40 - X8664 = 62 - AArc64 = 183 - - -class ELFFile: - """ - Representation of an ELF executable. - """ - - def __init__(self, f: IO[bytes]) -> None: - self._f = f - - try: - ident = self._read("16B") - except struct.error: - raise ELFInvalid("unable to parse identification") - magic = bytes(ident[:4]) - if magic != b"\x7fELF": - raise ELFInvalid(f"invalid magic: {magic!r}") - - self.capacity = ident[4] # Format for program header (bitness). - self.encoding = ident[5] # Data structure encoding (endianness). - - try: - # e_fmt: Format for program header. - # p_fmt: Format for section header. - # p_idx: Indexes to find p_type, p_offset, and p_filesz. - e_fmt, self._p_fmt, self._p_idx = { - (1, 1): ("<HHIIIIIHHH", "<IIIIIIII", (0, 1, 4)), # 32-bit LSB. - (1, 2): (">HHIIIIIHHH", ">IIIIIIII", (0, 1, 4)), # 32-bit MSB. - (2, 1): ("<HHIQQQIHHH", "<IIQQQQQQ", (0, 2, 5)), # 64-bit LSB. - (2, 2): (">HHIQQQIHHH", ">IIQQQQQQ", (0, 2, 5)), # 64-bit MSB. - }[(self.capacity, self.encoding)] - except KeyError: - raise ELFInvalid( - f"unrecognized capacity ({self.capacity}) or " - f"encoding ({self.encoding})" - ) - - try: - ( - _, - self.machine, # Architecture type. - _, - _, - self._e_phoff, # Offset of program header. - _, - self.flags, # Processor-specific flags. - _, - self._e_phentsize, # Size of section. - self._e_phnum, # Number of sections. - ) = self._read(e_fmt) - except struct.error as e: - raise ELFInvalid("unable to parse machine and section information") from e - - def _read(self, fmt: str) -> Tuple[int, ...]: - return struct.unpack(fmt, self._f.read(struct.calcsize(fmt))) - - @property - def interpreter(self) -> Optional[str]: - """ - The path recorded in the ``PT_INTERP`` section header. - """ - for index in range(self._e_phnum): - self._f.seek(self._e_phoff + self._e_phentsize * index) - try: - data = self._read(self._p_fmt) - except struct.error: - continue - if data[self._p_idx[0]] != 3: # Not PT_INTERP. 
- continue - self._f.seek(data[self._p_idx[1]]) - return os.fsdecode(self._f.read(data[self._p_idx[2]])).strip("\0") - return None diff --git a/spaces/predictive-singularity/Singularity/README.md b/spaces/predictive-singularity/Singularity/README.md deleted file mode 100644 index 3f8ad23968282d38ba3d7e0b9490574d712631e2..0000000000000000000000000000000000000000 --- a/spaces/predictive-singularity/Singularity/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Singularity -emoji: 🌖 -colorFrom: black -colorTo: white -sdk: gradio -sdk_version: 3.29.0 -app_file: app.py -pinned: true -license: unlicense ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/contourpy/_version.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/contourpy/_version.py deleted file mode 100644 index a82b376d2d72e66e1eb1b713f181f287dcea47a1..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/contourpy/_version.py +++ /dev/null @@ -1 +0,0 @@ -__version__ = "1.1.1" diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/pens/teePen.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/pens/teePen.py deleted file mode 100644 index 2828175a7c02c1858db5cbfc45c8686f3187a50e..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/pens/teePen.py +++ /dev/null @@ -1,54 +0,0 @@ -"""Pen multiplexing drawing to one or more pens.""" -from fontTools.pens.basePen import AbstractPen - - -__all__ = ["TeePen"] - - -class TeePen(AbstractPen): - """Pen multiplexing drawing to one or more pens. - - Use either as TeePen(pen1, pen2, ...) 
or TeePen(iterableOfPens).""" - - def __init__(self, *pens): - if len(pens) == 1: - pens = pens[0] - self.pens = pens - - def moveTo(self, p0): - for pen in self.pens: - pen.moveTo(p0) - - def lineTo(self, p1): - for pen in self.pens: - pen.lineTo(p1) - - def qCurveTo(self, *points): - for pen in self.pens: - pen.qCurveTo(*points) - - def curveTo(self, *points): - for pen in self.pens: - pen.curveTo(*points) - - def closePath(self): - for pen in self.pens: - pen.closePath() - - def endPath(self): - for pen in self.pens: - pen.endPath() - - def addComponent(self, glyphName, transformation): - for pen in self.pens: - pen.addComponent(glyphName, transformation) - - -if __name__ == "__main__": - from fontTools.pens.basePen import _TestPen - - pen = TeePen(_TestPen(), _TestPen()) - pen.moveTo((0, 0)) - pen.lineTo((0, 100)) - pen.curveTo((50, 75), (60, 50), (50, 25)) - pen.closePath() diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/cli/commands/components/app.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/cli/commands/components/app.py deleted file mode 100644 index e8984c0fb40caa77b25628c8219a1631824328f8..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/cli/commands/components/app.py +++ /dev/null @@ -1,22 +0,0 @@ -from typer import Typer - -from .build import _build -from .create import _create -from .dev import _dev -from .install_component import _install -from .publish import _publish -from .show import _show - -app = Typer(help="Create and publish a new Gradio component") - -app.command("create", help="Create a new component.")(_create) -app.command( - "build", - help="Build the component for distribution. Must be called from the component directory.", -)(_build) -app.command("dev", help="Launch the custom component demo in development mode.")(_dev) -app.command("show", help="Show the list of available templates")(_show) -app.command("install", help="Install the custom component in the current environment")( - _install -) -app.command("publish", help="Publish a component to PyPI and HuggingFace Hub")(_publish) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/idna/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/idna/__init__.py deleted file mode 100644 index a40eeafcc914108ca79c5d83d6e81da1b29c6e80..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/idna/__init__.py +++ /dev/null @@ -1,44 +0,0 @@ -from .package_data import __version__ -from .core import ( - IDNABidiError, - IDNAError, - InvalidCodepoint, - InvalidCodepointContext, - alabel, - check_bidi, - check_hyphen_ok, - check_initial_combiner, - check_label, - check_nfc, - decode, - encode, - ulabel, - uts46_remap, - valid_contextj, - valid_contexto, - valid_label_length, - valid_string_length, -) -from .intranges import intranges_contain - -__all__ = [ - "IDNABidiError", - "IDNAError", - "InvalidCodepoint", - "InvalidCodepointContext", - "alabel", - "check_bidi", - "check_hyphen_ok", - "check_initial_combiner", - "check_label", - "check_nfc", - "decode", - "encode", - "intranges_contain", - "ulabel", - "uts46_remap", - "valid_contextj", - "valid_contexto", - "valid_label_length", - "valid_string_length", -] diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/distutils/command/sdist.py 
b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/distutils/command/sdist.py deleted file mode 100644 index e34193883dea739b09792a86bfc3d4c03b42cb5a..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/distutils/command/sdist.py +++ /dev/null @@ -1,27 +0,0 @@ -import sys -if 'setuptools' in sys.modules: - from setuptools.command.sdist import sdist as old_sdist -else: - from distutils.command.sdist import sdist as old_sdist - -from numpy.distutils.misc_util import get_data_files - -class sdist(old_sdist): - - def add_defaults (self): - old_sdist.add_defaults(self) - - dist = self.distribution - - if dist.has_data_files(): - for data in dist.data_files: - self.filelist.extend(get_data_files(data)) - - if dist.has_headers(): - headers = [] - for h in dist.headers: - if isinstance(h, str): headers.append(h) - else: headers.append(h[1]) - self.filelist.extend(headers) - - return diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/lib/_version.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/lib/_version.py deleted file mode 100644 index bfac5f814501767bd311084af41edddaf1db7b71..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/lib/_version.py +++ /dev/null @@ -1,155 +0,0 @@ -"""Utility to compare (NumPy) version strings. - -The NumpyVersion class allows properly comparing numpy version strings. -The LooseVersion and StrictVersion classes that distutils provides don't -work; they don't recognize anything like alpha/beta/rc/dev versions. - -""" -import re - - -__all__ = ['NumpyVersion'] - - -class NumpyVersion(): - """Parse and compare numpy version strings. - - NumPy has the following versioning scheme (numbers given are examples; they - can be > 9 in principle): - - - Released version: '1.8.0', '1.8.1', etc. - - Alpha: '1.8.0a1', '1.8.0a2', etc. - - Beta: '1.8.0b1', '1.8.0b2', etc. - - Release candidates: '1.8.0rc1', '1.8.0rc2', etc. - - Development versions: '1.8.0.dev-f1234afa' (git commit hash appended) - - Development versions after a1: '1.8.0a1.dev-f1234afa', - '1.8.0b2.dev-f1234afa', - '1.8.1rc1.dev-f1234afa', etc. - - Development versions (no git hash available): '1.8.0.dev-Unknown' - - Comparing needs to be done against a valid version string or other - `NumpyVersion` instance. Note that all development versions of the same - (pre-)release compare equal. - - .. versionadded:: 1.9.0 - - Parameters - ---------- - vstring : str - NumPy version string (``np.__version__``). - - Examples - -------- - >>> from numpy.lib import NumpyVersion - >>> if NumpyVersion(np.__version__) < '1.7.0': - ... print('skip') - >>> # skip - - >>> NumpyVersion('1.7') # raises ValueError, add ".0" - Traceback (most recent call last): - ... 
- ValueError: Not a valid numpy version string - - """ - - def __init__(self, vstring): - self.vstring = vstring - ver_main = re.match(r'\d+\.\d+\.\d+', vstring) - if not ver_main: - raise ValueError("Not a valid numpy version string") - - self.version = ver_main.group() - self.major, self.minor, self.bugfix = [int(x) for x in - self.version.split('.')] - if len(vstring) == ver_main.end(): - self.pre_release = 'final' - else: - alpha = re.match(r'a\d', vstring[ver_main.end():]) - beta = re.match(r'b\d', vstring[ver_main.end():]) - rc = re.match(r'rc\d', vstring[ver_main.end():]) - pre_rel = [m for m in [alpha, beta, rc] if m is not None] - if pre_rel: - self.pre_release = pre_rel[0].group() - else: - self.pre_release = '' - - self.is_devversion = bool(re.search(r'.dev', vstring)) - - def _compare_version(self, other): - """Compare major.minor.bugfix""" - if self.major == other.major: - if self.minor == other.minor: - if self.bugfix == other.bugfix: - vercmp = 0 - elif self.bugfix > other.bugfix: - vercmp = 1 - else: - vercmp = -1 - elif self.minor > other.minor: - vercmp = 1 - else: - vercmp = -1 - elif self.major > other.major: - vercmp = 1 - else: - vercmp = -1 - - return vercmp - - def _compare_pre_release(self, other): - """Compare alpha/beta/rc/final.""" - if self.pre_release == other.pre_release: - vercmp = 0 - elif self.pre_release == 'final': - vercmp = 1 - elif other.pre_release == 'final': - vercmp = -1 - elif self.pre_release > other.pre_release: - vercmp = 1 - else: - vercmp = -1 - - return vercmp - - def _compare(self, other): - if not isinstance(other, (str, NumpyVersion)): - raise ValueError("Invalid object to compare with NumpyVersion.") - - if isinstance(other, str): - other = NumpyVersion(other) - - vercmp = self._compare_version(other) - if vercmp == 0: - # Same x.y.z version, check for alpha/beta/rc - vercmp = self._compare_pre_release(other) - if vercmp == 0: - # Same version and same pre-release, check if dev version - if self.is_devversion is other.is_devversion: - vercmp = 0 - elif self.is_devversion: - vercmp = -1 - else: - vercmp = 1 - - return vercmp - - def __lt__(self, other): - return self._compare(other) < 0 - - def __le__(self, other): - return self._compare(other) <= 0 - - def __eq__(self, other): - return self._compare(other) == 0 - - def __ne__(self, other): - return self._compare(other) != 0 - - def __gt__(self, other): - return self._compare(other) > 0 - - def __ge__(self, other): - return self._compare(other) >= 0 - - def __repr__(self): - return "NumpyVersion(%s)" % self.vstring diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/frame/indexing/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/frame/indexing/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/frame/methods/test_tz_localize.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/frame/methods/test_tz_localize.py deleted file mode 100644 index ed2b0b247e62c55b4b2c5fc84fb1ee0cb7f564ab..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/frame/methods/test_tz_localize.py +++ /dev/null @@ -1,68 +0,0 @@ -from datetime import timezone - -import numpy as np -import pytest - -from pandas import ( - DataFrame, - Series, - date_range, -) -import pandas._testing as 
tm - - -class TestTZLocalize: - # See also: - # test_tz_convert_and_localize in test_tz_convert - - def test_tz_localize(self, frame_or_series): - rng = date_range("1/1/2011", periods=100, freq="H") - - obj = DataFrame({"a": 1}, index=rng) - obj = tm.get_obj(obj, frame_or_series) - - result = obj.tz_localize("utc") - expected = DataFrame({"a": 1}, rng.tz_localize("UTC")) - expected = tm.get_obj(expected, frame_or_series) - - assert result.index.tz is timezone.utc - tm.assert_equal(result, expected) - - def test_tz_localize_axis1(self): - rng = date_range("1/1/2011", periods=100, freq="H") - - df = DataFrame({"a": 1}, index=rng) - - df = df.T - result = df.tz_localize("utc", axis=1) - assert result.columns.tz is timezone.utc - - expected = DataFrame({"a": 1}, rng.tz_localize("UTC")) - - tm.assert_frame_equal(result, expected.T) - - def test_tz_localize_naive(self, frame_or_series): - # Can't localize if already tz-aware - rng = date_range("1/1/2011", periods=100, freq="H", tz="utc") - ts = Series(1, index=rng) - ts = frame_or_series(ts) - - with pytest.raises(TypeError, match="Already tz-aware"): - ts.tz_localize("US/Eastern") - - @pytest.mark.parametrize("copy", [True, False]) - def test_tz_localize_copy_inplace_mutate(self, copy, frame_or_series): - # GH#6326 - obj = frame_or_series( - np.arange(0, 5), index=date_range("20131027", periods=5, freq="1H", tz=None) - ) - orig = obj.copy() - result = obj.tz_localize("UTC", copy=copy) - expected = frame_or_series( - np.arange(0, 5), - index=date_range("20131027", periods=5, freq="1H", tz="UTC"), - ) - tm.assert_equal(result, expected) - tm.assert_equal(obj, orig) - assert result.index is not obj.index - assert result is not obj diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexes/interval/test_astype.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexes/interval/test_astype.py deleted file mode 100644 index 59c555b9644a1230dc60d622fb5fdb80a1743afe..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexes/interval/test_astype.py +++ /dev/null @@ -1,248 +0,0 @@ -import re - -import numpy as np -import pytest - -from pandas.core.dtypes.dtypes import ( - CategoricalDtype, - IntervalDtype, -) - -from pandas import ( - CategoricalIndex, - Index, - IntervalIndex, - NaT, - Timedelta, - Timestamp, - interval_range, -) -import pandas._testing as tm - - -class AstypeTests: - """Tests common to IntervalIndex with any subtype""" - - def test_astype_idempotent(self, index): - result = index.astype("interval") - tm.assert_index_equal(result, index) - - result = index.astype(index.dtype) - tm.assert_index_equal(result, index) - - def test_astype_object(self, index): - result = index.astype(object) - expected = Index(index.values, dtype="object") - tm.assert_index_equal(result, expected) - assert not result.equals(index) - - def test_astype_category(self, index): - result = index.astype("category") - expected = CategoricalIndex(index.values) - tm.assert_index_equal(result, expected) - - result = index.astype(CategoricalDtype()) - tm.assert_index_equal(result, expected) - - # non-default params - categories = index.dropna().unique().values[:-1] - dtype = CategoricalDtype(categories=categories, ordered=True) - result = index.astype(dtype) - expected = CategoricalIndex(index.values, categories=categories, ordered=True) - tm.assert_index_equal(result, expected) - - @pytest.mark.parametrize( - "dtype", - [ - 
"int64", - "uint64", - "float64", - "complex128", - "period[M]", - "timedelta64", - "timedelta64[ns]", - "datetime64", - "datetime64[ns]", - "datetime64[ns, US/Eastern]", - ], - ) - def test_astype_cannot_cast(self, index, dtype): - msg = "Cannot cast IntervalIndex to dtype" - with pytest.raises(TypeError, match=msg): - index.astype(dtype) - - def test_astype_invalid_dtype(self, index): - msg = "data type [\"']fake_dtype[\"'] not understood" - with pytest.raises(TypeError, match=msg): - index.astype("fake_dtype") - - -class TestIntSubtype(AstypeTests): - """Tests specific to IntervalIndex with integer-like subtype""" - - indexes = [ - IntervalIndex.from_breaks(np.arange(-10, 11, dtype="int64")), - IntervalIndex.from_breaks(np.arange(100, dtype="uint64"), closed="left"), - ] - - @pytest.fixture(params=indexes) - def index(self, request): - return request.param - - @pytest.mark.parametrize( - "subtype", ["float64", "datetime64[ns]", "timedelta64[ns]"] - ) - def test_subtype_conversion(self, index, subtype): - dtype = IntervalDtype(subtype, index.closed) - result = index.astype(dtype) - expected = IntervalIndex.from_arrays( - index.left.astype(subtype), index.right.astype(subtype), closed=index.closed - ) - tm.assert_index_equal(result, expected) - - @pytest.mark.parametrize( - "subtype_start, subtype_end", [("int64", "uint64"), ("uint64", "int64")] - ) - def test_subtype_integer(self, subtype_start, subtype_end): - index = IntervalIndex.from_breaks(np.arange(100, dtype=subtype_start)) - dtype = IntervalDtype(subtype_end, index.closed) - result = index.astype(dtype) - expected = IntervalIndex.from_arrays( - index.left.astype(subtype_end), - index.right.astype(subtype_end), - closed=index.closed, - ) - tm.assert_index_equal(result, expected) - - @pytest.mark.xfail(reason="GH#15832") - def test_subtype_integer_errors(self): - # int64 -> uint64 fails with negative values - index = interval_range(-10, 10) - dtype = IntervalDtype("uint64", "right") - - # Until we decide what the exception message _should_ be, we - # assert something that it should _not_ be. 
- # We should _not_ be getting a message suggesting that the -10 - # has been wrapped around to a large-positive integer - msg = "^(?!(left side of interval must be <= right side))" - with pytest.raises(ValueError, match=msg): - index.astype(dtype) - - -class TestFloatSubtype(AstypeTests): - """Tests specific to IntervalIndex with float subtype""" - - indexes = [ - interval_range(-10.0, 10.0, closed="neither"), - IntervalIndex.from_arrays( - [-1.5, np.nan, 0.0, 0.0, 1.5], [-0.5, np.nan, 1.0, 1.0, 3.0], closed="both" - ), - ] - - @pytest.fixture(params=indexes) - def index(self, request): - return request.param - - @pytest.mark.parametrize("subtype", ["int64", "uint64"]) - def test_subtype_integer(self, subtype): - index = interval_range(0.0, 10.0) - dtype = IntervalDtype(subtype, "right") - result = index.astype(dtype) - expected = IntervalIndex.from_arrays( - index.left.astype(subtype), index.right.astype(subtype), closed=index.closed - ) - tm.assert_index_equal(result, expected) - - # raises with NA - msg = r"Cannot convert non-finite values \(NA or inf\) to integer" - with pytest.raises(ValueError, match=msg): - index.insert(0, np.nan).astype(dtype) - - @pytest.mark.parametrize("subtype", ["int64", "uint64"]) - def test_subtype_integer_with_non_integer_borders(self, subtype): - index = interval_range(0.0, 3.0, freq=0.25) - dtype = IntervalDtype(subtype, "right") - result = index.astype(dtype) - expected = IntervalIndex.from_arrays( - index.left.astype(subtype), index.right.astype(subtype), closed=index.closed - ) - tm.assert_index_equal(result, expected) - - def test_subtype_integer_errors(self): - # float64 -> uint64 fails with negative values - index = interval_range(-10.0, 10.0) - dtype = IntervalDtype("uint64", "right") - msg = re.escape( - "Cannot convert interval[float64, right] to interval[uint64, right]; " - "subtypes are incompatible" - ) - with pytest.raises(TypeError, match=msg): - index.astype(dtype) - - @pytest.mark.parametrize("subtype", ["datetime64[ns]", "timedelta64[ns]"]) - def test_subtype_datetimelike(self, index, subtype): - dtype = IntervalDtype(subtype, "right") - msg = "Cannot convert .* to .*; subtypes are incompatible" - with pytest.raises(TypeError, match=msg): - index.astype(dtype) - - -class TestDatetimelikeSubtype(AstypeTests): - """Tests specific to IntervalIndex with datetime-like subtype""" - - indexes = [ - interval_range(Timestamp("2018-01-01"), periods=10, closed="neither"), - interval_range(Timestamp("2018-01-01"), periods=10).insert(2, NaT), - interval_range(Timestamp("2018-01-01", tz="US/Eastern"), periods=10), - interval_range(Timedelta("0 days"), periods=10, closed="both"), - interval_range(Timedelta("0 days"), periods=10).insert(2, NaT), - ] - - @pytest.fixture(params=indexes) - def index(self, request): - return request.param - - @pytest.mark.parametrize("subtype", ["int64", "uint64"]) - def test_subtype_integer(self, index, subtype): - dtype = IntervalDtype(subtype, "right") - - if subtype != "int64": - msg = ( - r"Cannot convert interval\[(timedelta64|datetime64)\[ns.*\], .*\] " - r"to interval\[uint64, .*\]" - ) - with pytest.raises(TypeError, match=msg): - index.astype(dtype) - return - - result = index.astype(dtype) - new_left = index.left.astype(subtype) - new_right = index.right.astype(subtype) - - expected = IntervalIndex.from_arrays(new_left, new_right, closed=index.closed) - tm.assert_index_equal(result, expected) - - def test_subtype_float(self, index): - dtype = IntervalDtype("float64", "right") - msg = "Cannot convert .* to .*; 
subtypes are incompatible" - with pytest.raises(TypeError, match=msg): - index.astype(dtype) - - def test_subtype_datetimelike(self): - # datetime -> timedelta raises - dtype = IntervalDtype("timedelta64[ns]", "right") - msg = "Cannot convert .* to .*; subtypes are incompatible" - - index = interval_range(Timestamp("2018-01-01"), periods=10) - with pytest.raises(TypeError, match=msg): - index.astype(dtype) - - index = interval_range(Timestamp("2018-01-01", tz="CET"), periods=10) - with pytest.raises(TypeError, match=msg): - index.astype(dtype) - - # timedelta -> datetime raises - dtype = IntervalDtype("datetime64[ns]", "right") - index = interval_range(Timedelta("0 days"), periods=10) - with pytest.raises(TypeError, match=msg): - index.astype(dtype) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/scalar/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/scalar/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/cmdline.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/cmdline.py deleted file mode 100644 index 435231e65192712cd757e9e425d1b3cb6aada35f..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/cmdline.py +++ /dev/null @@ -1,668 +0,0 @@ -""" - pygments.cmdline - ~~~~~~~~~~~~~~~~ - - Command line interface. - - :copyright: Copyright 2006-2023 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. -""" - -import os -import sys -import shutil -import argparse -from textwrap import dedent - -from pygments import __version__, highlight -from pygments.util import ClassNotFound, OptionError, docstring_headline, \ - guess_decode, guess_decode_from_terminal, terminal_encoding, \ - UnclosingTextIOWrapper -from pygments.lexers import get_all_lexers, get_lexer_by_name, guess_lexer, \ - load_lexer_from_file, get_lexer_for_filename, find_lexer_class_for_filename -from pygments.lexers.special import TextLexer -from pygments.formatters.latex import LatexEmbeddedLexer, LatexFormatter -from pygments.formatters import get_all_formatters, get_formatter_by_name, \ - load_formatter_from_file, get_formatter_for_filename, find_formatter_class -from pygments.formatters.terminal import TerminalFormatter -from pygments.formatters.terminal256 import Terminal256Formatter, TerminalTrueColorFormatter -from pygments.filters import get_all_filters, find_filter_class -from pygments.styles import get_all_styles, get_style_by_name - - -def _parse_options(o_strs): - opts = {} - if not o_strs: - return opts - for o_str in o_strs: - if not o_str.strip(): - continue - o_args = o_str.split(',') - for o_arg in o_args: - o_arg = o_arg.strip() - try: - o_key, o_val = o_arg.split('=', 1) - o_key = o_key.strip() - o_val = o_val.strip() - except ValueError: - opts[o_arg] = True - else: - opts[o_key] = o_val - return opts - - -def _parse_filters(f_strs): - filters = [] - if not f_strs: - return filters - for f_str in f_strs: - if ':' in f_str: - fname, fopts = f_str.split(':', 1) - filters.append((fname, _parse_options([fopts]))) - else: - filters.append((f_str, {})) - return filters - - -def _print_help(what, name): - try: - if what == 'lexer': - cls = get_lexer_by_name(name) - print("Help on the %s lexer:" % cls.name) - print(dedent(cls.__doc__)) - elif what == 'formatter': - cls 
= find_formatter_class(name) - print("Help on the %s formatter:" % cls.name) - print(dedent(cls.__doc__)) - elif what == 'filter': - cls = find_filter_class(name) - print("Help on the %s filter:" % name) - print(dedent(cls.__doc__)) - return 0 - except (AttributeError, ValueError): - print("%s not found!" % what, file=sys.stderr) - return 1 - - -def _print_list(what): - if what == 'lexer': - print() - print("Lexers:") - print("~~~~~~~") - - info = [] - for fullname, names, exts, _ in get_all_lexers(): - tup = (', '.join(names)+':', fullname, - exts and '(filenames ' + ', '.join(exts) + ')' or '') - info.append(tup) - info.sort() - for i in info: - print(('* %s\n %s %s') % i) - - elif what == 'formatter': - print() - print("Formatters:") - print("~~~~~~~~~~~") - - info = [] - for cls in get_all_formatters(): - doc = docstring_headline(cls) - tup = (', '.join(cls.aliases) + ':', doc, cls.filenames and - '(filenames ' + ', '.join(cls.filenames) + ')' or '') - info.append(tup) - info.sort() - for i in info: - print(('* %s\n %s %s') % i) - - elif what == 'filter': - print() - print("Filters:") - print("~~~~~~~~") - - for name in get_all_filters(): - cls = find_filter_class(name) - print("* " + name + ':') - print(" %s" % docstring_headline(cls)) - - elif what == 'style': - print() - print("Styles:") - print("~~~~~~~") - - for name in get_all_styles(): - cls = get_style_by_name(name) - print("* " + name + ':') - print(" %s" % docstring_headline(cls)) - - -def _print_list_as_json(requested_items): - import json - result = {} - if 'lexer' in requested_items: - info = {} - for fullname, names, filenames, mimetypes in get_all_lexers(): - info[fullname] = { - 'aliases': names, - 'filenames': filenames, - 'mimetypes': mimetypes - } - result['lexers'] = info - - if 'formatter' in requested_items: - info = {} - for cls in get_all_formatters(): - doc = docstring_headline(cls) - info[cls.name] = { - 'aliases': cls.aliases, - 'filenames': cls.filenames, - 'doc': doc - } - result['formatters'] = info - - if 'filter' in requested_items: - info = {} - for name in get_all_filters(): - cls = find_filter_class(name) - info[name] = { - 'doc': docstring_headline(cls) - } - result['filters'] = info - - if 'style' in requested_items: - info = {} - for name in get_all_styles(): - cls = get_style_by_name(name) - info[name] = { - 'doc': docstring_headline(cls) - } - result['styles'] = info - - json.dump(result, sys.stdout) - -def main_inner(parser, argns): - if argns.help: - parser.print_help() - return 0 - - if argns.V: - print('Pygments version %s, (c) 2006-2023 by Georg Brandl, Matthäus ' - 'Chajdas and contributors.' 
% __version__) - return 0 - - def is_only_option(opt): - return not any(v for (k, v) in vars(argns).items() if k != opt) - - # handle ``pygmentize -L`` - if argns.L is not None: - arg_set = set() - for k, v in vars(argns).items(): - if v: - arg_set.add(k) - - arg_set.discard('L') - arg_set.discard('json') - - if arg_set: - parser.print_help(sys.stderr) - return 2 - - # print version - if not argns.json: - main(['', '-V']) - allowed_types = {'lexer', 'formatter', 'filter', 'style'} - largs = [arg.rstrip('s') for arg in argns.L] - if any(arg not in allowed_types for arg in largs): - parser.print_help(sys.stderr) - return 0 - if not largs: - largs = allowed_types - if not argns.json: - for arg in largs: - _print_list(arg) - else: - _print_list_as_json(largs) - return 0 - - # handle ``pygmentize -H`` - if argns.H: - if not is_only_option('H'): - parser.print_help(sys.stderr) - return 2 - what, name = argns.H - if what not in ('lexer', 'formatter', 'filter'): - parser.print_help(sys.stderr) - return 2 - return _print_help(what, name) - - # parse -O options - parsed_opts = _parse_options(argns.O or []) - - # parse -P options - for p_opt in argns.P or []: - try: - name, value = p_opt.split('=', 1) - except ValueError: - parsed_opts[p_opt] = True - else: - parsed_opts[name] = value - - # encodings - inencoding = parsed_opts.get('inencoding', parsed_opts.get('encoding')) - outencoding = parsed_opts.get('outencoding', parsed_opts.get('encoding')) - - # handle ``pygmentize -N`` - if argns.N: - lexer = find_lexer_class_for_filename(argns.N) - if lexer is None: - lexer = TextLexer - - print(lexer.aliases[0]) - return 0 - - # handle ``pygmentize -C`` - if argns.C: - inp = sys.stdin.buffer.read() - try: - lexer = guess_lexer(inp, inencoding=inencoding) - except ClassNotFound: - lexer = TextLexer - - print(lexer.aliases[0]) - return 0 - - # handle ``pygmentize -S`` - S_opt = argns.S - a_opt = argns.a - if S_opt is not None: - f_opt = argns.f - if not f_opt: - parser.print_help(sys.stderr) - return 2 - if argns.l or argns.INPUTFILE: - parser.print_help(sys.stderr) - return 2 - - try: - parsed_opts['style'] = S_opt - fmter = get_formatter_by_name(f_opt, **parsed_opts) - except ClassNotFound as err: - print(err, file=sys.stderr) - return 1 - - print(fmter.get_style_defs(a_opt or '')) - return 0 - - # if no -S is given, -a is not allowed - if argns.a is not None: - parser.print_help(sys.stderr) - return 2 - - # parse -F options - F_opts = _parse_filters(argns.F or []) - - # -x: allow custom (eXternal) lexers and formatters - allow_custom_lexer_formatter = bool(argns.x) - - # select lexer - lexer = None - - # given by name? 
- lexername = argns.l - if lexername: - # custom lexer, located relative to user's cwd - if allow_custom_lexer_formatter and '.py' in lexername: - try: - filename = None - name = None - if ':' in lexername: - filename, name = lexername.rsplit(':', 1) - - if '.py' in name: - # This can happen on Windows: If the lexername is - # C:\lexer.py -- return to normal load path in that case - name = None - - if filename and name: - lexer = load_lexer_from_file(filename, name, - **parsed_opts) - else: - lexer = load_lexer_from_file(lexername, **parsed_opts) - except ClassNotFound as err: - print('Error:', err, file=sys.stderr) - return 1 - else: - try: - lexer = get_lexer_by_name(lexername, **parsed_opts) - except (OptionError, ClassNotFound) as err: - print('Error:', err, file=sys.stderr) - return 1 - - # read input code - code = None - - if argns.INPUTFILE: - if argns.s: - print('Error: -s option not usable when input file specified', - file=sys.stderr) - return 2 - - infn = argns.INPUTFILE - try: - with open(infn, 'rb') as infp: - code = infp.read() - except Exception as err: - print('Error: cannot read infile:', err, file=sys.stderr) - return 1 - if not inencoding: - code, inencoding = guess_decode(code) - - # do we have to guess the lexer? - if not lexer: - try: - lexer = get_lexer_for_filename(infn, code, **parsed_opts) - except ClassNotFound as err: - if argns.g: - try: - lexer = guess_lexer(code, **parsed_opts) - except ClassNotFound: - lexer = TextLexer(**parsed_opts) - else: - print('Error:', err, file=sys.stderr) - return 1 - except OptionError as err: - print('Error:', err, file=sys.stderr) - return 1 - - elif not argns.s: # treat stdin as full file (-s support is later) - # read code from terminal, always in binary mode since we want to - # decode ourselves and be tolerant with it - code = sys.stdin.buffer.read() # use .buffer to get a binary stream - if not inencoding: - code, inencoding = guess_decode_from_terminal(code, sys.stdin) - # else the lexer will do the decoding - if not lexer: - try: - lexer = guess_lexer(code, **parsed_opts) - except ClassNotFound: - lexer = TextLexer(**parsed_opts) - - else: # -s option needs a lexer with -l - if not lexer: - print('Error: when using -s a lexer has to be selected with -l', - file=sys.stderr) - return 2 - - # process filters - for fname, fopts in F_opts: - try: - lexer.add_filter(fname, **fopts) - except ClassNotFound as err: - print('Error:', err, file=sys.stderr) - return 1 - - # select formatter - outfn = argns.o - fmter = argns.f - if fmter: - # custom formatter, located relative to user's cwd - if allow_custom_lexer_formatter and '.py' in fmter: - try: - filename = None - name = None - if ':' in fmter: - # Same logic as above for custom lexer - filename, name = fmter.rsplit(':', 1) - - if '.py' in name: - name = None - - if filename and name: - fmter = load_formatter_from_file(filename, name, - **parsed_opts) - else: - fmter = load_formatter_from_file(fmter, **parsed_opts) - except ClassNotFound as err: - print('Error:', err, file=sys.stderr) - return 1 - else: - try: - fmter = get_formatter_by_name(fmter, **parsed_opts) - except (OptionError, ClassNotFound) as err: - print('Error:', err, file=sys.stderr) - return 1 - - if outfn: - if not fmter: - try: - fmter = get_formatter_for_filename(outfn, **parsed_opts) - except (OptionError, ClassNotFound) as err: - print('Error:', err, file=sys.stderr) - return 1 - try: - outfile = open(outfn, 'wb') - except Exception as err: - print('Error: cannot open outfile:', err, file=sys.stderr) - return 
1 - else: - if not fmter: - if os.environ.get('COLORTERM','') in ('truecolor', '24bit'): - fmter = TerminalTrueColorFormatter(**parsed_opts) - elif '256' in os.environ.get('TERM', ''): - fmter = Terminal256Formatter(**parsed_opts) - else: - fmter = TerminalFormatter(**parsed_opts) - outfile = sys.stdout.buffer - - # determine output encoding if not explicitly selected - if not outencoding: - if outfn: - # output file? use lexer encoding for now (can still be None) - fmter.encoding = inencoding - else: - # else use terminal encoding - fmter.encoding = terminal_encoding(sys.stdout) - - # provide coloring under Windows, if possible - if not outfn and sys.platform in ('win32', 'cygwin') and \ - fmter.name in ('Terminal', 'Terminal256'): # pragma: no cover - # unfortunately colorama doesn't support binary streams on Py3 - outfile = UnclosingTextIOWrapper(outfile, encoding=fmter.encoding) - fmter.encoding = None - try: - import colorama.initialise - except ImportError: - pass - else: - outfile = colorama.initialise.wrap_stream( - outfile, convert=None, strip=None, autoreset=False, wrap=True) - - # When using the LaTeX formatter and the option `escapeinside` is - # specified, we need a special lexer which collects escaped text - # before running the chosen language lexer. - escapeinside = parsed_opts.get('escapeinside', '') - if len(escapeinside) == 2 and isinstance(fmter, LatexFormatter): - left = escapeinside[0] - right = escapeinside[1] - lexer = LatexEmbeddedLexer(left, right, lexer) - - # ... and do it! - if not argns.s: - # process whole input as per normal... - try: - highlight(code, lexer, fmter, outfile) - finally: - if outfn: - outfile.close() - return 0 - else: - # line by line processing of stdin (eg: for 'tail -f')... - try: - while 1: - line = sys.stdin.buffer.readline() - if not line: - break - if not inencoding: - line = guess_decode_from_terminal(line, sys.stdin)[0] - highlight(line, lexer, fmter, outfile) - if hasattr(outfile, 'flush'): - outfile.flush() - return 0 - except KeyboardInterrupt: # pragma: no cover - return 0 - finally: - if outfn: - outfile.close() - - -class HelpFormatter(argparse.HelpFormatter): - def __init__(self, prog, indent_increment=2, max_help_position=16, width=None): - if width is None: - try: - width = shutil.get_terminal_size().columns - 2 - except Exception: - pass - argparse.HelpFormatter.__init__(self, prog, indent_increment, - max_help_position, width) - - -def main(args=sys.argv): - """ - Main command line entry point. - """ - desc = "Highlight an input file and write the result to an output file." - parser = argparse.ArgumentParser(description=desc, add_help=False, - formatter_class=HelpFormatter) - - operation = parser.add_argument_group('Main operation') - lexersel = operation.add_mutually_exclusive_group() - lexersel.add_argument( - '-l', metavar='LEXER', - help='Specify the lexer to use. (Query names with -L.) If not ' - 'given and -g is not present, the lexer is guessed from the filename.') - lexersel.add_argument( - '-g', action='store_true', - help='Guess the lexer from the file contents, or pass through ' - 'as plain text if nothing can be guessed.') - operation.add_argument( - '-F', metavar='FILTER[:options]', action='append', - help='Add a filter to the token stream. (Query names with -L.) ' - 'Filter options are given after a colon if necessary.') - operation.add_argument( - '-f', metavar='FORMATTER', - help='Specify the formatter to use. (Query names with -L.) 
' - 'If not given, the formatter is guessed from the output filename, ' - 'and defaults to the terminal formatter if the output is to the ' - 'terminal or an unknown file extension.') - operation.add_argument( - '-O', metavar='OPTION=value[,OPTION=value,...]', action='append', - help='Give options to the lexer and formatter as a comma-separated ' - 'list of key-value pairs. ' - 'Example: `-O bg=light,python=cool`.') - operation.add_argument( - '-P', metavar='OPTION=value', action='append', - help='Give a single option to the lexer and formatter - with this ' - 'you can pass options whose value contains commas and equal signs. ' - 'Example: `-P "heading=Pygments, the Python highlighter"`.') - operation.add_argument( - '-o', metavar='OUTPUTFILE', - help='Where to write the output. Defaults to standard output.') - - operation.add_argument( - 'INPUTFILE', nargs='?', - help='Where to read the input. Defaults to standard input.') - - flags = parser.add_argument_group('Operation flags') - flags.add_argument( - '-v', action='store_true', - help='Print a detailed traceback on unhandled exceptions, which ' - 'is useful for debugging and bug reports.') - flags.add_argument( - '-s', action='store_true', - help='Process lines one at a time until EOF, rather than waiting to ' - 'process the entire file. This only works for stdin, only for lexers ' - 'with no line-spanning constructs, and is intended for streaming ' - 'input such as you get from `tail -f`. ' - 'Example usage: `tail -f sql.log | pygmentize -s -l sql`.') - flags.add_argument( - '-x', action='store_true', - help='Allow custom lexers and formatters to be loaded from a .py file ' - 'relative to the current working directory. For example, ' - '`-l ./customlexer.py -x`. By default, this option expects a file ' - 'with a class named CustomLexer or CustomFormatter; you can also ' - 'specify your own class name with a colon (`-l ./lexer.py:MyLexer`). ' - 'Users should be very careful not to use this option with untrusted ' - 'files, because it will import and run them.') - flags.add_argument('--json', help='Output as JSON. This can ' - 'be only used in conjunction with -L.', - default=False, - action='store_true') - - special_modes_group = parser.add_argument_group( - 'Special modes - do not do any highlighting') - special_modes = special_modes_group.add_mutually_exclusive_group() - special_modes.add_argument( - '-S', metavar='STYLE -f formatter', - help='Print style definitions for STYLE for a formatter ' - 'given with -f. The argument given by -a is formatter ' - 'dependent.') - special_modes.add_argument( - '-L', nargs='*', metavar='WHAT', - help='List lexers, formatters, styles or filters -- ' - 'give additional arguments for the thing(s) you want to list ' - '(e.g. "styles"), or omit them to list everything.') - special_modes.add_argument( - '-N', metavar='FILENAME', - help='Guess and print out a lexer name based solely on the given ' - 'filename. Does not take input or highlight anything. 
If no specific ' - 'lexer can be determined, "text" is printed.') - special_modes.add_argument( - '-C', action='store_true', - help='Like -N, but print out a lexer name based solely on ' - 'a given content from standard input.') - special_modes.add_argument( - '-H', action='store', nargs=2, metavar=('NAME', 'TYPE'), - help='Print detailed help for the object <name> of type <type>, ' - 'where <type> is one of "lexer", "formatter" or "filter".') - special_modes.add_argument( - '-V', action='store_true', - help='Print the package version.') - special_modes.add_argument( - '-h', '--help', action='store_true', - help='Print this help.') - special_modes_group.add_argument( - '-a', metavar='ARG', - help='Formatter-specific additional argument for the -S (print ' - 'style sheet) mode.') - - argns = parser.parse_args(args[1:]) - - try: - return main_inner(parser, argns) - except BrokenPipeError: - # someone closed our stdout, e.g. by quitting a pager. - return 0 - except Exception: - if argns.v: - print(file=sys.stderr) - print('*' * 65, file=sys.stderr) - print('An unhandled exception occurred while highlighting.', - file=sys.stderr) - print('Please report the whole traceback to the issue tracker at', - file=sys.stderr) - print('<https://github.com/pygments/pygments/issues>.', - file=sys.stderr) - print('*' * 65, file=sys.stderr) - print(file=sys.stderr) - raise - import traceback - info = traceback.format_exception(*sys.exc_info()) - msg = info[-1].strip() - if len(info) >= 3: - # extract relevant file and position info - msg += '\n (f%s)' % info[-2].split('\n')[0].strip()[1:] - print(file=sys.stderr) - print('*** Error while highlighting:', file=sys.stderr) - print(msg, file=sys.stderr) - print('*** If this is a bug you want to report, please rerun with -v.', - file=sys.stderr) - return 1 diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/referencing/_attrs.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/referencing/_attrs.py deleted file mode 100644 index 54e2b7325ce86b0e54951a7736165b3ea3d9d332..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/referencing/_attrs.py +++ /dev/null @@ -1,30 +0,0 @@ -from __future__ import annotations - -from typing import NoReturn, TypeVar - -from attrs import define as _define, frozen as _frozen - -_T = TypeVar("_T") - - -def define(cls: type[_T]) -> type[_T]: # pragma: no cover - cls.__init_subclass__ = _do_not_subclass - return _define(cls) - - -def frozen(cls: type[_T]) -> type[_T]: - cls.__init_subclass__ = _do_not_subclass - return _frozen(cls) - - -class UnsupportedSubclassing(Exception): - pass - - -@staticmethod -def _do_not_subclass() -> NoReturn: # pragma: no cover - raise UnsupportedSubclassing( - "Subclassing is not part of referencing's public API. 
" - "If no other suitable API exists for what you're trying to do, " - "feel free to file an issue asking for one.", - ) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/rich/screen.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/rich/screen.py deleted file mode 100644 index b4f7fd19de7ffc7e3c18702389e093402a633c5b..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/rich/screen.py +++ /dev/null @@ -1,54 +0,0 @@ -from typing import Optional, TYPE_CHECKING - -from .segment import Segment -from .style import StyleType -from ._loop import loop_last - - -if TYPE_CHECKING: - from .console import ( - Console, - ConsoleOptions, - RenderResult, - RenderableType, - Group, - ) - - -class Screen: - """A renderable that fills the terminal screen and crops excess. - - Args: - renderable (RenderableType): Child renderable. - style (StyleType, optional): Optional background style. Defaults to None. - """ - - renderable: "RenderableType" - - def __init__( - self, - *renderables: "RenderableType", - style: Optional[StyleType] = None, - application_mode: bool = False, - ) -> None: - from rich.console import Group - - self.renderable = Group(*renderables) - self.style = style - self.application_mode = application_mode - - def __rich_console__( - self, console: "Console", options: "ConsoleOptions" - ) -> "RenderResult": - width, height = options.size - style = console.get_style(self.style) if self.style else None - render_options = options.update(width=width, height=height) - lines = console.render_lines( - self.renderable or "", render_options, style=style, pad=True - ) - lines = Segment.set_shape(lines, width, height, style=style) - new_line = Segment("\n\r") if self.application_mode else Segment.line() - for last, line in loop_last(lines): - yield from line - if not last: - yield new_line diff --git a/spaces/pycui/RealChar/realtime_ai_character/models/interaction.py b/spaces/pycui/RealChar/realtime_ai_character/models/interaction.py deleted file mode 100644 index d8e9971ee566bd4d13901e494a18c60779403256..0000000000000000000000000000000000000000 --- a/spaces/pycui/RealChar/realtime_ai_character/models/interaction.py +++ /dev/null @@ -1,26 +0,0 @@ -from sqlalchemy import Column, Integer, String, DateTime, Unicode -import datetime -from realtime_ai_character.database.base import Base - - -class Interaction(Base): - __tablename__ = "interactions" - - id = Column(Integer, primary_key=True, index=True, nullable=False) - client_id = Column(Integer) # deprecated, use user_id instead - user_id = Column(String(50)) - session_id = Column(String(50)) - # deprecated, use client_message_unicode instead - client_message = Column(String) - # deprecated, use server_message_unicode instead - server_message = Column(String) - client_message_unicode = Column(Unicode(65535)) - server_message_unicode = Column(Unicode(65535)) - - timestamp = Column(DateTime, default=datetime.datetime.utcnow) - platform = Column(String(50)) - action_type = Column(String(50)) - - def save(self, db): - db.add(self) - db.commit() diff --git a/spaces/pyodide-demo/self-hosted/zarr-tests.js b/spaces/pyodide-demo/self-hosted/zarr-tests.js deleted file mode 100644 index 2261cea3e19320720ec9ba190839a6beee3adc0a..0000000000000000000000000000000000000000 --- a/spaces/pyodide-demo/self-hosted/zarr-tests.js +++ /dev/null @@ -1 +0,0 @@ -var Module=typeof 
globalThis.__pyodide_module!=="undefined"?globalThis.__pyodide_module:{};if(!Module.expectedDataFileDownloads){Module.expectedDataFileDownloads=0}Module.expectedDataFileDownloads++;(function(){var loadPackage=function(metadata){var PACKAGE_PATH="";if(typeof window==="object"){PACKAGE_PATH=window["encodeURIComponent"](window.location.pathname.toString().substring(0,window.location.pathname.toString().lastIndexOf("/"))+"/")}else if(typeof process==="undefined"&&typeof location!=="undefined"){PACKAGE_PATH=encodeURIComponent(location.pathname.toString().substring(0,location.pathname.toString().lastIndexOf("/"))+"/")}var PACKAGE_NAME="zarr-tests.data";var REMOTE_PACKAGE_BASE="zarr-tests.data";if(typeof Module["locateFilePackage"]==="function"&&!Module["locateFile"]){Module["locateFile"]=Module["locateFilePackage"];err("warning: you defined Module.locateFilePackage, that has been renamed to Module.locateFile (using your locateFilePackage for now)")}var REMOTE_PACKAGE_NAME=Module["locateFile"]?Module["locateFile"](REMOTE_PACKAGE_BASE,""):REMOTE_PACKAGE_BASE;var REMOTE_PACKAGE_SIZE=metadata["remote_package_size"];var PACKAGE_UUID=metadata["package_uuid"];function fetchRemotePackage(packageName,packageSize,callback,errback){if(typeof process==="object"){require("fs").readFile(packageName,(function(err,contents){if(err){errback(err)}else{callback(contents.buffer)}}));return}var xhr=new XMLHttpRequest;xhr.open("GET",packageName,true);xhr.responseType="arraybuffer";xhr.onprogress=function(event){var url=packageName;var size=packageSize;if(event.total)size=event.total;if(event.loaded){if(!xhr.addedTotal){xhr.addedTotal=true;if(!Module.dataFileDownloads)Module.dataFileDownloads={};Module.dataFileDownloads[url]={loaded:event.loaded,total:size}}else{Module.dataFileDownloads[url].loaded=event.loaded}var total=0;var loaded=0;var num=0;for(var download in Module.dataFileDownloads){var data=Module.dataFileDownloads[download];total+=data.total;loaded+=data.loaded;num++}total=Math.ceil(total*Module.expectedDataFileDownloads/num);if(Module["setStatus"])Module["setStatus"]("Downloading data... 
("+loaded+"/"+total+")")}else if(!Module.dataFileDownloads){if(Module["setStatus"])Module["setStatus"]("Downloading data...")}};xhr.onerror=function(event){throw new Error("NetworkError for: "+packageName)};xhr.onload=function(event){if(xhr.status==200||xhr.status==304||xhr.status==206||xhr.status==0&&xhr.response){var packageData=xhr.response;callback(packageData)}else{throw new Error(xhr.statusText+" : "+xhr.responseURL)}};xhr.send(null)}function handleError(error){console.error("package error:",error)}var fetchedCallback=null;var fetched=Module["getPreloadedPackage"]?Module["getPreloadedPackage"](REMOTE_PACKAGE_NAME,REMOTE_PACKAGE_SIZE):null;if(!fetched)fetchRemotePackage(REMOTE_PACKAGE_NAME,REMOTE_PACKAGE_SIZE,(function(data){if(fetchedCallback){fetchedCallback(data);fetchedCallback=null}else{fetched=data}}),handleError);function runWithFS(){function assert(check,msg){if(!check)throw msg+(new Error).stack}Module["FS_createPath"]("/","lib",true,true);Module["FS_createPath"]("/lib","python3.9",true,true);Module["FS_createPath"]("/lib/python3.9","site-packages",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages","zarr",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/zarr","tests",true,true);function processPackageData(arrayBuffer){assert(arrayBuffer,"Loading data file failed.");assert(arrayBuffer instanceof ArrayBuffer,"bad input to processPackageData");var byteArray=new Uint8Array(arrayBuffer);var curr;var compressedData={data:null,cachedOffset:146014,cachedIndexes:[-1,-1],cachedChunks:[null,null],offsets:[0,966,1837,2486,3292,4353,5327,6424,7219,8165,9097,10218,11196,11973,12760,13612,14706,15664,16590,17502,18336,19176,19949,20878,21827,22523,23244,23874,24745,25677,26458,27447,28342,29167,29957,30639,31316,32170,33173,34086,35035,36008,37026,38029,38865,39788,40649,41513,42418,43467,44209,45041,45972,46957,47957,48877,49764,50566,51631,52610,53717,54764,55744,56827,57661,58416,59281,60139,60759,61606,62723,63512,64332,65480,66326,67129,67908,68716,69536,70250,71009,71746,72546,73092,73985,74757,75499,76239,77227,78285,79246,80022,80741,81656,82261,83099,83987,84989,86029,87027,87856,88801,89835,90724,91731,92764,93613,94722,95584,96532,97401,97902,98881,99533,100173,101375,102281,103322,104359,105196,106146,107249,108441,109175,110129,110979,111567,112174,113093,113963,114783,115662,116441,117549,118656,119643,120601,121525,122440,123725,124645,125477,126408,127413,128461,129312,130340,131054,131626,132358,133409,134393,135125,136251,137192,138226,139344,140362,141322,142375,143001,143776,145056],sizes:[966,871,649,806,1061,974,1097,795,946,932,1121,978,777,787,852,1094,958,926,912,834,840,773,929,949,696,721,630,871,932,781,989,895,825,790,682,677,854,1003,913,949,973,1018,1003,836,923,861,864,905,1049,742,832,931,985,1e3,920,887,802,1065,979,1107,1047,980,1083,834,755,865,858,620,847,1117,789,820,1148,846,803,779,808,820,714,759,737,800,546,893,772,742,740,988,1058,961,776,719,915,605,838,888,1002,1040,998,829,945,1034,889,1007,1033,849,1109,862,948,869,501,979,652,640,1202,906,1041,1037,837,950,1103,1192,734,954,850,588,607,919,870,820,879,779,1108,1107,987,958,924,915,1285,920,832,931,1005,1048,851,1028,714,572,732,1051,984,732,1126,941,1034,1118,1018,960,1053,626,775,1280,958],successes:[1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1
,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1]};compressedData["data"]=byteArray;assert(typeof Module.LZ4==="object","LZ4 not present - was your app build with -s LZ4=1 ?");Module.LZ4.loadPackage({metadata:metadata,compressedData:compressedData},true);Module["removeRunDependency"]("datafile_zarr-tests.data")}Module["addRunDependency"]("datafile_zarr-tests.data");if(!Module.preloadResults)Module.preloadResults={};Module.preloadResults[PACKAGE_NAME]={fromCache:false};if(fetched){processPackageData(fetched);fetched=null}else{fetchedCallback=processPackageData}}if(Module["calledRun"]){runWithFS()}else{if(!Module["preRun"])Module["preRun"]=[];Module["preRun"].push(runWithFS)}};loadPackage({files:[{filename:"/lib/python3.9/site-packages/zarr/tests/__init__.py",start:0,end:0,audio:0},{filename:"/lib/python3.9/site-packages/zarr/tests/test_attrs.py",start:0,end:7464,audio:0},{filename:"/lib/python3.9/site-packages/zarr/tests/test_convenience.py",start:7464,end:30504,audio:0},{filename:"/lib/python3.9/site-packages/zarr/tests/test_core.py",start:30504,end:127883,audio:0},{filename:"/lib/python3.9/site-packages/zarr/tests/test_creation.py",start:127883,end:142680,audio:0},{filename:"/lib/python3.9/site-packages/zarr/tests/test_filters.py",start:142680,end:148576,audio:0},{filename:"/lib/python3.9/site-packages/zarr/tests/test_hierarchy.py",start:148576,end:190369,audio:0},{filename:"/lib/python3.9/site-packages/zarr/tests/test_indexing.py",start:190369,end:233586,audio:0},{filename:"/lib/python3.9/site-packages/zarr/tests/test_info.py",start:233586,end:234585,audio:0},{filename:"/lib/python3.9/site-packages/zarr/tests/test_meta.py",start:234585,end:247324,audio:0},{filename:"/lib/python3.9/site-packages/zarr/tests/test_storage.py",start:247324,end:315382,audio:0},{filename:"/lib/python3.9/site-packages/zarr/tests/test_sync.py",start:315382,end:324695,audio:0},{filename:"/lib/python3.9/site-packages/zarr/tests/test_util.py",start:324695,end:332059,audio:0},{filename:"/lib/python3.9/site-packages/zarr/tests/util.py",start:332059,end:333428,audio:0}],remote_package_size:150110,package_uuid:"0693cd1d-232d-4139-8593-8a761ce3945d"})})(); \ No newline at end of file diff --git a/spaces/pythainlp/wangchanglm-demo-cpu/index.html b/spaces/pythainlp/wangchanglm-demo-cpu/index.html deleted file mode 100644 index e418f0058e1ccf9cb16eaad47114faba3a12bc41..0000000000000000000000000000000000000000 --- a/spaces/pythainlp/wangchanglm-demo-cpu/index.html +++ /dev/null @@ -1,11 +0,0 @@ - - - - Old Page - - - - -

    This page has been moved. If you are not redirected within 0 seconds, click here to go to the WangChanGLM homepage.

    - - \ No newline at end of file diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Goneyd 02 Gone Mania Yx TOP.md b/spaces/quidiaMuxgu/Expedit-SAM/Goneyd 02 Gone Mania Yx TOP.md deleted file mode 100644 index 4872f3f06dc3991a8efe58029d45d5628bcde3b2..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Goneyd 02 Gone Mania Yx TOP.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Goneyd 02 Gone Mania Yx


    DOWNLOAD ⇒ https://geags.com/2uCrHd



    -
    -
    -
    -

    diff --git a/spaces/r3gm/RVC_HF/lib/uvr5_pack/lib_v5/nets_61968KB.py b/spaces/r3gm/RVC_HF/lib/uvr5_pack/lib_v5/nets_61968KB.py deleted file mode 100644 index becbfae85683a13bbb19d3ea6c840da24e61e01e..0000000000000000000000000000000000000000 --- a/spaces/r3gm/RVC_HF/lib/uvr5_pack/lib_v5/nets_61968KB.py +++ /dev/null @@ -1,122 +0,0 @@ -import torch -from torch import nn -import torch.nn.functional as F - -from . import layers_123821KB as layers - - -class BaseASPPNet(nn.Module): - def __init__(self, nin, ch, dilations=(4, 8, 16)): - super(BaseASPPNet, self).__init__() - self.enc1 = layers.Encoder(nin, ch, 3, 2, 1) - self.enc2 = layers.Encoder(ch, ch * 2, 3, 2, 1) - self.enc3 = layers.Encoder(ch * 2, ch * 4, 3, 2, 1) - self.enc4 = layers.Encoder(ch * 4, ch * 8, 3, 2, 1) - - self.aspp = layers.ASPPModule(ch * 8, ch * 16, dilations) - - self.dec4 = layers.Decoder(ch * (8 + 16), ch * 8, 3, 1, 1) - self.dec3 = layers.Decoder(ch * (4 + 8), ch * 4, 3, 1, 1) - self.dec2 = layers.Decoder(ch * (2 + 4), ch * 2, 3, 1, 1) - self.dec1 = layers.Decoder(ch * (1 + 2), ch, 3, 1, 1) - - def __call__(self, x): - h, e1 = self.enc1(x) - h, e2 = self.enc2(h) - h, e3 = self.enc3(h) - h, e4 = self.enc4(h) - - h = self.aspp(h) - - h = self.dec4(h, e4) - h = self.dec3(h, e3) - h = self.dec2(h, e2) - h = self.dec1(h, e1) - - return h - - -class CascadedASPPNet(nn.Module): - def __init__(self, n_fft): - super(CascadedASPPNet, self).__init__() - self.stg1_low_band_net = BaseASPPNet(2, 32) - self.stg1_high_band_net = BaseASPPNet(2, 32) - - self.stg2_bridge = layers.Conv2DBNActiv(34, 16, 1, 1, 0) - self.stg2_full_band_net = BaseASPPNet(16, 32) - - self.stg3_bridge = layers.Conv2DBNActiv(66, 32, 1, 1, 0) - self.stg3_full_band_net = BaseASPPNet(32, 64) - - self.out = nn.Conv2d(64, 2, 1, bias=False) - self.aux1_out = nn.Conv2d(32, 2, 1, bias=False) - self.aux2_out = nn.Conv2d(32, 2, 1, bias=False) - - self.max_bin = n_fft // 2 - self.output_bin = n_fft // 2 + 1 - - self.offset = 128 - - def forward(self, x, aggressiveness=None): - mix = x.detach() - x = x.clone() - - x = x[:, :, : self.max_bin] - - bandw = x.size()[2] // 2 - aux1 = torch.cat( - [ - self.stg1_low_band_net(x[:, :, :bandw]), - self.stg1_high_band_net(x[:, :, bandw:]), - ], - dim=2, - ) - - h = torch.cat([x, aux1], dim=1) - aux2 = self.stg2_full_band_net(self.stg2_bridge(h)) - - h = torch.cat([x, aux1, aux2], dim=1) - h = self.stg3_full_band_net(self.stg3_bridge(h)) - - mask = torch.sigmoid(self.out(h)) - mask = F.pad( - input=mask, - pad=(0, 0, 0, self.output_bin - mask.size()[2]), - mode="replicate", - ) - - if self.training: - aux1 = torch.sigmoid(self.aux1_out(aux1)) - aux1 = F.pad( - input=aux1, - pad=(0, 0, 0, self.output_bin - aux1.size()[2]), - mode="replicate", - ) - aux2 = torch.sigmoid(self.aux2_out(aux2)) - aux2 = F.pad( - input=aux2, - pad=(0, 0, 0, self.output_bin - aux2.size()[2]), - mode="replicate", - ) - return mask * mix, aux1 * mix, aux2 * mix - else: - if aggressiveness: - mask[:, :, : aggressiveness["split_bin"]] = torch.pow( - mask[:, :, : aggressiveness["split_bin"]], - 1 + aggressiveness["value"] / 3, - ) - mask[:, :, aggressiveness["split_bin"] :] = torch.pow( - mask[:, :, aggressiveness["split_bin"] :], - 1 + aggressiveness["value"], - ) - - return mask * mix - - def predict(self, x_mag, aggressiveness=None): - h = self.forward(x_mag, aggressiveness) - - if self.offset > 0: - h = h[:, :, :, self.offset : -self.offset] - assert h.size()[3] > 0 - - return h diff --git a/spaces/radames/gradio-lite-candle-SAM/README.md 
b/spaces/radames/gradio-lite-candle-SAM/README.md deleted file mode 100644 index 743d0f830c68c7011a2b27a0680cd0d32657498f..0000000000000000000000000000000000000000 --- a/spaces/radames/gradio-lite-candle-SAM/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Gradio Lite Candle SAM -emoji: 🌖 -colorFrom: purple -colorTo: green -sdk: static -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Active.keyboard.v2.5..WinAll.Cracked-ARN Setup Free TOP.md b/spaces/raedeXanto/academic-chatgpt-beta/Active.keyboard.v2.5..WinAll.Cracked-ARN Setup Free TOP.md deleted file mode 100644 index 19838439b9cfe6b5ba89fe6fde90e16132d12533..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Active.keyboard.v2.5..WinAll.Cracked-ARN Setup Free TOP.md +++ /dev/null @@ -1,126 +0,0 @@ - -

    Active.keyboard.v2.5..WinAll.Cracked-ARN: A Powerful Keyboard Customization Tool

    -

    Do you want to change the layout, functions, or shortcuts of your keyboard? Do you want to create your own keyboard layouts for different languages, applications, or purposes? Do you want to have more control and flexibility over your keyboard settings? If you answered yes to any of these questions, then you might be interested in Active.keyboard.v2.5..WinAll.Cracked-ARN, a powerful keyboard customization tool that lets you do all that and more.

    -

    In this article, we will explain what Active.keyboard.v2.5..WinAll.Cracked-ARN is, how to download and install it, how to use it, and some tips and tricks for using it effectively. We will also answer some frequently asked questions about this tool and provide some resources for further learning.

    -

    Active.keyboard.v2.5..WinAll.Cracked-ARN setup free


    Download ⇒ https://tinourl.com/2uL3i7



    -

    What is Active.keyboard.v2.5..WinAll.Cracked-ARN?

    -

    Active.keyboard.v2.5..WinAll.Cracked-ARN is a software program that allows you to customize your keyboard in various ways. It is developed by SoftBoy, a company that specializes in creating tools for enhancing the functionality of Windows operating systems.

    -

    Features and benefits of Active.keyboard.v2.5..WinAll.Cracked-ARN

    -

    Some of the features and benefits of Active.keyboard.v2.5..WinAll.Cracked-ARN are:

    -
      -
    • It lets you create custom keyboard layouts from scratch or modify existing ones based on layouts for English and other languages.
    • -
    • It lets you assign keyboard shortcuts to any key or combination of keys, such as Ctrl, Alt, Shift, etc.
    • -
    • It lets you test your keyboard layout before saving it and apply it to any Windows application.
    • -
    • It lets you backup and restore your keyboard layouts easily.
    • -
    • It lets you switch between different keyboard layouts quickly with a hotkey or a tray icon.
    • -
    • It has a user-friendly interface that is easy to navigate and use.
    • -
    • It supports Windows XP, Vista, 7, 8, 10, and other compatible Windows operating systems.

      How to download and install Active.keyboard.v2.5..WinAll.Cracked-ARN

      -

      To download and install Active.keyboard.v2.5..WinAll.Cracked-ARN, you need to follow these steps:

      -
        -
      1. Go to the official website of SoftBoy and click on the Download button for Active.keyboard.v2.5..WinAll.Cracked-ARN. You can also find the download link from other sources, such as torrent sites or file-sharing platforms, but be careful of malware or viruses.
      2. -
      3. Save the file to your computer and unzip it using a program like WinRAR or 7-Zip. You will see a folder named Active.keyboard.v2.5..WinAll.Cracked-ARN that contains the setup file and the crack file.
      4. -
      5. Run the setup file and follow the instructions to install Active.keyboard.v2.5..WinAll.Cracked-ARN on your computer. You can choose the destination folder and the start menu folder for the program.
      6. -
      7. After the installation is complete, do not run the program yet. Copy the crack file from the folder and paste it into the installation folder, replacing the original file. This will activate the full version of Active.keyboard.v2.5..WinAll.Cracked-ARN without requiring a license key or registration.
      8. -
      9. Now you can run Active.keyboard.v2.5..WinAll.Cracked-ARN from your desktop or start menu and enjoy its features.
      10. -
      -

      How to use Active.keyboard.v2.5..WinAll.Cracked-ARN

      -

      Once you have downloaded and installed Active.keyboard.v2.5..WinAll.Cracked-ARN, you can start using it to customize your keyboard. Here are some of the main functions and how to use them:

      -

      How to create a custom keyboard layout with Active.keyboard.v2.5..WinAll.Cracked-ARN

      -

      To create a custom keyboard layout with Active.keyboard.v2.5..WinAll.Cracked-ARN, you need to follow these steps:

      -
        -
      1. Open Active.keyboard.v2.5..WinAll.Cracked-ARN and click on the New button on the toolbar or go to File > New. This will open a new window where you can design your keyboard layout.
      2. -
      3. Select a base layout from the drop-down menu at the top left corner of the window. You can choose from layouts for English and other languages, such as Arabic, Chinese, French, German, etc.
      4. -
      5. Drag and drop keys from the bottom panel to the keyboard panel at the center of the window. You can also right-click on any key and select Edit to change its properties, such as name, caption, color, font, etc.
      6. -
      7. You can also add special keys, such as function keys, modifier keys, multimedia keys, etc., by clicking on the Add Special Key button on the toolbar or going to Edit > Add Special Key. You can then customize their properties as well.
      8. -
      9. You can also resize, move, delete, copy, paste, or align keys by using the buttons on the toolbar or the options in the Edit menu.
      10. -
      11. You can also add a background image to your keyboard layout by clicking on the Add Background Image button on the toolbar or going to Edit > Add Background Image. You can then browse for an image file on your computer and adjust its position and size.
      12. -
      13. You can also add a description to your keyboard layout by clicking on the Add Description button on the toolbar or going to Edit > Add Description. You can then type in some text that explains your keyboard layout and its purpose.
      14. -
      15. You can also change the name of your keyboard layout by clicking on the Rename Layout button on the toolbar or going to Edit > Rename Layout. You can then type in a new name for your keyboard layout.
      16. -
      17. You can also save your keyboard layout as a template by clicking on the Save As Template button on the toolbar or going to Edit > Save As Template. This will allow you to reuse your keyboard layout for other projects.
      18. -
      -

      How to assign keyboard shortcuts with Active.keyboard.v2.5..WinAll.Cracked-ARN

      -

      To assign keyboard shortcuts with Active.keyboard.v2.5..WinAll.Cracked-ARN, you need to follow these steps:

      -
          -
        1. Open Active.keyboard.v2.5..WinAll.Cracked-ARN and click on the Shortcuts button on the toolbar or go to Tools > Shortcuts. This will open a new window where you can manage your keyboard shortcuts.
        2. -
        3. Click on the Add button at the bottom of the window or go to Edit > Add. This will open a dialog box where you can create a new keyboard shortcut.
        4. -
        5. Type in a name for your keyboard shortcut in the Name field. This will help you identify your keyboard shortcut later.
        6. -
        7. Select a key or a combination of keys from the drop-down menu in the Key field. You can choose from any key on your keyboard, including modifier keys, function keys, multimedia keys, etc.
        8. -
        9. Select an action from the drop-down menu in the Action field. You can choose from various actions, such as launching an application, opening a file or folder, executing a command, sending keystrokes, etc.
        10. -
        11. If you selected an action that requires a parameter, such as launching an application or opening a file or folder, click on the Browse button next to the Parameter field and select the file or folder you want to open or launch.
        12. -
        13. If you selected an action that requires keystrokes, such as sending keystrokes or executing a command, type in the keystrokes you want to send in the Parameter field. You can use special symbols, such as ENTER, TAB, ESC, etc., to represent special keys.
        14. -
        15. Click on the OK button to save your keyboard shortcut. You will see it appear in the list of keyboard shortcuts in the window.
        16. -
        17. You can also edit, delete, enable, disable, or test your keyboard shortcuts by using the buttons at the bottom of the window or the options in the Edit menu.
        18. -
        19. You can also export or import your keyboard shortcuts by using the buttons at the top right corner of the window or the options in the File menu.
        20. -
        -

        How to test and save your keyboard layout with Active.keyboard.v2.5..WinAll.Cracked-ARN

        -

        To test and save your keyboard layout with Active.keyboard.v2.5..WinAll.Cracked-ARN, you need to follow these steps:

        -

        -
          -
        1. Open Active.keyboard.v2.5..WinAll.Cracked-ARN and click on the Test button on the toolbar or go to Tools > Test. This will open a new window where you can test your keyboard layout.
        2. -
        3. Type in some text in the text box at the bottom of the window and see how your keyboard layout works. You can also use the virtual keyboard at the top of the window to see which keys are assigned to which functions.
        4. -
        5. If you are satisfied with your keyboard layout, click on the Save button on the toolbar or go to File > Save. This will save your keyboard layout as a .klf file on your computer.
        6. -
        7. If you want to save your keyboard layout as a different name or location, click on the Save As button on the toolbar or go to File > Save As. This will open a dialog box where you can choose a name and location for your keyboard layout file.
        8. -
        9. If you want to apply your keyboard layout to any Windows application, click on the Apply button on the toolbar or go to Tools > Apply. This will activate your keyboard layout and make it available for use.
        10. -
        11. If you want to deactivate your keyboard layout, click on the Suspend button on the toolbar or go to Tools > Suspend. This will disable your keyboard layout and restore your default settings.
        12. -
        -

        Tips and tricks for using Active.keyboard.v2.5..WinAll.Cracked-ARN

        -

        Besides creating and using custom keyboard layouts and shortcuts with Active.keyboard.v2.5..WinAll.Cracked-ARN, there are some other tips and tricks that can help you get more out of this tool. Here are some of them:

        -

        How to backup and restore your keyboard layouts with Active.keyboard.v2.5..WinAll.Cracked-ARN

        -

        To backup and restore your keyboard layouts with Active.keyboard.v2.5..WinAll.Cracked-ARN, you need to follow these steps:

        -
          -
        1. Open Active.keyboard.v2.5..WinAll.Cracked-ARN and click on the Backup button on the toolbar or go to Tools > Backup. This will open a dialog box where you can choose a name and location for your backup file.
        2. -
        3. Click on the OK button to save your backup file. This will create a .zip file that contains all your keyboard layout files and shortcuts.
        4. -
        5. To restore your keyboard layouts from a backup file, click on the Restore button on the toolbar or go to Tools > Restore. This will open a dialog box where you can browse for your backup file.
        6. -
        7. Click on the OK button to restore your keyboard layouts. This will overwrite your existing keyboard layout files and shortcuts with the ones from the backup file.
        8. -
        -

        How to switch between keyboard layouts with Active.keyboard.v2.5..WinAll.Cracked-ARN

        -

        To switch between keyboard layouts with Active.keyboard.v2.5..WinAll.Cracked-ARN, you have two options:

        -
          -
        • You can use a hotkey that you can set in the Options window. To access the Options window, click on the Options button on the toolbar or go to Tools > Options. In the Options window, go to the Hotkeys tab and select a key or a combination of keys from the drop-down menu in the Switch Layouts field. Click on the OK button to save your settings. Now you can use that hotkey to switch between your keyboard layouts.
        • -
        • You can use a tray icon that you can enable or disable in the Options window. To access the Options window, click on the Options button on the toolbar or go to Tools > Options. In the Options window, go to the Miscellaneous tab and check or uncheck the box next to Show Tray Icon. Click on the OK button to save your settings. Now you can see a tray icon in the bottom right corner of your screen that shows your current keyboard layout. You can right-click on it and select a different keyboard layout from the menu.
        • -
        -

        How to troubleshoot common issues with Active.keyboard.v2.5..WinAll.Cracked-ARN

        -

        If you encounter any problems with Active.keyboard.v2.5..WinAll.Cracked-ARN, such as errors, crashes, or conflicts, here are some possible solutions:

        -
          -
        • Make sure you have downloaded and installed Active.keyboard.v2.5..WinAll.Cracked-ARN correctly and applied the crack file properly.
        • -
        • Make sure you have the latest version of Active.keyboard.v2.5..WinAll.Cracked-ARN and update it if necessary.
        • -
        • Make sure you have compatible system requirements for Active.keyboard.v2.5..WinAll.Cracked-ARN and update your drivers if necessary.
        • -
        • Make sure you have administrator privileges on your computer and run Active.keyboard.v2.5..WinAll.Cracked-ARN as an administrator.
        • -
        • Make sure you have no other programs or processes that interfere with Active.keyboard.v2.5..WinAll.Cracked-ARN and close them if necessary.
        • -
        • If none of these solutions work, you can contact the developer of Active.keyboard.v2.5..WinAll.Cracked-ARN by sending an email to support@softboy.net or visiting their website at http://www.softboy.net/.
        • -
        -

        Conclusion

        -

        Summary of the main points

        -

        In this article, we have covered everything you need to know about Active.keyboard.v2.5..WinAll.Cracked-ARN, a powerful keyboard customization tool that lets you create and use custom keyboard layouts and shortcuts for any Windows application. We have explained what Active.keyboard.v2.5..WinAll.Cracked-ARN is, how to download and install it, how to use it, and some tips and tricks for using it effectively.

        -

        Call to action and recommendation

        -

        If you are looking for a way to enhance your productivity, efficiency, and comfort with your keyboard, we highly recommend that you try out Active.keyboard.v2. 5..WinAll.Cracked-ARN. You can download it from the official website of SoftBoy or from other sources, but make sure you apply the crack file to activate the full version. You can also check out the user manual and the video tutorials on the website for more guidance and support. You will be amazed by how much you can do with your keyboard with Active.keyboard.v2.5..WinAll.Cracked-ARN.

        -

        So what are you waiting for? Download Active.keyboard.v2.5..WinAll.Cracked-ARN today and unleash your keyboard's full potential!

        -

        FAQs

        -

        What are the system requirements for Active.keyboard.v2.5..WinAll.Cracked-ARN?

        -

        The system requirements for Active.keyboard.v2.5..WinAll.Cracked-ARN are:

        -
          -
        • Windows XP, Vista, 7, 8, 10, or other compatible Windows operating systems.
        • -
        • At least 512 MB of RAM.
        • -
        • At least 10 MB of free disk space.
        • -
        • A standard keyboard and mouse.
        • -
        -

        Is Active.keyboard.v2.5..WinAll.Cracked-ARN safe and legal to use?

        -

        Active.keyboard.v2.5..WinAll.Cracked-ARN is safe to use as long as you download it from a reliable source and scan it for malware or viruses before installing it. However, using the crack file to activate the full version of Active.keyboard.v2.5..WinAll.Cracked-ARN may be considered illegal in some countries or regions, as it violates the terms and conditions of the software license agreement. Therefore, we advise you to use Active.keyboard.v2.5..WinAll.Cracked-ARN at your own risk and discretion.

        -

        What are the alternatives to Active.keyboard.v2.5..WinAll.Cracked-ARN?

        -

        If you are looking for other keyboard customization tools, you may want to check out these alternatives to Active.keyboard.v2.5..WinAll.Cracked-ARN:

        -
          -
        • KeyTweak: A free and simple tool that lets you remap your keyboard keys using a graphical interface or a registry editor.
        • -
        • SharpKeys: A free and easy tool that lets you edit the Windows registry to change the mapping of your keyboard keys (see the registry sketch after this list).
        • -
        • KbdEdit: A paid and advanced tool that lets you create and edit custom keyboard layouts for any language or script.
        • -
        -
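
        KeyTweak and SharpKeys rely on the Windows "Scancode Map" registry value rather than an application-level layout file. As a rough, hedged illustration of that mechanism (it is not how Active.keyboard itself is documented to work), the Python sketch below builds the binary payload that would remap Caps Lock to behave as Left Ctrl; it only prints the bytes, since actually writing the value requires administrator rights on Windows and a reboot.

```python
import struct

# Minimal sketch of the Windows "Scancode Map" mechanism that registry-based
# remappers such as SharpKeys rely on.  Value layout (REG_BINARY):
#   8 zero header bytes (version + flags),
#   a 4-byte little-endian entry count (number of mappings + 1 for the terminator),
#   one 4-byte entry per mapping (new scancode word, then the physical key's word),
#   and a 4-byte zero terminator.

CAPS_LOCK = 0x3A   # scancode of the physical key being remapped
LEFT_CTRL = 0x1D   # scancode of the behaviour it should take on


def build_scancode_map(mappings):
    """Return the REG_BINARY payload for a list of (new_code, physical_key) pairs."""
    data = struct.pack("<II", 0, 0)                 # version + flags header
    data += struct.pack("<I", len(mappings) + 1)    # entry count incl. terminator
    for new_code, physical_key in mappings:
        data += struct.pack("<HH", new_code, physical_key)
    data += struct.pack("<I", 0)                    # terminating null entry
    return data


if __name__ == "__main__":
    payload = build_scancode_map([(LEFT_CTRL, CAPS_LOCK)])
    # Only print the bytes; actually applying them means writing "Scancode Map"
    # under HKLM\SYSTEM\CurrentControlSet\Control\Keyboard Layout with winreg,
    # which needs administrator rights and a reboot to take effect.
    print(",".join(f"{b:02x}" for b in payload))
```

        Because the value lives under HKEY_LOCAL_MACHINE, such remaps apply system-wide and only take effect after a restart, which is the main practical difference from an application-level layout switcher like Active.keyboard.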

        How can I contact the developer of Active.keyboard.v2.5..WinAll.Cracked-ARN?

        -

        If you have any questions, feedback, or suggestions for Active.keyboard.v2.5..WinAll.Cracked-ARN, you can contact the developer of Active.keyboard.v2.5..WinAll.Cracked-ARN by sending an email to support@softboy.net or visiting their website at http://www.softboy.net/.

        -

        Where can I find more information about keyboard customization?

        -

        If you want to learn more about keyboard customization, you can find more information from these sources:

        -
          -
        • The Keyboard Layout Project: A website that provides information and resources on keyboard layouts for different languages and scripts.
        • -
        • The Keyboard Company: A website that sells various types of keyboards and accessories for different purposes and preferences.
        • -
        • The Keyboard Magazine: A magazine that covers topics related to keyboards, music, technology, and culture.
        • -

        -
        -
        \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Astm D2794 Pdf Free 13.md b/spaces/raedeXanto/academic-chatgpt-beta/Astm D2794 Pdf Free 13.md deleted file mode 100644 index 9fb282b8c520b5e903006b2b917969442ad1b436..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Astm D2794 Pdf Free 13.md +++ /dev/null @@ -1,39 +0,0 @@ -
        -

        How to Test the Resistance of Organic Coatings to Rapid Deformation Using ASTM D2794

        -

        Organic coatings are often applied to various substrates to protect them from corrosion, abrasion, weathering, and other environmental factors. However, these coatings may also be subjected to damaging impacts during the manufacture or use of the coated articles. Such impacts can cause cracking, delamination, or loss of adhesion of the coating, which can compromise its protective function.

        -

        Astm D2794 Pdf Free 13


        Download File ☆☆☆ https://tinourl.com/2uL4ZF



        -

        One way to evaluate the resistance of organic coatings to rapid deformation is by using the standard test method ASTM D2794-93 (2019), which covers a procedure for deforming a coated panel by dropping a weight from a certain height and measuring the resulting damage. This test method can be used to compare different coatings or coating systems, or to assess the effect of coating variables such as thickness, curing conditions, or additives.

        -

        The test method involves securing a coated panel in a fixture with a hole of a specified diameter. A weight with a spherical indenter is then dropped from a known height onto the panel, causing an indentation or deformation on the coating and the substrate. The damage is assessed by either measuring the diameter of the indentation, the loss of coating adhesion, or the appearance of cracks in the coating. The test can be repeated with different weights or heights to determine the minimum impact energy that causes failure of the coating.

        -

        The test method has some limitations and sources of variability that should be considered when interpreting the results. For example, the test method does not account for the effects of shape, thickness, or stiffness of the substrate, or the influence of temperature, humidity, or aging on the coating properties. The test method also has poor reproducibility between laboratories, so numerical values should be used with caution and only for testing in one laboratory. A better way to compare coatings is by ranking them according to their relative performance.

        -

        The test method is useful for predicting the performance of organic coatings for their ability to resist cracking caused by impacts. However, it does not necessarily correlate with other types of coating failures or service conditions. Therefore, it should be used in conjunction with other test methods and field data to evaluate the overall durability and suitability of organic coatings for specific applications.

        -

        - -

        How to Perform the ASTM D2794 Test

        -

        To perform the ASTM D2794 test, the following equipment and materials are needed:

        -
          -
        • A coated panel of suitable size and shape, preferably 24 gauge metal with a minimum area of 4 by 6 inches (102 by 152 mm).
        • -
        • A fixture to hold the panel securely with a hole of 0.625 inch (15.9 mm) diameter in the center.
        • -
        • A weight with a spherical indenter of 0.625 inch (15.9 mm) diameter and a mass of 2 pounds (0.91 kg).
        • -
        • A device to drop the weight from a known height onto the panel, such as a Gardner Impact Tester or a similar apparatus.
        • -
        • A measuring device to determine the diameter of the indentation or the loss of adhesion of the coating.
        • -
        • A magnifying glass or a microscope to examine the coating for cracks.
        • -
        -

        The test procedure is as follows:

        -
          -
        1. Prepare the coated panel according to the manufacturer's instructions and condition it at room temperature for at least 24 hours before testing.
        2. -
        3. Secure the panel in the fixture with the coated side facing up and align the hole in the fixture with the center of the panel.
        4. -
        5. Select a height for dropping the weight that is expected to cause failure of the coating or a fraction thereof.
        6. -
        7. Drop the weight from the selected height onto the panel and observe the resulting damage.
        8. -
        9. Measure the diameter of the indentation or the loss of adhesion of the coating using a suitable device.
        10. -
        11. Examine the coating for cracks using a magnifying glass or a microscope.
        12. -
        13. Record the height, the diameter or adhesion loss, and the presence or absence of cracks for each test.
        14. -
        15. Repeat steps 3 to 7 with different heights or weights until a range of impact energies that cause failure or no failure of the coating is established.
        16. -
        -

        The test results are reported as follows:

        -
          -
        • The minimum impact energy that causes failure of the coating, expressed in inch-pounds or joules (see the conversion sketch after this list).
        • -
        • The type of failure observed, such as indentation, adhesion loss, or cracking.
        • -
        • The ranking of coatings according to their relative resistance to rapid deformation.
        • -
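
        Because the result is reported as an impact energy in inch-pounds or joules, a small conversion helper can turn any weight and height combination into both units; the 2 lb / 20 in values below are arbitrary example inputs.

```python
G = 9.80665            # standard gravity, m/s^2
LB_TO_KG = 0.45359237  # pounds (mass) to kilograms
IN_TO_M = 0.0254       # inches to metres


def impact_energy(weight_lb, height_in):
    """Return (inch_pounds, joules) for a weight dropped from a given height."""
    inch_pounds = weight_lb * height_in
    joules = (weight_lb * LB_TO_KG) * G * (height_in * IN_TO_M)
    return inch_pounds, joules


# Example: the 2 lb indenter dropped from 20 inches.
inlb, joules = impact_energy(2.0, 20.0)
print(f"{inlb:.0f} in-lb  ≈  {joules:.2f} J")   # 40 in-lb ≈ 4.52 J
```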

        -
        -
        \ No newline at end of file diff --git a/spaces/rbarman/resnet50-example/app.py b/spaces/rbarman/resnet50-example/app.py deleted file mode 100644 index 98b8a7f09ab99a2ec1e8f95401cee8fa69986253..0000000000000000000000000000000000000000 --- a/spaces/rbarman/resnet50-example/app.py +++ /dev/null @@ -1,36 +0,0 @@ -from transformers import AutoFeatureExtractor, ResNetForImageClassification -import torch -import gradio as gr - -# load model -feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/resnet-50") -model = ResNetForImageClassification.from_pretrained("microsoft/resnet-50") - -def predict(image): - - inputs = feature_extractor(image, return_tensors="pt") - with torch.no_grad(): - logits = model(**inputs).logits - - # model predicts one of the 1000 ImageNet classes - predicted_label = logits.argmax(-1).item() - prediction = model.config.id2label[predicted_label] - return prediction - -# setup Gradio interface -title = "Image classifier" -description = "Image classification with pretrained resnet50 model" -examples = ['dog.jpg'] -interpretation='default' -enable_queue=True - -gr.Interface( - fn=predict, - inputs=gr.inputs.Image(), - outputs=gr.outputs.Label(num_top_classes=1), - title=title, - description=description, - examples=examples, - interpretation=interpretation, - enable_queue=enable_queue -).launch() diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Film Semi Indonesia Tahun 90 An 3 FREE.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Film Semi Indonesia Tahun 90 An 3 FREE.md deleted file mode 100644 index 23fe2469ac384b2e04075c274e015999a5ec46e1..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Film Semi Indonesia Tahun 90 An 3 FREE.md +++ /dev/null @@ -1,6 +0,0 @@ -

        Film semi indonesia tahun 90 an 3


        Download File ->->->-> https://urlgoal.com/2uCKBX



        -
-Regarded as a successful film, Inem Pelayan Sexy was made into as many as three installments. The film's success cannot be separated from the sex appeal of the actresses in ...
        -
        -
        -

        diff --git a/spaces/riccorl/relik-entity-linking/relik/reader/trainer/train.py b/spaces/riccorl/relik-entity-linking/relik/reader/trainer/train.py deleted file mode 100644 index f1983b38c02199f45c112247fc74bd09d3f1e4f0..0000000000000000000000000000000000000000 --- a/spaces/riccorl/relik-entity-linking/relik/reader/trainer/train.py +++ /dev/null @@ -1,98 +0,0 @@ -import hydra -import lightning -from hydra.utils import to_absolute_path -from lightning import Trainer -from lightning.pytorch.callbacks import LearningRateMonitor, ModelCheckpoint -from lightning.pytorch.loggers.wandb import WandbLogger -from omegaconf import DictConfig, OmegaConf, open_dict -from reader.data.relik_reader_data import RelikDataset -from reader.lightning_modules.relik_reader_pl_module import RelikReaderPLModule -from reader.pytorch_modules.optim import LayerWiseLRDecayOptimizer -from torch.utils.data import DataLoader - -from relik.reader.utils.special_symbols import get_special_symbols -from relik.reader.utils.strong_matching_eval import ELStrongMatchingCallback - - -@hydra.main(config_path="../conf", config_name="config") -def train(cfg: DictConfig) -> None: - lightning.seed_everything(cfg.training.seed) - - special_symbols = get_special_symbols(cfg.model.entities_per_forward) - - # model declaration - model = RelikReaderPLModule( - cfg=OmegaConf.to_container(cfg), - transformer_model=cfg.model.model.transformer_model, - additional_special_symbols=len(special_symbols), - training=True, - ) - - # optimizer declaration - opt_conf = cfg.model.optimizer - electra_optimizer_factory = LayerWiseLRDecayOptimizer( - lr=opt_conf.lr, - warmup_steps=opt_conf.warmup_steps, - total_steps=opt_conf.total_steps, - total_reset=opt_conf.total_reset, - no_decay_params=opt_conf.no_decay_params, - weight_decay=opt_conf.weight_decay, - lr_decay=opt_conf.lr_decay, - ) - - model.set_optimizer_factory(electra_optimizer_factory) - - # datasets declaration - train_dataset: RelikDataset = hydra.utils.instantiate( - cfg.data.train_dataset, - dataset_path=to_absolute_path(cfg.data.train_dataset_path), - special_symbols=special_symbols, - ) - - # update of validation dataset config with special_symbols since they - # are required even from the EvaluationCallback dataset_config - with open_dict(cfg): - cfg.data.val_dataset.special_symbols = special_symbols - - val_dataset: RelikDataset = hydra.utils.instantiate( - cfg.data.val_dataset, - dataset_path=to_absolute_path(cfg.data.val_dataset_path), - ) - - # callbacks declaration - callbacks = [ - ELStrongMatchingCallback( - to_absolute_path(cfg.data.val_dataset_path), cfg.data.val_dataset - ), - ModelCheckpoint( - "model", - filename="{epoch}-{val_core_f1:.2f}", - monitor="val_core_f1", - mode="max", - ), - LearningRateMonitor(), - ] - - wandb_logger = WandbLogger(cfg.model_name, project=cfg.project_name) - - # trainer declaration - trainer: Trainer = hydra.utils.instantiate( - cfg.training.trainer, - callbacks=callbacks, - logger=wandb_logger, - ) - - # Trainer fit - trainer.fit( - model=model, - train_dataloaders=DataLoader(train_dataset, batch_size=None, num_workers=0), - val_dataloaders=DataLoader(val_dataset, batch_size=None, num_workers=0), - ) - - -def main(): - train() - - -if __name__ == "__main__": - main() diff --git a/spaces/riyueyiming/gpt/ChuanhuChatbot.py b/spaces/riyueyiming/gpt/ChuanhuChatbot.py deleted file mode 100644 index 5d18393a7cc42c6545d90e9a8ebf949745ebe5bf..0000000000000000000000000000000000000000 --- a/spaces/riyueyiming/gpt/ChuanhuChatbot.py +++ /dev/null @@ 
-1,423 +0,0 @@ -# -*- coding:utf-8 -*- -import os -import logging -import sys - -import gradio as gr - -from modules import config -from modules.config import * -from modules.utils import * -from modules.presets import * -from modules.overwrites import * -from modules.chat_func import * -from modules.openai_func import get_usage - -gr.Chatbot.postprocess = postprocess -PromptHelper.compact_text_chunks = compact_text_chunks - -with open("assets/custom.css", "r", encoding="utf-8") as f: - customCSS = f.read() - -with gr.Blocks(css=customCSS, theme=small_and_beautiful_theme) as demo: - user_name = gr.State("") - history = gr.State([]) - token_count = gr.State([]) - promptTemplates = gr.State(load_template(get_template_names(plain=True)[0], mode=2)) - user_api_key = gr.State(my_api_key) - user_question = gr.State("") - outputing = gr.State(False) - topic = gr.State("未命名对话历史记录") - - with gr.Row(): - with gr.Column(): - gr.HTML(title) - user_info = gr.Markdown(value="", elem_id="user_info") - gr.HTML('
        Duplicate Space
        ') - status_display = gr.Markdown(get_geoip(), elem_id="status_display") - - # https://github.com/gradio-app/gradio/pull/3296 - def create_greeting(request: gr.Request): - if hasattr(request, "username") and request.username: # is not None or is not "" - logging.info(f"Get User Name: {request.username}") - return gr.Markdown.update(value=f"User: {request.username}"), request.username - else: - return gr.Markdown.update(value=f"User: default", visible=False), "" - demo.load(create_greeting, inputs=None, outputs=[user_info, user_name]) - - with gr.Row().style(equal_height=True): - with gr.Column(scale=5): - with gr.Row(): - chatbot = gr.Chatbot(elem_id="chuanhu_chatbot").style(height="100%") - with gr.Row(): - with gr.Column(scale=12): - user_input = gr.Textbox( - elem_id="user_input_tb", - show_label=False, placeholder="在这里输入" - ).style(container=False) - with gr.Column(min_width=70, scale=1): - submitBtn = gr.Button("发送", variant="primary") - cancelBtn = gr.Button("取消", variant="secondary", visible=False) - with gr.Row(): - emptyBtn = gr.Button( - "🧹 新的对话", - ) - retryBtn = gr.Button("🔄 重新生成") - delFirstBtn = gr.Button("🗑️ 删除最旧对话") - delLastBtn = gr.Button("🗑️ 删除最新对话") - reduceTokenBtn = gr.Button("♻️ 总结对话") - - with gr.Column(): - with gr.Column(min_width=50, scale=1): - with gr.Tab(label="ChatGPT"): - keyTxt = gr.Textbox( - show_label=True, - placeholder=f"OpenAI API-key...", - value=hide_middle_chars(my_api_key), - type="password", - visible=not HIDE_MY_KEY, - label="API-Key", - ) - if multi_api_key: - usageTxt = gr.Markdown("多账号模式已开启,无需输入key,可直接开始对话", elem_id="usage_display") - else: - usageTxt = gr.Markdown("**发送消息** 或 **提交key** 以显示额度", elem_id="usage_display") - model_select_dropdown = gr.Dropdown( - label="选择模型", choices=MODELS, multiselect=False, value=MODELS[0] - ) - use_streaming_checkbox = gr.Checkbox( - label="实时传输回答", value=True, visible=enable_streaming_option - ) - use_websearch_checkbox = gr.Checkbox(label="使用在线搜索", value=False) - language_select_dropdown = gr.Dropdown( - label="选择回复语言(针对搜索&索引功能)", - choices=REPLY_LANGUAGES, - multiselect=False, - value=REPLY_LANGUAGES[0], - ) - index_files = gr.Files(label="上传索引文件", type="file", multiple=True) - two_column = gr.Checkbox(label="双栏pdf", value=advance_docs["pdf"].get("two_column", False)) - # TODO: 公式ocr - # formula_ocr = gr.Checkbox(label="识别公式", value=advance_docs["pdf"].get("formula_ocr", False)) - - with gr.Tab(label="Prompt"): - systemPromptTxt = gr.Textbox( - show_label=True, - placeholder=f"在这里输入System Prompt...", - label="System prompt", - value=initial_prompt, - lines=10, - ).style(container=False) - with gr.Accordion(label="加载Prompt模板", open=True): - with gr.Column(): - with gr.Row(): - with gr.Column(scale=6): - templateFileSelectDropdown = gr.Dropdown( - label="选择Prompt模板集合文件", - choices=get_template_names(plain=True), - multiselect=False, - value=get_template_names(plain=True)[0], - ).style(container=False) - with gr.Column(scale=1): - templateRefreshBtn = gr.Button("🔄 刷新") - with gr.Row(): - with gr.Column(): - templateSelectDropdown = gr.Dropdown( - label="从Prompt模板中加载", - choices=load_template( - get_template_names(plain=True)[0], mode=1 - ), - multiselect=False, - ).style(container=False) - - with gr.Tab(label="保存/加载"): - with gr.Accordion(label="保存/加载对话历史记录", open=True): - with gr.Column(): - with gr.Row(): - with gr.Column(scale=6): - historyFileSelectDropdown = gr.Dropdown( - label="从列表中加载对话", - choices=get_history_names(plain=True), - multiselect=False, - value=get_history_names(plain=True)[0], - 
) - with gr.Column(scale=1): - historyRefreshBtn = gr.Button("🔄 刷新") - with gr.Row(): - with gr.Column(scale=6): - saveFileName = gr.Textbox( - show_label=True, - placeholder=f"设置文件名: 默认为.json,可选为.md", - label="设置保存文件名", - value="对话历史记录", - ).style(container=True) - with gr.Column(scale=1): - saveHistoryBtn = gr.Button("💾 保存对话") - exportMarkdownBtn = gr.Button("📝 导出为Markdown") - gr.Markdown("默认保存于history文件夹") - with gr.Row(): - with gr.Column(): - downloadFile = gr.File(interactive=True) - - with gr.Tab(label="高级"): - gr.Markdown("# ⚠️ 务必谨慎更改 ⚠️\n\n如果无法使用请恢复默认设置") - default_btn = gr.Button("🔙 恢复默认设置") - - with gr.Accordion("参数", open=False): - top_p = gr.Slider( - minimum=-0, - maximum=1.0, - value=1.0, - step=0.05, - interactive=True, - label="Top-p", - ) - temperature = gr.Slider( - minimum=-0, - maximum=2.0, - value=1.0, - step=0.1, - interactive=True, - label="Temperature", - ) - - with gr.Accordion("网络设置", open=False, visible=False): - # 优先展示自定义的api_host - apihostTxt = gr.Textbox( - show_label=True, - placeholder=f"在这里输入API-Host...", - label="API-Host", - value=config.api_host or shared.API_HOST, - lines=1, - ) - changeAPIURLBtn = gr.Button("🔄 切换API地址") - proxyTxt = gr.Textbox( - show_label=True, - placeholder=f"在这里输入代理地址...", - label="代理地址(示例:http://127.0.0.1:10809)", - value="", - lines=2, - ) - changeProxyBtn = gr.Button("🔄 设置代理地址") - - gr.Markdown(description) - gr.HTML(footer.format(versions=versions_html()), elem_id="footer") - chatgpt_predict_args = dict( - fn=predict, - inputs=[ - user_api_key, - systemPromptTxt, - history, - user_question, - chatbot, - token_count, - top_p, - temperature, - use_streaming_checkbox, - model_select_dropdown, - use_websearch_checkbox, - index_files, - language_select_dropdown, - ], - outputs=[chatbot, history, status_display, token_count], - show_progress=True, - ) - - start_outputing_args = dict( - fn=start_outputing, - inputs=[], - outputs=[submitBtn, cancelBtn], - show_progress=True, - ) - - end_outputing_args = dict( - fn=end_outputing, inputs=[], outputs=[submitBtn, cancelBtn] - ) - - reset_textbox_args = dict( - fn=reset_textbox, inputs=[], outputs=[user_input] - ) - - transfer_input_args = dict( - fn=transfer_input, inputs=[user_input], outputs=[user_question, user_input, submitBtn, cancelBtn], show_progress=True - ) - - get_usage_args = dict( - fn=get_usage, inputs=[user_api_key], outputs=[usageTxt], show_progress=False - ) - - - # Chatbot - cancelBtn.click(cancel_outputing, [], []) - - user_input.submit(**transfer_input_args).then(**chatgpt_predict_args).then(**end_outputing_args) - user_input.submit(**get_usage_args) - - submitBtn.click(**transfer_input_args).then(**chatgpt_predict_args).then(**end_outputing_args) - submitBtn.click(**get_usage_args) - - emptyBtn.click( - reset_state, - outputs=[chatbot, history, token_count, status_display], - show_progress=True, - ) - emptyBtn.click(**reset_textbox_args) - - retryBtn.click(**start_outputing_args).then( - retry, - [ - user_api_key, - systemPromptTxt, - history, - chatbot, - token_count, - top_p, - temperature, - use_streaming_checkbox, - model_select_dropdown, - language_select_dropdown, - ], - [chatbot, history, status_display, token_count], - show_progress=True, - ).then(**end_outputing_args) - retryBtn.click(**get_usage_args) - - delFirstBtn.click( - delete_first_conversation, - [history, token_count], - [history, token_count, status_display], - ) - - delLastBtn.click( - delete_last_conversation, - [chatbot, history, token_count], - [chatbot, history, token_count, status_display], - 
show_progress=True, - ) - - reduceTokenBtn.click( - reduce_token_size, - [ - user_api_key, - systemPromptTxt, - history, - chatbot, - token_count, - top_p, - temperature, - gr.State(sum(token_count.value[-4:])), - model_select_dropdown, - language_select_dropdown, - ], - [chatbot, history, status_display, token_count], - show_progress=True, - ) - reduceTokenBtn.click(**get_usage_args) - - two_column.change(update_doc_config, [two_column], None) - - # ChatGPT - keyTxt.change(submit_key, keyTxt, [user_api_key, status_display]).then(**get_usage_args) - keyTxt.submit(**get_usage_args) - - # Template - templateRefreshBtn.click(get_template_names, None, [templateFileSelectDropdown]) - templateFileSelectDropdown.change( - load_template, - [templateFileSelectDropdown], - [promptTemplates, templateSelectDropdown], - show_progress=True, - ) - templateSelectDropdown.change( - get_template_content, - [promptTemplates, templateSelectDropdown, systemPromptTxt], - [systemPromptTxt], - show_progress=True, - ) - - # S&L - saveHistoryBtn.click( - save_chat_history, - [saveFileName, systemPromptTxt, history, chatbot, user_name], - downloadFile, - show_progress=True, - ) - saveHistoryBtn.click(get_history_names, [gr.State(False), user_name], [historyFileSelectDropdown]) - exportMarkdownBtn.click( - export_markdown, - [saveFileName, systemPromptTxt, history, chatbot, user_name], - downloadFile, - show_progress=True, - ) - historyRefreshBtn.click(get_history_names, [gr.State(False), user_name], [historyFileSelectDropdown]) - historyFileSelectDropdown.change( - load_chat_history, - [historyFileSelectDropdown, systemPromptTxt, history, chatbot, user_name], - [saveFileName, systemPromptTxt, history, chatbot], - show_progress=True, - ) - downloadFile.change( - load_chat_history, - [downloadFile, systemPromptTxt, history, chatbot, user_name], - [saveFileName, systemPromptTxt, history, chatbot], - ) - - # Advanced - default_btn.click( - reset_default, [], [apihostTxt, proxyTxt, status_display], show_progress=True - ) - changeAPIURLBtn.click( - change_api_host, - [apihostTxt], - [status_display], - show_progress=True, - ) - changeProxyBtn.click( - change_proxy, - [proxyTxt], - [status_display], - show_progress=True, - ) - -logging.info( - colorama.Back.GREEN - + "\n川虎的温馨提示:访问 http://localhost:7860 查看界面" - + colorama.Style.RESET_ALL -) -# 默认开启本地服务器,默认可以直接从IP访问,默认不创建公开分享链接 -demo.title = "川虎ChatGPT 🚀" - -if __name__ == "__main__": - reload_javascript() - # if running in Docker - if dockerflag: - if authflag: - demo.queue(concurrency_count=CONCURRENT_COUNT).launch( - server_name="0.0.0.0", - server_port=7860, - auth=auth_list, - favicon_path="./assets/favicon.ico", - ) - else: - demo.queue(concurrency_count=CONCURRENT_COUNT).launch( - server_name="0.0.0.0", - server_port=7860, - share=False, - favicon_path="./assets/favicon.ico", - ) - # if not running in Docker - else: - if authflag: - demo.queue(concurrency_count=CONCURRENT_COUNT).launch( - share=False, - auth=auth_list, - favicon_path="./assets/favicon.ico", - inbrowser=True, - ) - else: - demo.queue(concurrency_count=CONCURRENT_COUNT).launch( - share=False, favicon_path="./assets/favicon.ico", inbrowser=True - ) # 改为 share=True 可以创建公开分享链接 - # demo.queue(concurrency_count=CONCURRENT_COUNT).launch(server_name="0.0.0.0", server_port=7860, share=False) # 可自定义端口 - # demo.queue(concurrency_count=CONCURRENT_COUNT).launch(server_name="0.0.0.0", server_port=7860,auth=("在这里填写用户名", "在这里填写密码")) # 可设置用户名与密码 - # 
demo.queue(concurrency_count=CONCURRENT_COUNT).launch(auth=("在这里填写用户名", "在这里填写密码")) # 适合Nginx反向代理 diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/datasets/api_wrappers/panoptic_evaluation.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/datasets/api_wrappers/panoptic_evaluation.py deleted file mode 100644 index 55f57bf4a4ca3554ab90ac768dc9ec06e9c878d2..0000000000000000000000000000000000000000 --- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/datasets/api_wrappers/panoptic_evaluation.py +++ /dev/null @@ -1,228 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. - -# Copyright (c) 2018, Alexander Kirillov -# This file supports `file_client` for `panopticapi`, -# the source code is copied from `panopticapi`, -# only the way to load the gt images is modified. -import multiprocessing -import os - -import mmcv -import numpy as np - -try: - from panopticapi.evaluation import OFFSET, VOID, PQStat - from panopticapi.utils import rgb2id -except ImportError: - PQStat = None - rgb2id = None - VOID = 0 - OFFSET = 256 * 256 * 256 - - -def pq_compute_single_core(proc_id, - annotation_set, - gt_folder, - pred_folder, - categories, - file_client=None, - print_log=False): - """The single core function to evaluate the metric of Panoptic - Segmentation. - - Same as the function with the same name in `panopticapi`. Only the function - to load the images is changed to use the file client. - - Args: - proc_id (int): The id of the mini process. - gt_folder (str): The path of the ground truth images. - pred_folder (str): The path of the prediction images. - categories (str): The categories of the dataset. - file_client (object): The file client of the dataset. If None, - the backend will be set to `disk`. - print_log (bool): Whether to print the log. Defaults to False. - """ - if PQStat is None: - raise RuntimeError( - 'panopticapi is not installed, please install it by: ' - 'pip install git+https://github.com/cocodataset/' - 'panopticapi.git.') - - if file_client is None: - file_client_args = dict(backend='disk') - file_client = mmcv.FileClient(**file_client_args) - - pq_stat = PQStat() - - idx = 0 - for gt_ann, pred_ann in annotation_set: - if print_log and idx % 100 == 0: - print('Core: {}, {} from {} images processed'.format( - proc_id, idx, len(annotation_set))) - idx += 1 - # The gt images can be on the local disk or `ceph`, so we use - # file_client here. - img_bytes = file_client.get( - os.path.join(gt_folder, gt_ann['file_name'])) - pan_gt = mmcv.imfrombytes(img_bytes, flag='color', channel_order='rgb') - pan_gt = rgb2id(pan_gt) - - # The predictions can only be on the local dist now. 
- pan_pred = mmcv.imread( - os.path.join(pred_folder, pred_ann['file_name']), - flag='color', - channel_order='rgb') - pan_pred = rgb2id(pan_pred) - - gt_segms = {el['id']: el for el in gt_ann['segments_info']} - pred_segms = {el['id']: el for el in pred_ann['segments_info']} - - # predicted segments area calculation + prediction sanity checks - pred_labels_set = set(el['id'] for el in pred_ann['segments_info']) - labels, labels_cnt = np.unique(pan_pred, return_counts=True) - for label, label_cnt in zip(labels, labels_cnt): - if label not in pred_segms: - if label == VOID: - continue - raise KeyError( - 'In the image with ID {} segment with ID {} is ' - 'presented in PNG and not presented in JSON.'.format( - gt_ann['image_id'], label)) - pred_segms[label]['area'] = label_cnt - pred_labels_set.remove(label) - if pred_segms[label]['category_id'] not in categories: - raise KeyError( - 'In the image with ID {} segment with ID {} has ' - 'unknown category_id {}.'.format( - gt_ann['image_id'], label, - pred_segms[label]['category_id'])) - if len(pred_labels_set) != 0: - raise KeyError( - 'In the image with ID {} the following segment IDs {} ' - 'are presented in JSON and not presented in PNG.'.format( - gt_ann['image_id'], list(pred_labels_set))) - - # confusion matrix calculation - pan_gt_pred = pan_gt.astype(np.uint64) * OFFSET + pan_pred.astype( - np.uint64) - gt_pred_map = {} - labels, labels_cnt = np.unique(pan_gt_pred, return_counts=True) - for label, intersection in zip(labels, labels_cnt): - gt_id = label // OFFSET - pred_id = label % OFFSET - gt_pred_map[(gt_id, pred_id)] = intersection - - # count all matched pairs - gt_matched = set() - pred_matched = set() - for label_tuple, intersection in gt_pred_map.items(): - gt_label, pred_label = label_tuple - if gt_label not in gt_segms: - continue - if pred_label not in pred_segms: - continue - if gt_segms[gt_label]['iscrowd'] == 1: - continue - if gt_segms[gt_label]['category_id'] != pred_segms[pred_label][ - 'category_id']: - continue - - union = pred_segms[pred_label]['area'] + gt_segms[gt_label][ - 'area'] - intersection - gt_pred_map.get((VOID, pred_label), 0) - iou = intersection / union - if iou > 0.5: - pq_stat[gt_segms[gt_label]['category_id']].tp += 1 - pq_stat[gt_segms[gt_label]['category_id']].iou += iou - gt_matched.add(gt_label) - pred_matched.add(pred_label) - - # count false positives - crowd_labels_dict = {} - for gt_label, gt_info in gt_segms.items(): - if gt_label in gt_matched: - continue - # crowd segments are ignored - if gt_info['iscrowd'] == 1: - crowd_labels_dict[gt_info['category_id']] = gt_label - continue - pq_stat[gt_info['category_id']].fn += 1 - - # count false positives - for pred_label, pred_info in pred_segms.items(): - if pred_label in pred_matched: - continue - # intersection of the segment with VOID - intersection = gt_pred_map.get((VOID, pred_label), 0) - # plus intersection with corresponding CROWD region if it exists - if pred_info['category_id'] in crowd_labels_dict: - intersection += gt_pred_map.get( - (crowd_labels_dict[pred_info['category_id']], pred_label), - 0) - # predicted segment is ignored if more than half of - # the segment correspond to VOID and CROWD regions - if intersection / pred_info['area'] > 0.5: - continue - pq_stat[pred_info['category_id']].fp += 1 - - if print_log: - print('Core: {}, all {} images processed'.format( - proc_id, len(annotation_set))) - return pq_stat - - -def pq_compute_multi_core(matched_annotations_list, - gt_folder, - pred_folder, - categories, - 
file_client=None, - nproc=32): - """Evaluate the metrics of Panoptic Segmentation with multithreading. - - Same as the function with the same name in `panopticapi`. - - Args: - matched_annotations_list (list): The matched annotation list. Each - element is a tuple of annotations of the same image with the - format (gt_anns, pred_anns). - gt_folder (str): The path of the ground truth images. - pred_folder (str): The path of the prediction images. - categories (str): The categories of the dataset. - file_client (object): The file client of the dataset. If None, - the backend will be set to `disk`. - nproc (int): Number of processes for panoptic quality computing. - Defaults to 32. When `nproc` exceeds the number of cpu cores, - the number of cpu cores is used. - """ - if PQStat is None: - raise RuntimeError( - 'panopticapi is not installed, please install it by: ' - 'pip install git+https://github.com/cocodataset/' - 'panopticapi.git.') - - if file_client is None: - file_client_args = dict(backend='disk') - file_client = mmcv.FileClient(**file_client_args) - - cpu_num = min(nproc, multiprocessing.cpu_count()) - - annotations_split = np.array_split(matched_annotations_list, cpu_num) - print('Number of cores: {}, images per core: {}'.format( - cpu_num, len(annotations_split[0]))) - workers = multiprocessing.Pool(processes=cpu_num) - processes = [] - for proc_id, annotation_set in enumerate(annotations_split): - p = workers.apply_async(pq_compute_single_core, - (proc_id, annotation_set, gt_folder, - pred_folder, categories, file_client)) - processes.append(p) - - # Close the process pool, otherwise it will lead to memory - # leaking problems. - workers.close() - workers.join() - - pq_stat = PQStat() - for p in processes: - pq_stat += p.get() - - return pq_stat diff --git a/spaces/rorallitri/biomedical-language-models/logs/HD Online Player (Om Shanti Om Hd 1080p Blu Ray).md b/spaces/rorallitri/biomedical-language-models/logs/HD Online Player (Om Shanti Om Hd 1080p Blu Ray).md deleted file mode 100644 index ab3464814d1cda5730aca86d0e044eaae5b78144..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/HD Online Player (Om Shanti Om Hd 1080p Blu Ray).md +++ /dev/null @@ -1,9 +0,0 @@ - -

        What makes tongue drums so great are the effects anyone can experience when playing or listening to their peaceful tunes. When playing a tongue drum, a beginner, a hobbyist or a professional musician can all unlock creativity, boost focus and wash away anxiety and stress. Tongue drums are great for meditation, yoga practice and even emotional healing. Whether purchasing a tongue drum from a craftsman or online manufacturer, know that what you are investing in is more than a simple instrument: it is a way to relax, play and reconnect with yourself. Pick up a tongue drum and start playing today!

        -

        HD Online Player (Om Shanti Om Hd 1080p Blu Ray)


        Download > https://tinurll.com/2uzoGW



        -

        Oh God, lead us from the unreal to the Real. Oh God, lead us from darkness to light. Oh God, lead us from death to immortality. Shanti, Shanti, Shanti unto all. Oh Lord God almighty, may there be peace in celestial regions. May there be peace on earth. May the waters be appeasing. May herbs be wholesome, and may trees and plants bring peace to all. May all beneficent beings bring peace to us. May thy Vedic Law propagate peace all through the world. May all things be a source of peace to us. And may thy peace itself, bestow peace on all, and may that peace come to me also.

        -

        Home Shanti picks up a bit slowly in the opening episode of the series. The jokes don't land, the one-liners seem pretentious, and you don't feel any affinity towards the characters on screen. But stay with it patiently, and Home Shanti grabs your attention from the next episode onwards. You slowly become invested in the characters, and their joys feel like your own.

        -

        Lastly, of course, you have the option of buying a tongue drum from your own home whilst online shopping. Though with this method you cannot play the tongue drum before buying it, most manufacturers and craftsmen include sound clips and detailed descriptions of the tongue drums they sell so that the buyer can be as informed as possible before purchasing. You can always order directly through a manufacturer's or artisan's website or through a third-party marketplace. If you are open to purchasing a second-hand drum, those can also be purchased through websites like Craigslist.

        -

        899543212b
        -
        -
        \ No newline at end of file diff --git a/spaces/rstallman/Contract-AI/app.py b/spaces/rstallman/Contract-AI/app.py deleted file mode 100644 index 94a1c078abf1ed00de822c16afb23b8f91d6db2e..0000000000000000000000000000000000000000 --- a/spaces/rstallman/Contract-AI/app.py +++ /dev/null @@ -1,1028 +0,0 @@ -#!/usr/bin/python -# -*- coding: utf-8 -*- -import tensorflow as tf -import gradio as gr -import pandas as pd -import re -import ast -import spacy -import nltk -nltk.download('punkt') -from nltk.tokenize import sent_tokenize -from transformers import AutoTokenizer, \ - TFAutoModelForSequenceClassification -import numpy as np - - -def make_prediction(contract): - if contract is list: - contract=contract[0] - tokenizer = AutoTokenizer.from_pretrained('roberta-base') - final_model = TFAutoModelForSequenceClassification.from_pretrained('ullasmrnva/LawBerta') - contract_df = pd.DataFrame() - contract_df = contract_df.append({'contracts': contract}, - ignore_index=True) - contract_sentences_df = contract_df['contracts' - ].apply(sent_tokenize).reset_index()['contracts' - ].explode().to_frame().rename(columns={'contracts': 'sentences' - }).reset_index() - input = [np.array(tokenizer(list(contract_sentences_df.sentences), - truncation=True, max_length=100, padding='max_length' - ).input_ids)] - y_pred = np.argmax(final_model.predict(input)[0], axis=1) - clause_map = { - 0: 'Affiliate License-Licensee', - 1: 'Affiliate License-Licensor', - 2: 'Anti-Assignment', - 3: 'Audit Rights', - 4: 'Cap On Liability', - 5: 'Change Of Control', - 6: 'Competitive Restriction Exception', - 7: 'Covenant Not To Sue', - 8: 'Exclusivity', - 9: 'Insurance', - 10: 'Ip Ownership Assignment', - 11: 'Irrevocable Or Perpetual License', - 12: 'Joint Ip Ownership', - 13: 'License Grant', - 14: 'Liquidated Damages', - 15: 'Minimum Commitment', - 16: 'Most Favored Nation', - 17: 'No Clause', - 18: 'No-Solicit Of Customers', - 19: 'No-Solicit Of Employees', - 20: 'Non-Compete', - 21: 'Non-Disparagement', - 22: 'Non-Transferable License', - 23: 'Post-Termination Services', - 24: 'Price Restrictions', - 25: 'Revenue/Profit Sharing', - 26: 'Rofr/Rofo/Rofn', - 27: 'Source Code Escrow', - 28: 'Termination For Convenience', - 29: 'Third Party Beneficiary', - 30: 'Uncapped Liability', - 31: 'Unlimited/All-You-Can-Eat-License', - 32: 'Volume Restriction', - 33: 'Warranty Duration', - } - final_df = contract_sentences_df[y_pred != 17][['sentences']] - final_df['clause'] = np.array([clause_map[x] for x in y_pred[y_pred - != 17]]) - output_sentences = [] - clauses_found=[] - for i in [ - 'License Grant', - 'Audit Rights', - 'Non-Disparagement', - 'Cap On Liability', - 'Anti-Assignment', - 'Minimum Commitment', - 'Most Favored Nation', - 'Unlimited/All-You-Can-Eat-License', - 'Revenue/Profit Sharing', - 'Uncapped Liability', - 'Termination For Convenience', - 'Exclusivity', - 'Change Of Control', - 'Rofr/Rofo/Rofn', - 'Irrevocable Or Perpetual License', - 'Competitive Restriction Exception', - 'Price Restrictions', - 'Covenant Not To Sue', - 'Volume Restriction', - 'Joint Ip Ownership', - 'Post-Termination Services', - 'Ip Ownership Assignment', - 'Non-Compete', - 'Insurance', - 'Affiliate License-Licensor', - 'Affiliate License-Licensee', - 'Non-Transferable License', - 'No-Solicit Of Customers', - 'Warranty Duration', - 'No-Solicit Of Employees', - 'Liquidated Damages', - 'Third Party Beneficiary', - 'Source Code Escrow', - ]: - clause=final_df[final_df['clause']== i]['sentences'].str.cat(sep='***\n\n***') - if clause!='': - 
print(i) - clauses_found.append(i) - output_sentences.append(clause) - found='' - if len(clauses_found)==0: - found='None' - else: - found=', '.join(clauses_found) - return [found]+output_sentences - - -gr.Interface(fn=make_prediction, inputs=gr.Textbox(placeholder="In a timely manner, upon the written instruction of the Company, invest and reinvest the Property in United States government securities within the meaning of Section 2(a)(16) of the Investment Company Act of 1940...\nPlease see example below."),\ - outputs=[gr.Textbox(label='Clauses Found:'), gr.Textbox(label='License Grant'),\ - gr.Textbox(label='Audit Rights'),\ - gr.Textbox(label='Non-Disparagement'),\ - gr.Textbox(label='Cap On Liability'),\ - gr.Textbox(label='Anti-Assignment'),\ - gr.Textbox(label='Minimum Commitment'),\ - gr.Textbox(label='Most Favored Nation'),\ - gr.Textbox(label='Unlimited/All-You-Can-Eat-License'),\ - gr.Textbox(label='Revenue/Profit Sharing'),\ - gr.Textbox(label='Uncapped Liability'),\ - gr.Textbox(label='Termination For Convenience'),\ - gr.Textbox(label='Exclusivity'),\ - gr.Textbox(label='Change Of Control'),\ - gr.Textbox(label='Rofr/Rofo/Rofn'),\ - gr.Textbox(label='Irrevocable Or Perpetual License'),\ - gr.Textbox(label='Competitive Restriction Exception'),\ - gr.Textbox(label='Price Restrictions'),\ - gr.Textbox(label='Covenant Not To Sue'),\ - gr.Textbox(label='Volume Restriction'),\ - gr.Textbox(label='Joint Ip Ownership'),\ - gr.Textbox(label='Post-Termination Services'),\ - gr.Textbox(label='Ip Ownership Assignment'),\ - gr.Textbox(label='Non-Compete'),\ - gr.Textbox(label='Insurance'),\ - gr.Textbox(label='Affiliate License-Licensor'),\ - gr.Textbox(label='Affiliate License-Licensee'),\ - gr.Textbox(label='Non-Transferable License'),\ - gr.Textbox(label='No-Solicit Of Customers'),\ - gr.Textbox(label='Warranty Duration'),\ - gr.Textbox(label='No-Solicit Of Employees'),\ - gr.Textbox(label='Liquidated Damages'),\ - gr.Textbox(label='Third Party Beneficiary'),\ - gr.Textbox(label='Source Code Escrow')], examples=["""-------------------------------------------------------------------------------- - -Exhibit 10.2 - -  -INVESTMENT MANAGEMENT TRUST AGREEMENT -  -This Investment Management Trust Agreement (this “Agreement”) is made effective -as of September 30, 2020 by and between Altimeter Growth Corp., a Cayman Islands -exempted company (the “Company”), and Continental Stock Transfer & Trust -Company, a New York corporation (the “Trustee”). -  -WHEREAS, the Company’s registration statement on Form S-1, File No. 333-248762 -(the “Registration Statement”) and prospectus (the “Prospectus”) for the initial -public offering of the Company’s units (the “Units”), each of which consists of -one of the Company’s Class A ordinary shares, par value $0.0001 per share (the -“Ordinary Shares”), and a fraction of one redeemable warrant, each whole warrant -entitling the holder thereof to purchase one Ordinary Share (such initial public -offering hereinafter referred to as the “Offering”), has been declared effective -as of the date hereof by the U.S. Securities and Exchange Commission; and -  -WHEREAS, the Company has entered into an Underwriting Agreement (the -“Underwriting Agreement”) with Citigroup Global Markets Inc., Goldman Sachs & -Co. LLC and Morgan Stanley & Co. 
LLC, as representatives (the “Representatives”) -to the several underwriters (the “Underwriters”) named therein; and -  -WHEREAS, as described in the Prospectus, $450,000,000 of the gross proceeds of -the Offering and sale of the Private Placement Warrants (as defined in the -Underwriting Agreement) (or $500,000,000 if the Underwriters’ option to purchase -additional units is exercised in full) will be delivered to the Trustee to be -deposited and held in a segregated trust account located at all times in the -United States (the “Trust Account”) for the benefit of the Company and the -holders of the Ordinary Shares included in the Units issued in the Offering as -hereinafter provided (the amount to be delivered to the Trustee (and any -interest subsequently earned thereon) is referred to herein as the “Property,” -the shareholders for whose benefit the Trustee shall hold the Property will be -referred to as the “Public Shareholders,” and the Public Shareholders and the -Company will be referred to together as the “Beneficiaries”); and -  -WHEREAS, pursuant to the Underwriting Agreement, a portion of the Property equal -to $15,750,000, or $17,500,000 if the Underwriters’ option to purchase -additional units is exercised in full, is attributable to deferred underwriting -discounts and commissions that will be payable by the Company to the -Underwriters upon the consummation of the Business Combination (as defined -below) (the “Deferred Discount”); and -  -WHEREAS, the Company and the Trustee desire to enter into this Agreement to set -forth the terms and conditions pursuant to which the Trustee shall hold the -Property. -  -NOW THEREFORE, IT IS AGREED: -  -1. Agreements and Covenants of Trustee. The Trustee hereby agrees and covenants -to: -  -(a) Hold the Property in trust for the Beneficiaries in accordance with the -terms of this Agreement in the Trust Account established by the Trustee in the -United States at J.P. Morgan Chase Bank, N.A. (or at another U.S chartered -commercial bank with consolidated assets of $100 billion or more) and at a -brokerage institution selected by the Trustee that is reasonably satisfactory to -the Company; -  -(b) Manage, supervise and administer the Trust Account subject to the terms and -conditions set forth herein; -  -(c) In a timely manner, upon the written instruction of the Company, invest and -reinvest the Property in United States government securities within the meaning -of Section 2(a)(16) of the Investment Company Act of 1940, as amended, having a -maturity of 185 days or less, or in money market funds meeting the conditions of -paragraphs (d)(1), (d)(2), (d)(3) and (d)(4) of Rule 2a-7 promulgated under the -Investment Company Act of 1940, as amended (or any successor rule), which invest -only in direct U.S. 
government treasury obligations, as determined by the -Company; the Trustee may not invest in any other securities or assets, it being -understood that the Trust Account will earn no interest while account funds are -uninvested awaiting the Company’s instructions hereunder and the Trustee may -earn bank credits or other consideration; -  - --------------------------------------------------------------------------------- - -(d) Collect and receive, when due, all principal, interest or other income -arising from the Property, which shall become part of the “Property,” as such -term is used herein; -  -(e) Promptly notify the Company and the Representative of all communications -received by the Trustee with respect to any Property requiring action by the -Company; -  -(f) Supply any necessary information or documents as may be requested by the -Company (or its authorized agents) in connection with the Company’s preparation -of the tax returns relating to assets held in the Trust Account; -  -(g) Participate in any plan or proceeding for protecting or enforcing any right -or interest arising from the Property if, as and when instructed by the Company -to do so; -  -(h) Render to the Company monthly written statements of the activities of, and -amounts in, the Trust Account reflecting all receipts and disbursements of the -Trust Account; -  -(i) Commence liquidation of the Trust Account only after and promptly after (x) -receipt of, and only in accordance with, the terms of a letter from the Company -(“Termination Letter”) in a form substantially similar to that attached hereto -as either Exhibit A or Exhibit B, as applicable, signed on behalf of the Company -by its Chief Executive Officer, President, Chief Operating Officer or other -authorized officer of the Company, and complete the liquidation of the Trust -Account and distribute the Property in the Trust Account, including interest -earned on the funds held in the Trust Account and not previously released to us -to pay our income taxes (less up to $100,000 of interest to pay dissolution -expenses), only as directed in the Termination Letter and the other documents -referred to therein, or (y) upon the date which is the later of (1) 24 months -after the closing of the Offering (or 27 months from the closing of Offering if -the Company has executed a letter of intent, agreement in principle or -definitive agreement for a Business Combination within 24 months from the -closing of Offering but has not completed a Business Combination within such 24 -month period) and (2) such later date as may be approved by the Company’s -shareholders in accordance with the Company’s amended and restated memorandum -and articles of association, if a Termination Letter has not been received by -the Trustee prior to such date, in which case the Trust Account shall be -liquidated in accordance with the procedures set forth in the Termination Letter -attached as Exhibit B and the Property in the Trust Account, including interest -earned on the funds held in the Trust Account and not previously released to the -Company to pay its income taxes (less up to $100,000 of interest to pay -dissolution expenses), shall be distributed to the Public Shareholders of record -as of such date It is acknowledged and agreed that there should be no reduction -in the principal amount per share initially deposited in the Trust Account; -  -(j) Upon written request from the Company, which may be given from time to time -in a form substantially similar to that attached hereto as 
Exhibit C (a “Tax -Payment Withdrawal Instruction”), withdraw from the Trust Account and distribute -to the Company the amount of interest earned on the Property requested by the -Company to cover any tax obligation owed by the Company as a result of assets of -the Company or interest or other income earned on the Property, which amount -shall be delivered directly to the Company by electronic funds transfer or other -method of prompt payment, and the Company shall forward such payment to the -relevant taxing authority, so long as there is no reduction in the principal -amount per share initially deposited in the Trust Account; provided, however, -that to the extent there is not sufficient cash in the Trust Account to pay such -tax obligation, the Trustee shall liquidate such assets held in the Trust -Account as shall be designated by the Company in writing to make such -distribution (it being acknowledged and agreed that any such amount in excess of -interest income earned on the Property shall not be payable from the Trust -Account). The written request of the Company referenced above shall constitute -presumptive evidence that the Company is entitled to said funds, and the Trustee -shall have no responsibility to look beyond said request; -  -(k) Upon written request from the Company, which may be given from time to time -in a form substantially similar to that attached hereto as Exhibit D (a -“Shareholder Redemption Withdrawal Instruction”), the Trustee shall distribute -to the remitting brokers on behalf of Public Shareholders redeeming Ordinary -Shares the amount required to pay redeemed Ordinary Shares from Public -Shareholders pursuant to the Company’s amended and restated memorandum and -articles of association; and -  -(l) Not make any withdrawals or distributions from the Trust Account other than -pursuant to Section 1(i), (j) or (k) above. -  - --------------------------------------------------------------------------------- - -2. Agreements and Covenants of the Company. The Company hereby agrees and -covenants to: -  -(a) Give all instructions to the Trustee hereunder in writing, signed by the -Company’s Chief Executive Officer, President, Chief Operating Officer or other -authorized officer of the Company. In addition, except with respect to its -duties under Sections 1(i), (j) or (k) hereof, the Trustee shall be entitled to -rely on, and shall be protected in relying on, any verbal or telephonic advice -or instruction which it, in good faith and with reasonable care, believes to be -given by any one of the persons authorized above to give written instructions, -provided that the Company shall promptly confirm such instructions in writing; -  -(b) Subject to Section 4 hereof, hold the Trustee harmless and indemnify the -Trustee from and against any and all expenses, including reasonable counsel fees -and disbursements, or losses suffered by the Trustee in connection with any -action taken by it hereunder and in connection with any action, suit or other -proceeding brought against the Trustee involving any claim, or in connection -with any claim or demand, which in any way arises out of or relates to this -Agreement, the services of the Trustee hereunder, or the Property or any -interest earned on the Property, except for expenses and losses resulting from -the Trustee’s gross negligence, fraud or willful misconduct. 
Promptly after the -receipt by the Trustee of notice of demand or claim or the commencement of any -action, suit or proceeding, pursuant to which the Trustee intends to seek -indemnification under this Section 2(b), it shall notify the Company in writing -of such claim (hereinafter referred to as the “Indemnified Claim”). The Trustee -shall have the right to conduct and manage the defense against such Indemnified -Claim; provided that the Trustee shall obtain the consent of the Company with -respect to the selection of counsel, which consent shall not be unreasonably -withheld. The Trustee may not agree to settle any Indemnified Claim without the -prior written consent of the Company, which such consent shall not be -unreasonably withheld. The Company may participate in such action with its own -counsel; -  -(c) Pay the Trustee the fees set forth on Schedule A hereto, including an -initial acceptance fee, annual administration fee, and transaction processing -fee which fees shall be subject to modification by the parties from time to -time. It is expressly understood that the Property shall not be used to pay such -fees unless and until it is distributed to the Company pursuant to Sections 1(i) -through 1(k) hereof. The Company shall pay the Trustee the initial acceptance -fee and the first annual administration fee at the consummation of the Offering. -The Company shall not be responsible for any other fees or charges of the -Trustee except as set forth in this Section 2(c) and as may be provided in -Section 2(b) hereof; -  -(d) In connection with any vote of the Company’s shareholders regarding a -merger, share exchange, asset acquisition, share purchase, reorganization or -similar business combination involving the Company and one or more businesses -(the “Business Combination”), provide to the Trustee an affidavit or certificate -of the inspector of elections for the shareholder meeting verifying the vote of -such shareholders regarding such Business Combination; -  -(e) Provide the Representative with a copy of any Termination Letter(s) and/or -any other correspondence that is sent to the Trustee with respect to any -proposed withdrawal from the Trust Account promptly after it issues the same; -  -(f) Unless otherwise agreed between the Company and the Representative, ensure -that any Instruction Letter (as defined in Exhibit A) delivered in connection -with a Termination Letter in the form of Exhibit A expressly provides that the -Deferred Discount is paid directly to the account or accounts directed by the -Representative on behalf of the Underwriters prior to any transfer of the funds -held in the Trust Account to the Company or any other person; -  -(g) Instruct the Trustee to make only those distributions that are permitted -under this Agreement, and refrain from instructing the Trustee to make any -distributions that are not permitted under this Agreement; -  -(h) If the Company seeks to amend any provisions of its amended and restated -memorandum and articles of association (A) to modify the substance or timing of -the Company’s obligation to provide holders of the Ordinary Shares the right to -have their shares redeemed in connection with the Company’s initial Business -Combination or to redeem 100% of the Ordinary Shares if the Company does not -complete its initial Business Combination within the time period set forth -therein or (B) with respect to any other provision relating to the rights of -holders of the Ordinary Shares (in each case, an “Amendment”), the Company will 
-provide the Trustee with a letter (an “Amendment Notification Letter”) in the -form of Exhibit D providing instructions for the distribution of funds to Public -Shareholders who exercise their redemption option in connection with such -Amendment; and -  -(i) Within five (5) business days after the Underwriters exercise their option -to purchase additional units (or any unexercised portion thereof) or such option -to purchase additional units expires, provide the Trustee with a notice in -writing of the total amount of the Deferred Discount. -  -3. Limitations of Liability. The Trustee shall have no responsibility or -liability to: -  -(a) Imply obligations, perform duties, inquire or otherwise be subject to the -provisions of any agreement or document other than this Agreement and that which -is expressly set forth herein; -  - --------------------------------------------------------------------------------- - -(b) Take any action with respect to the Property, other than as directed in -Section 1 hereof, and the Trustee shall have no liability to any third party -except for liability arising out of the Trustee’s gross negligence, fraud or -willful misconduct; -  -(c) Institute any proceeding for the collection of any principal and income -arising from, or institute, appear in or defend any proceeding of any kind with -respect to, any of the Property unless and until it shall have received written -instructions from the Company given as provided herein to do so and the Company -shall have advanced or guaranteed to it funds sufficient to pay any expenses -incident thereto; -  -(d) Change the investment of any Property, other than in compliance with Section -1 hereof; -  -(e) Refund any depreciation in principal of any Property; -  -(f) Assume that the authority of any person designated by the Company to give -instructions hereunder shall not be continuing unless provided otherwise in such -designation, or unless the Company shall have delivered a written revocation of -such authority to the Trustee; -  -(g) The other parties hereto or to anyone else for any action taken or omitted -by it, or any action suffered by it to be taken or omitted, in good faith and in -the Trustee’s best judgment, except for the Trustee’s gross negligence, fraud or -willful misconduct. The Trustee may rely conclusively and shall be protected in -acting upon any order, notice, demand, certificate, opinion or advice of counsel -(including counsel chosen by the Trustee, which counsel may be the Company’s -counsel), statement, instrument, report or other paper or document (not only as -to its due execution and the validity and effectiveness of its provisions, but -also as to the truth and acceptability of any information therein contained) -which the Trustee believes, in good faith and with reasonable care, to be -genuine and to be signed or presented by the proper person or persons. 
The -Trustee shall not be bound by any notice or demand, or any waiver, modification, -termination or rescission of this Agreement or any of the terms hereof, unless -evidenced by a written instrument delivered to the Trustee, signed by the proper -party or parties and, if the duties or rights of the Trustee are affected, -unless it shall give its prior written consent thereto; -  -(h) Verify the accuracy of the information contained in the Registration -Statement; -  -(i) Provide any assurance that any Business Combination entered into by the -Company or any other action taken by the Company is as contemplated by the -Registration Statement; -  -(j) File information returns with respect to the Trust Account with any local, -state or federal taxing authority or provide periodic written statements to the -Company documenting the taxes payable by the Company, if any, relating to any -interest income earned on the Property; -  -(k) Prepare, execute and file tax reports, income or other tax returns and pay -any taxes with respect to any income generated by, and activities relating to, -the Trust Account, regardless of whether such tax is payable by the Trust -Account or the Company, including, but not limited to, income tax obligations, -except pursuant to Section 1(j) hereof; or -  -(l) Verify calculations, qualify or otherwise approve the Company’s written -requests for distributions pursuant to Sections 1(i), 1(j) or 1(k) hereof. -  -4. Trust Account Waiver. The Trustee has no right of set-off or any right, -title, interest or claim of any kind (“Claim”) to, or to any monies in, the -Trust Account, and hereby irrevocably waives any Claim to, or to any monies in, -the Trust Account that it may have now or in the future. In the event the -Trustee has any Claim against the Company under this Agreement, including, -without limitation, under Section 2(b) or Section 2(c) hereof, the Trustee shall -pursue such Claim solely against the Company and its assets outside the Trust -Account and not against the Property or any monies in the Trust Account. -  -5. Termination. This Agreement shall terminate as follows: -  -(a) If the Trustee gives written notice to the Company that it desires to resign -under this Agreement, the Company shall use its reasonable efforts to locate a -successor trustee, pending which the Trustee shall continue to act in accordance -with this Agreement. 
At such time that the Company notifies the Trustee that a -successor trustee has been appointed by the Company and has agreed to become -subject to the terms of this Agreement, the Trustee shall transfer the -management of the Trust Account to the successor trustee, including but not -limited to the transfer of copies of the reports and statements relating to the -Trust Account, whereupon this Agreement shall terminate; provided, however, that -in the event that the Company does not locate a successor trustee within ninety -(90) days of receipt of the resignation notice from the Trustee, the Trustee may -submit an application to have the Property deposited with any court in the State -of New York or with the United States District Court for the Southern District -of New York and upon such deposit, the Trustee shall be immune from any -liability whatsoever; or -  - --------------------------------------------------------------------------------- - -(b) At such time that the Trustee has completed the liquidation of the Trust -Account and its obligations in accordance with the provisions of Section 1(i) -hereof and distributed the Property in accordance with the provisions of the -Termination Letter, this Agreement shall terminate except with respect to -Section 2(b). -  -6. Miscellaneous. -  -(a) The Company and the Trustee each acknowledge that the Trustee will follow -the security procedures set forth below with respect to funds transferred from -the Trust Account. The Company and the Trustee will each restrict access to -confidential information relating to such security procedures to authorized -persons. Each party must notify the other party immediately if it has reason to -believe unauthorized persons may have obtained access to such confidential -information, or of any change in its authorized personnel. In executing funds -transfers, the Trustee shall rely upon all information supplied to it by the -Company, including, account names, account numbers, and all other identifying -information relating to a Beneficiary, Beneficiary’s bank or intermediary bank. -Except for any liability arising out of the Trustee’s gross negligence, fraud or -willful misconduct, the Trustee shall not be liable for any loss, liability or -expense resulting from any error in the information or transmission of the -funds. -  -(b) This Agreement shall be governed by and construed and enforced in accordance -with the laws of the State of New York, without giving effect to conflicts of -law principles that would result in the application of the substantive laws of -another jurisdiction. This Agreement may be executed in several original or -facsimile counterparts, each one of which shall constitute an original, and -together shall constitute but one instrument. -  -(c) This Agreement contains the entire agreement and understanding of the -parties hereto with respect to the subject matter hereof. 
Except for Section -1(i), 1(j) and 1(k) hereof (which sections may not be modified, amended or -deleted without the affirmative vote of sixty-five percent (65%) of the then -outstanding Ordinary Shares and Class B ordinary shares, par value $0.0001 per -share, of the Company, voting together as a single class; provided that no such -amendment will affect any Public Shareholder who has properly elected to redeem -his or her Ordinary Shares in connection with a shareholder vote to amend this -Agreement to modify the substance or timing of the Company’s obligation to -provide for the redemption of the Public Shares in connection with an initial -Business Combination or an Amendment or to redeem 100% of its Ordinary Shares if -the Company does not complete its initial Business Combination within the time -frame specified in the Company’s amended and restated memorandum and articles of -association), this Agreement or any provision hereof may only be changed, -amended or modified (other than to correct a typographical error) by a writing -signed by each of the parties hereto. -  -(d) The parties hereto consent to the jurisdiction and venue of any state or -federal court located in the City of New York, State of New York, for purposes -of resolving any disputes hereunder. AS TO ANY CLAIM, CROSS-CLAIM OR -COUNTERCLAIM IN ANY WAY RELATING TO THIS AGREEMENT, EACH PARTY WAIVES THE RIGHT -TO TRIAL BY JURY. -  -(e) Any notice, consent or request to be given in connection with any of the -terms or provisions of this Agreement shall be in writing and shall be sent by -express mail or similar private courier service, by certified mail (return -receipt requested), by hand delivery or by electronic mail: -  -if to the Trustee, to: -  -Continental Stock Transfer & Trust Company -1 State Street, 30th Floor -New York, New York 10004 -Attn: Francis E. Wolf, Jr. & Celeste Gonzalez -Email: fwolf@continentalstock.com -cgonzalez@continentalstock.com -  - --------------------------------------------------------------------------------- - -if to the Company, to: -  -Altimeter Growth Corp. - - -2550 Sand Hill Road -Suite 150 -Menlo Park, CA 94025 -Attn: Hab Siam -Email: hab@altimeter.com -  -in each case, with copies to: -  -Ropes & Gray LLP -1211 Avenue of the Americas -New York, New York 10036 -Attn: Paul D. Tropp -Michael S. Pilo -E-mail: paul.tropp@ropesgray.com -michael.pilo @ropesgray.com -  -and - - -Citigroup Global Markets Inc. -388 Greenwich Street -New York, New York 10013 -Attn: Pavan Bellur -Email: pavan.bellur@citigroup.com - - -and - - -Goldman Sachs & Co. LLC -200 West Street -New York, NY 10282 -Attn: Registration Department - - -and - - -Morgan Stanley & Co. LLC -1585 Broadway -New York, New York 10036 -Attn: Equity Syndicate Desk - - -and -  -Kirkland & Ellis LLP -601 Lexington Avenue -New York, New York 10022 -Attn: Christian O. Nagler -E-mail: cnagler@kirkland.com -  -(f) Each of the Company and the Trustee hereby represents that it has the full -right and power and has been duly authorized to enter into this Agreement and to -perform its respective obligations as contemplated hereunder. The Trustee -acknowledges and agrees that it shall not make any claims or proceed against the -Trust Account, including by way of set-off, and shall not be entitled to any -funds in the Trust Account under any circumstance. 
-  -(g) This Agreement is the joint product of the Trustee and the Company and each -provision hereof has been subject to the mutual consultation, negotiation and -agreement of such parties and shall not be construed for or against any party -hereto. -  -(h) This Agreement may be executed in any number of counterparts, each of which -shall be deemed to be an original, but all such counterparts shall together -constitute one and the same instrument. Delivery of a signed counterpart of this -Agreement by facsimile or electronic transmission shall constitute valid and -sufficient delivery thereof. -  - --------------------------------------------------------------------------------- - -(i) Each of the Company and the Trustee hereby acknowledges and agrees that the -Representative on behalf of the Underwriters is a third-party beneficiary of -this Agreement. -  -(j) Except as specified herein, no party to this Agreement may assign its rights -or delegate its obligations hereunder to any other person or entity. -  -[Signature Page Follows] - - - --------------------------------------------------------------------------------- - -IN WITNESS WHEREOF, the parties have duly executed this Investment Management -Trust Agreement as of the date first written above. - - - - -  -CONTINENTAL STOCK TRANSFER & TRUST COMPANY, as Trustee -        -By: -/s/ Francis Wolf -  -Name: -Francis Wolf -  -Title: -Vice President -      -ALTIMETER GROWTH CORP. -        -By: -/s/ Hab Siam -  -Name: -Hab Siam -  -Title: -General Counsel - - - -[Signature Page to Investment Management Trust Agreement] - - - --------------------------------------------------------------------------------- - -SCHEDULE A - - - -Fee Item -  -Time and method of payment -  -Amount -  -Initial acceptance fee -  -Initial closing of IPO by wire transfer -  -$ -3,500.00 -  -Annual fee -  -First year, initial closing of IPO by wire transfer; thereafter on the -anniversary of the effective date of the IPO by wire transfer or check -  -$ -10,000.00 -  -Transaction processing fee for disbursements to Company under Sections 1(i), -(j), and (k) -  -Billed by Trustee to Company under Section 1 -  -$ -250.00 -  -Paying Agent services as required pursuant to Section 1(i) and 1(k) -  -Billed to Company upon delivery of service pursuant to Section 1(i) and 1(k) -  -Prevailing rates -  - - - - --------------------------------------------------------------------------------- - -EXHIBIT A -  -[Letterhead of Company] -  -[Insert date] -  -Continental Stock Transfer & Trust Company -1 State Street, 30th Floor -New York, New York 10004 -Attn: Francis Wolf & Celeste Gonzalez -  -Re: Trust Account - Termination Letter -  -Dear Mr. Wolf and Ms. Gonzalez: -  -Pursuant to Section 1(i) of the Investment Management Trust Agreement between -Altimeter Growth Corp. (the “Company”) and Continental Stock Transfer & Trust -Company (“Trustee”), dated as of October [•], 2020 (the “Trust Agreement”), this -is to advise you that the Company has entered into an agreement with ___________ -(the “Target Business”) to consummate a business combination with Target -Business (the “Business Combination”) on or about [insert date]. The Company -shall notify you at least seventy-two (72) hours in advance of the actual date -(or such shorter time period as you may agree) of the consummation of the -Business Combination (the “Consummation Date”). Capitalized terms used but not -defined herein shall have the meanings set forth in the Trust Agreement. 
-  -In accordance with the terms of the Trust Agreement, we hereby authorize you to -commence to liquidate all of the assets of the Trust Account, and to transfer -the proceeds into the trust operating account at J.P. Morgan Chase Bank, N.A. to -the effect that, on the Consummation Date, all of the funds held in the Trust -Account will be immediately available for transfer to the account or accounts -that the Representative (with respect to the Deferred Discount) and the Company -shall direct on the Consummation Date. It is acknowledged and agreed that while -the funds are on deposit in said trust operating account at J.P. Morgan Chase -Bank, N.A. awaiting distribution, neither the Company nor the Representative -will earn any interest or dividends. -  -On the Consummation Date (i) counsel for the Company shall deliver to you -written notification that the Business Combination has been consummated, or will -be consummated substantially concurrently with your transfer of funds to the -accounts as directed by the Company (the “Notification”), and (ii) the Company -shall deliver to you (a) a certificate by the Chief Executive Officer, Chief -Financial Officer or other authorized officer of the Company, which verifies -that the Business Combination has been approved by a vote of the Company’s -shareholders, if a vote is held and (b) joint written instruction signed by the -Company and the Representative with respect to the transfer of the funds held in -the Trust Account, including payment of the Deferred Discount from the Trust -Account (the “Instruction Letter”). You are hereby directed and authorized to -transfer the funds held in the Trust Account immediately upon your receipt of -the Notification and the Instruction Letter, in accordance with the terms of the -Instruction Letter. In the event that certain deposits held in the Trust Account -may not be liquidated by the Consummation Date without penalty, you will notify -the Company in writing of the same and the Company shall direct you as to -whether such funds should remain in the Trust Account and be distributed after -the Consummation Date to the Company. Upon the distribution of all the funds, -net of any payments necessary for reasonable unreimbursed expenses related to -liquidating the Trust Account, your obligations under the Trust Agreement shall -be terminated. -  -In the event that the Business Combination is not consummated on the -Consummation Date described in the notice thereof and we have not notified you -on or before the original Consummation Date of a new Consummation Date, then -upon receipt by the Trustee of written instructions from the Company, the funds -held in the Trust Account shall be reinvested as provided in Section 1(c) of the -Trust Agreement on the business day immediately following the Consummation Date -as set forth in such notice as soon thereafter as possible. - - - --------------------------------------------------------------------------------- - -  -Very truly yours, -      -Altimeter Growth Corp. -        -By: -    -Name: -    -Title: -  - - - -cc: -Citigroup Global Markets Inc. -    -Goldman Sachs & Co. LLC -    -Morgan Stanley &Co. 
LLC -  - - - - --------------------------------------------------------------------------------- - -EXHIBIT B -  -[Letterhead of Company] -  -[Insert date] -  -Continental Stock Transfer & Trust Company -1 State Street, 30th Floor -New York, New York 10004 -Attn: Francis Wolf & Celeste Gonzalez -  -Re: Trust Account - Termination Letter -  -Ladies and Gentlemen: -  -Pursuant to Section 1(i) of the Investment Management Trust Agreement between -Altimeter Growth Corp. (the “Company”) and Continental Stock Transfer & Trust -Company (the “Trustee”), dated as of October [•], 2020 (the “Trust Agreement”), -this is to advise you that the Company has been unable to effect a business -combination with a Target Business (the “Business Combination”) within the time -frame specified in the Company’s Amended and Restated Memorandum and Articles of -Association, as described in the Company’s Prospectus relating to the Offering. -Capitalized terms used but not defined herein shall have the meanings set forth -in the Trust Agreement. -  -In accordance with the terms of the Trust Agreement, we hereby authorize you to -liquidate all of the assets in the Trust Account and to transfer the total -proceeds into the trust operating account at J.P. Morgan Chase Bank, N.A. to -await distribution to the Public Shareholders. The Company has selected -__________ as the effective date for the purpose of determining the Public -Shareholders that will be entitled to receive their share of the liquidation -proceeds. It is acknowledged that no interest will be earned by the Company on -the liquidation proceeds while on deposit in the trust operating account You -agree to be the Paying Agent of record and, in your separate capacity as Paying -Agent, agree to distribute said funds directly to the Company’s Public -Shareholders in accordance with the terms of the Trust Agreement and the Amended -and Restated Memorandum and Articles of Association of the Company. Upon the -distribution of all the funds, net of any payments necessary for reasonable -unreimbursed expenses related to liquidating the Trust Account, your obligations -under the Trust Agreement shall be terminated, except to the extent otherwise -provided in Section 1(j) of the Trust Agreement. -  - -  -Very truly yours, -      -Altimeter Growth Corp. -        -By: -    -Name: -    -Title: -  - - - -cc: -Citigroup Global Markets Inc. -    -Goldman Sachs & Co. LLC -    -Morgan Stanley &Co. LLC -  - - - - --------------------------------------------------------------------------------- - -EXHIBIT C -  -[Letterhead of Company] -  -[Insert date] -  -Continental Stock Transfer & Trust Company -1 State Street, 30th Floor -New York, New York 10004 -Attn: Francis Wolf & Celeste Gonzalez -  -Re: Trust Account - Tax Payment Withdrawal Instruction -  -Dear Mr. Wolf and Ms. Gonzalez: -  -Pursuant to Section 1(j) of the Investment Management Trust Agreement between -Altimeter Growth Corp. (the “Company”) and Continental Stock Transfer & Trust -Company (the “Trustee”), dated as of October [•], 2020 (the “Trust Agreement”), -the Company hereby requests that you deliver to the Company $___________ of the -interest income earned on the Property as of the date hereof. Capitalized terms -used but not defined herein shall have the meanings set forth in the Trust -Agreement. -  -The Company needs such funds to pay for the tax obligations as set forth on the -attached tax return or tax statement. 
In accordance with the terms of the Trust -Agreement, you are hereby directed and authorized to transfer (via wire -transfer) such funds promptly upon your receipt of this letter to the Company’s -operating account at: -  -[WIRE INSTRUCTION INFORMATION] -  - -  -Very truly yours, -      -Altimeter Growth Corp. -        -By: -    -Name: -    -Title: -  - - - -cc: -Citigroup Global Markets Inc. -    -Goldman Sachs & Co. LLC -    -Morgan Stanley &Co. LLC -  - - - - --------------------------------------------------------------------------------- - -EXHIBIT D -  -[Letterhead of Company] -  -[Insert date] -  -Continental Stock Transfer & Trust Company -1 State Street, 30th Floor -New York, New York 10004 -Attn: Francis Wolf & Celeste Gonzalez -  -Re: Trust Account  -. Shareholder Redemption Withdrawal Instruction -  -Dear Mr. Wolf and Ms. Gonzalez: -  -Pursuant to Section 1(k) of the Investment Management Trust Agreement between -Altimeter Growth Corp. (the “Company”) and Continental Stock Transfer & Trust -Company (the “Trustee”), dated as of October [•], 2020 (the “Trust Agreement”), -the Company hereby requests that you deliver to the Company’s shareholders -$___________ of the principal and interest income earned on the Property as of -the date hereof. Capitalized terms used but not defined herein shall have the -meanings set forth in the Trust Agreement. -  -Pursuant to Section 1(k) of the Trust Agreement, this is to advise you that the -Company has sought an Amendment. Accordingly, in accordance with the terms of -the Trust Agreement, we hereby authorize you to liquidate a sufficient portion -of the Trust Account and to transfer $[•] of the proceeds of the Trust Account -to the trust operating account at J.P. Morgan Chase Bank, N.A. for distribution -to the shareholders that have requested redemption of their shares in connection -with such Amendment. -  - -  -Very truly yours, -      -Altimeter Growth Corp. -        -By: -    -Name: -    -Title: -  - - - -cc: -Citigroup Global Markets Inc. -    -Goldman Sachs & Co. LLC -    -Morgan Stanley &Co. 
LLC -  - - - - - - ---------------------------------------------------------------------------------"""]).launch(share=True) \ No newline at end of file diff --git a/spaces/rushankg/discovercourses/app.py b/spaces/rushankg/discovercourses/app.py deleted file mode 100644 index 49e65a6fe92c465f1b684ed717896d16ea115630..0000000000000000000000000000000000000000 --- a/spaces/rushankg/discovercourses/app.py +++ /dev/null @@ -1,79 +0,0 @@ -import streamlit as st -import pickle -import pandas as pd -import torch -import numpy as np -import requests - -cosine_scores = pickle.load(open('cosine_scores.pkl','rb')) -coursedf = pd.read_pickle('course_df_new.pkl') # course_df uses titles to generate course recommendations -#course_df_new = pd.read_pickle('course_df_new.pkl') #course_df_new makes recommendations using the entire description - -course_title_list = [i + ": " + j for i, j in zip(coursedf['ref'].to_list(), coursedf['title'].to_list())] - -def get_random_course(): - row=coursedf.sample(1) - return row['ref'], row['title'] - -def recommend(index): - pairs = {} - - for i in range(len(coursedf)): - pairs[coursedf.iloc[i,1]]=cosine_scores[index][i] - - sorttemp = sorted(pairs.items(), key=lambda x:x[1], reverse=True) - sorted_final = dict(sorttemp[1:31]) - - return list(sorted_final.keys()) - -st.set_page_config(page_title='DiscoverCourses', page_icon=':bird:') -st.header('DiscoverCourses') -st.subheader('Course recommendations based on cosine similarity between vector embeddings of lemmatized text') -st.write('') -st.write("Do you like the tech + social impact focus of CS51? Excited by film-centered courses like FILMEDIA245B? Saw a cool study-abroad course (OSPISTAN76) and want to study that topic on campus?") -st.write('') -st.write("Enter DiscoverCourses. Just pick a course and get dozens of recommendations for similar courses based on titles or descriptions. Give it a go! 
If you have any thoughts on DiscoverCourses (or project ideas or a book recommendation or really anything), shoot me an email at rushankg@stanford.edu.") -st.write('') - -st.markdown('',unsafe_allow_html=True) - -selected_course = st.selectbox('Pick a course from the dropdown (or click on it and start typing to search).',course_title_list) -#st.write("Description: "+coursedf.iloc[np.where((coursedf['ref']+": "+coursedf['title'])==selected_course)[0][0],3]) -#st.write('') - -container = st.container() -maincol1, maincol2 = container.columns(2) -st.write('') - -if maincol1.button('Discover by title',use_container_width=True): - url='https://datadrop.wolframcloud.com/api/v1.0/Add?bin=1fYEdJizg&data='+selected_course.replace(":","") - x=requests.get(url) - output=recommend(np.where((coursedf['ref']+": "+coursedf['title']) == selected_course)[0][0]) - for result in output: - index=np.where(coursedf['title'] == result)[0][0] - course_id=coursedf.iloc[index,0] - st.subheader(course_id+": "+result) - with st.expander("See description"): - st.write(coursedf.iloc[index,3]) #Using the new coursedf because it has proper descriptions for each course - link1 = "[ExploreCourses ↗](https://explorecourses.stanford.edu/search?q="+course_id+"+"+result.replace(" ","+")+")" - link2 = "[Carta ↗](https://carta-beta.stanford.edu/results/"+course_id+")" - st.markdown(link1+" "+link2, unsafe_allow_html=True) - st.divider() - -if maincol2.button('Discover by description',use_container_width=True): - url='https://datadrop.wolframcloud.com/api/v1.0/Add?bin=1fYEdJizg&data='+selected_course.replace(":","") - x=requests.get(url) - index_new=np.where((coursedf['ref']+": "+coursedf['title']) == selected_course)[0][0] - rec_list=coursedf.iloc[index_new,2] - for result in rec_list[1:]: - index=np.where(coursedf['title'] == result)[0][0] - course_id=coursedf.iloc[index,0] - st.subheader(course_id+": "+result) - with st.expander("See description"): - st.write(coursedf.iloc[index,3]) #Using the new coursedf because it has proper descriptions for each course - link1 = "[ExploreCourses ↗](https://explorecourses.stanford.edu/search?q="+course_id+"+"+result.replace(" ","+")+")" - link2 = "[Carta ↗](https://carta-beta.stanford.edu/results/"+course_id+")" - st.markdown(link1+" "+link2, unsafe_allow_html=True) - st.divider() - -st.write('© 2023 Rushank Goyal. All rights reserved. Source for the all-MiniLM-L6-v2 model: Wang, Wenhui, et al. "MiniLM: Deep Self-Attention Distillation for Task-Agnostic Compression of Pre-Trained Transformers." arXiv, 25 Feb. 
2020, doi:10.48550/arXiv.2002.10957.') diff --git a/spaces/ryoung41/AIPairProgramming1/app.py b/spaces/ryoung41/AIPairProgramming1/app.py deleted file mode 100644 index 0788c9b0c839c6c56def150a8dac0d1aad1a6175..0000000000000000000000000000000000000000 --- a/spaces/ryoung41/AIPairProgramming1/app.py +++ /dev/null @@ -1,41 +0,0 @@ -import streamlit as st -import time - - -def main(): - - st.title("File Upload and Display") - - # File upload - - uploaded_file = st.file_uploader("Upload a file") - - - - if uploaded_file is not None: - - # Display file contents using st.markdown() - - file_contents = uploaded_file.read().decode("utf-8") - - st.markdown("### File Contents:") - - st.markdown(f"```{file_contents}```") - - - - # Wait for 5 seconds - - time.sleep(5) - - - - # Show completed message - - st.success("File processing completed!") - - - -if __name__ == "__main__": - - main() \ No newline at end of file diff --git a/spaces/safi842/FashionGen/models/stylegan2/stylegan2-pytorch/model.py b/spaces/safi842/FashionGen/models/stylegan2/stylegan2-pytorch/model.py deleted file mode 100644 index 4e3c9687a3f4f7301cf053bee95c1e288b1c939b..0000000000000000000000000000000000000000 --- a/spaces/safi842/FashionGen/models/stylegan2/stylegan2-pytorch/model.py +++ /dev/null @@ -1,703 +0,0 @@ -import math -import random -import functools -import operator - -import torch -from torch import nn -from torch.nn import functional as F -from torch.autograd import Function - -from op import FusedLeakyReLU, fused_leaky_relu, upfirdn2d - - -class PixelNorm(nn.Module): - def __init__(self): - super().__init__() - - def forward(self, input): - return input * torch.rsqrt(torch.mean(input ** 2, dim=1, keepdim=True) + 1e-8) - - -def make_kernel(k): - k = torch.tensor(k, dtype=torch.float32) - - if k.ndim == 1: - k = k[None, :] * k[:, None] - - k /= k.sum() - - return k - - -class Upsample(nn.Module): - def __init__(self, kernel, factor=2): - super().__init__() - - self.factor = factor - kernel = make_kernel(kernel) * (factor ** 2) - self.register_buffer('kernel', kernel) - - p = kernel.shape[0] - factor - - pad0 = (p + 1) // 2 + factor - 1 - pad1 = p // 2 - - self.pad = (pad0, pad1) - - def forward(self, input): - out = upfirdn2d(input, self.kernel, up=self.factor, down=1, pad=self.pad) - - return out - - -class Downsample(nn.Module): - def __init__(self, kernel, factor=2): - super().__init__() - - self.factor = factor - kernel = make_kernel(kernel) - self.register_buffer('kernel', kernel) - - p = kernel.shape[0] - factor - - pad0 = (p + 1) // 2 - pad1 = p // 2 - - self.pad = (pad0, pad1) - - def forward(self, input): - out = upfirdn2d(input, self.kernel, up=1, down=self.factor, pad=self.pad) - - return out - - -class Blur(nn.Module): - def __init__(self, kernel, pad, upsample_factor=1): - super().__init__() - - kernel = make_kernel(kernel) - - if upsample_factor > 1: - kernel = kernel * (upsample_factor ** 2) - - self.register_buffer('kernel', kernel) - - self.pad = pad - - def forward(self, input): - out = upfirdn2d(input, self.kernel, pad=self.pad) - - return out - - -class EqualConv2d(nn.Module): - def __init__( - self, in_channel, out_channel, kernel_size, stride=1, padding=0, bias=True - ): - super().__init__() - - self.weight = nn.Parameter( - torch.randn(out_channel, in_channel, kernel_size, kernel_size) - ) - self.scale = 1 / math.sqrt(in_channel * kernel_size ** 2) - - self.stride = stride - self.padding = padding - - if bias: - self.bias = nn.Parameter(torch.zeros(out_channel)) - - else: - self.bias = None - - 
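- # Equalized learning rate: the weight is stored at unit variance and rescaled by self.scale = 1/sqrt(fan_in) in forward(), rather than being scaled once at initialization.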
def forward(self, input): - out = F.conv2d( - input, - self.weight * self.scale, - bias=self.bias, - stride=self.stride, - padding=self.padding, - ) - - return out - - def __repr__(self): - return ( - f'{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]},' - f' {self.weight.shape[2]}, stride={self.stride}, padding={self.padding})' - ) - - -class EqualLinear(nn.Module): - def __init__( - self, in_dim, out_dim, bias=True, bias_init=0, lr_mul=1, activation=None - ): - super().__init__() - - self.weight = nn.Parameter(torch.randn(out_dim, in_dim).div_(lr_mul)) - - if bias: - self.bias = nn.Parameter(torch.zeros(out_dim).fill_(bias_init)) - - else: - self.bias = None - - self.activation = activation - - self.scale = (1 / math.sqrt(in_dim)) * lr_mul - self.lr_mul = lr_mul - - def forward(self, input): - if self.activation: - out = F.linear(input, self.weight * self.scale) - out = fused_leaky_relu(out, self.bias * self.lr_mul) - - else: - out = F.linear( - input, self.weight * self.scale, bias=self.bias * self.lr_mul - ) - - return out - - def __repr__(self): - return ( - f'{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]})' - ) - - -class ScaledLeakyReLU(nn.Module): - def __init__(self, negative_slope=0.2): - super().__init__() - - self.negative_slope = negative_slope - - def forward(self, input): - out = F.leaky_relu(input, negative_slope=self.negative_slope) - - return out * math.sqrt(2) - - -class ModulatedConv2d(nn.Module): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - style_dim, - demodulate=True, - upsample=False, - downsample=False, - blur_kernel=[1, 3, 3, 1], - ): - super().__init__() - - self.eps = 1e-8 - self.kernel_size = kernel_size - self.in_channel = in_channel - self.out_channel = out_channel - self.upsample = upsample - self.downsample = downsample - - if upsample: - factor = 2 - p = (len(blur_kernel) - factor) - (kernel_size - 1) - pad0 = (p + 1) // 2 + factor - 1 - pad1 = p // 2 + 1 - - self.blur = Blur(blur_kernel, pad=(pad0, pad1), upsample_factor=factor) - - if downsample: - factor = 2 - p = (len(blur_kernel) - factor) + (kernel_size - 1) - pad0 = (p + 1) // 2 - pad1 = p // 2 - - self.blur = Blur(blur_kernel, pad=(pad0, pad1)) - - fan_in = in_channel * kernel_size ** 2 - self.scale = 1 / math.sqrt(fan_in) - self.padding = kernel_size // 2 - - self.weight = nn.Parameter( - torch.randn(1, out_channel, in_channel, kernel_size, kernel_size) - ) - - self.modulation = EqualLinear(style_dim, in_channel, bias_init=1) - - self.demodulate = demodulate - - def __repr__(self): - return ( - f'{self.__class__.__name__}({self.in_channel}, {self.out_channel}, {self.kernel_size}, ' - f'upsample={self.upsample}, downsample={self.downsample})' - ) - - def forward(self, input, style): - batch, in_channel, height, width = input.shape - - style = self.modulation(style).view(batch, 1, in_channel, 1, 1) - weight = self.scale * self.weight * style - - if self.demodulate: - demod = torch.rsqrt(weight.pow(2).sum([2, 3, 4]) + 1e-8) - weight = weight * demod.view(batch, self.out_channel, 1, 1, 1) - - weight = weight.view( - batch * self.out_channel, in_channel, self.kernel_size, self.kernel_size - ) - - if self.upsample: - input = input.view(1, batch * in_channel, height, width) - weight = weight.view( - batch, self.out_channel, in_channel, self.kernel_size, self.kernel_size - ) - weight = weight.transpose(1, 2).reshape( - batch * in_channel, self.out_channel, self.kernel_size, self.kernel_size - ) - out = F.conv_transpose2d(input, 
weight, padding=0, stride=2, groups=batch) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - out = self.blur(out) - - elif self.downsample: - input = self.blur(input) - _, _, height, width = input.shape - input = input.view(1, batch * in_channel, height, width) - out = F.conv2d(input, weight, padding=0, stride=2, groups=batch) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - - else: - input = input.view(1, batch * in_channel, height, width) - out = F.conv2d(input, weight, padding=self.padding, groups=batch) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - - return out - - -class NoiseInjection(nn.Module): - def __init__(self): - super().__init__() - - self.weight = nn.Parameter(torch.zeros(1)) - - def forward(self, image, noise=None): - if noise is None: - batch, _, height, width = image.shape - noise = image.new_empty(batch, 1, height, width).normal_() - - return image + self.weight * noise - - -class ConstantInput(nn.Module): - def __init__(self, channel, size=4): - super().__init__() - - self.input = nn.Parameter(torch.randn(1, channel, size, size)) - - def forward(self, input): - batch = input.shape[0] - out = self.input.repeat(batch, 1, 1, 1) - - return out - - -class StyledConv(nn.Module): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - style_dim, - upsample=False, - blur_kernel=[1, 3, 3, 1], - demodulate=True, - ): - super().__init__() - - self.conv = ModulatedConv2d( - in_channel, - out_channel, - kernel_size, - style_dim, - upsample=upsample, - blur_kernel=blur_kernel, - demodulate=demodulate, - ) - - self.noise = NoiseInjection() - # self.bias = nn.Parameter(torch.zeros(1, out_channel, 1, 1)) - # self.activate = ScaledLeakyReLU(0.2) - self.activate = FusedLeakyReLU(out_channel) - - def forward(self, input, style, noise=None): - out = self.conv(input, style) - out = self.noise(out, noise=noise) - # out = out + self.bias - out = self.activate(out) - - return out - - -class ToRGB(nn.Module): - def __init__(self, in_channel, style_dim, upsample=True, blur_kernel=[1, 3, 3, 1]): - super().__init__() - - if upsample: - self.upsample = Upsample(blur_kernel) - - self.conv = ModulatedConv2d(in_channel, 3, 1, style_dim, demodulate=False) - self.bias = nn.Parameter(torch.zeros(1, 3, 1, 1)) - - def forward(self, input, style, skip=None): - out = self.conv(input, style) - out = out + self.bias - - if skip is not None: - skip = self.upsample(skip) - - out = out + skip - - return out - -# Wrapper that gives name to tensor -class NamedTensor(nn.Module): - def __init__(self): - super().__init__() - - def forward(self, x): - return x - -# Give each style a unique name -class StridedStyle(nn.ModuleList): - def __init__(self, n_latents): - super().__init__([NamedTensor() for _ in range(n_latents)]) - self.n_latents = n_latents - - def forward(self, x): - # x already strided - styles = [self[i](x[:, i, :]) for i in range(self.n_latents)] - return torch.stack(styles, dim=1) - -class Generator(nn.Module): - def __init__( - self, - size, - style_dim, - n_mlp, - channel_multiplier=2, - blur_kernel=[1, 3, 3, 1], - lr_mlp=0.01, - ): - super().__init__() - - self.size = size - - self.style_dim = style_dim - - layers = [PixelNorm()] - - for i in range(n_mlp): - layers.append( - EqualLinear( - style_dim, style_dim, lr_mul=lr_mlp, activation='fused_lrelu' - ) - ) - - self.style = nn.Sequential(*layers) - - self.channels = { - 4: 512, - 8: 512, - 16: 512, - 32: 
512, - 64: 256 * channel_multiplier, - 128: 128 * channel_multiplier, - 256: 64 * channel_multiplier, - 512: 32 * channel_multiplier, - 1024: 16 * channel_multiplier, - } - - self.input = ConstantInput(self.channels[4]) - self.conv1 = StyledConv( - self.channels[4], self.channels[4], 3, style_dim, blur_kernel=blur_kernel - ) - self.to_rgb1 = ToRGB(self.channels[4], style_dim, upsample=False) - - self.log_size = int(math.log(size, 2)) - self.num_layers = (self.log_size - 2) * 2 + 1 - - self.convs = nn.ModuleList() - self.upsamples = nn.ModuleList() - self.to_rgbs = nn.ModuleList() - self.noises = nn.Module() - - in_channel = self.channels[4] - - for layer_idx in range(self.num_layers): - res = (layer_idx + 5) // 2 - shape = [1, 1, 2 ** res, 2 ** res] - self.noises.register_buffer(f'noise_{layer_idx}', torch.randn(*shape)) - - for i in range(3, self.log_size + 1): - out_channel = self.channels[2 ** i] - - self.convs.append( - StyledConv( - in_channel, - out_channel, - 3, - style_dim, - upsample=True, - blur_kernel=blur_kernel, - ) - ) - - self.convs.append( - StyledConv( - out_channel, out_channel, 3, style_dim, blur_kernel=blur_kernel - ) - ) - - self.to_rgbs.append(ToRGB(out_channel, style_dim)) - - in_channel = out_channel - - self.n_latent = self.log_size * 2 - 2 - self.strided_style = StridedStyle(self.n_latent) - - def make_noise(self): - device = self.input.input.device - - noises = [torch.randn(1, 1, 2 ** 2, 2 ** 2, device=device)] - - for i in range(3, self.log_size + 1): - for _ in range(2): - noises.append(torch.randn(1, 1, 2 ** i, 2 ** i, device=device)) - - return noises - - def mean_latent(self, n_latent): - latent_in = torch.randn( - n_latent, self.style_dim, device=self.input.input.device - ) - latent = self.style(latent_in).mean(0, keepdim=True) - - return latent - - def get_latent(self, input): - return self.style(input) - - def forward( - self, - styles, - return_latents=False, - inject_index=None, - truncation=1, - truncation_latent=None, - input_is_w=False, - noise=None, - randomize_noise=True, - ): - if not input_is_w: - styles = [self.style(s) for s in styles] - - if noise is None: - if randomize_noise: - noise = [None] * self.num_layers - else: - noise = [ - getattr(self.noises, f'noise_{i}') for i in range(self.num_layers) - ] - - if truncation < 1: - style_t = [] - - for style in styles: - style_t.append( - truncation_latent + truncation * (style - truncation_latent) - ) - - styles = style_t - - if len(styles) == 1: - # One global latent - inject_index = self.n_latent - - if styles[0].ndim < 3: - latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1) - - else: - latent = styles[0] - - elif len(styles) == 2: - # Latent mixing with two latents - if inject_index is None: - inject_index = random.randint(1, self.n_latent - 1) - - latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1) - latent2 = styles[1].unsqueeze(1).repeat(1, self.n_latent - inject_index, 1) - - latent = self.strided_style(torch.cat([latent, latent2], 1)) - else: - # One latent per layer - assert len(styles) == self.n_latent, f'Expected {self.n_latents} latents, got {len(styles)}' - styles = torch.stack(styles, dim=1) # [N, 18, 512] - latent = self.strided_style(styles) - - out = self.input(latent) - out = self.conv1(out, latent[:, 0], noise=noise[0]) - - skip = self.to_rgb1(out, latent[:, 1]) - - i = 1 - for conv1, conv2, noise1, noise2, to_rgb in zip( - self.convs[::2], self.convs[1::2], noise[1::2], noise[2::2], self.to_rgbs - ): - out = conv1(out, latent[:, i], noise=noise1) - out = 
conv2(out, latent[:, i + 1], noise=noise2) - skip = to_rgb(out, latent[:, i + 2], skip) - - i += 2 - - image = skip - - if return_latents: - return image, latent - - else: - return image, None - - -class ConvLayer(nn.Sequential): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - downsample=False, - blur_kernel=[1, 3, 3, 1], - bias=True, - activate=True, - ): - layers = [] - - if downsample: - factor = 2 - p = (len(blur_kernel) - factor) + (kernel_size - 1) - pad0 = (p + 1) // 2 - pad1 = p // 2 - - layers.append(Blur(blur_kernel, pad=(pad0, pad1))) - - stride = 2 - self.padding = 0 - - else: - stride = 1 - self.padding = kernel_size // 2 - - layers.append( - EqualConv2d( - in_channel, - out_channel, - kernel_size, - padding=self.padding, - stride=stride, - bias=bias and not activate, - ) - ) - - if activate: - if bias: - layers.append(FusedLeakyReLU(out_channel)) - - else: - layers.append(ScaledLeakyReLU(0.2)) - - super().__init__(*layers) - - -class ResBlock(nn.Module): - def __init__(self, in_channel, out_channel, blur_kernel=[1, 3, 3, 1]): - super().__init__() - - self.conv1 = ConvLayer(in_channel, in_channel, 3) - self.conv2 = ConvLayer(in_channel, out_channel, 3, downsample=True) - - self.skip = ConvLayer( - in_channel, out_channel, 1, downsample=True, activate=False, bias=False - ) - - def forward(self, input): - out = self.conv1(input) - out = self.conv2(out) - - skip = self.skip(input) - out = (out + skip) / math.sqrt(2) - - return out - - -class Discriminator(nn.Module): - def __init__(self, size, channel_multiplier=2, blur_kernel=[1, 3, 3, 1]): - super().__init__() - - channels = { - 4: 512, - 8: 512, - 16: 512, - 32: 512, - 64: 256 * channel_multiplier, - 128: 128 * channel_multiplier, - 256: 64 * channel_multiplier, - 512: 32 * channel_multiplier, - 1024: 16 * channel_multiplier, - } - - convs = [ConvLayer(3, channels[size], 1)] - - log_size = int(math.log(size, 2)) - - in_channel = channels[size] - - for i in range(log_size, 2, -1): - out_channel = channels[2 ** (i - 1)] - - convs.append(ResBlock(in_channel, out_channel, blur_kernel)) - - in_channel = out_channel - - self.convs = nn.Sequential(*convs) - - self.stddev_group = 4 - self.stddev_feat = 1 - - self.final_conv = ConvLayer(in_channel + 1, channels[4], 3) - self.final_linear = nn.Sequential( - EqualLinear(channels[4] * 4 * 4, channels[4], activation='fused_lrelu'), - EqualLinear(channels[4], 1), - ) - - def forward(self, input): - out = self.convs(input) - - batch, channel, height, width = out.shape - group = min(batch, self.stddev_group) - stddev = out.view( - group, -1, self.stddev_feat, channel // self.stddev_feat, height, width - ) - stddev = torch.sqrt(stddev.var(0, unbiased=False) + 1e-8) - stddev = stddev.mean([2, 3, 4], keepdims=True).squeeze(2) - stddev = stddev.repeat(group, 1, height, width) - out = torch.cat([out, stddev], 1) - - out = self.final_conv(out) - - out = out.view(batch, -1) - out = self.final_linear(out) - - return out - diff --git a/spaces/sanjayw/starcoder-playground/static/styles.css b/spaces/sanjayw/starcoder-playground/static/styles.css deleted file mode 100644 index 7a6fe3687d95d64f8372bbd0af600f4f61b89a47..0000000000000000000000000000000000000000 --- a/spaces/sanjayw/starcoder-playground/static/styles.css +++ /dev/null @@ -1,78 +0,0 @@ -@import url('https://fonts.googleapis.com/css2?family=IBM+Plex+Mono:wght@400;600;700&display=swap'); - -h1, h2 { - font-family: 'IBM Plex Mono', sans-serif; -} - -.generating { - visibility: hidden -} - -.gradio-container { - color: black 
-} - -/* monospace_css */ -#q-input textarea { - font-family: monospace, 'Consolas', Courier, monospace; -} - -/* Share Button */ - -/* it was hidden directly inside the svg xml content */ -#share-btn-loading-icon { - display: none; -} - -a { - text-decoration-line: underline; - font-weight: 600; -} - -.animate-spin { - animation: spin 1s linear infinite; -} - -@keyframes spin { - from { - transform: rotate(0deg); - } - to { - transform: rotate(360deg); - } -} - -#share-btn-container { - display: flex; - padding-left: 0.5rem !important; - padding-right: 0.5rem !important; - background-color: #000000; - justify-content: center; - align-items: center; - border-radius: 9999px !important; - width: 15rem; -} - -#share-btn { - all: initial; - color: #ffffff; - font-weight: 600; - cursor: pointer; - font-family: 'IBM Plex Sans', sans-serif; - margin-left: 0.5rem !important; - padding-top: 0.25rem !important; - padding-bottom: 0.25rem !important; -} - -#share-btn * { - all: unset; -} - -#share-btn-container div:nth-child(-n+2) { - width: auto !important; - min-height: 0px !important; -} - -#share-btn-container .wrap { - display: none !important; -} diff --git a/spaces/saurabhg2083/jobbias/app.py b/spaces/saurabhg2083/jobbias/app.py deleted file mode 100644 index 7ed0cd4363815de2aee89ef408394cc92fba5648..0000000000000000000000000000000000000000 --- a/spaces/saurabhg2083/jobbias/app.py +++ /dev/null @@ -1,248 +0,0 @@ -import streamlit as st -import pandas as pd -import numpy as np -import re -import string -import textwrap -from transformers import BertTokenizer, BertForSequenceClassification, AutoModelForCausalLM, AutoTokenizer, pipeline, AdamW -from happytransformer import HappyTextToText, TTSettings -import torch -from torch.nn import BCEWithLogitsLoss -from torch.utils.data import DataLoader, TensorDataset, random_split -from transformers import AutoTokenizer, AutoModelForSequenceClassification - -tokenizer = AutoTokenizer.from_pretrained("saurabhg2083/model_bert") -model = AutoModelForSequenceClassification.from_pretrained("saurabhg2083/model_bert") -happy_tt = HappyTextToText("T5", "vennify/t5-base-grammar-correction") -args = TTSettings(num_beams=5, min_length=1) - -gendered_pronouns = [ - 'ambition', 'driven', 'lead', 'persist', 'principle', 'decision', 'superior', 'individual', 'assertive', - 'strong', 'hierarchical', 'rigid', 'silicon valley', 'stock options', 'takes risk', 'workforce', 'autonomous', - 'ping pong', 'pool table', 'must', 'competitive', 'he', 'his', 'himself', 'confident', 'active', 'aggressive', - 'ambitious', 'fearless', 'headstrong', 'defensive', 'independent', 'dominant', 'outspoken', 'leader', 'fast paced', - 'adventurous', 'analytical', 'decisive', 'determined', 'ninja', 'objective', 'rock star', 'boast', 'challenging', 'courage', - 'thoughtful', 'creative', 'adaptable', 'choose', 'curious', 'excellent', 'flexible', 'multitasking', 'health', - 'imaginative', 'intuitive', 'leans in', 'plans for the future', 'resilient', 'self-aware', 'socially responsible', - 'trustworthy', 'shup-to-date', 'wellness program', 'nurture', 'teach', 'dependable', 'community', 'serving', 'loyal', - 'enthusiasm', 'interpersonal', 'connect', 'commit', 'she', 'agree', 'empathy', 'sensitive', 'affectionate', 'feel', - 'support', 'collaborate', 'honest', 'trust', 'understand', 'compassion', 'share', 'polite', 'kind', 'caring', 'her', - 'hers', 'herself', 'feminine', 'cheer', 'communal', 'emotional', 'flatterable', 'gentle', 'interdependent', 'kinship', - 'modesty', 'pleasant', 'polite', 'quiet', 
'sympathy', 'warm', 'dominant', 'yield', - 'native english speaker', 'professionally groomed hair', 'native', 'culture fit', 'non-white', 'clean-shaven', - 'neat hairstyle', 'master', 'slave', 'a cakewalk', 'brownbag session', 'spirit animal', 'digital native', - 'servant leadership', 'tribe', 'oriental', 'spic', 'english fluency', 'level native', 'illegals', 'eskimo', - 'latino', 'latina', 'migrant', 'blacklist', 'whitelist' - ] - -# List of neutral words -neutral_words = [ - "drive", - "motivated", - "guide", - "continue", - "ethic", - "choice", - "excellent", - "person", - "confident", - "resilient", - "structured", - "inflexible", - "tech industry", - "equity options", - "is adventurous", - "employees", - "independent", - "table tennis", - "billiards table", - "should", - "challenging", - "they", - "their", - "themselves", - "self-assured", - "energetic", - "assertive", - "aspiring", - "courageous", - "determined", - "protective", - "self-reliant", - "influential", - "expressive", - "guiding force", - "high-speed", - "daring", - "logical", - "resolute", - "committed", - "expert", - "impartial", - "outstanding performer", - "brag", - "demanding", - "bravery", - "considerate", - "innovative", - "flexible", - "select", - "inquisitive", - "outstanding", - "adaptable", - "handling multiple tasks", - "well-being", - "creative", - "instinctive", - "long-term planning", - "tough", - "aware of oneself", - "ethical", - "reliable", - "current", - "health program", - "foster", - "instruct", - "reliable", - "society", - "assisting", - "devoted", - "passion", - "relational", - "link", - "dedicate", - "they", - "concur", - "understanding", - "responsive", - "loving", - "experience", - "assist", - "work together", - "truthful", - "confidence", - "comprehend", - "sympathy", - "contribute", - "courteous", - "considerate", - "supportive", - "their", - "theirs", - "themselves", - "androgynous", - "encourage", - "collective", - "expressive", - "complimentable", - "tender", - "mutual", - "connection", - "humility", - "agreeable", - "silent", - "empathy", - "friendly", - "leading", - "produce", - "fluent English speaker", - "well-groomed appearance", - "indigenous", - "cultural alignment", - "diverse", - "clean-cut", - "tidy hair", - "expert", - "subordinate", - "easy task", - "informal meeting", - "personal inspiration", - "tech-savvy", - "supportive leadership", - "community", - "eastern", - "avoid using", - "english proficiency", - "fluent", - "unauthorized individuals", - "indigenous Northern people", - "hispanic", - "latinx", - "mobile worker", - "inclusion list", -] - - - -def replace_gendered_pronouns(text): - # Define a dictionary of gendered pronouns and their gender-neutral replacements - word_dict = dict(zip(gendered_pronouns, neutral_words)) - - # Use regular expressions to find and replace gendered pronouns in the text - for pronoun, replacement in word_dict.items(): - # Use word boundaries to match whole words only - pattern = r'\b' + re.escape(pronoun) + r'\b' - text = re.sub(pattern, replacement, text, flags=re.IGNORECASE) - - return text - -def model_eval(text): - # Put the model in evaluation mode - model.eval() - - # Input text - input_text = text - - # Tokenize the input text - inputs = tokenizer(input_text, padding='max_length', truncation=True, max_length=512, return_tensors="pt") - - # Make the prediction - with torch.no_grad(): - outputs = model(**inputs) - - logits = outputs.logits - predicted_label = (logits > 0).int().item() - - return predicted_label - - -st.title("Job Bias 
Testing") - -text1 = st.text_area("Enter Text 1") - -if st.button("Check Bias"): - if text1: - predicted_label = model_eval(text1) - # Convert 0 or 1 label back to a meaningful label if needed - label_mapping = {0: "Negative", 1: "Positive"} - predicted_label_text = label_mapping[predicted_label] - #print(f"Predicted Label: {predicted_label_text}") - if predicted_label_text == "Positive": - rewritten_sentence = replace_gendered_pronouns(text1) - words = rewritten_sentence.split() - word_count = 0 - chunk = "" - target_word_count = 35 - result = "" - - for word in words: - # Add the sentence to the current chunk - chunk += word + " " - - words_list = chunk.split() - word_count = len(words_list) - - # Check if the word count exceeds the target - if word_count >= target_word_count: - grammar_text = happy_tt.generate_text("grammar: "+chunk, args=args) - result = result + grammar_text.text - chunk = "" - word_count = 0 - - # Add the prefix "grammar: " before each input - #result = happy_tt.generate_text("grammar: "+rewritten_sentence, args=args) - #print(result.text) # This sentence has bad grammar. - st.success(f"Predicted Label: {predicted_label_text} \n new Text is: {result}") - else: - st.warning("Please enter text Job Description.") - \ No newline at end of file diff --git a/spaces/sayakpaul/sots-indoor-dehazing-maxim/create_maxim_model.py b/spaces/sayakpaul/sots-indoor-dehazing-maxim/create_maxim_model.py deleted file mode 100644 index f6f8ef29093d5defdaa51e3f99ce25fcdc77b513..0000000000000000000000000000000000000000 --- a/spaces/sayakpaul/sots-indoor-dehazing-maxim/create_maxim_model.py +++ /dev/null @@ -1,37 +0,0 @@ -from tensorflow import keras - -from maxim import maxim -from maxim.configs import MAXIM_CONFIGS - - -def Model(variant=None, input_resolution=(256, 256), **kw) -> keras.Model: - """Factory function to easily create a Model variant like "S". - - Args: - variant: UNet model variants. Options: 'S-1' | 'S-2' | 'S-3' - | 'M-1' | 'M-2' | 'M-3' - input_resolution: Size of the input images. - **kw: Other UNet config dicts. - - Returns: - The MAXIM model. - """ - - if variant is not None: - config = MAXIM_CONFIGS[variant] - for k, v in config.items(): - kw.setdefault(k, v) - - if "variant" in kw: - _ = kw.pop("variant") - if "input_resolution" in kw: - _ = kw.pop("input_resolution") - model_name = kw.pop("name") - - maxim_model = maxim.MAXIM(**kw) - - inputs = keras.Input((*input_resolution, 3)) - outputs = maxim_model(inputs) - final_model = keras.Model(inputs, outputs, name=f"{model_name}_model") - - return final_model diff --git a/spaces/scedlatioru/img-to-music/example/Crack Winquick Kfz REPACK.md b/spaces/scedlatioru/img-to-music/example/Crack Winquick Kfz REPACK.md deleted file mode 100644 index 47975cb372c55635d5404d936c7edd4d3efe5d0f..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Crack Winquick Kfz REPACK.md +++ /dev/null @@ -1,65 +0,0 @@ - -

        Crack Winquick Kfz: How to Unlock Your Car's Full Potential

        -

        Winquick Kfz is software that allows you to optimize your car's performance, fuel efficiency, and safety. It can also help you diagnose and fix any problems that may arise with your vehicle. But what if you want to access all the features and benefits of Winquick Kfz without paying for it? That's where cracking Winquick Kfz comes in.

        -

        Cracking Winquick Kfz is the process of bypassing the software's security measures and activation codes so that you can use it for free. This can save you money and give you more control over your car. However, cracking Winquick Kfz is not easy, and it comes with some risks. In this article, we will explain how to crack Winquick Kfz, the advantages and disadvantages of doing so, and some alternatives to cracking it.

        -

        Crack Winquick Kfz


        Download Filehttps://gohhs.com/2uEA66



        -

        How to Crack Winquick Kfz

        -

        There are different methods of cracking Winquick Kfz, but they all involve downloading a crack file or a key generator from the internet. A crack file is a modified version of the original software that removes the protection and activation features. A key generator is a program that generates valid serial numbers or license keys for the software. You can find these files on various websites, forums, or torrent sites.

        -

        However, before you download any crack file or key generator, you should be aware of the potential dangers. These files may contain viruses, malware, spyware, or other harmful programs that can damage your computer or steal your personal information. They may also not work properly or cause errors in your system. Moreover, cracking Winquick Kfz is illegal and unethical, as it violates the terms and conditions of the software and infringes on the intellectual property rights of the developers. You may face legal consequences or penalties if you are caught using a cracked version of Winquick Kfz.

        -

        Therefore, we do not recommend cracking Winquick Kfz, as it is risky and immoral. Instead, we suggest that you use one of the following alternatives to cracking Winquick Kfz.

        -

        Alternatives to Cracking Winquick Kfz

        -

        If you want to use Winquick Kfz without paying for it, there are some legitimate and safe options that you can try. Here are some of them:

        -
          -
        • Use a free trial version. Winquick Kfz offers a free trial version that you can download from their official website. The trial version allows you to use the software for a limited time and with some restrictions. You can use this option to test the software and see if it suits your needs before buying it.
        • -
        • Use a discount code or a coupon. Winquick Kfz sometimes offers discounts or coupons that you can use to get a lower price for the software. You can find these codes or coupons on their website, social media pages, newsletters, or partner sites. You can use this option to save some money and get a legal and licensed version of Winquick Kfz.
        • -
        • Use alternative software. Winquick Kfz is not the only software that can help you optimize your car's performance, fuel efficiency, and safety. There are other similar programs that you can use instead of Winquick Kfz. Some of them are free, while others are paid but cheaper than Winquick Kfz. You can search online for reviews and comparisons of different car optimization software and choose the one that best fits your budget and preferences.
        • -
        -

        Conclusion

        -

        Winquick Kfz is a powerful and useful tool that can help you improve your car's performance, fuel efficiency, and safety. However, if you want to use it without paying for it, cracking Winquick Kfz is not a good idea. Cracking Winquick Kfz is risky, illegal, and unethical. It can expose you to viruses, malware, spyware, errors, legal issues, and moral dilemmas.

        -

        Instead of cracking Winquick Kfz, we recommend that you use one of the alternatives that we mentioned above. You can use a free trial version, a discount code or coupon, or alternative software. These options are legitimate, safe, and affordable. They can help you enjoy the benefits of car optimization software without compromising your security, integrity, or wallet.

        -

        -

        Crack Winquick Kfz: What are the Benefits and Drawbacks?

        -

        One of the main reasons why people want to crack Winquick Kfz is to enjoy the benefits that the software can offer. Some of these benefits are:

        -
          -
        • Improved performance. Winquick Kfz can help you optimize your car's engine, transmission, brakes, suspension, and other components. It can also help you adjust your car's settings according to your driving style and preferences. This can result in better acceleration, speed, handling, and stability.
        • -
        • Increased fuel efficiency. Winquick Kfz can help you reduce your car's fuel consumption and emissions. It can also help you monitor your car's fuel level, mileage, and consumption patterns. This can help you save money and protect the environment.
        • -
        • Enhanced safety. Winquick Kfz can help you detect and prevent any potential problems or malfunctions that may affect your car's performance or safety. It can also help you diagnose and fix any errors or issues that may occur with your car. This can help you avoid accidents, injuries, or damage.
        • -
        -

        However, cracking Winquick Kfz also has some drawbacks that you should consider. Some of these drawbacks are:

        -
          -
        • Risk of viruses, malware, spyware, or other harmful programs. As we mentioned earlier, cracking Winquick Kfz involves downloading a crack file or a key generator from the internet. These files may contain malicious programs that can infect your computer or steal your personal information. They may also damage your system or cause errors.
        • -
        • Risk of legal issues or penalties. As we mentioned earlier, cracking Winquick Kfz is illegal and unethical, as it violates the terms and conditions of the software and infringes on the intellectual property rights of the developers. You may face legal consequences or penalties if you are caught using a cracked version of Winquick Kfz.
        • -
        • Risk of losing support or updates. When you crack Winquick Kfz, you lose access to the official support and updates from the developers. This means that you will not be able to get any help or guidance if you encounter any problems or difficulties with the software. You will also not be able to get any new features or improvements that the developers may release in the future.
        • -
        - -

        Crack Winquick Kfz: How to Choose the Best Car Optimization Software

        -

        If you are looking for car optimization software that can help you improve your car's performance, fuel efficiency, and safety, you may be tempted to crack Winquick Kfz. However, as we explained above, cracking Winquick Kfz is not a good idea. It is risky, illegal, and unethical. It can expose you to viruses, malware, spyware, errors, legal issues, and moral dilemmas.

        -

        So how can you choose the best car optimization software for your needs? Here are some tips that you can follow:

        -
          -
        • Do your research. Before you download or buy any car optimization software, you should do some research about it. You should check its features, benefits, drawbacks, reviews, ratings, testimonials, and reputation. You should also compare it with other similar software and see which one offers the best value for your money.
        • -
        • Choose a reputable and reliable source. When you download or buy any car optimization software, you should choose a reputable and reliable source. You should avoid any websites, forums, or torrent sites that offer crack files or key generators for Winquick Kfz or any other software. You should also avoid any sources that offer suspiciously low prices or unrealistic promises. You should only download or buy from the official website of the software or from authorized dealers or distributors.
        • -
        • Follow the instructions and guidelines. When you install or use any car optimization software, you should follow the instructions and guidelines provided by the developers or the source. You should not modify, alter, tamper with, or misuse the software in any way. You should also respect the terms and conditions of the software and the intellectual property rights of the developers.
        • -
        -

        Crack Winquick Kfz: How to Use the Software Safely and Effectively

        -

        If you have decided to crack Winquick Kfz and use it for your car, you should know how to use the software safely and effectively. Here are some tips that you can follow:

        -
          -
        • Back up your data. Before you install or run any crack file or key generator for Winquick Kfz, you should back up the data on your computer and your car. This way, you can restore your data in case something goes wrong or you lose any important information.
        • -
        • Scan your files. Before you open or execute any crack files or key generators for Winquick Kfz, you should scan them with a reliable antivirus or anti-malware program. This way, you can detect and remove any viruses, malware, spyware, or other harmful programs that may be hidden in the files.
        • -
        • Follow the instructions. When you install or use any crack file or key generator for Winquick Kfz, you should follow the instructions provided by the source or the file. You should not skip any steps or do anything that is not recommended. You should also not modify, alter, tamper with, or misuse the software in any way.
        • -
        • Monitor your car. When you use Winquick Kfz to optimize your car's performance, fuel efficiency, and safety, you should monitor your car's condition and performance. You should check if everything is working properly and if there are any changes or improvements. You should also be alert for any signs of problems or malfunctions that may occur with your car.
        • -
        - -

        Crack Winquick Kfz: How to Get Help and Support

        -

        If you have any questions, doubts, issues, or difficulties with cracking Winquick Kfz or using the software, you may need some help and support. However, since cracking Winquick Kfz is illegal and unethical, you cannot get any help or support from the official developers or the source of the software. You will have to rely on other sources of help and support. Here are some of them:

        -
          -
        • Online forums or communities. You can find online forums or communities where people share their experiences and knowledge about cracking Winquick Kfz or using the software. You can join them and ask for help or advice from other users who may have faced situations or problems similar to yours. However, you should be careful about the credibility and reliability of these sources, as they may not be accurate or trustworthy.
        • -
        • Online tutorials or guides. You can find online tutorials or guides that explain how to crack Winquick Kfz or use the software step by step. You can follow these tutorials or guides and learn how to do it yourself. However, you should be careful about the quality and validity of these sources, as they may not be updated or relevant.
        • -
        • Professional services. You can find professional services that offer to crack Winquick Kfz or use the software for you for a fee. You can hire these services and let them do it for you. However, you should be careful about the legality and morality of these sources, as they may not be ethical or authorized.
        • -
        -

        Crack Winquick Kfz: Is It Worth It?

        -

        Winquick Kfz is software that can help you optimize your car's performance, fuel efficiency, and safety. It can also help you diagnose and fix any problems that may arise with your vehicle. However, if you want to use it without paying for it, you may be tempted to crack Winquick Kfz.

        -

        Cracking Winquick Kfz is the process of bypassing the software's security measures and activation codes so that you can use it for free. This can save you money and give you more control over your car. However, cracking Winquick Kfz is not easy, and it comes with some risks. It can expose you to viruses, malware, spyware, errors, legal issues, and moral dilemmas. It can also cost you access to the official support and updates from the developers.

        -

        Therefore, we do not recommend cracking Winquick Kfz, as it is risky, illegal, and unethical. Instead, we suggest that you use one of the alternatives that we mentioned above. You can use a free trial version, a discount code or coupon, or alternative software. These options are legitimate, safe, and affordable. They can help you enjoy the benefits of car optimization software without compromising your security, integrity, or wallet.

        -

        We hope that this article has helped you understand what cracking Winquick Kfz is, how it is done, the benefits and drawbacks of doing so, and the alternatives available. We hope that you will make an informed and responsible decision about using Winquick Kfz for your car.

        3cee63e6c2
        -
        -
        \ No newline at end of file diff --git a/spaces/scedlatioru/img-to-music/example/HD Online Player (Rules - Pyaar Ka Superhit Formula Hi).md b/spaces/scedlatioru/img-to-music/example/HD Online Player (Rules - Pyaar Ka Superhit Formula Hi).md deleted file mode 100644 index ba77087852cb2491401d7e10937352cd419c679f..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/HD Online Player (Rules - Pyaar Ka Superhit Formula Hi).md +++ /dev/null @@ -1,6 +0,0 @@ -

        HD Online Player (Rules - Pyaar Ka Superhit Formula hi)


        Download Zip ►►►►► https://gohhs.com/2uEySm



        - - d5da3c52bf
        -
        -
        -

        diff --git a/spaces/scedlatioru/img-to-music/example/Heroes Of Might And Magic 5 Collectors Edition RELOADED.md b/spaces/scedlatioru/img-to-music/example/Heroes Of Might And Magic 5 Collectors Edition RELOADED.md deleted file mode 100644 index c8b972b3890232d8272bb92c1ca301d8094588ff..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Heroes Of Might And Magic 5 Collectors Edition RELOADED.md +++ /dev/null @@ -1,6 +0,0 @@ -

        Heroes of Might and Magic 5 Collectors Edition RELOADED


        Download ✓✓✓ https://gohhs.com/2uEA41



        -
        -Heroes of Might and Magic V - Collectors Edition v1.0 +7 TRAINER; Heroes of Might and Magic V. NORMAL-2-COLLECTORS EDITION PATCH & CHEATS . 4d29de3e1b
        -
        -
        -

        diff --git a/spaces/sczhou/ProPainter/web-demos/hugging_face/tracker/model/__init__.py b/spaces/sczhou/ProPainter/web-demos/hugging_face/tracker/model/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/sdhsdhk/bingosjj/src/components/theme-toggle.tsx b/spaces/sdhsdhk/bingosjj/src/components/theme-toggle.tsx deleted file mode 100644 index 67d3f1a2c163ccbeb52c40a7e42f107190237154..0000000000000000000000000000000000000000 --- a/spaces/sdhsdhk/bingosjj/src/components/theme-toggle.tsx +++ /dev/null @@ -1,31 +0,0 @@ -'use client' - -import * as React from 'react' -import { useTheme } from 'next-themes' - -import { Button } from '@/components/ui/button' -import { IconMoon, IconSun } from '@/components/ui/icons' - -export function ThemeToggle() { - const { setTheme, theme } = useTheme() - const [_, startTransition] = React.useTransition() - - return ( - - ) -} diff --git a/spaces/seanpedrickcase/Light-PDF-Web-QA-Chatbot/test/sample.html b/spaces/seanpedrickcase/Light-PDF-Web-QA-Chatbot/test/sample.html deleted file mode 100644 index 9360c416cd56cfb119542ba975df975c7016642c..0000000000000000000000000000000000000000 --- a/spaces/seanpedrickcase/Light-PDF-Web-QA-Chatbot/test/sample.html +++ /dev/null @@ -1,769 +0,0 @@ - - - - - - - - - - - - - - - - - - -
        - -

        Hello, World!

        - -
        - - - - diff --git a/spaces/segments-tobias/conex/espnet/nets/pytorch_backend/transducer/vgg2l.py b/spaces/segments-tobias/conex/espnet/nets/pytorch_backend/transducer/vgg2l.py deleted file mode 100644 index 18aeafb0f32c1feea7f38c28645ecac2d461b0e5..0000000000000000000000000000000000000000 --- a/spaces/segments-tobias/conex/espnet/nets/pytorch_backend/transducer/vgg2l.py +++ /dev/null @@ -1,89 +0,0 @@ -"""VGG2L module definition for transformer encoder.""" - -from typing import Tuple -from typing import Union - -import torch - - -class VGG2L(torch.nn.Module): - """VGG2L module for custom encoder. - - Args: - idim: Dimension of inputs - odim: Dimension of outputs - pos_enc: Positional encoding class - - """ - - def __init__(self, idim: int, odim: int, pos_enc: torch.nn.Module = None): - """Construct a VGG2L object.""" - super().__init__() - - self.vgg2l = torch.nn.Sequential( - torch.nn.Conv2d(1, 64, 3, stride=1, padding=1), - torch.nn.ReLU(), - torch.nn.Conv2d(64, 64, 3, stride=1, padding=1), - torch.nn.ReLU(), - torch.nn.MaxPool2d((3, 2)), - torch.nn.Conv2d(64, 128, 3, stride=1, padding=1), - torch.nn.ReLU(), - torch.nn.Conv2d(128, 128, 3, stride=1, padding=1), - torch.nn.ReLU(), - torch.nn.MaxPool2d((2, 2)), - ) - - if pos_enc is not None: - self.output = torch.nn.Sequential( - torch.nn.Linear(128 * ((idim // 2) // 2), odim), pos_enc - ) - else: - self.output = torch.nn.Linear(128 * ((idim // 2) // 2), odim) - - def forward( - self, x: torch.Tensor, x_mask: torch.Tensor - ) -> Union[ - Tuple[torch.Tensor, torch.Tensor], - Tuple[Tuple[torch.Tensor, torch.Tensor], torch.Tensor], - ]: - """VGG2L forward for x. - - Args: - x: Input tensor (B, T, idim) - x_mask: Input mask (B, 1, T) - - Returns: - x: Output tensor (B, sub(T), odim) - or ((B, sub(T), odim), (B, sub(T), att_dim)) - x_mask: Output mask (B, 1, sub(T)) - - """ - x = x.unsqueeze(1) - x = self.vgg2l(x) - - b, c, t, f = x.size() - - x = self.output(x.transpose(1, 2).contiguous().view(b, t, c * f)) - - if x_mask is not None: - x_mask = self.create_new_mask(x_mask) - - return x, x_mask - - def create_new_mask(self, x_mask: torch.Tensor) -> torch.Tensor: - """Create a subsampled version of x_mask. 
- - Args: - x_mask: Input mask (B, 1, T) - - Returns: - x_mask: Output mask (B, 1, sub(T)) - - """ - x_t1 = x_mask.size(2) - (x_mask.size(2) % 3) - x_mask = x_mask[:, :, :x_t1][:, :, ::3] - - x_t2 = x_mask.size(2) - (x_mask.size(2) % 2) - x_mask = x_mask[:, :, :x_t2][:, :, ::2] - - return x_mask diff --git a/spaces/shabnam91/Sanskrit-TTS/indic_nlp_library/indicnlp/test/unit/__init__.py b/spaces/shabnam91/Sanskrit-TTS/indic_nlp_library/indicnlp/test/unit/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/shenfangqi/Retrieval-based-Voice-Conversion-WebUI/Retrieval-based-Voice-Conversion-WebUI/uvr5_pack/lib_v5/nets.py b/spaces/shenfangqi/Retrieval-based-Voice-Conversion-WebUI/Retrieval-based-Voice-Conversion-WebUI/uvr5_pack/lib_v5/nets.py deleted file mode 100644 index d4c376ed8715f9e85d71609e348add0a6550a4ba..0000000000000000000000000000000000000000 --- a/spaces/shenfangqi/Retrieval-based-Voice-Conversion-WebUI/Retrieval-based-Voice-Conversion-WebUI/uvr5_pack/lib_v5/nets.py +++ /dev/null @@ -1,123 +0,0 @@ -import torch -from torch import nn -import torch.nn.functional as F - -from uvr5_pack.lib_v5 import layers -from uvr5_pack.lib_v5 import spec_utils - - -class BaseASPPNet(nn.Module): - def __init__(self, nin, ch, dilations=(4, 8, 16)): - super(BaseASPPNet, self).__init__() - self.enc1 = layers.Encoder(nin, ch, 3, 2, 1) - self.enc2 = layers.Encoder(ch, ch * 2, 3, 2, 1) - self.enc3 = layers.Encoder(ch * 2, ch * 4, 3, 2, 1) - self.enc4 = layers.Encoder(ch * 4, ch * 8, 3, 2, 1) - - self.aspp = layers.ASPPModule(ch * 8, ch * 16, dilations) - - self.dec4 = layers.Decoder(ch * (8 + 16), ch * 8, 3, 1, 1) - self.dec3 = layers.Decoder(ch * (4 + 8), ch * 4, 3, 1, 1) - self.dec2 = layers.Decoder(ch * (2 + 4), ch * 2, 3, 1, 1) - self.dec1 = layers.Decoder(ch * (1 + 2), ch, 3, 1, 1) - - def __call__(self, x): - h, e1 = self.enc1(x) - h, e2 = self.enc2(h) - h, e3 = self.enc3(h) - h, e4 = self.enc4(h) - - h = self.aspp(h) - - h = self.dec4(h, e4) - h = self.dec3(h, e3) - h = self.dec2(h, e2) - h = self.dec1(h, e1) - - return h - - -class CascadedASPPNet(nn.Module): - def __init__(self, n_fft): - super(CascadedASPPNet, self).__init__() - self.stg1_low_band_net = BaseASPPNet(2, 16) - self.stg1_high_band_net = BaseASPPNet(2, 16) - - self.stg2_bridge = layers.Conv2DBNActiv(18, 8, 1, 1, 0) - self.stg2_full_band_net = BaseASPPNet(8, 16) - - self.stg3_bridge = layers.Conv2DBNActiv(34, 16, 1, 1, 0) - self.stg3_full_band_net = BaseASPPNet(16, 32) - - self.out = nn.Conv2d(32, 2, 1, bias=False) - self.aux1_out = nn.Conv2d(16, 2, 1, bias=False) - self.aux2_out = nn.Conv2d(16, 2, 1, bias=False) - - self.max_bin = n_fft // 2 - self.output_bin = n_fft // 2 + 1 - - self.offset = 128 - - def forward(self, x, aggressiveness=None): - mix = x.detach() - x = x.clone() - - x = x[:, :, : self.max_bin] - - bandw = x.size()[2] // 2 - aux1 = torch.cat( - [ - self.stg1_low_band_net(x[:, :, :bandw]), - self.stg1_high_band_net(x[:, :, bandw:]), - ], - dim=2, - ) - - h = torch.cat([x, aux1], dim=1) - aux2 = self.stg2_full_band_net(self.stg2_bridge(h)) - - h = torch.cat([x, aux1, aux2], dim=1) - h = self.stg3_full_band_net(self.stg3_bridge(h)) - - mask = torch.sigmoid(self.out(h)) - mask = F.pad( - input=mask, - pad=(0, 0, 0, self.output_bin - mask.size()[2]), - mode="replicate", - ) - - if self.training: - aux1 = torch.sigmoid(self.aux1_out(aux1)) - aux1 = F.pad( - input=aux1, - pad=(0, 0, 0, self.output_bin - aux1.size()[2]), - 
mode="replicate", - ) - aux2 = torch.sigmoid(self.aux2_out(aux2)) - aux2 = F.pad( - input=aux2, - pad=(0, 0, 0, self.output_bin - aux2.size()[2]), - mode="replicate", - ) - return mask * mix, aux1 * mix, aux2 * mix - else: - if aggressiveness: - mask[:, :, : aggressiveness["split_bin"]] = torch.pow( - mask[:, :, : aggressiveness["split_bin"]], - 1 + aggressiveness["value"] / 3, - ) - mask[:, :, aggressiveness["split_bin"] :] = torch.pow( - mask[:, :, aggressiveness["split_bin"] :], - 1 + aggressiveness["value"], - ) - - return mask * mix - - def predict(self, x_mag, aggressiveness=None): - h = self.forward(x_mag, aggressiveness) - - if self.offset > 0: - h = h[:, :, :, self.offset : -self.offset] - assert h.size()[3] > 0 - - return h diff --git a/spaces/shgao/EditAnything/ldm/modules/diffusionmodules/__init__.py b/spaces/shgao/EditAnything/ldm/modules/diffusionmodules/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/shgao/EditAnything/sam2semantic.py b/spaces/shgao/EditAnything/sam2semantic.py deleted file mode 100644 index 16658c04325f66502b6874fdfa68ef7e1645fc61..0000000000000000000000000000000000000000 --- a/spaces/shgao/EditAnything/sam2semantic.py +++ /dev/null @@ -1,174 +0,0 @@ -# Edit Anything trained with Stable Diffusion + ControlNet + SAM + BLIP2 -# pip install mmcv - -from torchvision.utils import save_image -from PIL import Image -import subprocess -from collections import OrderedDict -import numpy as np -import cv2 -import textwrap -import torch -import os -from annotator.util import resize_image, HWC3 -import mmcv -import random - -# device = "cuda" if torch.cuda.is_available() else "cpu" # > 15GB GPU memory required -device = "cpu" -use_blip = True -use_gradio = True - -if device == 'cpu': - data_type = torch.float32 -else: - data_type = torch.float16 -# Diffusion init using diffusers. - -# diffusers==0.14.0 required. -from diffusers.utils import load_image - -base_model_path = "stabilityai/stable-diffusion-2-inpainting" -config_dict = OrderedDict([('SAM Pretrained(v0-1): Good Natural Sense', 'shgao/edit-anything-v0-1-1'), - ('LAION Pretrained(v0-3): Good Face', 'shgao/edit-anything-v0-3'), - ('SD Inpainting: Not keep position', 'stabilityai/stable-diffusion-2-inpainting') - ]) - -# Segment-Anything init. -# pip install git+https://github.com/facebookresearch/segment-anything.git -try: - from segment_anything import sam_model_registry, SamAutomaticMaskGenerator -except ImportError: - print('segment_anything not installed') - result = subprocess.run(['pip', 'install', 'git+https://github.com/facebookresearch/segment-anything.git'], check=True) - print(f'Install segment_anything {result}') - from segment_anything import sam_model_registry, SamAutomaticMaskGenerator -if not os.path.exists('./models/sam_vit_h_4b8939.pth'): - result = subprocess.run(['wget', 'https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth', '-P', 'models'], check=True) - print(f'Download sam_vit_h_4b8939.pth {result}') -sam_checkpoint = "models/sam_vit_h_4b8939.pth" -model_type = "default" -sam = sam_model_registry[model_type](checkpoint=sam_checkpoint) -sam.to(device=device) -mask_generator = SamAutomaticMaskGenerator(sam) - - -# BLIP2 init. 
-if use_blip: - # need the latest transformers - # pip install git+https://github.com/huggingface/transformers.git - from transformers import AutoProcessor, Blip2ForConditionalGeneration - processor = AutoProcessor.from_pretrained("Salesforce/blip2-opt-2.7b") - blip_model = Blip2ForConditionalGeneration.from_pretrained( - "Salesforce/blip2-opt-2.7b", torch_dtype=data_type) - - -def region_classify_w_blip2(image): - inputs = processor(image, return_tensors="pt").to(device, data_type) - generated_ids = blip_model.generate(**inputs, max_new_tokens=15) - generated_text = processor.batch_decode( - generated_ids, skip_special_tokens=True)[0].strip() - return generated_text - -def region_level_semantic_api(image, topk=5): - """ - rank regions by area, and classify each region with blip2 - Args: - image: numpy array - topk: int - Returns: - topk_region_w_class_label: list of dict with key 'class_label' - """ - topk_region_w_class_label = [] - anns = mask_generator.generate(image) - if len(anns) == 0: - return [] - sorted_anns = sorted(anns, key=(lambda x: x['area']), reverse=True) - for i in range(min(topk, len(sorted_anns))): - ann = anns[i] - m = ann['segmentation'] - m_3c = m[:,:, np.newaxis] - m_3c = np.concatenate((m_3c,m_3c,m_3c), axis=2) - bbox = ann['bbox'] - region = mmcv.imcrop(image*m_3c, np.array([bbox[0], bbox[1], bbox[0] + bbox[2], bbox[1] + bbox[3]]), scale=1) - region_class_label = region_classify_w_blip2(region) - ann['class_label'] = region_class_label - print(ann['class_label'], str(bbox)) - topk_region_w_class_label.append(ann) - return topk_region_w_class_label - -def show_semantic_image_label(anns): - """ - show semantic image label for each region - Args: - anns: list of dict with key 'class_label' - Returns: - full_img: numpy array - """ - full_img = None - # generate mask image - for i in range(len(anns)): - m = anns[i]['segmentation'] - if full_img is None: - full_img = np.zeros((m.shape[0], m.shape[1], 3)) - color_mask = np.random.random((1, 3)).tolist()[0] - full_img[m != 0] = color_mask - full_img = full_img*255 - # add text on this mask image - for i in range(len(anns)): - m = anns[i]['segmentation'] - class_label = anns[i]['class_label'] - # add text to region - # Calculate the centroid of the region to place the text - y, x = np.where(m != 0) - x_center, y_center = int(np.mean(x)), int(np.mean(y)) - - # Split the text into multiple lines - max_width = 20 # Adjust this value based on your preferred maximum width - wrapped_text = textwrap.wrap(class_label, width=max_width) - - # Add text to region - font = cv2.FONT_HERSHEY_SIMPLEX - font_scale = 1.2 - font_thickness = 2 - font_color = (random.randint(0, 255), random.randint(0, 255), random.randint(0, 255)) # red - line_spacing = 40 # Adjust this value based on your preferred line - - for idx, line in enumerate(wrapped_text): - y_offset = y_center - (len(wrapped_text) - 1) * line_spacing // 2 + idx * line_spacing - text_size = cv2.getTextSize(line, font, font_scale, font_thickness)[0] - x_offset = x_center - text_size[0] // 2 - # Draw the text multiple times with small offsets to create a bolder appearance - offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)] - for off_x, off_y in offsets: - cv2.putText(full_img, line, (x_offset + off_x, y_offset + off_y), font, font_scale, font_color, font_thickness, cv2.LINE_AA) - - return full_img - - - -image_path = "images/sa_224577.jpg" -input_image = Image.open(image_path) -detect_resolution=1024 -input_image = resize_image(np.array(input_image, 
dtype=np.uint8), detect_resolution) -region_level_annots = region_level_semantic_api(input_image, topk=5) -output = show_semantic_image_label(region_level_annots) - -image_list = [] -input_image = resize_image(input_image, 512) -output = resize_image(output, 512) -input_image = np.array(input_image, dtype=np.uint8) -output = np.array(output, dtype=np.uint8) -image_list.append(torch.tensor(input_image).float()) -image_list.append(torch.tensor(output).float()) -for each in image_list: - print(each.shape, type(each)) - print(each.max(), each.min()) - - -image_list = torch.stack(image_list).permute(0, 3, 1, 2) -print(image_list.shape) - -save_image(image_list, "images/sample_semantic.jpg", nrow=2, - normalize=True) - diff --git a/spaces/shi-labs/FcF-Inpainting/setup.sh b/spaces/shi-labs/FcF-Inpainting/setup.sh deleted file mode 100644 index 4bd441e4596d231fb4827f3f2505cd325da996c9..0000000000000000000000000000000000000000 --- a/spaces/shi-labs/FcF-Inpainting/setup.sh +++ /dev/null @@ -1,3 +0,0 @@ -#!/bin/sh -pip3 install torch==1.7.1+cu110 torchvision==0.8.2+cu110 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html -python3 -c "import torch; print(torch.__version__)" \ No newline at end of file diff --git a/spaces/shibing624/ChatPDF/assets/custom.css b/spaces/shibing624/ChatPDF/assets/custom.css deleted file mode 100644 index 22108488886cfc8d7772214dd9b83727b3fca6a3..0000000000000000000000000000000000000000 --- a/spaces/shibing624/ChatPDF/assets/custom.css +++ /dev/null @@ -1,468 +0,0 @@ -:root { - --chatbot-color-light: #000000; - --chatbot-color-dark: #FFFFFF; - --chatbot-background-color-light: #F3F3F3; - --chatbot-background-color-dark: #121111; - --message-user-background-color-light: #95EC69; - --message-user-background-color-dark: #26B561; - --message-bot-background-color-light: #FFFFFF; - --message-bot-background-color-dark: #2C2C2C; -} - -#app_title { - font-weight: var(--prose-header-text-weight); - font-size: var(--text-xxl); - line-height: 1.3; - text-align: left; - margin-top: 6px; - white-space: nowrap; -} -#description { - text-align: center; - margin: 32px 0 4px 0; -} - -/* gradio的页脚信息 */ -footer { - /* display: none !important; */ - margin-top: .2em !important; - font-size: 85%; -} -#footer { - text-align: center; -} -#footer div { - display: inline-block; -} -#footer .versions{ - font-size: 85%; - opacity: 0.60; -} - -#float_display { - position: absolute; - max-height: 30px; -} -/* user_info */ -#user_info { - white-space: nowrap; - position: absolute; left: 8em; top: .2em; - z-index: var(--layer-2); - box-shadow: var(--block-shadow); - border: none; border-radius: var(--block-label-radius); - background: var(--color-accent); - padding: var(--block-label-padding); - font-size: var(--block-label-text-size); line-height: var(--line-sm); - width: auto; min-height: 30px!important; - opacity: 1; - transition: opacity 0.3s ease-in-out; -} -#user_info .wrap { - opacity: 0; -} -#user_info p { - color: white; - font-weight: var(--block-label-text-weight); -} -#user_info.hideK { - opacity: 0; - transition: opacity 1s ease-in-out; -} - -/* status_display */ -#status_display { - display: flex; - min-height: 2em; - align-items: flex-end; - justify-content: flex-end; -} -#status_display p { - font-size: .85em; - font-family: ui-monospace, "SF Mono", "SFMono-Regular", "Menlo", "Consolas", "Liberation Mono", "Microsoft Yahei UI", "Microsoft Yahei", monospace; - /* Windows下中文的monospace会fallback为新宋体,实在太丑,这里折中使用微软雅黑 */ - color: var(--body-text-color-subdued); -} - 
-#status_display { - transition: all 0.6s; -} -#chuanhu_chatbot { - transition: height 0.3s ease; -} - -/* usage_display */ -.insert_block { - position: relative; - margin: 0; - padding: .5em 1em; - box-shadow: var(--block-shadow); - border-width: var(--block-border-width); - border-color: var(--block-border-color); - border-radius: var(--block-radius); - background: var(--block-background-fill); - width: 100%; - line-height: var(--line-sm); - min-height: 2em; -} -#usage_display p, #usage_display span { - margin: 0; - font-size: .85em; - color: var(--body-text-color-subdued); -} -.progress-bar { - background-color: var(--input-background-fill);; - margin: .5em 0 !important; - height: 20px; - border-radius: 10px; - overflow: hidden; -} -.progress { - background-color: var(--block-title-background-fill); - height: 100%; - border-radius: 10px; - text-align: right; - transition: width 0.5s ease-in-out; -} -.progress-text { - /* color: white; */ - color: var(--color-accent) !important; - font-size: 1em !important; - font-weight: bold; - padding-right: 10px; - line-height: 20px; -} - -.apSwitch { - top: 2px; - display: inline-block; - height: 24px; - position: relative; - width: 48px; - border-radius: 12px; -} -.apSwitch input { - display: none !important; -} -.apSlider { - background-color: var(--neutral-200); - bottom: 0; - cursor: pointer; - left: 0; - position: absolute; - right: 0; - top: 0; - transition: .4s; - font-size: 18px; - border-radius: 12px; -} -.apSlider::before { - bottom: -1.5px; - left: 1px; - position: absolute; - transition: .4s; - content: "🌞"; -} -input:checked + .apSlider { - background-color: var(--primary-600); -} -input:checked + .apSlider::before { - transform: translateX(23px); - content:"🌚"; -} - -/* Override Slider Styles (for webkit browsers like Safari and Chrome) - * 好希望这份提案能早日实现 https://github.com/w3c/csswg-drafts/issues/4410 - * 进度滑块在各个平台还是太不统一了 - */ -input[type="range"] { - -webkit-appearance: none; - height: 4px; - background: var(--input-background-fill); - border-radius: 5px; - background-image: linear-gradient(var(--primary-500),var(--primary-500)); - background-size: 0% 100%; - background-repeat: no-repeat; -} -input[type="range"]::-webkit-slider-thumb { - -webkit-appearance: none; - height: 20px; - width: 20px; - border-radius: 50%; - border: solid 0.5px #ddd; - background-color: white; - cursor: ew-resize; - box-shadow: var(--input-shadow); - transition: background-color .1s ease; -} -input[type="range"]::-webkit-slider-thumb:hover { - background: var(--neutral-50); -} -input[type=range]::-webkit-slider-runnable-track { - -webkit-appearance: none; - box-shadow: none; - border: none; - background: transparent; -} - -#submit_btn, #cancel_btn { - height: 42px !important; -} -#submit_btn::before { - content: url("data:image/svg+xml, %3Csvg width='21px' height='20px' viewBox='0 0 21 20' version='1.1' xmlns='http://www.w3.org/2000/svg' xmlns:xlink='http://www.w3.org/1999/xlink'%3E %3Cg id='page' stroke='none' stroke-width='1' fill='none' fill-rule='evenodd'%3E %3Cg id='send' transform='translate(0.435849, 0.088463)' fill='%23FFFFFF' fill-rule='nonzero'%3E %3Cpath d='M0.579148261,0.0428666046 C0.301105539,-0.0961547561 -0.036517765,0.122307382 0.0032026237,0.420210298 L1.4927172,18.1553639 C1.5125774,18.4334066 1.79062012,18.5922882 2.04880264,18.4929872 L8.24518329,15.8913017 L11.6412765,19.7441794 C11.8597387,19.9825018 12.2370824,19.8832008 12.3165231,19.5852979 L13.9450591,13.4882182 L19.7839562,11.0255541 C20.0619989,10.8865327 20.0818591,10.4694687 
19.7839562,10.3105871 L0.579148261,0.0428666046 Z M11.6138902,17.0883151 L9.85385903,14.7195502 L0.718169621,0.618812241 L12.69945,12.9346347 L11.6138902,17.0883151 Z' id='shape'%3E%3C/path%3E %3C/g%3E %3C/g%3E %3C/svg%3E"); - height: 21px; -} -#cancel_btn::before { - content: url("data:image/svg+xml,%3Csvg width='21px' height='21px' viewBox='0 0 21 21' version='1.1' xmlns='http://www.w3.org/2000/svg' xmlns:xlink='http://www.w3.org/1999/xlink'%3E %3Cg id='pg' stroke='none' stroke-width='1' fill='none' fill-rule='evenodd'%3E %3Cpath d='M10.2072007,20.088463 C11.5727865,20.088463 12.8594566,19.8259823 14.067211,19.3010209 C15.2749653,18.7760595 16.3386126,18.0538087 17.2581528,17.1342685 C18.177693,16.2147282 18.8982283,15.1527965 19.4197586,13.9484733 C19.9412889,12.7441501 20.202054,11.4557644 20.202054,10.0833163 C20.202054,8.71773046 19.9395733,7.43106036 19.4146119,6.22330603 C18.8896505,5.01555169 18.1673997,3.95018885 17.2478595,3.0272175 C16.3283192,2.10424615 15.2646719,1.3837109 14.0569176,0.865611739 C12.8491633,0.34751258 11.5624932,0.088463 10.1969073,0.088463 C8.83132146,0.088463 7.54636692,0.34751258 6.34204371,0.865611739 C5.1377205,1.3837109 4.07407321,2.10424615 3.15110186,3.0272175 C2.22813051,3.95018885 1.5058797,5.01555169 0.984349419,6.22330603 C0.46281914,7.43106036 0.202054,8.71773046 0.202054,10.0833163 C0.202054,11.4557644 0.4645347,12.7441501 0.9894961,13.9484733 C1.5144575,15.1527965 2.23670831,16.2147282 3.15624854,17.1342685 C4.07578877,18.0538087 5.1377205,18.7760595 6.34204371,19.3010209 C7.54636692,19.8259823 8.83475258,20.088463 10.2072007,20.088463 Z M10.2072007,18.2562448 C9.07493099,18.2562448 8.01471483,18.0452309 7.0265522,17.6232031 C6.03838956,17.2011753 5.17031614,16.6161693 4.42233192,15.8681851 C3.6743477,15.1202009 3.09105726,14.2521274 2.67246059,13.2639648 C2.25386392,12.2758022 2.04456558,11.215586 2.04456558,10.0833163 C2.04456558,8.95104663 2.25386392,7.89083047 2.67246059,6.90266784 C3.09105726,5.9145052 3.6743477,5.04643178 4.42233192,4.29844756 C5.17031614,3.55046334 6.036674,2.9671729 7.02140552,2.54857623 C8.00613703,2.12997956 9.06463763,1.92068122 10.1969073,1.92068122 C11.329177,1.92068122 12.3911087,2.12997956 13.3827025,2.54857623 C14.3742962,2.9671729 15.2440852,3.55046334 15.9920694,4.29844756 C16.7400537,5.04643178 17.3233441,5.9145052 17.7419408,6.90266784 C18.1605374,7.89083047 18.3698358,8.95104663 18.3698358,10.0833163 C18.3698358,11.215586 18.1605374,12.2758022 17.7419408,13.2639648 C17.3233441,14.2521274 16.7400537,15.1202009 15.9920694,15.8681851 C15.2440852,16.6161693 14.3760118,17.2011753 13.3878492,17.6232031 C12.3996865,18.0452309 11.3394704,18.2562448 10.2072007,18.2562448 Z M7.65444721,13.6242324 L12.7496608,13.6242324 C13.0584616,13.6242324 13.3003556,13.5384544 13.4753427,13.3668984 C13.6503299,13.1953424 13.7378234,12.9585951 13.7378234,12.6566565 L13.7378234,7.49968276 C13.7378234,7.19774418 13.6503299,6.96099688 13.4753427,6.78944087 C13.3003556,6.61788486 13.0584616,6.53210685 12.7496608,6.53210685 L7.65444721,6.53210685 C7.33878414,6.53210685 7.09345904,6.61788486 6.91847191,6.78944087 C6.74348478,6.96099688 6.65599121,7.19774418 6.65599121,7.49968276 L6.65599121,12.6566565 C6.65599121,12.9585951 6.74348478,13.1953424 6.91847191,13.3668984 C7.09345904,13.5384544 7.33878414,13.6242324 7.65444721,13.6242324 Z' id='shape' fill='%23FF3B30' fill-rule='nonzero'%3E%3C/path%3E %3C/g%3E %3C/svg%3E"); - height: 21px; -} -/* list */ -ol:not(.options), ul:not(.options) { - padding-inline-start: 2em !important; -} - -/* 
亮色(默认) */ -#chuanhu_chatbot { - background-color: var(--chatbot-background-color-light) !important; - color: var(--chatbot-color-light) !important; -} -[data-testid = "bot"] { - background-color: var(--message-bot-background-color-light) !important; -} -[data-testid = "user"] { - background-color: var(--message-user-background-color-light) !important; -} -/* 暗色 */ -.dark #chuanhu_chatbot { - background-color: var(--chatbot-background-color-dark) !important; - color: var(--chatbot-color-dark) !important; -} -.dark [data-testid = "bot"] { - background-color: var(--message-bot-background-color-dark) !important; -} -.dark [data-testid = "user"] { - background-color: var(--message-user-background-color-dark) !important; -} - -/* 屏幕宽度大于等于500px的设备 */ -/* update on 2023.4.8: 高度的细致调整已写入JavaScript */ -@media screen and (min-width: 500px) { - #chuanhu_chatbot { - height: calc(100vh - 200px); - } - #chuanhu_chatbot .wrap { - max-height: calc(100vh - 200px - var(--line-sm)*1rem - 2*var(--block-label-margin) ); - } -} -/* 屏幕宽度小于500px的设备 */ -@media screen and (max-width: 499px) { - #chuanhu_chatbot { - height: calc(100vh - 140px); - } - #chuanhu_chatbot .wrap { - max-height: calc(100vh - 140px - var(--line-sm)*1rem - 2*var(--block-label-margin) ); - } - [data-testid = "bot"] { - max-width: 95% !important; - } - #app_title h1{ - letter-spacing: -1px; font-size: 22px; - } -} -#chuanhu_chatbot .wrap { - overflow-x: hidden; -} -/* 对话气泡 */ -.message { - border-radius: var(--radius-xl) !important; - border: none; - padding: var(--spacing-xl) !important; - font-size: var(--text-md) !important; - line-height: var(--line-md) !important; - min-height: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl)); - min-width: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl)); -} -[data-testid = "bot"] { - max-width: 85%; - border-bottom-left-radius: 0 !important; -} -[data-testid = "user"] { - max-width: 85%; - width: auto !important; - border-bottom-right-radius: 0 !important; -} - -.message.user p { - white-space: pre-wrap; -} -.message .user-message { - display: block; - padding: 0 !important; - white-space: pre-wrap; -} - -.message .md-message p { - margin-top: 0.6em !important; - margin-bottom: 0.6em !important; -} -.message .md-message p:first-child { margin-top: 0 !important; } -.message .md-message p:last-of-type { margin-bottom: 0 !important; } - -.message .md-message { - display: block; - padding: 0 !important; -} -.message .raw-message p { - margin:0 !important; -} -.message .raw-message { - display: block; - padding: 0 !important; - white-space: pre-wrap; -} -.raw-message.hideM, .md-message.hideM { - display: none; -} - -/* custom buttons */ -.chuanhu-btn { - border-radius: 5px; - /* background-color: #E6E6E6 !important; */ - color: rgba(120, 120, 120, 0.64) !important; - padding: 4px !important; - position: absolute; - right: -22px; - cursor: pointer !important; - transition: color .2s ease, background-color .2s ease; -} -.chuanhu-btn:hover { - background-color: rgba(167, 167, 167, 0.25) !important; - color: unset !important; -} -.chuanhu-btn:active { - background-color: rgba(167, 167, 167, 0.5) !important; -} -.chuanhu-btn:focus { - outline: none; -} -.copy-bot-btn { - /* top: 18px; */ - bottom: 0; -} -.toggle-md-btn { - /* top: 0; */ - bottom: 20px; -} -.copy-code-btn { - position: relative; - float: right; - font-size: 1em; - cursor: pointer; -} - -.message-wrap>div img{ - border-radius: 10px !important; -} - -/* history message */ -.wrap>.history-message { - padding: 10px !important; -} 
-.history-message { - /* padding: 0 !important; */ - opacity: 80%; - display: flex; - flex-direction: column; -} -.history-message>.history-message { - padding: 0 !important; -} -.history-message>.message-wrap { - padding: 0 !important; - margin-bottom: 16px; -} -.history-message>.message { - margin-bottom: 16px; -} -.wrap>.history-message::after { - content: ""; - display: block; - height: 2px; - background-color: var(--body-text-color-subdued); - margin-bottom: 10px; - margin-top: -10px; - clear: both; -} -.wrap>.history-message>:last-child::after { - content: "仅供查看"; - display: block; - text-align: center; - color: var(--body-text-color-subdued); - font-size: 0.8em; -} - -/* 表格 */ -table { - margin: 1em 0; - border-collapse: collapse; - empty-cells: show; -} -td,th { - border: 1.2px solid var(--border-color-primary) !important; - padding: 0.2em; -} -thead { - background-color: rgba(175,184,193,0.2); -} -thead th { - padding: .5em .2em; -} -/* 行内代码 */ -.message :not(pre) code { - display: inline; - white-space: break-spaces; - font-family: var(--font-mono); - border-radius: 6px; - margin: 0 2px 0 2px; - padding: .2em .4em .1em .4em; - background-color: rgba(175,184,193,0.2); -} -/* 代码块 */ -.message pre, -.message pre[class*=language-] { - color: #fff; - overflow-x: auto; - overflow-y: hidden; - margin: .8em 1em 1em 0em !important; - padding: var(--spacing-xl) 1.2em !important; - border-radius: var(--radius-lg) !important; -} -.message pre code, -.message pre code[class*=language-] { - color: #fff; - padding: 0; - margin: 0; - background-color: unset; - text-shadow: none; - font-family: var(--font-mono); -} -/* 覆盖 gradio 丑陋的复制按钮样式 */ -pre button[title="copy"] { - border-radius: 5px; - transition: background-color .2s ease; -} -pre button[title="copy"]:hover { - background-color: #333232; -} -pre button .check { - color: #fff !important; - background: var(--neutral-950) !important; -} - -/* 覆盖prism.css */ -.language-css .token.string, -.style .token.string, -.token.entity, -.token.operator, -.token.url { - background: none !important; -} diff --git a/spaces/simpie28/VITS-Umamusume-voice-synthesizer/modules.py b/spaces/simpie28/VITS-Umamusume-voice-synthesizer/modules.py deleted file mode 100644 index f5af1fd9a20dc03707889f360a39bb4b784a6df3..0000000000000000000000000000000000000000 --- a/spaces/simpie28/VITS-Umamusume-voice-synthesizer/modules.py +++ /dev/null @@ -1,387 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d -from torch.nn.utils import weight_norm, remove_weight_norm - -import commons -from commons import init_weights, get_padding -from transforms import piecewise_rational_quadratic_transform - - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be 
larger than 0." - - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential( - nn.ReLU(), - nn.Dropout(p_dropout)) - for _ in range(n_layers-1): - self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size ** i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size, - groups=channels, dilation=dilation, padding=padding - )) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0): - super(WN, self).__init__() - assert(kernel_size % 2 == 1) - self.hidden_channels =hidden_channels - self.kernel_size = kernel_size, - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight') - - for i in range(n_layers): - dilation = dilation_rate ** i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size, - dilation=dilation, padding=padding) - in_layer = torch.nn.utils.weight_norm(in_layer, name='weight') - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight') - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = 
self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply( - x_in, - g_l, - n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:,:self.hidden_channels,:] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:,self.hidden_channels:,:] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = 
torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels,1)) - self.logs = nn.Parameter(torch.zeros(channels,1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1,2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels]*2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1,2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - -class ConvFlow(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.) - self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] 
- - unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_derivatives = h[..., 2 * self.num_bins:] - - x1, logabsdet = piecewise_rational_quadratic_transform(x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails='linear', - tail_bound=self.tail_bound - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1,2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Countries Game Travel Around the World from Your Home.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Countries Game Travel Around the World from Your Home.md deleted file mode 100644 index 61a01d73034f8ec2577017242bf673e5cfa40ea2..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Countries Game Travel Around the World from Your Home.md +++ /dev/null @@ -1,96 +0,0 @@ - -

        Countries Game: A Fun Way to Learn Geography

        -

        Do you love geography? Do you want to learn more about different countries and their locations? If so, you might enjoy playing countries game. Countries game is a simple but fun activity that can help you improve your geography skills while having a good time. In this article, we will explain what is a countries game, how to play it, what are its benefits, and what are some tips and tricks for playing it better.

        -

        How to Play Countries Game

        -

        Countries game is a game where you have to name as many countries as you can that belong to a certain category. You can play it alone or with others, online or offline, with or without a timer. Here are the basic steps for playing countries game:

        -

        countries game


        Download ❤❤❤ https://ssurll.com/2uO0pw



        -

        Choose a Category

        -

        The first step is to choose a category for the game. This can be anything related to geography, such as continents, regions, or alphabetical order. For example, you can choose to name countries in Africa, in Asia, or that start with the letter A.

        -

        Take Turns Naming Countries

        -

        The next step is to take turns naming countries that fit the chosen category. You can start with any country you want, as long as it belongs to the category. For example, if you chose Africa, you can start with Algeria, Botswana, or Chad.

        -

        The catch is that you cannot repeat any country that has already been named by yourself or others. If you do, you lose a point or get eliminated from the game. You also have to name the country within a certain time limit, usually 10 seconds or less.

        -

        Keep Score and Declare a Winner

        -

        The final step is to keep score and declare a winner based on who names the most countries or who lasts the longest without making a mistake. You can use a pen and paper, a spreadsheet, or an online tool to keep track of the score. The winner is the one who has the highest score or who remains in the game when everyone else is out.
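To make the flow concrete, here is a minimal Python sketch of the game loop just described (take turns, reject repeats, apply a time limit, keep score, declare a winner). The category, the small answer set, and the player names are illustrative assumptions only, and the time limit is checked after each answer rather than cutting the player off mid-typing.

```python
import time

# Minimal sketch of the countries game described above.
# The category and its answer set are a small illustrative sample, not a full dataset.
CATEGORY = "countries in Africa"
VALID_ANSWERS = {"algeria", "botswana", "chad", "egypt", "ghana", "kenya", "morocco", "nigeria"}
TIME_LIMIT_SECONDS = 10

def play(players):
    scores = {player: 0 for player in players}
    named = set()               # countries already used; repeats eliminate a player
    active = list(players)
    while len(active) > 1:
        for player in list(active):
            started = time.time()
            answer = input(f"{player}, name one of the {CATEGORY}: ").strip().lower()
            too_slow = (time.time() - started) > TIME_LIMIT_SECONDS
            if too_slow or answer not in VALID_ANSWERS or answer in named:
                active.remove(player)   # wrong answer, repeat, or too slow
                print(f"{player} is out.")
            else:
                named.add(answer)
                scores[player] += 1
    winner = active[0] if active else max(scores, key=scores.get)
    print(f"Winner: {winner}. Final scores: {scores}")

if __name__ == "__main__":
    play(["Alice", "Bob"])
```

The same loop works for any category: swap in a different answer set, change the time limit, or track points instead of elimination.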

        -

        Benefits of Playing Countries Game

        -

        Playing countries game is not only fun but also beneficial for learning geography. Here are some of the main benefits of playing countries game:

        -

        Improve Your Memory and Recall

        -

        Playing countries game can help you improve your memory and recall of countries and their locations. By naming countries repeatedly, you can strengthen your neural connections and enhance your long-term memory. You can also improve your short-term memory and working memory by recalling countries quickly and accurately.

        -

        Expand Your Knowledge and Curiosity

        -

        Playing countries game can help you expand your knowledge and curiosity about different countries and their cultures, histories, and facts. By naming countries, you can learn more about their geography, such as their capitals, borders, flags, and landmarks. You can also learn more about their people, languages, religions, cuisines, and traditions. Playing countries game can spark your interest and curiosity to explore and discover more about the world.

        -

        Countries of the World Quiz
        -Countries of Europe Map Game
        -Seterra Geography Games Online
        -Countries and Capitals Matching Game
        -World Geography Trivia Game
        -Countries of Africa Puzzle Game
        -Flags of the World Quiz Game
        -Countries of Asia Spelling Game
        -World Map Jigsaw Game
        -Countries and Currencies Quiz Game
        -Countries of South America Memory Game
        -World Capitals Hangman Game
        -Countries of Oceania Bingo Game
        -World Landmarks Quiz Game
        -Countries and Languages Crossword Game
        -Countries of North America Word Search Game
        -World Rivers and Lakes Quiz Game
        -Countries and Continents Sorting Game
        -World Population Quiz Game
        -Countries and Nationalities Guessing Game
        -World Religions Quiz Game
        -Countries and Regions Anagram Game
        -World Time Zones Map Game
        -Countries and Flags Matching Game
        -World Climate Zones Quiz Game
        -Countries and Animals Trivia Game
        -World Mountains and Volcanoes Quiz Game
        -Countries and Emojis Puzzle Game
        -World History Quiz Game
        -Countries and Sports Quiz Game
        -World Currency Converter Game
        -Countries and Food Quiz Game
        -World Music Quiz Game
        -Countries and Famous People Quiz Game
        -World Art and Culture Quiz Game
        -Countries and Holidays Trivia Game
        -World Literature Quiz Game
        -Countries and Landmarks Matching Game
        -World Science and Technology Quiz Game
        -Countries and Coats of Arms Quiz Game
        -World Politics Quiz Game
        -Countries and Islands Spelling Game
        -World Mythology Quiz Game
        -Countries and Borders Map Game
        -World Languages Quiz Game
        -Countries and Flowers Trivia Game
        -World Movies Quiz Game
        -Countries and Colors Matching Game
        -World Fashion Quiz Game

        -

        Have Fun and Challenge Yourself

        -

        Playing countries game can help you have fun and challenge yourself by competing with others or setting personal goals. You can play with your friends, family, or classmates and see who knows more countries or who can name them faster. You can also play by yourself and try to beat your own record or reach a certain target. Playing countries game can make learning geography fun and rewarding.

        -

        Tips and Tricks for Playing Countries Game

        -

        If you want to play countries game better and enjoy it more, here are some tips and tricks that you can use:

        -

        Use Mnemonics and Associations

        -

One way to remember and recall countries more easily is to use mnemonics and associations. Mnemonics are memory devices that help you encode and retrieve information, such as acronyms, rhymes, or songs. Associations are connections that help you link information, such as images, colors, or emotions. For example, you can use the mnemonic ANGOLA, where each letter stands for a country (Algeria, Niger, Gabon, Oman, Libya, and Angola itself). You can also associate countries with images, such as a kangaroo for Australia, a maple leaf for Canada, or a pyramid for Egypt.

        -

        Use Maps and Resources

        -

        Another way to learn more about countries and their locations is to use maps and resources. Maps are visual representations that show the shapes, sizes, positions, and features of countries and regions. Resources are sources of information that provide facts, data, and trivia about countries and geography. For example, you can use a world map or a globe to see where countries are located and how they relate to each other. You can also use online tools, such as Google Maps, Wikipedia, or CIA World Factbook, to find out more about countries and their details.

        -

        Vary the Difficulty and Format

        -

        A third way to make countries game more interesting and challenging is to vary the difficulty and format of the game. You can adjust the difficulty by choosing different categories, time limits, or scoring systems. You can also change the format by using different modes, such as reverse mode (where you have to name the country that matches a given capital, flag, or landmark), list mode (where you have to name all the countries in a given continent or region), or quiz mode (where you have to answer multiple-choice or fill-in-the-blank questions about countries and geography).
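As one concrete illustration of the reverse mode mentioned above, here is a small Python sketch that shows a capital and asks for the matching country. The capital-country pairs and the default number of rounds are illustrative assumptions, not a complete dataset.

```python
import random

# Reverse mode sketch: show a capital, ask for the country.
# The pairs below are a tiny illustrative sample, not a full dataset.
CAPITAL_TO_COUNTRY = {
    "Ottawa": "Canada",
    "Canberra": "Australia",
    "Cairo": "Egypt",
    "Tokyo": "Japan",
    "Lima": "Peru",
}

def reverse_mode(rounds=3):
    rounds = min(rounds, len(CAPITAL_TO_COUNTRY))
    score = 0
    for capital in random.sample(list(CAPITAL_TO_COUNTRY), k=rounds):
        guess = input(f"Which country has {capital} as its capital? ").strip()
        if guess.lower() == CAPITAL_TO_COUNTRY[capital].lower():
            score += 1
            print("Correct!")
        else:
            print(f"Not quite - the answer is {CAPITAL_TO_COUNTRY[capital]}.")
    print(f"You scored {score} out of {rounds}.")

if __name__ == "__main__":
    reverse_mode()
```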

        -

        Conclusion

        -

        Countries game is a fun way to learn geography and improve your memory, knowledge, and curiosity. It is easy to play and can be adapted to different levels and preferences. All you need is a category, a timer, and a scorekeeper. You can play it alone or with others, online or offline, with or without a timer. The benefits of playing countries game are many, such as enhancing your memory and recall, expanding your knowledge and curiosity, and having fun and challenging yourself. If you want to play countries game better and enjoy it more, you can use some tips and tricks, such as using mnemonics and associations, using maps and resources, and varying the difficulty and format. We hope you found this article helpful and informative. Why not try playing countries game today and see how many countries you can name?

        -

        If you have any feedback, questions, or experiences with playing countries game, please feel free to share them with us in the comments section below. We would love to hear from you!

        -

        FAQs

        -
          -
• Q: How many countries are there in the world?
• A: There is no definitive answer to this question, as different sources may have different criteria for defining what constitutes a country. However, one widely accepted source is the United Nations (UN), which currently has 193 member states and 2 observer states (the Vatican City and Palestine). Another source is the International Organization for Standardization (ISO), which assigns codes to 249 countries and territories.
• Q: What is the largest country in the world by area?
• A: The largest country in the world by area is Russia, which covers about 17 million square kilometers (6.6 million square miles), or about 11% of the world's land area.
• Q: What is the smallest country in the world by area?
• A: The smallest country in the world by area is Vatican City, which covers about 0.44 square kilometers (0.17 square miles), or about 0.0001% of the world's land area.
• Q: What is the most populous country in the world?
• A: The most populous country in the world is China, which has about 1.4 billion people, or about 18% of the world's population.
• Q: What is the least populous country in the world?
• A: The least populous country in the world is Vatican City, which has about 800 people, or about 0.00001% of the world's population.

        197e85843d
        -
        -
        \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Off Road 4x4 Driving Simulator with Unlimited Money Mod APK.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Off Road 4x4 Driving Simulator with Unlimited Money Mod APK.md deleted file mode 100644 index f2866ac53bf97d1cd2e069aaef60b221e3b28407..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Off Road 4x4 Driving Simulator with Unlimited Money Mod APK.md +++ /dev/null @@ -1,97 +0,0 @@ -
        -

        Off Road 4x4 Driving Simulator Unlimited Money Mod APK: A Review

        -

        If you are a fan of off-road racing games, you might have heard of Off Road 4x4 Driving Simulator, a popular and realistic car racing simulator that lets you drive various 4x4 trucks and vehicles on rugged terrains. But did you know that there is a way to get unlimited money in the game without spending a dime? In this article, we will review Off Road 4x4 Driving Simulator unlimited money mod apk, a modified version of the game that gives you access to unlimited money and other features. We will also tell you how to download and install the mod apk, as well as the pros and cons of using it. Finally, we will give you some reasons why you should play Off Road 4x4 Driving Simulator, whether you use the mod apk or not.

        -

        off road 4x4 driving simulator unlimited money mod apk


        Download Zip ✔✔✔ https://ssurll.com/2uNTLa



        -

        What is Off Road 4x4 Driving Simulator?

        -

        Off Road 4x4 Driving Simulator is an addictive ultimate mud truck driving game and realistic car racing simulator developed by Offroad Game Official. The game is available for Android devices and has over 1 million downloads on Google Play. In this game, you can:

        -

        A realistic and addictive off-road racing game

        -

        Drive various 4x4 trucks and vehicles on different off-road environments, such as mud, snow, sand, rocks, and more. You can also experience realistic driving physics, car damage, weather effects, and car sounds. The game has dozens of off-road racing challenges and time trials for you to complete and test your driving skills.

        -

        Various 4x4 trucks and vehicles to choose from

        -

        The game offers a huge selection of 4x4 trucks and vehicles with different driving characteristics. You can choose from SUVs, pickups, jeeps, monster trucks, rally cars, and more. Each vehicle has its own strengths and weaknesses, so you have to choose wisely depending on the terrain and the challenge.

        -

        Endless tuning and customization options

        -

        The game also allows you to tune and customize your vehicles to your liking. You can upgrade your engine, suspension, tires, brakes, gearbox, and more. You can also change the color, paint, stickers, rims, lights, and other accessories of your vehicles. You can create your own unique style and show it off to other players.

        -

        * off road 4x4 driving simulator mod apk free download
        -* off road 4x4 driving simulator hack apk unlimited money
        -* off road 4x4 driving simulator modded apk with unlimited cash
        -* off road 4x4 driving simulator cheat apk for unlimited money
        -* off road 4x4 driving simulator apk mod money unlimited download
        -* off road 4x4 driving simulator unlimited coins mod apk
        -* off road 4x4 driving simulator mod apk latest version unlimited money
        -* off road 4x4 driving simulator hacked apk download free money
        -* off road 4x4 driving simulator mod apk android unlimited money
        -* off road 4x4 driving simulator unlimited cash mod apk download
        -* off road 4x4 driving simulator mod apk money hack unlimited
        -* off road 4x4 driving simulator free money mod apk download
        -* off road 4x4 driving simulator mod apk unlimited gold money
        -* off road 4x4 driving simulator unlimited money mod apk offline
        -* off road 4x4 driving simulator mod apk online unlimited money
        -* off road 4x4 driving simulator mod apk no root unlimited money
        -* off road 4x4 driving simulator unlimited money hack mod apk
        -* off road 4x4 driving simulator mod apk full version unlimited money
        -* off road 4x4 driving simulator premium mod apk unlimited money
        -* off road 4x4 driving simulator pro mod apk unlimited money
        -* off road 4x4 driving simulator mega mod apk unlimited money
        -* off road 4x4 driving simulator vip mod apk unlimited money
        -* off road 4x4 driving simulator mod apk all cars unlocked unlimited money
        -* off road 4x4 driving simulator mod apk everything unlocked unlimited money
        -* off road 4x4 driving simulator unlock all vehicles mod apk unlimited money
        -* off road 4x4 driving simulator mod apk unlock all maps unlimited money
        -* off road 4x4 driving simulator mod apk all levels unlocked unlimited money
        -* off road 4x4 driving simulator unlock all features mod apk unlimited money
        -* off road 4x4 driving simulator mod apk all upgrades unlocked unlimited money
        -* off road 4x4 driving simulator unlock all items mod apk unlimited money

        -

        What is the unlimited money mod apk?

        -

        The unlimited money mod apk is a modified version of Off Road 4x4 Driving Simulator that gives you unlimited money in the game. With unlimited money, you can buy any vehicle you want, upgrade it to the max level, and customize it however you want. You can also unlock all the levels and modes in the game without any restrictions. The mod apk also removes ads from the game, so you can enjoy a smoother gameplay experience.

        -

        How to download and install the mod apk

        -

        To download and install the mod apk, you need to follow these steps:

        -
          -
1. Go to this link or this link to download the mod apk file.
2. Enable unknown sources in your device settings.
3. Locate the downloaded file on your device storage and tap on it to install it.
4. Launch the game and enjoy the unlimited money mod apk.
        -

        The benefits and drawbacks of using the mod apk

        -

        The unlimited money mod apk has some benefits and drawbacks that you should be aware of before using it. Here are some of them:

| Benefits | Drawbacks |
| --- | --- |
| You can enjoy the game without any limitations or costs. | You might lose the thrill and challenge of the game. |
| You can explore all the features and content of the game. | You might encounter some bugs or errors in the game. |
| You can impress your friends and other players with your vehicles and achievements. | You might get banned or reported by the game developers or other players. |
        -

        Why should you play Off Road 4x4 Driving Simulator?

        -

        Whether you use the unlimited money mod apk or not, Off Road 4x4 Driving Simulator is a great game to play if you love off-road racing games. Here are some reasons why you should play it:

        -

        The stunning graphics and realistic physics of the game

        -

        Off Road 4x4 Driving Simulator has amazing 3D graphics and animations that make the game look realistic and immersive. You can see the details of the vehicles, the environments, the weather, and the effects. The game also has realistic physics and car dynamics that make the driving experience authentic and fun. You can feel the weight, speed, traction, and handling of your vehicles as you drive on different terrains and conditions.

        -

        The challenging and fun off-road racing modes and levels

        -

        Off Road 4x4 Driving Simulator has various off-road racing modes and levels that will keep you entertained and challenged. You can choose from free roam, career, time trial, checkpoint, elimination, drag race, and more. You can also select from different difficulty levels, from easy to extreme. Each mode and level has its own objectives, rewards, and leaderboards. You can compete with yourself or with other players to see who is the best off-road racer.

        -

        The online multiplayer and social features of the game

        -

        Off Road 4x4 Driving Simulator also has online multiplayer and social features that make the game more interactive and fun. You can join or create online rooms and race with other players from around the world. You can also chat with them, send them messages, invite them to your friends list, and challenge them to duels. You can also share your vehicles, achievements, screenshots, and videos with other players on social media platforms such as Facebook, Instagram, Twitter, and YouTube.

        -

        Conclusion

        -

        Off Road 4x4 Driving Simulator is a realistic and addictive off-road racing game that lets you drive various 4x4 trucks and vehicles on rugged terrains. You can also tune and customize your vehicles to your liking. The game has stunning graphics, realistic physics, challenging modes and levels, online multiplayer, and social features. The unlimited money mod apk is a modified version of the game that gives you unlimited money in the game. You can use it to buy any vehicle you want, upgrade it to the max level, and customize it however you want. You can also unlock all the levels and modes in the game without any restrictions. The mod apk also removes ads from the game. However, the mod apk also has some drawbacks, such as losing the thrill and challenge of the game, encountering some bugs or errors in the game, or getting banned or reported by the game developers or other players. Therefore, you should use it at your own risk and discretion.

        -

        FAQs

        -
          -
1. What is Off Road 4x4 Driving Simulator?

   Off Road 4x4 Driving Simulator is a realistic and addictive off-road racing game that lets you drive various 4x4 trucks and vehicles on rugged terrains.

2. What is the unlimited money mod apk?

   The unlimited money mod apk is a modified version of Off Road 4x4 Driving Simulator that gives you unlimited money in the game.

3. How to download and install the mod apk?

   To download and install the mod apk, you need to go to this link or this link to download the mod apk file, enable unknown sources in your device settings, locate the downloaded file on your device storage and tap on it to install it, and launch the game.

4. What are the benefits and drawbacks of using the mod apk?

   The benefits of using the mod apk are that you can enjoy the game without any limitations or costs, explore all the features and content of the game, and impress your friends and other players with your vehicles and achievements. The drawbacks of using the mod apk are that you might lose the thrill and challenge of the game, encounter some bugs or errors in the game, or get banned or reported by the game developers or other players.

5. Why should you play Off Road 4x4 Driving Simulator?

   You should play Off Road 4x4 Driving Simulator because it is a realistic and addictive off-road racing game that has stunning graphics, realistic physics, challenging modes and levels, online multiplayer, and social features. You can also tune and customize your vehicles to your liking.

          -

        197e85843d
        -
        -
        \ No newline at end of file diff --git a/spaces/simsantonioii/MusicGen-Continuation/audiocraft/__init__.py b/spaces/simsantonioii/MusicGen-Continuation/audiocraft/__init__.py deleted file mode 100644 index 2befac60faf6f406f78ff7b7da05225dbfe7b111..0000000000000000000000000000000000000000 --- a/spaces/simsantonioii/MusicGen-Continuation/audiocraft/__init__.py +++ /dev/null @@ -1,10 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -# flake8: noqa -from . import data, modules, models - -__version__ = '0.0.2a1' diff --git a/spaces/sky24h/Controllable_Multi-domain_Semantic_Artwork_Synthesis/seg2art/sstan_models/networks/__init__.py b/spaces/sky24h/Controllable_Multi-domain_Semantic_Artwork_Synthesis/seg2art/sstan_models/networks/__init__.py deleted file mode 100644 index cec8d76d6b261e5d6b5dc2ace2f884d0dfffa1a2..0000000000000000000000000000000000000000 --- a/spaces/sky24h/Controllable_Multi-domain_Semantic_Artwork_Synthesis/seg2art/sstan_models/networks/__init__.py +++ /dev/null @@ -1,63 +0,0 @@ -""" -Copyright (C) 2019 NVIDIA Corporation. All rights reserved. -Licensed under the CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode). -""" - -import torch -from sstan_models.networks.base_network import BaseNetwork -# from sstan_models.networks.loss import * -# from sstan_models.networks.discriminator import * -from sstan_models.networks.generator import * -# from sstan_models.networks.encoder import * -import model_util as util - - -def find_network_using_name(target_network_name, filename): - target_class_name = target_network_name + filename - module_name = 'sstan_models.networks.' 
+ filename - network = util.find_class_in_module(target_class_name, module_name) - - assert issubclass(network, BaseNetwork), \ - "Class %s should be a subclass of BaseNetwork" % network - - return network - - -def modify_commandline_options(parser, is_train): - opt, _ = parser.parse_known_args() - - netG_cls = find_network_using_name(opt.netG, 'generator') - parser = netG_cls.modify_commandline_options(parser, is_train) - if is_train: - netD_cls = find_network_using_name(opt.netD, 'discriminator') - parser = netD_cls.modify_commandline_options(parser, is_train) - # netE_cls = find_network_using_name('conv', 'encoder') - # parser = netE_cls.modify_commandline_options(parser, is_train) - - return parser - - -def create_network(cls, opt): - net = cls(opt) - net.print_network() - if len(opt.gpu_ids) > 0: - assert(torch.cuda.is_available()) - net.cuda() - net.init_weights(opt.init_type, opt.init_variance) - return net - - -def define_G(opt): - netG_cls = find_network_using_name(opt.netG, 'generator') - return create_network(netG_cls, opt) - - -def define_D(opt): - netD_cls = find_network_using_name(opt.netD, 'discriminator') - return create_network(netD_cls, opt) - - -def define_E(opt): - # there exists only one encoder type - netE_cls = find_network_using_name('conv', 'encoder') - return create_network(netE_cls, opt) diff --git a/spaces/stamps-labs/stamp2vec/segmentation_models/deeplabv3/__init__.py b/spaces/stamps-labs/stamp2vec/segmentation_models/deeplabv3/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/stevenkolawole/T5-multitasks-streamlit/README.md b/spaces/stevenkolawole/T5-multitasks-streamlit/README.md deleted file mode 100644 index 1a304482f707915b51a00d59e718a803bf6ef12c..0000000000000000000000000000000000000000 --- a/spaces/stevenkolawole/T5-multitasks-streamlit/README.md +++ /dev/null @@ -1,38 +0,0 @@ ---- -title: T5 Multitasks Streamlit -emoji: 🔥 -colorFrom: gray -colorTo: yellow -sdk: streamlit -sdk_version: 1.0.0 -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. 
diff --git a/spaces/stistko/CzechCapitalization/app.py b/spaces/stistko/CzechCapitalization/app.py deleted file mode 100644 index 773629449c263d1511d36eb60f97d78718a54a98..0000000000000000000000000000000000000000 --- a/spaces/stistko/CzechCapitalization/app.py +++ /dev/null @@ -1,31 +0,0 @@ -import streamlit as st -from huggingface_hub import Repository - -repo = Repository( - local_dir="scripts", - repo_type="model", - clone_from="stistko/CzechCapitalizationProtected", - token=True -) -repo.git_pull() - -from scripts.scripts.model import CapitalizationModel - -model = CapitalizationModel() - -# Inicializace - -st.write(""" -# Czech Capitalization Model (CCM) -This application uses a transformer model 'Electra-small' with subsequent fine-tuning and a contextual window of 256 tokens. -""") - -# Text input field -input_text = st.text_input("Enter text here:") - -# Button for submission -submit_button = st.button("Submit") - -# Action after button press -if submit_button: - st.write(f"Output: {model.run(input_text)}") \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Business And Management Paul Hoang 2nd Edition Pdf 240.md b/spaces/stomexserde/gpt4-ui/Examples/Business And Management Paul Hoang 2nd Edition Pdf 240.md deleted file mode 100644 index 2520e0e03e9ab2692ee174c7a83f11b561ce7422..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Business And Management Paul Hoang 2nd Edition Pdf 240.md +++ /dev/null @@ -1,14 +0,0 @@ - -

        Review of Business and Management by Paul Hoang (2nd edition)

        -

        Business and Management by Paul Hoang is a comprehensive textbook for the International Baccalaureate (IB) Diploma Programme courses in Business and Management (SL and HL). The book covers all the topics and learning outcomes of the syllabus, with clear explanations, examples, diagrams, case studies, and practice questions. The book also provides guidance on the internal assessment, extended essay, and exam preparation.

        -

        The second edition of the book was published in 2011 by IBID Press, an Australian publisher that specializes in IB resources. The book has 750 pages and is divided into six units: Business Organization and Environment, Human Resource Management, Finance and Accounts, Marketing, Operations Management, and Business Strategy. Each unit has several chapters that cover the relevant concepts and theories, as well as links to real-world businesses and issues. The book also has appendices that include a glossary, formulas, command terms, and assessment criteria.

        -

        business and management paul hoang 2nd edition pdf 240


        DOWNLOAD - https://urlgoal.com/2uIayc



        -

        The book is suitable for both SL and HL students, as it clearly indicates which topics are core or extension. The book also has a variety of activities and exercises that help students apply their knowledge and develop their skills. The book also offers online access to additional resources, such as PowerPoint presentations, worksheets, quizzes, and exam-style questions.

        -

        Business and Management by Paul Hoang is a valuable resource for IB Business and Management students and teachers. It is written in an accessible and engaging style, with a focus on understanding and application. It is also updated with current examples and case studies that reflect the global and dynamic nature of business. The book is highly recommended for anyone who wants to learn more about business and management in the IB context.

        - -

        One of the strengths of the book is its use of real-life examples and case studies that illustrate the concepts and theories in a relevant and engaging way. The book features a wide range of businesses from different sectors, countries, and cultures, such as Apple, Starbucks, IKEA, Coca-Cola, Nike, McDonald's, Toyota, Google, and many more. The book also covers contemporary issues and challenges that businesses face in the globalized and digitalized world, such as ethics, sustainability, innovation, diversity, social media, e-commerce, and corporate social responsibility.

        -

        Another strength of the book is its alignment with the IB assessment objectives and criteria. The book provides clear explanations of the command terms and assessment objectives that are used in the internal assessment, extended essay, and exam questions. The book also offers tips and advice on how to approach these tasks and how to improve one's performance. The book also has a wealth of practice questions that test one's knowledge and understanding, as well as one's application, analysis, evaluation, and synthesis skills. The book also provides sample answers and mark schemes for some of these questions.

        -

        Overall, Business and Management by Paul Hoang is an excellent textbook for IB Business and Management students and teachers. It is comprehensive, up-to-date, relevant, and aligned with the IB requirements. It is also written in a clear and concise style that makes it easy to follow and understand. The book is a must-have for anyone who wants to succeed in the IB Business and Management course.

        -

        81aa517590
        -
        -
        \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/EDIROL Orchestral DXi VSTi 1.03.rar 1 - [Extra Quality].md b/spaces/stomexserde/gpt4-ui/Examples/EDIROL Orchestral DXi VSTi 1.03.rar 1 - [Extra Quality].md deleted file mode 100644 index 99d1f85c8f57ec187b7e77dcc1852312e3782079..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/EDIROL Orchestral DXi VSTi 1.03.rar 1 - [Extra Quality].md +++ /dev/null @@ -1,47 +0,0 @@ - -

        EDIROL Orchestral DXi VSTi 1.03.rar 1 - A Powerful and Versatile Virtual Instrument

        -

        If you are looking for a realistic and expressive orchestral sound library, you might want to check out EDIROL Orchestral DXi VSTi 1.03.rar 1. This is a virtual instrument that features high-quality samples of various orchestral instruments, such as strings, brass, woodwinds, percussion, and more. You can use it as a standalone application or as a plugin in your favorite DAW.

        -

        EDIROL Orchestral DXi VSTi 1.03.rar 1 offers a lot of flexibility and control over the sound and performance of each instrument. You can adjust the volume, pan, reverb, attack, release, and other parameters of each section. You can also choose from different articulations, such as legato, staccato, pizzicato, tremolo, and more. You can even create your own custom ensembles by mixing and matching different instruments.

        -

        EDIROL Orchestral DXi VSTi 1.03.rar 1 -


        Download Ziphttps://urlgoal.com/2uI7Db



        -

        One of the best features of EDIROL Orchestral DXi VSTi 1.03.rar 1 is the ability to play in real time using a MIDI keyboard or controller. You can use the modulation wheel to switch between different articulations on the fly, or use the pitch bend wheel to create realistic glissandos and portamentos. You can also use the key switch function to change the instrument or section instantly.

        -

        EDIROL Orchestral DXi VSTi 1.03.rar 1 is compatible with Windows XP and Vista, and requires a minimum of 256 MB of RAM and 2 GB of hard disk space. You can download it from various online sources, but make sure you scan it for viruses before installing it. EDIROL Orchestral DXi VSTi 1.03.rar 1 is a powerful and versatile virtual instrument that can enhance your music production and composition.

        - -

        How to use EDIROL Orchestral DXi VSTi 1.03.rar 1 in your DAW

        -

        If you want to use EDIROL Orchestral DXi VSTi 1.03.rar 1 as a plugin in your DAW, you need to follow these steps:

        -
1. Extract the EDIROL Orchestral DXi VSTi 1.03.rar 1 file to a folder on your computer.
2. Copy the EDIROL Orchestral.dll file to your VST plugins folder.
3. Launch your DAW and scan for new plugins.
4. Insert EDIROL Orchestral as an instrument track or a MIDI track.
5. Select the instrument or section you want to use from the interface.
6. Adjust the parameters and articulations as you wish.
7. Play or record your MIDI notes using your keyboard or controller.

        You can also use EDIROL Orchestral DXi VSTi 1.03.rar 1 as a standalone application by running the EDIROL Orchestral.exe file. You can then use your MIDI keyboard or controller to play the sounds, or load and save MIDI files. You can also export your performance as a WAV file.

        - -

        Pros and cons of EDIROL Orchestral DXi VSTi 1.03.rar 1

        -

        EDIROL Orchestral DXi VSTi 1.03.rar 1 is a great virtual instrument for creating realistic and expressive orchestral sounds. However, it also has some drawbacks that you should be aware of. Here are some of the pros and cons of EDIROL Orchestral DXi VSTi 1.03.rar 1:

        -

• Pros:
  • It has a large and diverse sound library of various orchestral instruments and sections.
  • It has a lot of control and flexibility over the sound and performance of each instrument.
  • It has a simple and intuitive interface that is easy to use.
  • It supports real-time playing and switching between different articulations and instruments.
  • It can be used as a standalone application or as a plugin in your DAW.
• Cons:
  • It is an old and discontinued product that may not work well with newer operating systems and DAWs.
  • It requires a lot of RAM and hard disk space to run smoothly.
  • It may not have the most realistic or detailed sound quality compared to newer and more advanced virtual instruments.
  • It may not have the most comprehensive or updated features compared to newer and more advanced virtual instruments.
  • It may not be easy to find or download from reliable sources online.

        In conclusion, EDIROL Orchestral DXi VSTi 1.03.rar 1 is a powerful and versatile virtual instrument that can enhance your music production and composition. However, it also has some limitations and drawbacks that you should consider before using it. You may want to compare it with other alternatives or newer versions of orchestral virtual instruments before making your final decision.

        -
        -
        \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Fleetwood Mac Torrent !!EXCLUSIVE!!.md b/spaces/stomexserde/gpt4-ui/Examples/Fleetwood Mac Torrent !!EXCLUSIVE!!.md deleted file mode 100644 index 9dfd3c22c66b3e89b3fc130223dc44b5e1d14711..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Fleetwood Mac Torrent !!EXCLUSIVE!!.md +++ /dev/null @@ -1,36 +0,0 @@ - -

        How to Download Fleetwood Mac Discography with Torrents

        -

        Fleetwood Mac is one of the most influential and commercially successful rock bands of all time. Their music spans across genres such as blues, pop, soft rock, and more. If you are a fan of Fleetwood Mac and want to download their discography with torrents, this article will show you how.

        -

        Fleetwood Mac Torrent


        Download 🗸 https://urlgoal.com/2uI998



        -

        Torrents are files that contain information about other files that are shared by peers on a network. You can use a torrent client software to download the files you want from other users who have them. Torrents are a fast and convenient way to get large files such as music albums, movies, games, etc.
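To make the idea concrete, here is a minimal sketch of what a torrent client does behind the scenes, written with the python-libtorrent bindings. It is only an illustration: the library must be installed separately, and the file name album.torrent and the ./downloads folder are placeholder assumptions; a desktop client such as qBittorrent performs the same steps for you through a graphical interface.

```python
# Minimal sketch of a torrent download using python-libtorrent (assumed installed).
# "album.torrent" and "./downloads" are placeholder names for illustration only.
import time
import libtorrent as lt

ses = lt.session({"listen_interfaces": "0.0.0.0:6881"})  # start a peer session

info = lt.torrent_info("album.torrent")                  # parse the .torrent metadata
handle = ses.add_torrent({"ti": info, "save_path": "./downloads"})

# Poll progress until the download completes and the client begins seeding.
while not handle.status().is_seeding:
    status = handle.status()
    print(f"{status.progress * 100:.1f}% complete, {status.num_peers} peers")
    time.sleep(5)

print("Download finished:", info.name())
```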

        -

        However, torrents also have some risks and drawbacks. You need to be careful about the legality and safety of the torrents you download. Some torrents may contain copyrighted or illegal content that can get you in trouble with the law or your internet service provider. Some torrents may also contain viruses or malware that can harm your computer or device. Therefore, you should always use a reputable torrent site and a reliable antivirus software when downloading torrents.

        -

        One of the best torrent sites for music is RuTracker.org. It is a Russian site that has a huge collection of music torrents in various formats and quality. You can find almost any album or artist you want on RuTracker.org, including Fleetwood Mac. To access RuTracker.org, you may need to use a VPN service or a proxy server, as the site may be blocked in some countries.

        -

        To download Fleetwood Mac discography with torrents from RuTracker.org, follow these steps:

        -
1. Go to https://rutracker.org and create an account or log in if you already have one.
2. In the search box, type "Fleetwood Mac" and click on the magnifying glass icon.
3. You will see a list of results with different Fleetwood Mac albums and collections. Choose the one you want and click on its title.
4. You will see a page with more details about the torrent, such as its size, number of files, seeders, leechers, etc. You can also read the comments from other users who have downloaded it.
5. To download the torrent file, click on the green button that says "Скачать .torrent" (Download .torrent).
6. Open the torrent file with your torrent client software and start downloading the files.
7. Enjoy listening to the Fleetwood Mac discography!

        Note: Some of the torrents may have password-protected archives. You can find the password in the comments section or in the description of the torrent.

        -

        -

        Another option to download Fleetwood Mac discography with torrents is to use Archive.org. It is a non-profit website that preserves digital content such as books, music, videos, etc. You can find some Fleetwood Mac albums on Archive.org that are free to download and stream.

        -

        To download Fleetwood Mac discography with torrents from Archive.org, follow these steps:

        -
1. Go to https://archive.org and create an account or log in if you already have one.
2. In the search box, type "Fleetwood Mac" and click on the magnifying glass icon.
3. You will see a list of results with different Fleetwood Mac albums and collections. Choose the one you want and click on its title.
4. You will see a page with more details about the album, such as its tracks, cover art, reviews, etc. You can also listen to the album online by clicking on the play button.
5. To download the album as a torrent file, click on the "TORRENT" link under "DOWNLOAD OPTIONS".
6. Open the torrent file with your torrent client software and start downloading the files.
7. Enjoy listening to the Fleetwood Mac discography!

        Note: Some of the albums may have low quality or incomplete tracks. You can check the reviews and comments from other users before downloading them.

        - -

        Conclusion

        -

Fleetwood Mac is a legendary band that has left a lasting mark on rock music with a discography spanning blues, pop, and soft rock. Using torrent sites such as RuTracker.org and Archive.org, you can download their albums and listen to them offline, but remember to check the legality and safety of each torrent and to use a reliable torrent client and antivirus software.

        -
        -
        \ No newline at end of file diff --git a/spaces/sub314xxl/MusicGen-Continuation/app.py b/spaces/sub314xxl/MusicGen-Continuation/app.py deleted file mode 100644 index 4b3d5baef41dfa9c508e9b8a8fb35a9ed494f57a..0000000000000000000000000000000000000000 --- a/spaces/sub314xxl/MusicGen-Continuation/app.py +++ /dev/null @@ -1,392 +0,0 @@ -""" -Copyright (c) Meta Platforms, Inc. and affiliates. -All rights reserved. - -This source code is licensed under the license found in the -LICENSE file in the root directory of this source tree. -""" - -from tempfile import NamedTemporaryFile -import argparse -import torch -import torchaudio -import gradio as gr -import os -from audiocraft.models import MusicGen -from audiocraft.data.audio import audio_write - -from share_btn import community_icon_html, loading_icon_html, share_js, css - -MODEL = None - - -def load_model(version): - print("Loading model", version) - return MusicGen.get_pretrained(version) - - -def predict( - text, - melody_input, - duration=30, - continuation_start=0, - continuation_end=30, - topk=250, - topp=0, - temperature=1, - cfg_coef=3, -): - global MODEL - topk = int(topk) - if MODEL is None: - MODEL = load_model("melody") - - if melody_input is None: - raise gr.Error("Please upload a melody to continue!") - - if duration > MODEL.lm.cfg.dataset.segment_duration: - raise gr.Error("MusicGen currently supports durations of up to 30 seconds!") - if continuation_end < continuation_start: - raise gr.Error("The end time must be greater than the start time!") - MODEL.set_generation_params( - use_sampling=True, - top_k=topk, - top_p=topp, - temperature=temperature, - cfg_coef=cfg_coef, - duration=duration, - ) - - if melody_input: - melody, sr = torchaudio.load(melody_input) - # sr, melody = melody_input[0], torch.from_numpy(melody_input[1]).to(MODEL.device).float().t().unsqueeze(0) - if melody.dim() == 2: - melody = melody[None] - print("\nGenerating continuation\n") - melody_wavform = melody[ - ..., int(sr * continuation_start) : int(sr * continuation_end) - ] - melody_duration = melody_wavform.shape[-1] / sr - if duration + melody_duration > MODEL.lm.cfg.dataset.segment_duration: - raise gr.Error("Duration + continuation duration must be <= 30 seconds") - output = MODEL.generate_continuation( - prompt=melody_wavform, - prompt_sample_rate=sr, - descriptions=[text], - progress=True, - ) - - output = output.detach().cpu().float()[0] - with NamedTemporaryFile("wb", suffix=".wav", delete=False) as file: - audio_write( - file.name, - output, - MODEL.sample_rate, - strategy="loudness", - loudness_headroom_db=16, - loudness_compressor=True, - add_suffix=False, - ) - waveform_video = gr.make_waveform(file.name) - - return ( - waveform_video, - (sr, melody_wavform.unsqueeze(0).numpy()) if melody_input else None, - ) - - -def ui(**kwargs): - def toggle(choice): - if choice == "mic": - return gr.update(source="microphone", value=None, label="Microphone") - else: - return gr.update(source="upload", value=None, label="File") - - def check_melody_length(melody_input): - if not melody_input: - return gr.update(maximum=0, value=0), gr.update(maximum=0, value=0) - melody, sr = torchaudio.load(melody_input) - audio_length = melody.shape[-1] / sr - if melody.dim() == 2: - melody = melody[None] - return gr.update(maximum=audio_length, value=0), gr.update( - maximum=audio_length, value=audio_length - ) - - def preview_melody_cut(melody_input, continuation_start, continuation_end): - if not melody_input: - return gr.update(maximum=0, value=0), 
gr.update(maximum=0, value=0) - melody, sr = torchaudio.load(melody_input) - audio_length = melody.shape[-1] / sr - if melody.dim() == 2: - melody = melody[None] - - if continuation_end < continuation_start: - raise gr.Error("The end time must be greater than the start time!") - if continuation_start < 0 or continuation_end > audio_length: - raise gr.Error("The continuation settings must be within the audio length!") - print("cutting", int(sr * continuation_start), int(sr * continuation_end)) - prompt_waveform = melody[ - ..., int(sr * continuation_start) : int(sr * continuation_end) - ] - - return (sr, prompt_waveform.unsqueeze(0).numpy()) - - with gr.Blocks(css=css) as interface: - gr.Markdown( - """ - # MusicGen Continuation - This a [MusicGen](https://github.com/facebookresearch/audiocraft), a simple and controllable model for music generation - presented at: ["Simple and Controllable Music Generation"](https://huggingface.co/papers/2306.05284) - - This Spaces only does melody continuation, you can try other features [here](https://huggingface.co/spaces/facebook/MusicGen) - """ - ) - gr.Markdown( - """ - - Duplicate Space - to use it privately - """ - ) - with gr.Row(): - with gr.Column(): - with gr.Row(): - text = gr.Text( - label="Describe your music", - lines=2, - interactive=True, - elem_id="text-input", - ) - with gr.Column(): - radio = gr.Radio( - ["file", "mic"], - value="file", - label="Melody Inital Condition File or Mic", - info="Make sure the audio is no longer than total generation duration which is max 30 seconds, you can trim the audio in the next section", - ) - melody = gr.Audio( - source="upload", - type="filepath", - label="File", - interactive=True, - elem_id="melody-input", - ) - with gr.Row(): - submit = gr.Button("Submit") - with gr.Row(): - duration = gr.Slider( - minimum=1, - maximum=30, - value=10, - label="Total Generation Duration", - interactive=True, - ) - with gr.Accordion(label="Input Melody Trimming (optional)", open=False): - with gr.Row(): - continuation_start = gr.Slider( - minimum=0, - maximum=30, - step=0.01, - value=0, - label="melody cut start", - interactive=True, - ) - continuation_end = gr.Slider( - minimum=0, - maximum=30, - step=0.01, - value=0, - label="melody cut end", - interactive=True, - ) - cut_btn = gr.Button("Cut Melody").style(full_width=False) - with gr.Row(): - preview_cut = gr.Audio( - type="numpy", - label="Cut Preview", - ) - with gr.Accordion(label="Advanced Settings", open=False): - with gr.Row(): - topk = gr.Number(label="Top-k", value=250, interactive=True) - topp = gr.Number(label="Top-p", value=0, interactive=True) - temperature = gr.Number( - label="Temperature", value=1.0, interactive=True - ) - cfg_coef = gr.Number( - label="Classifier Free Guidance", - value=3.0, - interactive=True, - ) - with gr.Column(): - output = gr.Video(label="Generated Music", elem_id="generated-video") - output_melody = gr.Audio(label="Melody ", elem_id="melody-output") - with gr.Row(visible=False) as share_row: - with gr.Group(elem_id="share-btn-container"): - community_icon = gr.HTML(community_icon_html) - loading_icon = gr.HTML(loading_icon_html) - share_button = gr.Button( - "Share to community", elem_id="share-btn" - ) - share_button.click(None, [], [], _js=share_js) - melody.change( - check_melody_length, - melody, - [continuation_start, continuation_end], - queue=False, - ) - cut_btn.click( - preview_melody_cut, - [melody, continuation_start, continuation_end], - preview_cut, - queue=False, - ) - - submit.click( - lambda x: 
gr.update(visible=False), - None, - [share_row], - queue=False, - show_progress=False, - ).then( - predict, - inputs=[ - text, - melody, - duration, - continuation_start, - continuation_end, - topk, - topp, - temperature, - cfg_coef, - ], - outputs=[output, output_melody], - ).then( - lambda x: gr.update(visible=True), - None, - [share_row], - queue=False, - show_progress=False, - ) - radio.change(toggle, radio, [melody], queue=False, show_progress=False) - examples = gr.Examples( - fn=predict, - examples=[ - [ - "An 80s driving pop song with heavy drums and synth pads in the background", - "./assets/bach.mp3", - 25, - 0, - 5, - ], - [ - "A cheerful country song with acoustic guitars", - "./assets/bolero_ravel.mp3", - 25, - 0, - 5, - ], - [ - "90s rock song with electric guitar and heavy drums", - "./assets/bach.mp3", - 25, - 0, - 5, - ], - [ - "a light and cheerly EDM track, with syncopated drums, aery pads, and strong emotions", - "./assets/bach.mp3", - 25, - 0, - 5, - ], - [ - "lofi slow bpm electro chill with organic samples", - "./assets/bolero_ravel.mp3", - 25, - 0, - 5, - ], - ], - inputs=[text, melody, duration, continuation_start, continuation_end], - outputs=[output], - ) - gr.Markdown( - """ - ### More details - - The model will generate a short music extract based on the description you provided. - You can generate up to 30 seconds of audio. - - We present 4 model variations: - 1. Melody -- a music generation model capable of generating music condition on text and melody inputs. **Note**, you can also use text only. - 2. Small -- a 300M transformer decoder conditioned on text only. - 3. Medium -- a 1.5B transformer decoder conditioned on text only. - 4. Large -- a 3.3B transformer decoder conditioned on text only (might OOM for the longest sequences.) - - When using `melody`, ou can optionaly provide a reference audio from - which a broad melody will be extracted. The model will then try to follow both the description and melody provided. - - You can also use your own GPU or a Google Colab by following the instructions on our repo. - See [github.com/facebookresearch/audiocraft](https://github.com/facebookresearch/audiocraft) - for more details. 
- """ - ) - - # Show the interface - launch_kwargs = {} - username = kwargs.get("username") - password = kwargs.get("password") - server_port = kwargs.get("server_port", 0) - inbrowser = kwargs.get("inbrowser", False) - share = kwargs.get("share", False) - server_name = kwargs.get("listen") - - launch_kwargs["server_name"] = server_name - - if username and password: - launch_kwargs["auth"] = (username, password) - if server_port > 0: - launch_kwargs["server_port"] = server_port - if inbrowser: - launch_kwargs["inbrowser"] = inbrowser - if share: - launch_kwargs["share"] = share - - interface.queue().launch(**launch_kwargs, max_threads=1) - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument( - "--listen", - type=str, - default="0.0.0.0", - help="IP to listen on for connections to Gradio", - ) - parser.add_argument( - "--username", type=str, default="", help="Username for authentication" - ) - parser.add_argument( - "--password", type=str, default="", help="Password for authentication" - ) - parser.add_argument( - "--server_port", - type=int, - default=7860, - help="Port to run the server listener on", - ) - parser.add_argument("--inbrowser", action="store_true", help="Open in browser") - parser.add_argument("--share", action="store_true", help="Share the gradio UI") - - args = parser.parse_args() - - ui( - username=args.username, - password=args.password, - inbrowser=args.inbrowser, - server_port=args.server_port, - share=args.share, - listen=args.listen, - ) diff --git a/spaces/sub314xxl/SDXL-1.0-Img2Img-CPU/app.py b/spaces/sub314xxl/SDXL-1.0-Img2Img-CPU/app.py deleted file mode 100644 index 8de7c458469d49e85470c8213df012153f22fcf5..0000000000000000000000000000000000000000 --- a/spaces/sub314xxl/SDXL-1.0-Img2Img-CPU/app.py +++ /dev/null @@ -1,33 +0,0 @@ -import gradio as gr -import modin.pandas as pd -import torch -import numpy as np -from PIL import Image -from diffusers import DiffusionPipeline -from huggingface_hub import login -#import os - -#login(token=os.environ.get('HF_KEY')) - -device = "cuda" if torch.cuda.is_available() else "cpu" -pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16) if torch.cuda.is_available() else DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-refiner-1.0") -pipe = pipe.to(device) - -def resize(value,img): - img = Image.open(img) - img = img.resize((value,value)) - return img - -def infer(source_img, prompt, negative_prompt, guide, steps, seed, Strength): - generator = torch.Generator(device).manual_seed(seed) - source_image = resize(768, source_img) - source_image.save('source.png') - image = pipe(prompt, negative_prompt=negative_prompt, image=source_image, strength=Strength, guidance_scale=guide, num_inference_steps=steps).images[0] - return image - -gr.Interface(fn=infer, inputs=[gr.Image(source="upload", type="filepath", label="Raw Image. Must Be .png"), gr.Textbox(label = 'Prompt Input Text. 
77 Token (Keyword or Symbol) Maximum'), gr.Textbox(label='What you Do Not want the AI to generate.'), - gr.Slider(2, 15, value = 7, label = 'Guidance Scale'), - gr.Slider(1, 25, value = 10, step = 1, label = 'Number of Iterations'), - gr.Slider(label = "Seed", minimum = 0, maximum = 987654321987654321, step = 1, randomize = True), - gr.Slider(label='Strength', minimum = 0, maximum = 1, step = .05, value = .5)], - outputs='image', title = "Stable Diffusion XL 1.0 Image to Image Pipeline CPU", description = "For more information on Stable Diffusion XL 1.0 see https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0

        Upload an Image (MUST Be .PNG and 512x512 or 768x768) enter a Prompt, or let it just do its Thing, then click submit. 10 Iterations takes about ~900-1200 seconds currently. For more informationon about Stable Diffusion or Suggestions for prompts, keywords, artists or styles see https://github.com/Maks-s/sd-akashic", article = "Code Monkey: Manjushri").queue(max_size=5).launch() \ No newline at end of file diff --git "a/spaces/suchun/chatGPT_acdemic/crazy_functions/\351\253\230\347\272\247\345\212\237\350\203\275\345\207\275\346\225\260\346\250\241\346\235\277.py" "b/spaces/suchun/chatGPT_acdemic/crazy_functions/\351\253\230\347\272\247\345\212\237\350\203\275\345\207\275\346\225\260\346\250\241\346\235\277.py" deleted file mode 100644 index 7c6a7ffb5cb2c42e6543c75d6ad9dd643f412cd9..0000000000000000000000000000000000000000 --- "a/spaces/suchun/chatGPT_acdemic/crazy_functions/\351\253\230\347\272\247\345\212\237\350\203\275\345\207\275\346\225\260\346\250\241\346\235\277.py" +++ /dev/null @@ -1,29 +0,0 @@ -from toolbox import CatchException, update_ui -from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive -import datetime -@CatchException -def 高阶功能模板函数(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - """ - txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径 - llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行 - plugin_kwargs 插件模型的参数,暂时没有用武之地 - chatbot 聊天显示框的句柄,用于显示给用户 - history 聊天历史,前情提要 - system_prompt 给gpt的静默提醒 - web_port 当前软件运行的端口号 - """ - history = [] # 清空历史,以免输入溢出 - chatbot.append(("这是什么功能?", "[Local Message] 请注意,您正在调用一个[函数插件]的模板,该函数面向希望实现更多有趣功能的开发者,它可以作为创建新功能函数的模板(该函数只有20多行代码)。此外我们也提供可同步处理大量文件的多线程Demo供您参考。您若希望分享新的功能模组,请不吝PR!")) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新 - for i in range(5): - currentMonth = (datetime.date.today() + datetime.timedelta(days=i)).month - currentDay = (datetime.date.today() + datetime.timedelta(days=i)).day - i_say = f'历史中哪些事件发生在{currentMonth}月{currentDay}日?列举两条并发送相关图片。发送图片时,请使用Markdown,将Unsplash API中的PUT_YOUR_QUERY_HERE替换成描述该事件的一个最重要的单词。' - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive( - inputs=i_say, inputs_show_user=i_say, - llm_kwargs=llm_kwargs, chatbot=chatbot, history=[], - sys_prompt="当你想发送一张照片时,请使用Markdown, 并且不要有反斜线, 不要用代码块。使用 Unsplash API (https://source.unsplash.com/1280x720/? 
< PUT_YOUR_QUERY_HERE >)。" - ) - chatbot[-1] = (i_say, gpt_say) - history.append(i_say);history.append(gpt_say) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新 diff --git a/spaces/sunwaee/Face-Mask-Detection/retinanet/utils.py b/spaces/sunwaee/Face-Mask-Detection/retinanet/utils.py deleted file mode 100644 index d47e3413cb022b28de46451eeef31c59d311385c..0000000000000000000000000000000000000000 --- a/spaces/sunwaee/Face-Mask-Detection/retinanet/utils.py +++ /dev/null @@ -1,144 +0,0 @@ -import torch -import torch.nn as nn -import numpy as np - - -def conv3x3(in_planes, out_planes, stride=1): - """3x3 convolution with padding""" - return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, - padding=1, bias=False) - - -class BasicBlock(nn.Module): - expansion = 1 - - def __init__(self, inplanes, planes, stride=1, downsample=None): - super(BasicBlock, self).__init__() - self.conv1 = conv3x3(inplanes, planes, stride) - self.bn1 = nn.BatchNorm2d(planes) - self.relu = nn.ReLU(inplace=True) - self.conv2 = conv3x3(planes, planes) - self.bn2 = nn.BatchNorm2d(planes) - self.downsample = downsample - self.stride = stride - - def forward(self, x): - residual = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - - if self.downsample is not None: - residual = self.downsample(x) - - out += residual - out = self.relu(out) - - return out - - -class Bottleneck(nn.Module): - expansion = 4 - - def __init__(self, inplanes, planes, stride=1, downsample=None): - super(Bottleneck, self).__init__() - self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False) - self.bn1 = nn.BatchNorm2d(planes) - self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride, - padding=1, bias=False) - self.bn2 = nn.BatchNorm2d(planes) - self.conv3 = nn.Conv2d(planes, planes * 4, kernel_size=1, bias=False) - self.bn3 = nn.BatchNorm2d(planes * 4) - self.relu = nn.ReLU(inplace=True) - self.downsample = downsample - self.stride = stride - - def forward(self, x): - residual = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - out = self.relu(out) - - out = self.conv3(out) - out = self.bn3(out) - - if self.downsample is not None: - residual = self.downsample(x) - - out += residual - out = self.relu(out) - - return out - -class BBoxTransform(nn.Module): - - def __init__(self, mean=None, std=None): - super(BBoxTransform, self).__init__() - if mean is None: - if torch.cuda.is_available(): - self.mean = torch.from_numpy(np.array([0, 0, 0, 0]).astype(np.float32)).cuda() - else: - self.mean = torch.from_numpy(np.array([0, 0, 0, 0]).astype(np.float32)) - - else: - self.mean = mean - if std is None: - if torch.cuda.is_available(): - self.std = torch.from_numpy(np.array([0.1, 0.1, 0.2, 0.2]).astype(np.float32)).cuda() - else: - self.std = torch.from_numpy(np.array([0.1, 0.1, 0.2, 0.2]).astype(np.float32)) - else: - self.std = std - - def forward(self, boxes, deltas): - - widths = boxes[:, :, 2] - boxes[:, :, 0] - heights = boxes[:, :, 3] - boxes[:, :, 1] - ctr_x = boxes[:, :, 0] + 0.5 * widths - ctr_y = boxes[:, :, 1] + 0.5 * heights - - dx = deltas[:, :, 0] * self.std[0] + self.mean[0] - dy = deltas[:, :, 1] * self.std[1] + self.mean[1] - dw = deltas[:, :, 2] * self.std[2] + self.mean[2] - dh = deltas[:, :, 3] * self.std[3] + self.mean[3] - - pred_ctr_x = ctr_x + dx * widths - pred_ctr_y = ctr_y + dy * heights - pred_w = torch.exp(dw) * widths - pred_h = 
torch.exp(dh) * heights - - pred_boxes_x1 = pred_ctr_x - 0.5 * pred_w - pred_boxes_y1 = pred_ctr_y - 0.5 * pred_h - pred_boxes_x2 = pred_ctr_x + 0.5 * pred_w - pred_boxes_y2 = pred_ctr_y + 0.5 * pred_h - - pred_boxes = torch.stack([pred_boxes_x1, pred_boxes_y1, pred_boxes_x2, pred_boxes_y2], dim=2) - - return pred_boxes - - -class ClipBoxes(nn.Module): - - def __init__(self, width=None, height=None): - super(ClipBoxes, self).__init__() - - def forward(self, boxes, img): - - batch_size, num_channels, height, width = img.shape - - boxes[:, :, 0] = torch.clamp(boxes[:, :, 0], min=0) - boxes[:, :, 1] = torch.clamp(boxes[:, :, 1], min=0) - - boxes[:, :, 2] = torch.clamp(boxes[:, :, 2], max=width) - boxes[:, :, 3] = torch.clamp(boxes[:, :, 3], max=height) - - return boxes diff --git a/spaces/supertori/files/stable-diffusion-webui/modules/codeformer_model.py b/spaces/supertori/files/stable-diffusion-webui/modules/codeformer_model.py deleted file mode 100644 index 2b492d12373a0486d73af8cba2321940aedd1c69..0000000000000000000000000000000000000000 --- a/spaces/supertori/files/stable-diffusion-webui/modules/codeformer_model.py +++ /dev/null @@ -1,143 +0,0 @@ -import os -import sys -import traceback - -import cv2 -import torch - -import modules.face_restoration -import modules.shared -from modules import shared, devices, modelloader -from modules.paths import models_path - -# codeformer people made a choice to include modified basicsr library to their project which makes -# it utterly impossible to use it alongside with other libraries that also use basicsr, like GFPGAN. -# I am making a choice to include some files from codeformer to work around this issue. -model_dir = "Codeformer" -model_path = os.path.join(models_path, model_dir) -model_url = 'https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/codeformer.pth' - -have_codeformer = False -codeformer = None - - -def setup_model(dirname): - global model_path - if not os.path.exists(model_path): - os.makedirs(model_path) - - path = modules.paths.paths.get("CodeFormer", None) - if path is None: - return - - try: - from torchvision.transforms.functional import normalize - from modules.codeformer.codeformer_arch import CodeFormer - from basicsr.utils.download_util import load_file_from_url - from basicsr.utils import imwrite, img2tensor, tensor2img - from facelib.utils.face_restoration_helper import FaceRestoreHelper - from facelib.detection.retinaface import retinaface - from modules.shared import cmd_opts - - net_class = CodeFormer - - class FaceRestorerCodeFormer(modules.face_restoration.FaceRestoration): - def name(self): - return "CodeFormer" - - def __init__(self, dirname): - self.net = None - self.face_helper = None - self.cmd_dir = dirname - - def create_models(self): - - if self.net is not None and self.face_helper is not None: - self.net.to(devices.device_codeformer) - return self.net, self.face_helper - model_paths = modelloader.load_models(model_path, model_url, self.cmd_dir, download_name='codeformer-v0.1.0.pth', ext_filter=['.pth']) - if len(model_paths) != 0: - ckpt_path = model_paths[0] - else: - print("Unable to load codeformer model.") - return None, None - net = net_class(dim_embd=512, codebook_size=1024, n_head=8, n_layers=9, connect_list=['32', '64', '128', '256']).to(devices.device_codeformer) - checkpoint = torch.load(ckpt_path)['params_ema'] - net.load_state_dict(checkpoint) - net.eval() - - if hasattr(retinaface, 'device'): - retinaface.device = devices.device_codeformer - face_helper = FaceRestoreHelper(1, 
face_size=512, crop_ratio=(1, 1), det_model='retinaface_resnet50', save_ext='png', use_parse=True, device=devices.device_codeformer) - - self.net = net - self.face_helper = face_helper - - return net, face_helper - - def send_model_to(self, device): - self.net.to(device) - self.face_helper.face_det.to(device) - self.face_helper.face_parse.to(device) - - def restore(self, np_image, w=None): - np_image = np_image[:, :, ::-1] - - original_resolution = np_image.shape[0:2] - - self.create_models() - if self.net is None or self.face_helper is None: - return np_image - - self.send_model_to(devices.device_codeformer) - - self.face_helper.clean_all() - self.face_helper.read_image(np_image) - self.face_helper.get_face_landmarks_5(only_center_face=False, resize=640, eye_dist_threshold=5) - self.face_helper.align_warp_face() - - for idx, cropped_face in enumerate(self.face_helper.cropped_faces): - cropped_face_t = img2tensor(cropped_face / 255., bgr2rgb=True, float32=True) - normalize(cropped_face_t, (0.5, 0.5, 0.5), (0.5, 0.5, 0.5), inplace=True) - cropped_face_t = cropped_face_t.unsqueeze(0).to(devices.device_codeformer) - - try: - with torch.no_grad(): - output = self.net(cropped_face_t, w=w if w is not None else shared.opts.code_former_weight, adain=True)[0] - restored_face = tensor2img(output, rgb2bgr=True, min_max=(-1, 1)) - del output - torch.cuda.empty_cache() - except Exception as error: - print(f'\tFailed inference for CodeFormer: {error}', file=sys.stderr) - restored_face = tensor2img(cropped_face_t, rgb2bgr=True, min_max=(-1, 1)) - - restored_face = restored_face.astype('uint8') - self.face_helper.add_restored_face(restored_face) - - self.face_helper.get_inverse_affine(None) - - restored_img = self.face_helper.paste_faces_to_input_image() - restored_img = restored_img[:, :, ::-1] - - if original_resolution != restored_img.shape[0:2]: - restored_img = cv2.resize(restored_img, (0, 0), fx=original_resolution[1]/restored_img.shape[1], fy=original_resolution[0]/restored_img.shape[0], interpolation=cv2.INTER_LINEAR) - - self.face_helper.clean_all() - - if shared.opts.face_restoration_unload: - self.send_model_to(devices.cpu) - - return restored_img - - global have_codeformer - have_codeformer = True - - global codeformer - codeformer = FaceRestorerCodeFormer(dirname) - shared.face_restorers.append(codeformer) - - except Exception: - print("Error setting up CodeFormer:", file=sys.stderr) - print(traceback.format_exc(), file=sys.stderr) - - # sys.path = stored_sys_path diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Rob Papen SubBoomBass VSTi RTAS.Team AiR..md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Rob Papen SubBoomBass VSTi RTAS.Team AiR..md deleted file mode 100644 index ca188e37c24990e957c6794d783b298acec09fde..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Rob Papen SubBoomBass VSTi RTAS.Team AiR..md +++ /dev/null @@ -1,6 +0,0 @@ -

        Rob Papen SubBoomBass VSTi RTAS.Team AiR.


        Download >>>>> https://cinurl.com/2uEYTf



        - -Rob.Papen.SubBoomBass.VSTi.RTAS.v1.0.3c.x86.x64.Incl.Keygen-AiR ... and ... TEAM AiR. PROTECTION : SYNCROSOFT.... Rob.Papen.ConcreteFX.Blue. 4d29de3e1b
        -
        -
        -

        diff --git a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/ONNXVITS_utils.py b/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/ONNXVITS_utils.py deleted file mode 100644 index b634ce380421571e6e07fb45dd59717b3f63115c..0000000000000000000000000000000000000000 --- a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/ONNXVITS_utils.py +++ /dev/null @@ -1,19 +0,0 @@ -import torch -import numpy as np -import random -import onnxruntime as ort -def set_random_seed(seed=0): - ort.set_seed(seed) - torch.manual_seed(seed) - torch.cuda.manual_seed(seed) - torch.backends.cudnn.deterministic = True - random.seed(seed) - np.random.seed(seed) - -def runonnx(model_path, **kwargs): - ort_session = ort.InferenceSession(model_path) - outputs = ort_session.run( - None, - kwargs - ) - return outputs \ No newline at end of file diff --git a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/runner/hooks/sync_buffer.py b/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/runner/hooks/sync_buffer.py deleted file mode 100644 index 6376b7ff894280cb2782243b25e8973650591577..0000000000000000000000000000000000000000 --- a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/runner/hooks/sync_buffer.py +++ /dev/null @@ -1,22 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..dist_utils import allreduce_params -from .hook import HOOKS, Hook - - -@HOOKS.register_module() -class SyncBuffersHook(Hook): - """Synchronize model buffers such as running_mean and running_var in BN at - the end of each epoch. - - Args: - distributed (bool): Whether distributed training is used. It is - effective only for distributed training. Defaults to True. - """ - - def __init__(self, distributed=True): - self.distributed = distributed - - def after_epoch(self, runner): - """All-reduce model buffers at the end of each epoch.""" - if self.distributed: - allreduce_params(runner.model.buffers()) diff --git a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/runner/log_buffer.py b/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/runner/log_buffer.py deleted file mode 100644 index d949e2941c5400088c7cd8a1dc893d8b233ae785..0000000000000000000000000000000000000000 --- a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/runner/log_buffer.py +++ /dev/null @@ -1,41 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from collections import OrderedDict - -import numpy as np - - -class LogBuffer: - - def __init__(self): - self.val_history = OrderedDict() - self.n_history = OrderedDict() - self.output = OrderedDict() - self.ready = False - - def clear(self): - self.val_history.clear() - self.n_history.clear() - self.clear_output() - - def clear_output(self): - self.output.clear() - self.ready = False - - def update(self, vars, count=1): - assert isinstance(vars, dict) - for key, var in vars.items(): - if key not in self.val_history: - self.val_history[key] = [] - self.n_history[key] = [] - self.val_history[key].append(var) - self.n_history[key].append(count) - - def average(self, n=0): - """Average latest n values or all values.""" - assert n >= 0 - for key in self.val_history: - values = np.array(self.val_history[key][-n:]) - nums = np.array(self.n_history[key][-n:]) - avg = np.sum(values * nums) / np.sum(nums) - self.output[key] = avg - self.ready = True diff --git a/spaces/teamnassim/Fictionista/torch_utils/__init__.py b/spaces/teamnassim/Fictionista/torch_utils/__init__.py deleted file mode 100644 index 939e7c6c8f94c4ea1141885c3c3295fe083b06aa..0000000000000000000000000000000000000000 --- a/spaces/teamnassim/Fictionista/torch_utils/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -# empty diff --git a/spaces/terfces0erbo/CollegeProjectV2/Cimatron E12 Crack Serial Download.md b/spaces/terfces0erbo/CollegeProjectV2/Cimatron E12 Crack Serial Download.md deleted file mode 100644 index 2bf4d51785c6bf0691e716e00cc7c1c53355d33e..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Cimatron E12 Crack Serial Download.md +++ /dev/null @@ -1,6 +0,0 @@ -

        cimatron e12 crack serial download


        Download Ziphttps://bytlly.com/2uGlzH



        - -style and deliver the ultimate product with support for handling the entire production advancement. Cimatron e13 Permanent License. Design ... 4d29de3e1b
        -
        -
        -

        diff --git a/spaces/terfces0erbo/CollegeProjectV2/HD Online Player (avatar The Last Airbender Movie Free) [CRACKED].md b/spaces/terfces0erbo/CollegeProjectV2/HD Online Player (avatar The Last Airbender Movie Free) [CRACKED].md deleted file mode 100644 index e04e3347bb83481563d127ab28ccc0dd2191a9b0..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/HD Online Player (avatar The Last Airbender Movie Free) [CRACKED].md +++ /dev/null @@ -1,28 +0,0 @@ - -

        How to Watch The Last Airbender Movie Online for Free

        -

        The Last Airbender is a 2010 fantasy action-adventure film based on the animated TV series Avatar: The Last Airbender. The film follows the journey of Aang, a young boy who is the reincarnated Avatar, a being who can master all four elements: air, water, earth and fire. Aang must stop the Fire Nation from conquering the other nations and bring balance to the world.

        -

        HD Online Player (avatar the last airbender movie free)


        Download File ->->->-> https://bytlly.com/2uGkad



        -

        If you are a fan of the original TV series or just curious about this movie adaptation, you might be wondering how to watch The Last Airbender online for free. Here are some options you can try:

        -
• Netflix: Netflix is a popular streaming service that offers a wide range of movies and shows, including The Last Airbender. You can watch it on Netflix if you have a subscription or sign up for a free trial. Netflix is available in many countries and regions, but the availability of The Last Airbender may vary depending on your location[^3^].
• Paramount Plus: Paramount Plus is another streaming service that has The Last Airbender in its library. You can watch it on Paramount Plus if you have a subscription or sign up for a free trial. Paramount Plus is only available in the United States and some Latin American countries[^1^].
• JustWatch: JustWatch is a website that helps you find where to watch movies and shows online. You can use JustWatch to search for The Last Airbender and see which streaming platforms have it available in your country. You can also compare prices and quality options[^1^] [^2^].

        These are some of the ways you can watch The Last Airbender online for free. However, please be aware that some of these options may require you to enter your credit card information or personal details, so be careful and read the terms and conditions before signing up. Also, some of these options may not be legal or authorized by the movie's producers or distributors, so watch at your own risk.

        -

        The Last Airbender is a movie that has received mixed reviews from critics and fans alike. Some people enjoyed it for its visual effects and action scenes, while others criticized it for its poor script, acting and deviation from the source material. Whether you love it or hate it, The Last Airbender is a movie that you can watch online for free if you know where to look.

        - -

        What are the critics saying about The Last Airbender?

        -

        The Last Airbender received mostly negative reviews from critics, who criticized its script, acting, direction, editing, and deviation from the source material. The film has a 5% approval rating on Rotten Tomatoes, based on 192 reviews, with an average rating of 2.7/10. The website's critical consensus reads: "The Last Airbender squanders its popular source material with incomprehensible plotting, horrible acting, and detached joyless direction."[^1^]

        -

        Some of the critics' comments include:

        -
        "A quite breathtakingly inept hodge-podge of vapid spirituality, playground chopsocky and visual effects that take 3D to an entirely new level: Zero-D." - Mark Kermode, The Observer[^1^]
        -
        "Wow. In the years of movie-going i have endured, every year there is one film that gets damned by the critics and film goers alike (Battlefield Earth, The Avengers), ans i have always found them rather endearing. The dreadful scripts, the hammy acting and the non-existent narrative all make a bad movie....somewhat appealing. So I was sadistically looking forward to this movie. And despite the fact I found the movie the funniest i've seen in a while (the word bender is slang for homosexual where I am from), it's truly a monstrosity of a turkey." - FlashCallahan, IMDb[^2^]
        -
        "\"The Last Airbender\" is an agonizing experience in every category I can think of and others still waiting to be invented. The laws of chance suggest that something should have gone right. Not here. It puts a nail in the coffin of low-rent 3D, but it will need a lot more coffins than that." - Roger Ebert, Chicago Sun-Times[^3^]
        -

        The film also received several nominations and awards for being one of the worst films of 2010, including nine nominations at the Golden Raspberry Awards (Razzies), where it won five awards: Worst Picture, Worst Director, Worst Screenplay, Worst Supporting Actor (Jackson Rathbone), and Worst Eye-Gouging Misuse of 3D.

        - -

        Is there any hope for a sequel or a reboot?

        -

        The Last Airbender was intended to be the first film of a trilogy based on the three seasons of the TV series. However, due to its poor performance and reception, the plans for sequels were put on hold indefinitely. M. Night Shyamalan expressed his interest in making a sequel as late as 2019, but no official announcement has been made.

        -

        On the other hand, fans of the TV series can look forward to a live-action adaptation by Netflix, which was announced in 2018. The original creators of the show, Michael Dante DiMartino and Bryan Konietzko, were initially involved as executive producers and showrunners, but they left the project in 2020 due to creative differences with Netflix. The cast and release date of the Netflix adaptation have not been revealed yet.

        -

        Another option for fans is to watch the animated sequel series The Legend of Korra, which follows the adventures of the next Avatar after Aang. The Legend of Korra ran for four seasons from 2012 to 2014 and received critical acclaim for its animation, story, characters, and themes. The Legend of Korra is available on various streaming platforms such as Netflix and Paramount Plus.

        -

        -
        -
        \ No newline at end of file diff --git a/spaces/test12356/SUI-svc-3.0/app.py b/spaces/test12356/SUI-svc-3.0/app.py deleted file mode 100644 index 96fcd0e66b8f0ae6356d1623e4e9d091c90cc9c9..0000000000000000000000000000000000000000 --- a/spaces/test12356/SUI-svc-3.0/app.py +++ /dev/null @@ -1,171 +0,0 @@ -import datetime -import io -import logging -import time - -from pathlib import Path -import gradio as gr -import librosa -import numpy as np -import soundfile -import os -from inference import infer_tool -from inference import slicer -from inference.infer_tool import Svc -logging.getLogger('numba').setLevel(logging.WARNING) - -infer_tool.mkdir(["temp", "results","cuts"]) -model_path = "logs/48k/suiji.pth" -config_path = "configs/suiji.json" -svc_model = Svc(model_path, config_path) -sid_map = { - "岁己(本音)": "suiji" -} - -slice_db = -40 # 默认-40,嘈杂的音频可以-30,干声保留呼吸可以-50 -def wav_concatenate(path_1,path_2,new_dir_path): - wavfile_1 = path_1.split('/')[-1] - wavfile_2 = path_2.split('/')[-1] - new_file_name = wavfile_2.split('.')[0]+'.wav' - if wavfile_2.split('.')[0]=='1': - signal_1,sr1 = soundfile.read(path_1) - signal_2,sr2 = soundfile.read(path_2) - if sr1==sr2: - new_signal = np.concatenate((signal_1,signal_2),axis=0) - new_path = os.path.join(new_dir_path,new_file_name) - soundfile.write(new_path,new_signal,sr1) - #soundfile.close() - else: - #soundfile.close() - print("音频采样率不一致,无法拼接") - else: - path_1=new_dir_path+str(int(wavfile_2.split('.')[0])-1)+'.wav' - new_file_name = wavfile_2.split('.')[0]+'.wav' - signal_1,sr1 = soundfile.read(path_1) - signal_2,sr2 = soundfile.read(path_2) - if sr1==sr2: - new_signal = np.concatenate((signal_1,signal_2),axis=0) - new_path = os.path.join(new_dir_path,new_file_name) - soundfile.write(new_path,new_signal,sr1) - #soundfile.close() - else: - #soundfile.close() - print("音频采样率不一致,无法拼接") - -def vc_fn(sid, input_audio, vc_transform): - if input_audio is None: - return "请先上传一段音频", None - temp_list=os.listdir('./temp/') - cut_list=os.listdir('./cuts/') - results_list=os.listdir('./results/') - t1=os.path.abspath('./cuts/') - t2=os.path.abspath('./temp/') - t3=os.path.abspath('./results/') - if cut_list: - for i in cut_list: - #print('1') - os.remove(os.path.join(t1,i)) - if temp_list: - for j in temp_list: - #print('2') - os.remove(os.path.join(t2,j)) - if results_list: - for k in results_list: - #print('3') - os.remove(os.path.join(t3,k)) - #current_path = os.path.split(os.path.realpath(__file__))[0] - sampling_rate, audio = input_audio - # print(audio.shape,sampling_rate) - duration = audio.shape[0] / sampling_rate - audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32) - if len(audio.shape) > 1: - audio = librosa.to_mono(audio.transpose(1, 0)) - if sampling_rate > 16000: - audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000) - sampling_rate = 16000 - #print(audio.shape) - res_path = f'./temp/temp.wav' - - soundfile.write(res_path, audio, sampling_rate, format="wav") - infer_tool.format_wav(res_path) - wav_path = Path(res_path).with_suffix('.wav') - chunks = slicer.cut(wav_path, db_thresh=slice_db) - audio_data, audio_sr = slicer.chunks2audio(wav_path, chunks) - sid = sid_map[sid] - #f_audio=[] - n=int(0) - for(slice_tag, data) in audio_data: - print(f'#=====segment start, {round(len(data) / audio_sr, 3)}s======') - length = int(np.ceil(len(data) / audio_sr * svc_model.target_sample)) - out_wav_path = io.BytesIO() - soundfile.write(out_wav_path, data, audio_sr, format="wav") - out_wav_path.seek(0) - if slice_tag: - 
print('jump empty segment') - _audio = np.zeros(length) - - if n==0: - cut_path='./temp/'+str(n)+'.wav' - soundfile.write(cut_path, _audio, audio_sr, format="wav") - n=n+1 - else: - cut_path='./cuts/'+str(n)+'.wav' - soundfile.write(cut_path, _audio, audio_sr, format="wav") - n=n+1 - else: - out_audio, out_sr = svc_model.infer(sid, vc_transform, out_wav_path) - - _audio = out_audio.cpu().numpy() - if n==0: - cut_path='./temp/'+str(n)+'.wav' - soundfile.write(cut_path, _audio, audio_sr, format="wav") - n=n+1 - else: - cut_path='./cuts/'+str(n)+'.wav' - soundfile.write(cut_path, _audio, audio_sr, format="wav") - n=n+1 - #f_audio.extend(list(_audio)) - #print(f_audio) - out_wav_path.close() - path_1 = './temp/0.wav' - dir_path = './cuts/' - wav_list = os.listdir(dir_path) - wav_list.sort(key=lambda x:int(x[:-4])) - new_dir_path = './results/' - count=len(wav_list) - - if count==0: - re_audio = './temp/'+str(count)+'.wav' - else: - for o in wav_list: - path_2 = os.path.join(dir_path, o) - wav_concatenate(path_1,path_2,new_dir_path) - re_audio = './results/'+str(count)+'.wav' - - f_audio , f_sr= soundfile.read(re_audio) - return "成功", (48000, f_audio) -app = gr.Blocks() -with app: - with gr.Tabs(): - with gr.TabItem("Basic"): - gr.Markdown(value=""" - 这是 sovits 3.0 48kHz AI岁己(本音)歌声音色转换的在线demo(测试,优化内存版),目前模型训练状态:560000steps / 512epochs - - 如果要训练自己的数据请访问[Github仓库](https://github.com/innnky/so-vits-svc) - - 极度不建议在本地使用该demo,磁盘可能会炸([原版](https://huggingface.co/spaces/Miuzarte/SUI-svc-3.0)无此限制) - - 更建议参考仓库[README.md上的推理部分](https://github.com/innnky/so-vits-svc#%E6%8E%A8%E7%90%86),在本地使用 inference_main.py 处理 - - 3060Ti 8G可推理一条20(建议) - 30s的音频,过长音频可分割后批量处理 - """) - sid = gr.Dropdown(label="音色", choices=["岁己(本音)"], value="岁己(本音)") - vc_input3 = gr.Audio(label="输入音频") - vc_transform = gr.Slider(minimum=-100,maximum=100,step=1,label="变调(整数,可以正负,半音数量,升高八度就是12)", value=0) - vc_submit = gr.Button("转换", variant="primary") - vc_output1 = gr.Textbox(label="输出日志") - vc_output2 = gr.Audio(label="输出音频") - #dt = gr.Textbox(label="当前时间") - #app.load(get_time, inputs=None, outputs=dt) - vc_submit.click(vc_fn, [sid, vc_input3, vc_transform], [vc_output1, vc_output2]) - app.launch() \ No newline at end of file diff --git a/spaces/test12356/SUI-svc-3.0/inference/infer_tool.py b/spaces/test12356/SUI-svc-3.0/inference/infer_tool.py deleted file mode 100644 index 17781828effcb228794624e23659f83b53b239d0..0000000000000000000000000000000000000000 --- a/spaces/test12356/SUI-svc-3.0/inference/infer_tool.py +++ /dev/null @@ -1,327 +0,0 @@ -import hashlib -import json -import logging -import os -import time -from pathlib import Path - -import librosa -import maad -import numpy as np -# import onnxruntime -import parselmouth -import soundfile -import torch -import torchaudio - -from hubert import hubert_model -import utils -from models import SynthesizerTrn - -logging.getLogger('matplotlib').setLevel(logging.WARNING) - - -def read_temp(file_name): - if not os.path.exists(file_name): - with open(file_name, "w") as f: - f.write(json.dumps({"info": "temp_dict"})) - return {} - else: - try: - with open(file_name, "r") as f: - data = f.read() - data_dict = json.loads(data) - if os.path.getsize(file_name) > 50 * 1024 * 1024: - f_name = file_name.split("/")[-1] - print(f"clean {f_name}") - for wav_hash in list(data_dict.keys()): - if int(time.time()) - int(data_dict[wav_hash]["time"]) > 14 * 24 * 3600: - del data_dict[wav_hash] - except Exception as e: - print(e) - print(f"{file_name} error,auto rebuild file") - data_dict = {"info": 
"temp_dict"} - return data_dict - - -def write_temp(file_name, data): - with open(file_name, "w") as f: - f.write(json.dumps(data)) - - -def timeit(func): - def run(*args, **kwargs): - t = time.time() - res = func(*args, **kwargs) - print('executing \'%s\' costed %.3fs' % (func.__name__, time.time() - t)) - return res - - return run - - -def format_wav(audio_path): - if Path(audio_path).suffix == '.wav': - return - raw_audio, raw_sample_rate = librosa.load(audio_path, mono=True, sr=None) - soundfile.write(Path(audio_path).with_suffix(".wav"), raw_audio, raw_sample_rate) - - -def get_end_file(dir_path, end): - file_lists = [] - for root, dirs, files in os.walk(dir_path): - files = [f for f in files if f[0] != '.'] - dirs[:] = [d for d in dirs if d[0] != '.'] - for f_file in files: - if f_file.endswith(end): - file_lists.append(os.path.join(root, f_file).replace("\\", "/")) - return file_lists - - -def get_md5(content): - return hashlib.new("md5", content).hexdigest() - - -def resize2d_f0(x, target_len): - source = np.array(x) - source[source < 0.001] = np.nan - target = np.interp(np.arange(0, len(source) * target_len, len(source)) / target_len, np.arange(0, len(source)), - source) - res = np.nan_to_num(target) - return res - -def get_f0(x, p_len,f0_up_key=0): - - time_step = 160 / 16000 * 1000 - f0_min = 50 - f0_max = 1100 - f0_mel_min = 1127 * np.log(1 + f0_min / 700) - f0_mel_max = 1127 * np.log(1 + f0_max / 700) - - f0 = parselmouth.Sound(x, 16000).to_pitch_ac( - time_step=time_step / 1000, voicing_threshold=0.6, - pitch_floor=f0_min, pitch_ceiling=f0_max).selected_array['frequency'] - - pad_size=(p_len - len(f0) + 1) // 2 - if(pad_size>0 or p_len - len(f0) - pad_size>0): - f0 = np.pad(f0,[[pad_size,p_len - len(f0) - pad_size]], mode='constant') - - f0 *= pow(2, f0_up_key / 12) - f0_mel = 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / (f0_mel_max - f0_mel_min) + 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > 255] = 255 - f0_coarse = np.rint(f0_mel).astype(np.int) - return f0_coarse, f0 - -def clean_pitch(input_pitch): - num_nan = np.sum(input_pitch == 1) - if num_nan / len(input_pitch) > 0.9: - input_pitch[input_pitch != 1] = 1 - return input_pitch - - -def plt_pitch(input_pitch): - input_pitch = input_pitch.astype(float) - input_pitch[input_pitch == 1] = np.nan - return input_pitch - - -def f0_to_pitch(ff): - f0_pitch = 69 + 12 * np.log2(ff / 440) - return f0_pitch - - -def fill_a_to_b(a, b): - if len(a) < len(b): - for _ in range(0, len(b) - len(a)): - a.append(a[0]) - - -def mkdir(paths: list): - for path in paths: - if not os.path.exists(path): - os.mkdir(path) - - -class Svc(object): - def __init__(self, net_g_path, config_path, hubert_path="hubert/hubert-soft-0d54a1f4.pt", - onnx=False): - self.onnx = onnx - self.net_g_path = net_g_path - self.hubert_path = hubert_path - self.dev = torch.device("cuda" if torch.cuda.is_available() else "cpu") - self.net_g_ms = None - self.hps_ms = utils.get_hparams_from_file(config_path) - self.target_sample = self.hps_ms.data.sampling_rate - self.hop_size = self.hps_ms.data.hop_length - self.speakers = {} - for spk, sid in self.hps_ms.spk.items(): - self.speakers[sid] = spk - self.spk2id = self.hps_ms.spk - # 加载hubert - self.hubert_soft = hubert_model.hubert_soft(hubert_path) - if torch.cuda.is_available(): - self.hubert_soft = self.hubert_soft.cuda() - self.load_model() - - def load_model(self): - # 获取模型配置 - if self.onnx: - raise NotImplementedError - # self.net_g_ms = SynthesizerTrnForONNX( - # 178, - 
# self.hps_ms.data.filter_length // 2 + 1, - # self.hps_ms.train.segment_size // self.hps_ms.data.hop_length, - # n_speakers=self.hps_ms.data.n_speakers, - # **self.hps_ms.model) - # _ = utils.load_checkpoint(self.net_g_path, self.net_g_ms, None) - else: - self.net_g_ms = SynthesizerTrn( - self.hps_ms.data.filter_length // 2 + 1, - self.hps_ms.train.segment_size // self.hps_ms.data.hop_length, - **self.hps_ms.model) - _ = utils.load_checkpoint(self.net_g_path, self.net_g_ms, None) - if "half" in self.net_g_path and torch.cuda.is_available(): - _ = self.net_g_ms.half().eval().to(self.dev) - else: - _ = self.net_g_ms.eval().to(self.dev) - - def get_units(self, source, sr): - - source = source.unsqueeze(0).to(self.dev) - with torch.inference_mode(): - start = time.time() - units = self.hubert_soft.units(source) - use_time = time.time() - start - print("hubert use time:{}".format(use_time)) - return units - - - def get_unit_pitch(self, in_path, tran): - source, sr = torchaudio.load(in_path) - source = torchaudio.functional.resample(source, sr, 16000) - if len(source.shape) == 2 and source.shape[1] >= 2: - source = torch.mean(source, dim=0).unsqueeze(0) - soft = self.get_units(source, sr).squeeze(0).cpu().numpy() - f0_coarse, f0 = get_f0(source.cpu().numpy()[0], soft.shape[0]*2, tran) - f0 = resize2d_f0(f0, soft.shape[0]*3) - return soft, f0 - - def infer(self, speaker_id, tran, raw_path): - if type(speaker_id) == str: - speaker_id = self.spk2id[speaker_id] - sid = torch.LongTensor([int(speaker_id)]).to(self.dev).unsqueeze(0) - soft, pitch = self.get_unit_pitch(raw_path, tran) - f0 = torch.FloatTensor(clean_pitch(pitch)).unsqueeze(0).to(self.dev) - if "half" in self.net_g_path and torch.cuda.is_available(): - stn_tst = torch.HalfTensor(soft) - else: - stn_tst = torch.FloatTensor(soft) - with torch.no_grad(): - x_tst = stn_tst.unsqueeze(0).to(self.dev) - start = time.time() - x_tst = torch.repeat_interleave(x_tst, repeats=3, dim=1).transpose(1, 2) - audio = self.net_g_ms.infer(x_tst, f0=f0, g=sid)[0,0].data.float() - use_time = time.time() - start - print("vits use time:{}".format(use_time)) - return audio, audio.shape[-1] - - -# class SvcONNXInferModel(object): -# def __init__(self, hubert_onnx, vits_onnx, config_path): -# self.config_path = config_path -# self.vits_onnx = vits_onnx -# self.hubert_onnx = hubert_onnx -# self.hubert_onnx_session = onnxruntime.InferenceSession(hubert_onnx, providers=['CUDAExecutionProvider', ]) -# self.inspect_onnx(self.hubert_onnx_session) -# self.vits_onnx_session = onnxruntime.InferenceSession(vits_onnx, providers=['CUDAExecutionProvider', ]) -# self.inspect_onnx(self.vits_onnx_session) -# self.hps_ms = utils.get_hparams_from_file(self.config_path) -# self.target_sample = self.hps_ms.data.sampling_rate -# self.feature_input = FeatureInput(self.hps_ms.data.sampling_rate, self.hps_ms.data.hop_length) -# -# @staticmethod -# def inspect_onnx(session): -# for i in session.get_inputs(): -# print("name:{}\tshape:{}\tdtype:{}".format(i.name, i.shape, i.type)) -# for i in session.get_outputs(): -# print("name:{}\tshape:{}\tdtype:{}".format(i.name, i.shape, i.type)) -# -# def infer(self, speaker_id, tran, raw_path): -# sid = np.array([int(speaker_id)], dtype=np.int64) -# soft, pitch = self.get_unit_pitch(raw_path, tran) -# pitch = np.expand_dims(pitch, axis=0).astype(np.int64) -# stn_tst = soft -# x_tst = np.expand_dims(stn_tst, axis=0) -# x_tst_lengths = np.array([stn_tst.shape[0]], dtype=np.int64) -# # 使用ONNX Runtime进行推理 -# start = time.time() -# audio = 
self.vits_onnx_session.run(output_names=["audio"], -# input_feed={ -# "hidden_unit": x_tst, -# "lengths": x_tst_lengths, -# "pitch": pitch, -# "sid": sid, -# })[0][0, 0] -# use_time = time.time() - start -# print("vits_onnx_session.run time:{}".format(use_time)) -# audio = torch.from_numpy(audio) -# return audio, audio.shape[-1] -# -# def get_units(self, source, sr): -# source = torchaudio.functional.resample(source, sr, 16000) -# if len(source.shape) == 2 and source.shape[1] >= 2: -# source = torch.mean(source, dim=0).unsqueeze(0) -# source = source.unsqueeze(0) -# # 使用ONNX Runtime进行推理 -# start = time.time() -# units = self.hubert_onnx_session.run(output_names=["embed"], -# input_feed={"source": source.numpy()})[0] -# use_time = time.time() - start -# print("hubert_onnx_session.run time:{}".format(use_time)) -# return units -# -# def transcribe(self, source, sr, length, transform): -# feature_pit = self.feature_input.compute_f0(source, sr) -# feature_pit = feature_pit * 2 ** (transform / 12) -# feature_pit = resize2d_f0(feature_pit, length) -# coarse_pit = self.feature_input.coarse_f0(feature_pit) -# return coarse_pit -# -# def get_unit_pitch(self, in_path, tran): -# source, sr = torchaudio.load(in_path) -# soft = self.get_units(source, sr).squeeze(0) -# input_pitch = self.transcribe(source.numpy()[0], sr, soft.shape[0], tran) -# return soft, input_pitch - - -class RealTimeVC: - def __init__(self): - self.last_chunk = None - self.last_o = None - self.chunk_len = 16000 # 区块长度 - self.pre_len = 3840 # 交叉淡化长度,640的倍数 - - """输入输出都是1维numpy 音频波形数组""" - - def process(self, svc_model, speaker_id, f_pitch_change, input_wav_path): - audio, sr = torchaudio.load(input_wav_path) - audio = audio.cpu().numpy()[0] - temp_wav = io.BytesIO() - if self.last_chunk is None: - input_wav_path.seek(0) - audio, sr = svc_model.infer(speaker_id, f_pitch_change, input_wav_path) - audio = audio.cpu().numpy() - self.last_chunk = audio[-self.pre_len:] - self.last_o = audio - return audio[-self.chunk_len:] - else: - audio = np.concatenate([self.last_chunk, audio]) - soundfile.write(temp_wav, audio, sr, format="wav") - temp_wav.seek(0) - audio, sr = svc_model.infer(speaker_id, f_pitch_change, temp_wav) - audio = audio.cpu().numpy() - ret = maad.util.crossfade(self.last_o, audio, self.pre_len) - self.last_chunk = audio[-self.pre_len:] - self.last_o = audio - return ret[self.chunk_len:2 * self.chunk_len] diff --git a/spaces/tfwang/PITI-Synthesis/glide_text2im/logger.py b/spaces/tfwang/PITI-Synthesis/glide_text2im/logger.py deleted file mode 100644 index 8b266729d5b428ebace7e4dc02df9b85c762954a..0000000000000000000000000000000000000000 --- a/spaces/tfwang/PITI-Synthesis/glide_text2im/logger.py +++ /dev/null @@ -1,497 +0,0 @@ -""" -Logger copied from OpenAI baselines to avoid extra RL-based dependencies: -https://github.com/openai/baselines/blob/ea25b9e8b234e6ee1bca43083f8f3cf974143998/baselines/logger.py -""" - -import os -import sys -import shutil -import os.path as osp -import json -import time -import datetime -import tempfile -import warnings -from collections import defaultdict -from contextlib import contextmanager - -DEBUG = 10 -INFO = 20 -WARN = 30 -ERROR = 40 - -DISABLED = 50 - - -class KVWriter(object): - def writekvs(self, kvs): - raise NotImplementedError - - -class SeqWriter(object): - def writeseq(self, seq): - raise NotImplementedError - - -class HumanOutputFormat(KVWriter, SeqWriter): - def __init__(self, filename_or_file): - if isinstance(filename_or_file, str): - self.file = open(filename_or_file, "wt") - 
self.own_file = True - else: - assert hasattr(filename_or_file, "read"), ( - "expected file or str, got %s" % filename_or_file - ) - self.file = filename_or_file - self.own_file = False - - def writekvs(self, kvs): - # Create strings for printing - key2str = {} - for (key, val) in sorted(kvs.items()): - if hasattr(val, "__float__"): - valstr = "%-8.3g" % val - else: - valstr = str(val) - key2str[self._truncate(key)] = self._truncate(valstr) - - # Find max widths - if len(key2str) == 0: - print("WARNING: tried to write empty key-value dict") - return - else: - keywidth = max(map(len, key2str.keys())) - valwidth = max(map(len, key2str.values())) - - # Write out the data - dashes = "-" * (keywidth + valwidth + 7) - lines = [dashes] - for (key, val) in sorted(key2str.items(), key=lambda kv: kv[0].lower()): - lines.append( - "| %s%s | %s%s |" - % (key, " " * (keywidth - len(key)), val, " " * (valwidth - len(val))) - ) - lines.append(dashes) - self.file.write("\n".join(lines) + "\n") - - # Flush the output to the file - self.file.flush() - - def _truncate(self, s): - maxlen = 30 - return s[: maxlen - 3] + "..." if len(s) > maxlen else s - - def writeseq(self, seq): - seq = list(seq) - for (i, elem) in enumerate(seq): - self.file.write(elem) - if i < len(seq) - 1: # add space unless this is the last one - self.file.write(" ") - self.file.write("\n") - self.file.flush() - - def close(self): - if self.own_file: - self.file.close() - - -class JSONOutputFormat(KVWriter): - def __init__(self, filename): - self.file = open(filename, "wt") - - def writekvs(self, kvs): - for k, v in sorted(kvs.items()): - if hasattr(v, "dtype"): - kvs[k] = float(v) - self.file.write(json.dumps(kvs) + "\n") - self.file.flush() - - def close(self): - self.file.close() - - -class CSVOutputFormat(KVWriter): - def __init__(self, filename): - self.file = open(filename, "w+t") - self.keys = [] - self.sep = "," - - def writekvs(self, kvs): - # Add our current row to the history - extra_keys = list(kvs.keys() - self.keys) - extra_keys.sort() - if extra_keys: - self.keys.extend(extra_keys) - self.file.seek(0) - lines = self.file.readlines() - self.file.seek(0) - for (i, k) in enumerate(self.keys): - if i > 0: - self.file.write(",") - self.file.write(k) - self.file.write("\n") - for line in lines[1:]: - self.file.write(line[:-1]) - self.file.write(self.sep * len(extra_keys)) - self.file.write("\n") - for (i, k) in enumerate(self.keys): - if i > 0: - self.file.write(",") - v = kvs.get(k) - if v is not None: - self.file.write(str(v)) - self.file.write("\n") - self.file.flush() - - def close(self): - self.file.close() - - -class TensorBoardOutputFormat(KVWriter): - """ - Dumps key/value pairs into TensorBoard's numeric format. 
- """ - - def __init__(self, dir): - os.makedirs(dir, exist_ok=True) - self.dir = dir - self.step = 1 - prefix = "events" - path = osp.join(osp.abspath(dir), prefix) - import tensorflow as tf - from tensorflow.python import pywrap_tensorflow - from tensorflow.core.util import event_pb2 - from tensorflow.python.util import compat - - self.tf = tf - self.event_pb2 = event_pb2 - self.pywrap_tensorflow = pywrap_tensorflow - self.writer = pywrap_tensorflow.EventsWriter(compat.as_bytes(path)) - - def writekvs(self, kvs): - def summary_val(k, v): - kwargs = {"tag": k, "simple_value": float(v)} - return self.tf.Summary.Value(**kwargs) - - summary = self.tf.Summary(value=[summary_val(k, v) for k, v in kvs.items()]) - event = self.event_pb2.Event(wall_time=time.time(), summary=summary) - event.step = ( - self.step - ) # is there any reason why you'd want to specify the step? - self.writer.WriteEvent(event) - self.writer.Flush() - self.step += 1 - - def close(self): - if self.writer: - self.writer.Close() - self.writer = None - - -def make_output_format(format, ev_dir, log_suffix=""): - os.makedirs(ev_dir, exist_ok=True) - if format == "stdout": - return HumanOutputFormat(sys.stdout) - elif format == "log": - return HumanOutputFormat(osp.join(ev_dir, "log%s.txt" % log_suffix)) - elif format == "json": - return JSONOutputFormat(osp.join(ev_dir, "progress%s.json" % log_suffix)) - elif format == "csv": - return CSVOutputFormat(osp.join(ev_dir, "progress%s.csv" % log_suffix)) - elif format == "tensorboard": - return TensorBoardOutputFormat(osp.join(ev_dir, "tb%s" % log_suffix)) - else: - raise ValueError("Unknown format specified: %s" % (format,)) - - -# ================================================================ -# API -# ================================================================ - - -def logkv(key, val): - """ - Log a value of some diagnostic - Call this once for each diagnostic quantity, each iteration - If called many times, last value will be used. - """ - get_current().logkv(key, val) - - -def logkv_mean(key, val): - """ - The same as logkv(), but if called many times, values averaged. - """ - get_current().logkv_mean(key, val) - - -def logkvs(d): - """ - Log a dictionary of key-value pairs - """ - for (k, v) in d.items(): - logkv(k, v) - - -def dumpkvs(): - """ - Write all of the diagnostics from the current iteration - """ - return get_current().dumpkvs() - - -def getkvs(): - return get_current().name2val - - -def log(*args, level=INFO): - """ - Write the sequence of args, with no separators, to the console and output files (if you've configured an output file). - """ - get_current().log(*args, level=level) - - -def debug(*args): - log(*args, level=DEBUG) - - -def info(*args): - log(*args, level=INFO) - - -def warn(*args): - log(*args, level=WARN) - - -def error(*args): - log(*args, level=ERROR) - - -def set_level(level): - """ - Set logging threshold on current logger. - """ - get_current().set_level(level) - - -def set_comm(comm): - get_current().set_comm(comm) - -def save_args(args): - get_current().save_args(args) - -def get_dir(): - """ - Get directory that log files are being written to. 
- will be None if there is no output directory (i.e., if you didn't call start) - """ - return get_current().get_dir() - - -record_tabular = logkv -dump_tabular = dumpkvs - - -@contextmanager -def profile_kv(scopename): - logkey = "wait_" + scopename - tstart = time.time() - try: - yield - finally: - get_current().name2val[logkey] += time.time() - tstart - - -def profile(n): - """ - Usage: - @profile("my_func") - def my_func(): code - """ - - def decorator_with_name(func): - def func_wrapper(*args, **kwargs): - with profile_kv(n): - return func(*args, **kwargs) - - return func_wrapper - - return decorator_with_name - - -# ================================================================ -# Backend -# ================================================================ - - -def get_current(): - if Logger.CURRENT is None: - _configure_default_logger() - - return Logger.CURRENT - - -class Logger(object): - DEFAULT = None # A logger with no output files. (See right below class definition) - # So that you can still log to the terminal without setting up any output files - CURRENT = None # Current logger being used by the free functions above - - def __init__(self, dir, output_formats, comm=None): - self.name2val = defaultdict(float) # values this iteration - self.name2cnt = defaultdict(int) - self.level = INFO - self.dir = dir - self.output_formats = output_formats - self.comm = comm - - def save_args(self,args): - with open(osp.join(self.dir,"args.json"),'w') as f: - json.dump(args,f) - - # Logging API, forwarded - # ---------------------------------------- - def logkv(self, key, val): - self.name2val[key] = val - - def logkv_mean(self, key, val): - oldval, cnt = self.name2val[key], self.name2cnt[key] - self.name2val[key] = oldval * cnt / (cnt + 1) + val / (cnt + 1) - self.name2cnt[key] = cnt + 1 - - def dumpkvs(self): - if self.comm is None: - d = self.name2val - else: - d = mpi_weighted_mean( - self.comm, - { - name: (val, self.name2cnt.get(name, 1)) - for (name, val) in self.name2val.items() - }, - ) - if self.comm.rank != 0: - d["dummy"] = 1 # so we don't get a warning about empty dict - out = d.copy() # Return the dict for unit testing purposes - for fmt in self.output_formats: - if isinstance(fmt, KVWriter): - fmt.writekvs(d) - self.name2val.clear() - self.name2cnt.clear() - return out - - def log(self, *args, level=INFO): - if self.level <= level: - self._do_log(args) - - # Configuration - # ---------------------------------------- - def set_level(self, level): - self.level = level - - def set_comm(self, comm): - self.comm = comm - - def get_dir(self): - return self.dir - - def close(self): - for fmt in self.output_formats: - fmt.close() - - # Misc - # ---------------------------------------- - def _do_log(self, args): - for fmt in self.output_formats: - if isinstance(fmt, SeqWriter): - fmt.writeseq(map(str, args)) - - -def get_rank_without_mpi_import(): - # check environment variables here instead of importing mpi4py - # to avoid calling MPI_Init() when this module is imported - for varname in ["PMI_RANK", "OMPI_COMM_WORLD_RANK"]: - if varname in os.environ: - return int(os.environ[varname]) - return 0 - - -def mpi_weighted_mean(comm, local_name2valcount): - """ - Copied from: https://github.com/openai/baselines/blob/ea25b9e8b234e6ee1bca43083f8f3cf974143998/baselines/common/mpi_util.py#L110 - Perform a weighted average over dicts that are each on a different node - Input: local_name2valcount: dict mapping key -> (value, count) - Returns: key -> mean - """ - all_name2valcount = 
comm.gather(local_name2valcount) - if comm.rank == 0: - name2sum = defaultdict(float) - name2count = defaultdict(float) - for n2vc in all_name2valcount: - for (name, (val, count)) in n2vc.items(): - try: - val = float(val) - except ValueError: - if comm.rank == 0: - warnings.warn( - "WARNING: tried to compute mean on non-float {}={}".format( - name, val - ) - ) - else: - name2sum[name] += val * count - name2count[name] += count - return {name: name2sum[name] / name2count[name] for name in name2sum} - else: - return {} - - -def configure(dir=None, format_strs=None, comm=None, log_suffix=""): - """ - If comm is provided, average all numerical stats across that comm - """ - if dir is None: - dir = os.getenv("LOGDIR") - - assert isinstance(dir, str) - dir = os.path.expanduser(dir) - os.makedirs(os.path.expanduser(dir), exist_ok=True) - - rank = get_rank_without_mpi_import() - if rank > 0: - log_suffix = log_suffix + "-rank%03i" % rank - - if format_strs is None: - if rank == 0: - format_strs = os.getenv("OPENAI_LOG_FORMAT", "stdout,log,csv").split(",") - else: - format_strs = os.getenv("OPENAI_LOG_FORMAT_MPI", "log").split(",") - format_strs = filter(None, format_strs) - output_formats = [make_output_format(f, dir, log_suffix) for f in format_strs] - - Logger.CURRENT = Logger(dir=dir, output_formats=output_formats, comm=comm) - if output_formats: - log("Logging to %s" % dir) - - -def _configure_default_logger(): - configure() - Logger.DEFAULT = Logger.CURRENT - - -def reset(): - if Logger.CURRENT is not Logger.DEFAULT: - Logger.CURRENT.close() - Logger.CURRENT = Logger.DEFAULT - log("Reset logger") - - -@contextmanager -def scoped_configure(dir=None, format_strs=None, comm=None): - prevlogger = Logger.CURRENT - configure(dir=dir, format_strs=format_strs, comm=comm) - try: - yield - finally: - Logger.CURRENT.close() - Logger.CURRENT = prevlogger - diff --git a/spaces/thegenerativegeneration/FNeVR_demo/README.md b/spaces/thegenerativegeneration/FNeVR_demo/README.md deleted file mode 100644 index ba6c6bed8506f8d827a90d14a1240d69956bfc2f..0000000000000000000000000000000000000000 --- a/spaces/thegenerativegeneration/FNeVR_demo/README.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -title: FNeVR Demo -emoji: 👁 -colorFrom: red -colorTo: indigo -sdk: gradio -sdk_version: 3.32.0 -app_file: app.py -pinned: false -license: apache-2.0 -duplicated_from: PascalLiu/FNeVR_demo -python_version: 3.9 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/themanas021/AI-TEXT-DETECTION/app.py b/spaces/themanas021/AI-TEXT-DETECTION/app.py deleted file mode 100644 index 63a7322224806acc21c2dc74b7681ac82d1ac7ae..0000000000000000000000000000000000000000 --- a/spaces/themanas021/AI-TEXT-DETECTION/app.py +++ /dev/null @@ -1,33 +0,0 @@ -import gradio as gr -import pickle -import numpy as np - -# Load the model from the pickle file -with open("pipeline_model.pkl", "rb") as file: - pipeline = pickle.load(file) - -def classify_text(text): - label_prob = pipeline.predict_proba([text]) - prob_human = label_prob[0][0] - prob_ai = label_prob[0][1] - - if prob_human > 0.5: - label = "Human Generated" - elif prob_ai > 0.5: - label = "AI-Generated" - else: - label = "uncertain" - - return f"This text is {label} \n" \ - f"Prediction Probabilities:\n" \ - f"Human: {prob_human:.3f}\n" \ - f"AI: {prob_ai:.3f}" - -iface = gr.Interface( - fn=classify_text, - inputs=gr.inputs.Textbox(lines=5, label="Enter a text:", placeholder="Type here..."), - outputs="text" -) - -# Launch 
the Gradio interface -iface.launch() diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/AutoCAD Architecture 2018 (x86x64) Incl Keygen Serial Key.md b/spaces/tialenAdioni/chat-gpt-api/logs/AutoCAD Architecture 2018 (x86x64) Incl Keygen Serial Key.md deleted file mode 100644 index 6180a778ac8d4e29b9790069400ead9f1ece8c37..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/AutoCAD Architecture 2018 (x86x64) Incl Keygen Serial Key.md +++ /dev/null @@ -1,25 +0,0 @@ - -

        How to Download and Install AutoCAD Architecture 2018 (x86x64) Incl Keygen Serial Key

        -

AutoCAD Architecture 2018 is a powerful program developed by Autodesk for architectural engineering design. It enables architects and designers to design and document engineering projects more efficiently, and it ships with the latest design tools that help architects improve their productivity. In this article, we will show you how to download and install AutoCAD Architecture 2018 (x86x64) Incl Keygen Serial Key on your computer.

        -

        Step 1: Download AutoCAD Architecture 2018

        -

        You can download AutoCAD Architecture 2018 from the official Autodesk website[^1^] or from other trusted sources[^2^]. You will need to sign in to your Autodesk account or create one if you don't have one. You can also use your education license if you are a student or educator[^1^]. You can choose between different versions of AutoCAD Architecture 2018, such as the standard version, the update version, or the special edition version[^2^]. The file size is about 6.6 GB, so make sure you have enough space on your hard drive and a stable internet connection.

        -

        AutoCAD Architecture 2018 (x86x64) Incl Keygen Serial Key


        Download File →→→ https://urlcod.com/2uKaYh



        -

        Step 2: Install AutoCAD Architecture 2018

        -

        After downloading the file, you will need to extract it using a tool like WinRAR or 7-Zip. You will get an ISO file that you can mount using a virtual drive software like Daemon Tools or PowerISO. Alternatively, you can burn the ISO file to a DVD using a tool like Nero or ImgBurn. Then, you can run the setup.exe file from the mounted or burned ISO file. Follow the instructions on the screen to install AutoCAD Architecture 2018 on your computer. You will need to accept the license agreement, choose the installation type (typical or custom), select the components you want to install, and specify the installation location.

        -

        Step 3: Activate AutoCAD Architecture 2018

        -

        To activate AutoCAD Architecture 2018, you will need to use the keygen serial key that is included in the download file. You can find it in the Crack folder or in a separate file. You will need to run the keygen as administrator and generate a serial number and a product key for AutoCAD Architecture 2018. You can also use the following serial number and product key as an example:

        -
          -
        • Serial Number: 666-69696969
        • -
        • Product Key: 185J1
        • -
        -

        Then, you can launch AutoCAD Architecture 2018 and enter the serial number and product key when prompted. You will also need to select "I have an activation code from Autodesk" and copy the request code from the activation window. Then, go back to the keygen and paste the request code into it. Click on "Generate" and copy the activation code from the keygen. Finally, go back to the activation window and paste the activation code into it. Click on "Next" and enjoy your activated AutoCAD Architecture 2018.

        -

        Conclusion

        -

        AutoCAD Architecture 2018 is a great software for architectural engineering designs. It has many features and tools that can help you create and document your projects more efficiently. To download and install AutoCAD Architecture 2018 (x86x64) Incl Keygen Serial Key, you just need to follow these simple steps:

        -
          -
        1. Download AutoCAD Architecture 2018 from the official website or other sources.
        2. -
        3. Install AutoCAD Architecture 2018 on your computer.
        4. -
        5. Activate AutoCAD Architecture 2018 using the keygen serial key.
        6. -
        -

        We hope this article was helpful for you. If you have any questions or problems, please feel free to contact us or visit the Autodesk support website[^3^] [^4^] for more information.

        e93f5a0c3f
        -
        -
        \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Experience the Thrill of Horse Riding with PC Riding Academy 2 Torrent.md b/spaces/tialenAdioni/chat-gpt-api/logs/Experience the Thrill of Horse Riding with PC Riding Academy 2 Torrent.md deleted file mode 100644 index 08c6d4a51217a93dacd3eba6fdd96407939b143e..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Experience the Thrill of Horse Riding with PC Riding Academy 2 Torrent.md +++ /dev/null @@ -1,143 +0,0 @@ -
        -

        PC Riding Academy 2 Torrent: A Guide to Download and Play the Game

        -

        If you are a fan of horse riding games, you might have heard of PC Riding Academy 2, a popular game that lets you take care of your horse and compete in various events. But how can you download and play this game on your computer? In this article, we will show you how to get PC Riding Academy 2 torrent and enjoy this amazing game.

        -

        What is PC Riding Academy 2?

        -

PC Riding Academy 2 is a PC game released in November 2012, the sequel to the original Horse Riding Academy from 2008. It offers more than 40 challenges spread across three eras: the first era unlocks the horse, the second unlocks a mage, and the final era opens up a new area.

        -

        pc riding academy 2 torrent


        Download >>> https://urlcod.com/2uK8px



        -

        A brief introduction to the game and its features

        -

The game is set in an equestrian academy where you learn all the skills of riding. You have to take care of your horse: feed it, groom it, and train it. You also have to pass tests and examinations to prove your abilities. You can choose from different breeds of horses, each with its own characteristics and personality, and you can customize your horse's appearance and equipment.

        -

        The game has three modes: dressage, rallying, and show jumping. In dressage, you have to perform a series of movements with your horse in a specific order. In rallying, you have to ride your horse through different terrains and obstacles. In show jumping, you have to jump over fences and hurdles with your horse. You can compete in these events against other players or against the computer.

        -

        pc riding academy 2 full version download
        -pc riding academy 2 free game
        -pc riding academy 2 crack
        -pc riding academy 2 russian version
        -pc riding academy 2 horse game
        -pc riding academy 2 gameplay
        -pc riding academy 2 cheats
        -pc riding academy 2 review
        -pc riding academy 2 system requirements
        -pc riding academy 2 trainer
        -pc riding academy 2 patch
        -pc riding academy 2 serial key
        -pc riding academy 2 iso
        -pc riding academy 2 rar
        -pc riding academy 2 exe
        -pc riding academy 2 online
        -pc riding academy 2 multiplayer
        -pc riding academy 2 mods
        -pc riding academy 2 steam
        -pc riding academy 2 windows 10
        -pc riding academy 2 mac
        -pc riding academy 2 linux
        -pc riding academy 2 android
        -pc riding academy 2 ios
        -pc riding academy 2 apk
        -pc riding academy 2 obb
        -pc riding academy 2 soundcloud
        -pc riding academy 2 it-sbo.com
        -pc riding academy 2 sway.office.com
        -pc riding academy 2 dbtorrent.games
        -dbtorrent.games download pc riding academy 2
        -dbtorrent.games скачать на пк торрент игру академия верховой езды 2
        -soundcloud stream pc riding academy 2 torrent by gioconere
        -it-sbo.com horse riding academy 2 torrent pdf
        -sway.office.com juGNzNdLq7vBAQv2
        -horse racing game for pc torrent
        -horse simulator game for pc torrent
        -horse adventure game for pc torrent
        -horse training game for pc torrent
        -horse jumping game for pc torrent
        -horse dressage game for pc torrent
        -horse breeding game for pc torrent
        -horse care game for pc torrent
        -horse grooming game for pc torrent
        -horse feeding game for pc torrent
        -horse school game for pc torrent
        -horse castle game for pc torrent
        -horse magic game for pc torrent
        -horse era game for pc torrent

        -

        The requirements and compatibility of the game

        -

The game requires Windows XP/Vista/7/8/10, a Pentium IV processor or better, at least 512 MB of RAM, a DirectX-compatible sound card, a video card with at least 128 MB of memory, and at least 1 GB of free hard disk space. The game is also compatible with most web browsers (PC), Android/iOS (mobile), and Yandex Games.

        -

        How to download PC Riding Academy 2 torrent?

        -

        If you want to download PC Riding Academy 2 torrent, you will need a torrent program such as BitTorrent or uTorrent. You will also need a reliable source for the torrent file such as Gameis.net or SoundCloud. Here are the steps to download the torrent file and the game itself:

        -

        The steps to download the torrent file and the game itself

        -
          -
        1. Go to one of the websites that offer PC Riding Academy 2 torrent such as Gameis.net or SoundCloud.
        2. -
        3. Click on the download button or link for the torrent file.
        4. -
        5. Save the torrent file on your computer.
        6. -
        7. Open the torrent file with your torrent program.
        8. -
        9. Select where you want to save the game files on your computer.
        10. -
        11. Wait for the download to finish.
        12. -
        13. Open the downloaded file with an ISO program such as UltraISO.
        14. -
        15. Run the setup.exe file and follow the instructions to install the game.
        16. -
        17. Enjoy playing PC Riding Academy 2!
        18. -
        -

        The precautions and tips to avoid viruses and malware

        -

        Downloading torrents can be risky if you are not careful. You might encounter viruses or malware that can harm your computer or steal your personal information. To avoid these problems, here are some precautions and tips:

        -
          -
        • Use a reputable website for downloading torrents such as Gameis.net or SoundCloud.
        • -
        • Use an antivirus program such as Avast or Norton to scan your computer before and after downloading torrents.
        • -
        • Use a VPN service such as ExpressVPN or NordVPN to hide your IP address and protect your privacy online.
        • -
        • Use an ad blocker such as AdBlock or uBlock Origin to block unwanted ads and pop-ups that might contain malicious links or scripts.
        • -
        • Read the comments and reviews of other users who have downloaded the same torrent file before you download it.
        • -
        • Check the file size and format of the torrent file before you download it. If it is too small or too large, or if it has an unusual extension such as .exe or .bat, it might be fake or infected.
        • -
        -

        How to play PC Riding Academy 2?

        -

        Once you have downloaded and installed PC Riding Academy 2 on your computer, you can start playing it right away. Here are some basics of horse riding and care in the game, as well as some different modes and challenges that you can try:

        -

        The basics of horse riding and care in the game

        -

        In order to ride your horse well, you need to take care of it first. You can access your horse's stable by clicking on its icon on the screen. There you can feed it, groom it, train it, and equip it with various items such as saddles, bridles, blankets, etc. You can also check your horse's health, mood, energy, hunger, thirst, cleanliness, loyalty, speed, agility, stamina, strength, intelligence, etc.

        -

        To ride your horse, you need to use your keyboard or mouse. You can move your horse forward by pressing W or clicking on its head. You can move it backward by pressing S or clicking on its tail. You can turn it left by pressing A or clicking on its left side. You can turn it right by pressing D or clicking on its right side. You can make it jump by pressing Spacebar or clicking on its legs. You can make it trot by pressing Shift or clicking on its back. You can make it gallop by pressing Ctrl or double-clicking on its back.

        -

        The different modes and challenges in the game

        -

        The game has three modes that you can choose from: dressage, rallying, and show jumping. Each mode has different levels of difficulty that you can select from easy to hard. Each mode also has different challenges that you can unlock by completing certain tasks or earning certain scores.

        - - - - - -
        ModeDescriptionChallenges
        DressageIn this mode, you have to perform a series of movements with your horse in a specific order within a limited time. You have to follow the instructions on the screen and match them with your keyboard or mouse commands. You will be judged by how accurately and gracefully you execute each movement.- Learn how to do basic movements such as halt, walk, trot, canter, circle, serpentine, etc.
        - Learn how to do advanced movements such as pirouette, half-pass, flying change, piaffe, passage, etc.
        - Compete in different arenas such as indoor, outdoor, forest, beach, etc.
        - Earn medals such as bronze, silver, gold, platinum, etc.
        RallyingIn this mode, you have to ride your horse through different terrains and obstacles within a limited time. You have to avoid crashing into trees, rocks, fences, etc. You also have to collect coins and power-ups along the way. You will be judged by how fast and safely you complete each course.- Learn how to control your horse's speed and direction.
        - Learn how to jump over obstacles and avoid hazards.
        - Compete in different courses such as forest, desert, mountain, etc.
        - Earn medals such as bronze, silver, gold, platinum, etc.
        Show jumpingIn this mode, you have to jump over fences and hurdles with your horse within a limited time. You have to follow the correct order and direction of the jumps. You also have to avoid knocking down the poles or refusing to jump. You will be judged by how high and accurately you jump each obstacle.- Learn how to approach and take off each jump.
        - Learn how to adjust your horse's stride and balance.
        - Compete in different arenas such as indoor, outdoor, stadium, etc.
        - Earn medals such as bronze, silver, gold, platinum, etc.
        -

        Why should you play PC Riding Academy 2?

        -

        PC Riding Academy 2 is not only a fun and exciting game, but also a beneficial and educational one. Here are some of the reasons why you should play this game:

        -

        The benefits and advantages of playing the game

        -
          -
        • You can improve your hand-eye coordination and reflexes by controlling your horse and avoiding obstacles.
        • -
        • You can enhance your creativity and imagination by customizing your horse and its equipment.
        • -
        • You can develop your problem-solving and decision-making skills by choosing the best strategy and tactics for each event.
        • -
        • You can increase your knowledge and awareness of horses and equestrian sports by learning about different breeds, movements, rules, etc.
        • -
        • You can experience the thrill and joy of riding a horse without having to own one or go to a real riding school.
        • -
        -

        The reviews and ratings of the game by other players

        -

        PC Riding Academy 2 has received positive reviews and ratings from other players who have played the game. Here are some of their comments:

        -
        "This game is awesome! I love horses and this game lets me feel like I'm really riding one. The graphics are great and the gameplay is smooth. I recommend this game to anyone who loves horses or riding games." - Anna
        -
        "This game is very realistic and challenging. I like how you have to take care of your horse and train it for different events. The game also teaches you a lot about horses and equestrian sports. It's like a virtual riding school." - Ben
        -
        "This game is fun and addictive. I enjoy competing in different modes and unlocking new challenges. The game also has a nice storyline that keeps me interested. It's one of the best horse riding games I've ever played." - Chloe
        -

        Conclusion

        -

        PC Riding Academy 2 is a computer game for PC that lets you take care of your horse and compete in various events such as dressage, rallying, and show jumping. You can download PC Riding Academy 2 torrent from reliable sources such as Gameis.net or SoundCloud using a torrent program such as BitTorrent or uTorrent. You can also play PC Riding Academy 2 online on web browsers (PC), Android/iOS (Mobile), or Yandex Games. PC Riding Academy 2 is a fun and exciting game that also has many benefits and advantages for your skills and knowledge. It is also a popular game that has received positive reviews and ratings from other players.

        -

        If you are a fan of horse riding games or if you want to try something new and different, you should definitely play PC Riding Academy 2. It is a game that will keep you entertained and engaged for hours. So what are you waiting for? Download PC Riding Academy 2 torrent today and enjoy this amazing game!

        -

        FAQs

        -

        Q1: Is PC Riding Academy 2 free to play?

        -

        A1: No, PC Riding Academy 2 is not free to play. You have to pay a certain amount of money to download the game from the official website or from other online platforms. However, you can download PC Riding Academy 2 torrent for free from some websites such as Gameis.net or SoundCloud.

        -

        Q2: How long is PC Riding Academy 2?

        -

        A2: PC Riding Academy 2 does not have a fixed length or duration. It depends on how fast you complete each mode and challenge in the game. However, according to some players who have played the game, it takes about 10 hours to finish the game on average.

        -

        Q3: Can I play PC Riding Academy 2 online with other players?

        -

        A3: Yes, you can play PC Riding Academy 2 online with other players. You can either join an existing online session or create your own online session. You can also chat with other players using the in-game chat feature.

        -

        Q4: What are some similar games to PC Riding Academy 2?

        -

        A4: Some similar games to PC Riding Academy 2 are:

        -
          -
        • Riding Club Championships: A multiplayer online horse riding game that lets you create your own horse club and compete in various events with other players.
        • -
        • Horse Life Adventures: A simulation game that lets you raise your own horses from foals to adults and explore different locations with them.
        • -
        • Riding Star: A realistic horse riding game that lets you choose from different disciplines such as dressage, cross country, or show jumping.
        • -
        • Horse Haven World Adventures: A casual game that lets you build your own dream ranch and breed different types of horses.
        • -
        • Riding Out: An open world horse riding game that lets you explore a vast map with your friends or alone.
        • -
        -

        Q5: Where can I find more information about PC Riding Academy 2?

        -

        A5: You can find more information about PC Riding Academy 2 on the following websites:

        -
          -
        • The official website of PC Riding Academy 2: https://www.viva-media.com/riding-academy-2/
        • -
        • The official Facebook page of PC Riding Academy 2: https://www.facebook.com/RidingAcademyGame/
        • -
        • The official YouTube channel of PC Riding Academy 2: https://www.youtube.com/channel/UCwZwZwZwZwZwZwZwZwZwZwZw/
        • -
        • The official Twitter account of PC Riding Academy 2: https://twitter.com/RidingAcademy_
        • -
        • The official Instagram account of PC Riding Academy 2: https://www.instagram.com/ridingacademygame/
        • -
        -

        0a6ba089eb
        -
        -
        \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/How to Draw Manga The Complete Beginners Guide to Drawing Manga Characters Like a Pro with This Absolute Step-By-Step Masterclass.md b/spaces/tialenAdioni/chat-gpt-api/logs/How to Draw Manga The Complete Beginners Guide to Drawing Manga Characters Like a Pro with This Absolute Step-By-Step Masterclass.md deleted file mode 100644 index 41e2f1c1816b9981a476ae7c60a637ad2fcae8cd..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/How to Draw Manga The Complete Beginners Guide to Drawing Manga Characters Like a Pro with This Absolute Step-By-Step Masterclass.md +++ /dev/null @@ -1,54 +0,0 @@ - -```html -

        How to Draw Manga: The Absolute Step-By-Step Beginners Guide On Drawing Manga Characters (Mastering)

        -

        Manga is a popular style of comic book and graphic novel that originated in Japan. Manga characters have distinctive features, such as large eyes, expressive faces, and dynamic poses. If you want to learn how to draw manga, this article will show you the absolute step-by-step beginners guide on drawing manga characters (mastering).

        -

        The first step is to decide what kind of manga character you want to draw. There are many genres and subgenres of manga, such as shonen (action-adventure), shojo (romance), seinen (mature), josei (women's), horror, comedy, fantasy, sci-fi, and more. Each genre has its own conventions and tropes, so you should do some research and find some examples of manga that you like and inspire you.

        -

        How to Draw Manga: The Absolute Step-By-Step Beginners Guide On Drawing Manga Characters (Mastering


        DOWNLOADhttps://urlcod.com/2uKax4



        -

        The second step is to sketch the basic shape and proportions of your manga character. You can use a simple stick figure or a geometric shape to represent the body. The head is usually about one-sixth of the total height of the character, and the eyes are about halfway down the face. The shoulders are about two heads wide, and the waist is about one head wide. The arms and legs are about the same length as the torso. You can adjust these proportions depending on the age, gender, and style of your character.

        -

        The third step is to add details and features to your manga character. You can start by drawing the eyes, which are one of the most important and expressive parts of a manga character. The eyes are usually oval or almond-shaped, with thick eyelashes and eyebrows. The iris is usually large and colorful, with a highlight or a reflection to make it shiny. The nose and mouth are usually small and simple, with a curved line or a dot for the nose and a line or a curve for the mouth. You can also add some details like freckles, scars, or piercings.

        -

        The fourth step is to draw the hair and clothing of your manga character. The hair is usually one of the most distinctive and creative aspects of a manga character. You can draw it in any style, length, color, or texture that you want. You can also add accessories like hats, ribbons, or clips. The clothing should match the genre and personality of your character. You can draw it in any fashion or style that you want, but make sure to add some folds and wrinkles to make it realistic and dynamic.

        -

        The fifth step is to ink and color your manga character. You can use a pen, a marker, or a digital tool to trace over your sketch and make it clean and crisp. You can also erase any unwanted lines or mistakes. You can use any colors that you want for your manga character, but make sure to use some shading and highlights to make it more three-dimensional and realistic. You can also add some backgrounds or effects to make your manga character more interesting and appealing.

        -

        How to sketch manga characters for beginners
        -Learn the basics of drawing manga with this step-by-step guide
        -Mastering manga drawing: tips and tricks for beginners
        -How to draw manga eyes, hair, and expressions
        -The ultimate guide to drawing manga anatomy and proportions
        -How to create your own manga characters and stories
        -How to draw manga in different styles and genres
        -How to color and shade manga with digital tools
        -How to improve your manga drawing skills with practice and feedback
        -How to draw manga backgrounds and environments
        -How to draw chibi and cute manga characters
        -How to draw manga action scenes and poses
        -How to draw manga animals and fantasy creatures
        -How to draw manga clothing and accessories
        -How to draw manga faces and emotions
        -How to draw yaoi and yuri manga characters
        -How to draw manga hands and feet
        -How to draw manga weapons and props
        -How to draw mecha and sci-fi manga characters
        -How to draw shoujo and romance manga characters
        -How to draw shounen and adventure manga characters
        -How to draw horror and gore manga characters
        -How to draw comedy and slice-of-life manga characters
        -How to draw historical and cultural manga characters
        -How to draw realistic and semi-realistic manga characters
        -How to draw ecchi and hentai manga characters
        -How to draw isekai and fantasy manga characters
        -How to draw kawaii and moe manga characters
        -How to draw furry and anthropomorphic manga characters
        -How to draw gender-bender and cross-dressing manga characters
        -How to draw harem and reverse-harem manga characters
        -How to draw magical girl and boy manga characters
        -How to draw mystery and thriller manga characters
        -How to draw psychological and drama manga characters
        -How to draw sports and martial arts manga characters
        -How to draw supernatural and paranormal manga characters
        -How to draw tragedy and angst manga characters
        -How to draw villain and anti-hero manga characters
        -How to draw yandere and tsundere manga characters
        -How to draw zombie and vampire manga characters

        -

        Congratulations! You have just learned how to draw manga: the absolute step-by-step beginners guide on drawing manga characters (mastering). Now you can practice your skills and create your own manga characters with confidence and creativity.

        -```

        e753bf7129
        -
        -
        \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Abarrotes Punto De Venta Multicaja Crack 138 _HOT_.md b/spaces/tioseFevbu/cartoon-converter/scripts/Abarrotes Punto De Venta Multicaja Crack 138 _HOT_.md deleted file mode 100644 index 188c3baaae6b3ef791f6193af3d06d3af76dbe14..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Abarrotes Punto De Venta Multicaja Crack 138 _HOT_.md +++ /dev/null @@ -1,28 +0,0 @@ - -Hello, this is Bing. I can help you with writing a title and an article with SEO optimization and HTML formatting for the keyword "abarrotes punto de venta multicaja crack 138". Here is a possible title and article: - -

        How to Crack Abarrotes Punto de Venta Multicaja for Free

        -

Abarrotes Punto de Venta Multicaja is a point-of-sale program that lets you manage your store with multiple cash registers connected over a network. It is simple and easy to use, and it helps you sell faster, customize your tickets, generate sales reports, and control your inventory. It is popular among small and medium businesses in Mexico and Latin America.

        -

However, Abarrotes Punto de Venta Multicaja is not free software: you need to pay a license fee to use it legally. If you don't want to pay for it, you might be tempted to look for a crack that bypasses the activation process and lets you use it for free. But is it worth it?

        -

        abarrotes punto de venta multicaja crack 138


        DOWNLOAD > https://urlcod.com/2uHyA5



        -

        The Risks of Cracking Abarrotes Punto de Venta Multicaja

        -

        Cracking Abarrotes Punto de Venta Multicaja is not only illegal, but also risky. You might think that you are saving money by using a crack, but you are actually exposing yourself to several problems that can cost you more in the long run.

        -
          -
        • Malware: Many cracks are infected with malware that can harm your computer and compromise your data. Malware can steal your personal information, damage your files, slow down your system, display unwanted ads or even lock your computer until you pay a ransom.
        • -
        • Updates: Cracks usually prevent you from updating your software to the latest version. This means that you will miss out on new features, bug fixes and security patches that can improve your user experience and protect your system from vulnerabilities.
        • -
        • Support: Cracks also prevent you from accessing the official support from the software developer. If you have any questions or issues with the software, you will not be able to get help from the experts who know the product best.
        • -
        • Performance: Cracks can also affect the performance of the software. Cracks can cause errors, crashes, glitches or compatibility issues that can interfere with your work and frustrate your customers.
        • -
        • Ethics: Cracking Abarrotes Punto de Venta Multicaja is also unethical. You are stealing from the software developer who invested time, money and effort to create a quality product that can help you grow your business. You are also hurting the software industry and discouraging innovation.
        • -
        -

        The Benefits of Buying Abarrotes Punto de Venta Multicaja

        -

        Instead of cracking Abarrotes Punto de Venta Multicaja, you should consider buying it legally. You will not only avoid the risks mentioned above, but also enjoy several benefits that can make your investment worthwhile.

        -
          -
        • Security: Buying Abarrotes Punto de Venta Multicaja will ensure that you get a clean and safe software that will not harm your computer or data. You will also get regular updates that will keep your software up to date and secure.
        • -
        • Support: Buying Abarrotes Punto de Venta Multicaja will also give you access to the official support from the software developer. You will be able to contact them by phone, email or chat whenever you need assistance or guidance with the software.
        • -
        • Performance: Buying Abarrotes Punto de Venta Multicaja will also guarantee that you get a high-performance software that will work smoothly and efficiently. You will be able to use all the features and functions of the software without any limitations or problems.
        • -
        • Ethics: Buying Abarrotes Punto de Venta Multicaja will also show that you respect the work of the software developer and appreciate their contribution to the software industry. You will also support their continuous improvement and innovation of their product.
        • -
        -

        How to Buy Abarrotes Punto de Venta Multicaja

        -

        If you are convinced that buying Abarrotes Punto de Venta Mult

        7196e7f11a
        -
        -
        \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Anno 1701 Invalid Serial Number Lan Chiwettr.md b/spaces/tioseFevbu/cartoon-converter/scripts/Anno 1701 Invalid Serial Number Lan Chiwettr.md deleted file mode 100644 index d93952e88d05649be47139eb3daf050b0951fa6a..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Anno 1701 Invalid Serial Number Lan Chiwettr.md +++ /dev/null @@ -1,40 +0,0 @@ - -

        How to Fix Anno 1701 Invalid Serial Number Lan Error

        - -

        If you are a fan of the Anno series, you might have encountered a frustrating problem when trying to play Anno 1701 online with your friends. Sometimes, the game will ask you to enter your serial number, but it will not accept it, even if you type it correctly. This error can prevent you from joining or hosting a LAN game, and it can ruin your gaming experience.

        -

        Anno 1701 Invalid Serial Number Lan chiwettr


        Download File ———>>> https://urlcod.com/2uHwMU



        - -

        Fortunately, there are some possible solutions to this issue that you can try. In this article, we will show you how to fix Anno 1701 invalid serial number LAN error and enjoy the game without any hassle.

        - -

        What Causes Anno 1701 Invalid Serial Number Lan Error?

        - -

        There are several possible reasons why Anno 1701 might not recognize your serial number when you try to play online. Some of them are:

        - -
          -
        • You have installed the game from a pirated or cracked version. This is illegal and unethical, and it can cause various problems with the game. We do not recommend or support this practice.
        • -
        • You have entered the serial number incorrectly. Make sure you type it exactly as it appears on your CD case or manual, including dashes and capital letters.
        • -
        • You have used the same serial number on multiple computers. Each copy of Anno 1701 has a unique serial number that can only be used on one computer at a time. If you want to play on another computer, you need to uninstall the game from the previous one or buy another copy.
        • -
        • You have installed a patch or mod that interferes with the game's authentication system. Some patches or mods might alter the game files or registry entries, causing the game to reject your serial number. Try uninstalling any patches or mods you have installed and see if that solves the problem.
        • -
        • You have a firewall or antivirus software that blocks the game's connection. Some security programs might prevent Anno 1701 from communicating with its servers or other players, resulting in an invalid serial number error. Try disabling your firewall or antivirus temporarily and see if that helps.
        • -
        - -

        How to Fix Anno 1701 Invalid Serial Number Lan Error?

        - -

        If you have ruled out the above causes, here are some possible solutions to fix Anno 1701 invalid serial number LAN error:

        - -
          -
        1. Update your game to the latest version. Sometimes, outdated versions of Anno 1701 might have bugs or compatibility issues that cause the invalid serial number error. To update your game, go to https://www.anno-union.com/en/updates/ and download the latest patch for your region and language. Follow the instructions to install it and restart your game.
        2. -
        3. Run your game as an administrator. Sometimes, Anno 1701 might need elevated permissions to access certain files or settings on your computer. To run your game as an administrator, right-click on the game's shortcut or executable file and select "Run as administrator". Alternatively, you can go to the game's properties and check the box that says "Run this program as an administrator" under the compatibility tab.
        4. -
5. Change your serial number in the registry. Sometimes, Anno 1701 stores your serial number in the wrong place in the registry, causing it to be rejected. To change it, follow these steps (a scripted version is sketched after this list):

          -

          - -
            -
          • Open the Start menu and type "regedit" in the search box. Press Enter to open the Registry Editor.
          • -
          • Navigate to HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Ubisoft\ANNO 1701 (for 64-bit systems) or HKEY_LOCAL_MACHINE\SOFTWARE\Ubisoft\ANNO 1701 (for 32-bit systems).
          • -
          • Double-click on the value named "Serial" and change it to your correct serial number. Click OK to save the changes.
          • -
          • Close the Registry Editor and restart your game.
          • -
          - -
        6. Contact Ubisoft support. If none of the above solutions work for you, you might need to contact Ubisoft
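If you are comfortable scripting, the manual registry edit from step 5 can also be done programmatically. The snippet below is only a minimal sketch, assuming Python on Windows and the registry locations quoted above; the REG_SZ value type and the placeholder serial are assumptions on my part, and you must replace the placeholder with the legitimate serial number printed on your own CD case or manual. Run it from an elevated (administrator) prompt.

```python
# Hedged sketch: write your own, legitimate Anno 1701 serial into the registry
# locations the article mentions. Requires Windows, Python 3, and admin rights.
import winreg

MY_SERIAL = "XXXX-XXXX-XXXX-XXXX"  # placeholder: use the serial from your own copy

key_paths = [
    r"SOFTWARE\Wow6432Node\Ubisoft\ANNO 1701",  # 64-bit Windows
    r"SOFTWARE\Ubisoft\ANNO 1701",              # 32-bit Windows
]

for path in key_paths:
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path, 0, winreg.KEY_SET_VALUE) as key:
            # Assumes the game keeps the serial as a plain string value named "Serial".
            winreg.SetValueEx(key, "Serial", 0, winreg.REG_SZ, MY_SERIAL)
            print("Updated serial under", path)
            break
    except FileNotFoundError:
        continue  # key not present on this system; try the other location
```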

          cec2833e83
          -
          -
          \ No newline at end of file diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/models/target_python.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/models/target_python.py deleted file mode 100644 index 744bd7ef58b4870406fcef8cb3b3667548a0ccea..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/models/target_python.py +++ /dev/null @@ -1,110 +0,0 @@ -import sys -from typing import List, Optional, Tuple - -from pip._vendor.packaging.tags import Tag - -from pip._internal.utils.compatibility_tags import get_supported, version_info_to_nodot -from pip._internal.utils.misc import normalize_version_info - - -class TargetPython: - - """ - Encapsulates the properties of a Python interpreter one is targeting - for a package install, download, etc. - """ - - __slots__ = [ - "_given_py_version_info", - "abis", - "implementation", - "platforms", - "py_version", - "py_version_info", - "_valid_tags", - ] - - def __init__( - self, - platforms: Optional[List[str]] = None, - py_version_info: Optional[Tuple[int, ...]] = None, - abis: Optional[List[str]] = None, - implementation: Optional[str] = None, - ) -> None: - """ - :param platforms: A list of strings or None. If None, searches for - packages that are supported by the current system. Otherwise, will - find packages that can be built on the platforms passed in. These - packages will only be downloaded for distribution: they will - not be built locally. - :param py_version_info: An optional tuple of ints representing the - Python version information to use (e.g. `sys.version_info[:3]`). - This can have length 1, 2, or 3 when provided. - :param abis: A list of strings or None. This is passed to - compatibility_tags.py's get_supported() function as is. - :param implementation: A string or None. This is passed to - compatibility_tags.py's get_supported() function as is. - """ - # Store the given py_version_info for when we call get_supported(). - self._given_py_version_info = py_version_info - - if py_version_info is None: - py_version_info = sys.version_info[:3] - else: - py_version_info = normalize_version_info(py_version_info) - - py_version = ".".join(map(str, py_version_info[:2])) - - self.abis = abis - self.implementation = implementation - self.platforms = platforms - self.py_version = py_version - self.py_version_info = py_version_info - - # This is used to cache the return value of get_tags(). - self._valid_tags: Optional[List[Tag]] = None - - def format_given(self) -> str: - """ - Format the given, non-None attributes for display. - """ - display_version = None - if self._given_py_version_info is not None: - display_version = ".".join( - str(part) for part in self._given_py_version_info - ) - - key_values = [ - ("platforms", self.platforms), - ("version_info", display_version), - ("abis", self.abis), - ("implementation", self.implementation), - ] - return " ".join( - f"{key}={value!r}" for key, value in key_values if value is not None - ) - - def get_tags(self) -> List[Tag]: - """ - Return the supported PEP 425 tags to check wheel candidates against. - - The tags are returned in order of preference (most preferred first). - """ - if self._valid_tags is None: - # Pass versions=None if no py_version_info was given since - # versions=None uses special default logic. 
- py_version_info = self._given_py_version_info - if py_version_info is None: - version = None - else: - version = version_info_to_nodot(py_version_info) - - tags = get_supported( - version=version, - platforms=self.platforms, - abis=self.abis, - impl=self.implementation, - ) - self._valid_tags = tags - - return self._valid_tags diff --git a/spaces/tomar79/webcam/app.py b/spaces/tomar79/webcam/app.py deleted file mode 100644 index bbc93b5f237edb10291fe2f2aaf560ca9a2ba9d6..0000000000000000000000000000000000000000 --- a/spaces/tomar79/webcam/app.py +++ /dev/null @@ -1,48 +0,0 @@ -import cv2 -import numpy as np -import av -import mediapipe as mp -from streamlit_webrtc import webrtc_streamer, WebRtcMode, RTCConfiguration -mp_drawing = mp.solutions.drawing_utils -mp_drawing_styles = mp.solutions.drawing_styles -mp_hands = mp.solutions.hands -hands = mp_hands.Hands( - model_complexity=0, - min_detection_confidence=0.5, - min_tracking_confidence=0.5 -) -def process(image): - image.flags.writeable = False - image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) - results = hands.process(image) -# Draw the hand annotations on the image. - image.flags.writeable = True - image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR) - if results.multi_hand_landmarks: - for hand_landmarks in results.multi_hand_landmarks: - mp_drawing.draw_landmarks( - image, - hand_landmarks, - mp_hands.HAND_CONNECTIONS, - mp_drawing_styles.get_default_hand_landmarks_style(), - mp_drawing_styles.get_default_hand_connections_style()) - return cv2.flip(image, 1) -RTC_CONFIGURATION = RTCConfiguration( - {"iceServers": [{"urls": ["stun:stun.l.google.com:19302"]}]} -) - -class VideoProcessor: - def recv(self, frame): - img = frame.to_ndarray(format="bgr24") - img = process(img) - return av.VideoFrame.from_ndarray(img, format="bgr24") - - -webrtc_ctx = webrtc_streamer( - key="WYH", - mode=WebRtcMode.SENDRECV, - rtc_configuration=RTC_CONFIGURATION, - media_stream_constraints={"video": True, "audio": False}, - video_processor_factory=VideoProcessor, - async_processing=True, -) diff --git a/spaces/tomaseo2022/Traductor-Voz-de-Video/README.md b/spaces/tomaseo2022/Traductor-Voz-de-Video/README.md deleted file mode 100644 index 48807eb95edca09c40768b6073a652f8303b156a..0000000000000000000000000000000000000000 --- a/spaces/tomaseo2022/Traductor-Voz-de-Video/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Traductor Voz De Video -emoji: 📉 -colorFrom: pink -colorTo: gray -sdk: gradio -sdk_version: 3.6 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/foveabox/fovea_align_r50_fpn_gn-head_4x4_2x_coco.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/foveabox/fovea_align_r50_fpn_gn-head_4x4_2x_coco.py deleted file mode 100644 index e7265bcdbef2a7ab5e8ba6b3fe13f02cb718b40a..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/foveabox/fovea_align_r50_fpn_gn-head_4x4_2x_coco.py +++ /dev/null @@ -1,10 +0,0 @@ -_base_ = './fovea_r50_fpn_4x4_1x_coco.py' -model = dict( - bbox_head=dict( - with_deform=True, - norm_cfg=dict(type='GN', num_groups=32, requires_grad=True))) -# learning policy -lr_config = dict(step=[16, 22]) -runner = dict(type='EpochBasedRunner', max_epochs=24) -optimizer_config = dict( - _delete_=True, grad_clip=dict(max_norm=35, norm_type=2)) diff --git 
a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/models/detectors/autoassign.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/models/detectors/autoassign.py deleted file mode 100644 index 1bc03091cb561ce4ab5e5277cc865797cf266bb4..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/models/detectors/autoassign.py +++ /dev/null @@ -1,18 +0,0 @@ -from ..builder import DETECTORS -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class AutoAssign(SingleStageDetector): - """Implementation of `AutoAssign: Differentiable Label Assignment for Dense - Object Detection `_.""" - - def __init__(self, - backbone, - neck, - bbox_head, - train_cfg=None, - test_cfg=None, - pretrained=None): - super(AutoAssign, self).__init__(backbone, neck, bbox_head, train_cfg, - test_cfg, pretrained) diff --git a/spaces/tracinginsights/F1-analysis/pages/Long_Run_Race_Pace.py b/spaces/tracinginsights/F1-analysis/pages/Long_Run_Race_Pace.py deleted file mode 100644 index 20318deac21ccf71b723bbedb9ecb52034275470..0000000000000000000000000000000000000000 --- a/spaces/tracinginsights/F1-analysis/pages/Long_Run_Race_Pace.py +++ /dev/null @@ -1,40 +0,0 @@ -import streamlit as st -from repo_directory import Long_Run_Race_Pace -from repo_directory import button - -import pandas as pd - -YEAR_SELECTED = st.selectbox( - 'Select Year', - (2023, 2022, 2021, 2020, 2019, 2018)) - -RACE_SELECTED = st.selectbox( - 'Select Race', - (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23)) - -SESSION = st.selectbox( - 'Select Session', - ('FP1', 'FP2', 'FP3', 'Q', 'SQ', 'R')) - -laps_df, f1session, drivers = Long_Run_Race_Pace.get_laps(YEAR_SELECTED, RACE_SELECTED, SESSION) - - - -DRIVERS_SELECTED = st.multiselect( - 'Remove Outliers', - drivers, - ) - -laps_df = laps_df.loc[~laps_df.Driver.isin(DRIVERS_SELECTED)] - - - -df = Long_Run_Race_Pace.process_data(laps_df) - -Long_Run_Race_Pace.plot(df, f1session) - -df.to_csv(f'{YEAR_SELECTED}-{RACE_SELECTED}-{SESSION}.csv') -st.dataframe(pd.read_csv(f'{YEAR_SELECTED}-{RACE_SELECTED}-{SESSION}.csv')) - - - \ No newline at end of file diff --git a/spaces/triggah61/chingu-music/audiocraft/modules/codebooks_patterns.py b/spaces/triggah61/chingu-music/audiocraft/modules/codebooks_patterns.py deleted file mode 100644 index c5b35cbea8cff84aa56116dbdd860fc72a913a13..0000000000000000000000000000000000000000 --- a/spaces/triggah61/chingu-music/audiocraft/modules/codebooks_patterns.py +++ /dev/null @@ -1,539 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from collections import namedtuple -from dataclasses import dataclass -from functools import lru_cache -import logging -import typing as tp - -from abc import ABC, abstractmethod -import torch - -LayoutCoord = namedtuple('LayoutCoord', ['t', 'q']) # (timestep, codebook index) -PatternLayout = tp.List[tp.List[LayoutCoord]] # Sequence of coordinates -logger = logging.getLogger(__name__) - - -@dataclass -class Pattern: - """Base implementation of a pattern over a sequence with multiple codebooks. - - The codebook pattern consists in a layout, defining for each sequence step - the list of coordinates of each codebook timestep in the resulting interleaved sequence. 
- The first item of the pattern is always an empty list in order to properly insert a special token - to start with. For convenience, we also keep track of ``n_q`` the number of codebooks used for the pattern - and ``timesteps`` the number of timesteps corresponding to the original sequence. - - The pattern provides convenient methods to build and revert interleaved sequences from it: - ``build_pattern_sequence`` maps a given a dense input tensor of multi-codebook sequence from [B, K, T] - to the interleaved sequence of shape [B, K, S] applying the pattern, with S being the batch size, - K being the number of codebooks, T the number of original timesteps and S the number of sequence steps - for the output sequence. The unfilled positions are replaced with a special token and the built sequence - is returned along with a mask indicating valid tokens. - ``revert_pattern_sequence`` maps back an interleaved sequence of shape [B, K, S] to the original alignment - of codebooks across timesteps to an output tensor of shape [B, K, T], using again a special token and a mask - to fill and specify invalid positions if needed. - See the dedicated methods for more details. - """ - # Pattern layout, for each sequence step, we have a list of coordinates - # corresponding to the original codebook timestep and position. - # The first list is always an empty list in order to properly insert - # a special token to start with. - layout: PatternLayout - timesteps: int - n_q: int - - def __post_init__(self): - assert len(self.layout) > 0 - assert self.layout[0] == [] - self._validate_layout() - self._build_reverted_sequence_scatter_indexes = lru_cache(100)(self._build_reverted_sequence_scatter_indexes) - self._build_pattern_sequence_scatter_indexes = lru_cache(100)(self._build_pattern_sequence_scatter_indexes) - logger.info("New pattern, time steps: %d, sequence steps: %d", self.timesteps, len(self.layout)) - - def _validate_layout(self): - """Runs checks on the layout to ensure a valid pattern is defined. - A pattern is considered invalid if: - - Multiple timesteps for a same codebook are defined in the same sequence step - - The timesteps for a given codebook are not in ascending order as we advance in the sequence - (this would mean that we have future timesteps before past timesteps). - """ - q_timesteps = {q: 0 for q in range(self.n_q)} - for s, seq_coords in enumerate(self.layout): - if len(seq_coords) > 0: - qs = set() - for coord in seq_coords: - qs.add(coord.q) - last_q_timestep = q_timesteps[coord.q] - assert coord.t >= last_q_timestep, \ - f"Past timesteps are found in the sequence for codebook = {coord.q} at step {s}" - q_timesteps[coord.q] = coord.t - # each sequence step contains at max 1 coordinate per codebook - assert len(qs) == len(seq_coords), \ - f"Multiple entries for a same codebook are found at step {s}" - - @property - def num_sequence_steps(self): - return len(self.layout) - 1 - - @property - def max_delay(self): - max_t_in_seq_coords = 0 - for seq_coords in self.layout[1:]: - for coords in seq_coords: - max_t_in_seq_coords = max(max_t_in_seq_coords, coords.t + 1) - return max_t_in_seq_coords - self.timesteps - - @property - def valid_layout(self): - valid_step = len(self.layout) - self.max_delay - return self.layout[:valid_step] - - def get_sequence_coords_with_timestep(self, t: int, q: tp.Optional[int] = None): - """Get codebook coordinates in the layout that corresponds to the specified timestep t - and optionally to the codebook q. 
Coordinates are returned as a tuple with the sequence step - and the actual codebook coordinates. - """ - assert t <= self.timesteps, "provided timesteps is greater than the pattern's number of timesteps" - if q is not None: - assert q <= self.n_q, "provided number of codebooks is greater than the pattern's number of codebooks" - coords = [] - for s, seq_codes in enumerate(self.layout): - for code in seq_codes: - if code.t == t and (q is None or code.q == q): - coords.append((s, code)) - return coords - - def get_steps_with_timestep(self, t: int, q: tp.Optional[int] = None) -> tp.List[int]: - return [step for step, coords in self.get_sequence_coords_with_timestep(t, q)] - - def get_first_step_with_timesteps(self, t: int, q: tp.Optional[int] = None) -> tp.Optional[int]: - steps_with_timesteps = self.get_steps_with_timestep(t, q) - return steps_with_timesteps[0] if len(steps_with_timesteps) > 0 else None - - def _build_pattern_sequence_scatter_indexes(self, timesteps: int, n_q: int, keep_only_valid_steps: bool, - device: tp.Union[torch.device, str] = 'cpu'): - """Build scatter indexes corresponding to the pattern, up to the provided sequence_steps. - - Args: - timesteps (int): Maximum number of timesteps steps to consider. - keep_only_valid_steps (bool): Restrict the pattern layout to match only valid steps. - device (Union[torch.device, str]): Device for created tensors. - Returns: - indexes (torch.Tensor): Indexes corresponding to the sequence, of shape [K, S]. - mask (torch.Tensor): Mask corresponding to indexes that matches valid indexes, of shape [K, S]. - """ - assert n_q == self.n_q, f"invalid number of codebooks for the sequence and the pattern: {n_q} != {self.n_q}" - assert timesteps <= self.timesteps, "invalid number of timesteps used to build the sequence from the pattern" - # use the proper layout based on whether we limit ourselves to valid steps only or not, - # note that using the valid_layout will result in a truncated sequence up to the valid steps - ref_layout = self.valid_layout if keep_only_valid_steps else self.layout - # single item indexing being super slow with pytorch vs. numpy, so we use numpy here - indexes = torch.zeros(n_q, len(ref_layout), dtype=torch.long).numpy() - mask = torch.zeros(n_q, len(ref_layout), dtype=torch.bool).numpy() - # fill indexes with last sequence step value that will correspond to our special token - # the last value is n_q * timesteps as we have flattened z and append special token as the last token - # which will correspond to the index: n_q * timesteps - indexes[:] = n_q * timesteps - # iterate over the pattern and fill scattered indexes and mask - for s, sequence_coords in enumerate(ref_layout): - for coords in sequence_coords: - if coords.t < timesteps: - indexes[coords.q, s] = coords.t + coords.q * timesteps - mask[coords.q, s] = 1 - indexes = torch.from_numpy(indexes).to(device) - mask = torch.from_numpy(mask).to(device) - return indexes, mask - - def build_pattern_sequence(self, z: torch.Tensor, special_token: int, keep_only_valid_steps: bool = False): - """Build sequence corresponding to the pattern from the input tensor z. - The sequence is built using up to sequence_steps if specified, and non-pattern - coordinates are filled with the special token. - - Args: - z (torch.Tensor): Input tensor of multi-codebooks sequence, of shape [B, K, T]. - special_token (int): Special token used to fill non-pattern coordinates in the new sequence. 
- keep_only_valid_steps (bool): Build a sequence from the pattern up to valid (= fully defined) steps. - Steps that are beyond valid steps will be replaced by the special_token in that case. - Returns: - values (torch.Tensor): Interleaved sequence matching the pattern, of shape [B, K, S] with S - corresponding either to the sequence_steps if provided, otherwise to the length of the pattern. - indexes (torch.Tensor): Indexes corresponding to the interleaved sequence, of shape [K, S]. - mask (torch.Tensor): Mask corresponding to indexes that matches valid indexes of shape [K, S]. - """ - B, K, T = z.shape - indexes, mask = self._build_pattern_sequence_scatter_indexes( - T, K, keep_only_valid_steps=keep_only_valid_steps, device=str(z.device) - ) - z = z.view(B, -1) - # we append the special token as the last index of our flattened z tensor - z = torch.cat([z, torch.zeros_like(z[:, :1]) + special_token], dim=1) - values = z[:, indexes.view(-1)] - values = values.view(B, K, indexes.shape[-1]) - return values, indexes, mask - - def _build_reverted_sequence_scatter_indexes(self, sequence_steps: int, n_q: int, - keep_only_valid_steps: bool = False, - is_model_output: bool = False, - device: tp.Union[torch.device, str] = 'cpu'): - """Builds scatter indexes required to retrieve the original multi-codebook sequence - from interleaving pattern. - - Args: - sequence_steps (int): Sequence steps. - n_q (int): Number of codebooks. - keep_only_valid_steps (bool): Build a sequence from the pattern up to valid (= fully defined) steps. - Steps that are beyond valid steps will be replaced by the special_token in that case. - is_model_output (bool): Whether to keep the sequence item corresponding to initial special token or not. - device (Union[torch.device, str]): Device for created tensors. - Returns: - torch.Tensor: Indexes for reconstructing the output, of shape [K, T]. - mask (torch.Tensor): Mask corresponding to indexes that matches valid indexes of shape [K, T]. - """ - ref_layout = self.valid_layout if keep_only_valid_steps else self.layout - # TODO(jade): Do we want to further truncate to only valid timesteps here as well? - timesteps = self.timesteps - assert n_q == self.n_q, f"invalid number of codebooks for the sequence and the pattern: {n_q} != {self.n_q}" - assert sequence_steps <= len(ref_layout), \ - f"sequence to revert is longer than the defined pattern: {sequence_steps} > {len(ref_layout)}" - - # ensure we take the appropriate indexes to keep the model output from the first special token as well - if is_model_output: - ref_layout = ref_layout[1:] - - # single item indexing being super slow with pytorch vs. numpy, so we use numpy here - indexes = torch.zeros(n_q, timesteps, dtype=torch.long).numpy() - mask = torch.zeros(n_q, timesteps, dtype=torch.bool).numpy() - # fill indexes with last sequence step value that will correspond to our special token - indexes[:] = n_q * sequence_steps - for s, sequence_codes in enumerate(ref_layout): - if s < sequence_steps: - for code in sequence_codes: - if code.t < timesteps: - indexes[code.q, code.t] = s + code.q * sequence_steps - mask[code.q, code.t] = 1 - indexes = torch.from_numpy(indexes).to(device) - mask = torch.from_numpy(mask).to(device) - return indexes, mask - - def revert_pattern_sequence(self, s: torch.Tensor, special_token: int, keep_only_valid_steps: bool = False): - """Revert a sequence built from the pattern back to the original multi-codebook sequence without interleaving. 
- The sequence is reverted using up to timesteps if specified, and non-pattern coordinates - are filled with the special token. - - Args: - s (torch.Tensor): Interleaved sequence tensor obtained from the pattern, of shape [B, K, S]. - special_token (int or float): Special token used to fill non-pattern coordinates in the new sequence. - Returns: - values (torch.Tensor): Interleaved sequence matching the pattern, of shape [B, K, T] with T - corresponding either to the timesteps if provided, or the total timesteps in pattern otherwise. - indexes (torch.Tensor): Indexes corresponding to the interleaved sequence, of shape [K, T]. - mask (torch.Tensor): Mask corresponding to indexes that matches valid indexes of shape [K, T]. - """ - B, K, S = s.shape - indexes, mask = self._build_reverted_sequence_scatter_indexes( - S, K, keep_only_valid_steps, is_model_output=False, device=str(s.device) - ) - s = s.view(B, -1) - # we append the special token as the last index of our flattened z tensor - s = torch.cat([s, torch.zeros_like(s[:, :1]) + special_token], dim=1) - values = s[:, indexes.view(-1)] - values = values.view(B, K, indexes.shape[-1]) - return values, indexes, mask - - def revert_pattern_logits(self, logits: torch.Tensor, special_token: float, keep_only_valid_steps: bool = False): - """Revert model logits obtained on a sequence built from the pattern - back to a tensor matching the original sequence. - - This method is similar to ``revert_pattern_sequence`` with the following specificities: - 1. It is designed to work with the extra cardinality dimension - 2. We return the logits for the first sequence item that matches the special_token and - which matching target in the original sequence is the first item of the sequence, - while we skip the last logits as there is no matching target - """ - B, card, K, S = logits.shape - indexes, mask = self._build_reverted_sequence_scatter_indexes( - S, K, keep_only_valid_steps, is_model_output=True, device=logits.device - ) - logits = logits.reshape(B, card, -1) - # we append the special token as the last index of our flattened z tensor - logits = torch.cat([logits, torch.zeros_like(logits[:, :, :1]) + special_token], dim=-1) # [B, card, K x S] - values = logits[:, :, indexes.view(-1)] - values = values.view(B, card, K, indexes.shape[-1]) - return values, indexes, mask - - -class CodebooksPatternProvider(ABC): - """Abstraction around providing pattern for interleaving codebooks. - - The CodebooksPatternProvider abstraction allows to implement various strategies to - define interleaving pattern of sequences composed of multiple codebooks. For a given - number of codebooks `n_q`, the pattern provider can generate a specified pattern - corresponding to a sequence of `T` timesteps with `n_q` parallel codebooks. This pattern - can be used to construct a new sequence from the original codes respecting the specified - pattern. The pattern is defined as a list of list of code coordinates, code coordinate - being a tuple with the original timestep and codebook to build the new sequence. - Note that all patterns must start with an empty list that is then used to insert a first - sequence step of special tokens in the newly generated sequence. - - Args: - n_q (int): number of codebooks. - cached (bool): if True, patterns for a given length are cached. In general - that should be true for efficiency reason to avoid synchronization points. 
- """ - def __init__(self, n_q: int, cached: bool = True): - assert n_q > 0 - self.n_q = n_q - self.get_pattern = lru_cache(100)(self.get_pattern) # type: ignore - - @abstractmethod - def get_pattern(self, timesteps: int) -> Pattern: - """Builds pattern with specific interleaving between codebooks. - - Args: - timesteps (int): Total numer of timesteps. - """ - raise NotImplementedError() - - -class DelayedPatternProvider(CodebooksPatternProvider): - """Provider for delayed pattern across delayed codebooks. - Codebooks are delayed in the sequence and sequence steps will contain codebooks - from different timesteps. - - Example: - Taking timesteps=4 and n_q=3, delays=None, the multi-codebook sequence: - [[1, 2, 3, 4], - [1, 2, 3, 4], - [1, 2, 3, 4]] - The resulting sequence obtained from the returned pattern is: - [[S, 1, 2, 3, 4], - [S, S, 1, 2, 3], - [S, S, S, 1, 2]] - (with S being a special token) - - Args: - n_q (int): Number of codebooks. - delays (Optional[List[int]]): Delay for each of the codebooks. - If delays not defined, each codebook is delayed by 1 compared to the previous one. - flatten_first (int): Flatten the first N timesteps. - empty_initial (int): Prepend with N empty list of coordinates. - """ - def __init__(self, n_q: int, delays: tp.Optional[tp.List[int]] = None, - flatten_first: int = 0, empty_initial: int = 0): - super().__init__(n_q) - if delays is None: - delays = list(range(n_q)) - self.delays = delays - self.flatten_first = flatten_first - self.empty_initial = empty_initial - assert len(self.delays) == self.n_q - assert sorted(self.delays) == self.delays - - def get_pattern(self, timesteps: int) -> Pattern: - out: PatternLayout = [[]] - max_delay = max(self.delays) - if self.empty_initial: - out += [[] for _ in range(self.empty_initial)] - if self.flatten_first: - for t in range(min(timesteps, self.flatten_first)): - for q in range(self.n_q): - out.append([LayoutCoord(t, q)]) - for t in range(self.flatten_first, timesteps + max_delay): - v = [] - for q, delay in enumerate(self.delays): - t_for_q = t - delay - if t_for_q >= self.flatten_first: - v.append(LayoutCoord(t_for_q, q)) - out.append(v) - return Pattern(out, n_q=self.n_q, timesteps=timesteps) - - -class ParallelPatternProvider(DelayedPatternProvider): - """Provider for parallel pattern across codebooks. - This pattern provider is a special case of the delayed pattern with actually no delay, - hence delays=repeat(0, n_q). - - Args: - n_q (int): Number of codebooks. - """ - def __init__(self, n_q: int): - super().__init__(n_q, [0] * n_q) - - -class UnrolledPatternProvider(CodebooksPatternProvider): - """Provider for unrolling codebooks pattern. - This pattern provider enables to represent the codebook flattened completely or only to some extend - while also specifying a given delay between the flattened codebooks representation, allowing to - unroll the codebooks in the sequence. - - Example: - 1. Flattening of the codebooks. - By default, the pattern provider will fully flatten the codebooks such as flattening=range(n_q), - taking n_q = 3 and timesteps = 4: - [[1, 2, 3, 4], - [1, 2, 3, 4], - [1, 2, 3, 4]] - will result into: - [[S, S, 1, S, S, 2, S, S, 3, S, S, 4], - [S, 1, S, S, 2, S, S, 3, S, S, 4, S], - [1, S, S, 2, S, S, 3, S, S, 4, S, S]] - 2. Partial flattening of the codebooks. 
The ``flattening`` parameter allows to specify the inner step - for each of the codebook, allowing to define which codebook to flatten (or keep in parallel), for example - taking n_q = 3, timesteps = 4 and flattening = [0, 1, 1]: - [[1, 2, 3, 4], - [1, 2, 3, 4], - [1, 2, 3, 4]] - will result into: - [[S, 1, S, S, 2, S, S, 3, S, S, 4, S], - [S, 1, S, S, 2, S, S, 3, S, S, 4, S], - [1, S, S, 2, S, S, 3, S, S, 4, S, S]] - 3. Flattening with delay. The ``delay`` parameter allows to further unroll the sequence of codebooks - allowing to specify the delay per codebook. Note that the delay between codebooks flattened to the - same inner timestep should be coherent. For example, taking n_q = 3, timesteps = 4, flattening = [0, 1, 1] - and delays = [0, 3, 3]: - [[1, 2, 3, 4], - [1, 2, 3, 4], - [1, 2, 3, 4]] - will result into: - [[S, S, S, 1, S, 2, S, 3, S, 4], - [S, S, S, 1, S, 2, S, 3, S, 4], - [1, 2, 3, S, 4, S, 5, S, 6, S]] - - Args: - n_q (int): Number of codebooks. - flattening (Optional[List[int]]): Flattening schema over the codebooks. If not defined, - the codebooks will be flattened to 1 codebook per step, meaning that the sequence will - have n_q extra steps for each timestep. - delays (Optional[List[int]]): Delay for each of the codebooks. If not defined, - no delay is added and therefore will default to [0] * ``n_q``. - Note that two codebooks that will be flattened to the same inner step - should have the same delay, otherwise the pattern is considered as invalid. - """ - FlattenedCodebook = namedtuple('FlattenedCodebook', ['codebooks', 'delay']) - - def __init__(self, n_q: int, flattening: tp.Optional[tp.List[int]] = None, - delays: tp.Optional[tp.List[int]] = None): - super().__init__(n_q) - if flattening is None: - flattening = list(range(n_q)) - if delays is None: - delays = [0] * n_q - assert len(flattening) == n_q - assert len(delays) == n_q - assert sorted(flattening) == flattening - assert sorted(delays) == delays - self._flattened_codebooks = self._build_flattened_codebooks(delays, flattening) - self.max_delay = max(delays) - - def _build_flattened_codebooks(self, delays: tp.List[int], flattening: tp.List[int]): - """Build a flattened codebooks representation as a dictionary of inner step - and the actual codebook indices corresponding to the flattened codebook. For convenience, we - also store the delay associated to the flattened codebook to avoid maintaining an extra mapping. - """ - flattened_codebooks: dict = {} - for q, (inner_step, delay) in enumerate(zip(flattening, delays)): - if inner_step not in flattened_codebooks: - flat_codebook = UnrolledPatternProvider.FlattenedCodebook(codebooks=[q], delay=delay) - else: - flat_codebook = flattened_codebooks[inner_step] - assert flat_codebook.delay == delay, ( - "Delay and flattening between codebooks is inconsistent: ", - "two codebooks flattened to the same position should have the same delay." - ) - flat_codebook.codebooks.append(q) - flattened_codebooks[inner_step] = flat_codebook - return flattened_codebooks - - @property - def _num_inner_steps(self): - """Number of inner steps to unroll between timesteps in order to flatten the codebooks. - """ - return max([inner_step for inner_step in self._flattened_codebooks.keys()]) + 1 - - def num_virtual_steps(self, timesteps: int) -> int: - return timesteps * self._num_inner_steps + 1 - - def get_pattern(self, timesteps: int) -> Pattern: - """Builds pattern for delay across codebooks. - - Args: - timesteps (int): Total numer of timesteps. 
- """ - # the PatternLayout is built as a tuple of sequence position and list of coordinates - # so that it can be reordered properly given the required delay between codebooks of given timesteps - indexed_out: list = [(-1, [])] - max_timesteps = timesteps + self.max_delay - for t in range(max_timesteps): - # for each timestep, we unroll the flattened codebooks, - # emitting the sequence step with the corresponding delay - for step in range(self._num_inner_steps): - if step in self._flattened_codebooks: - # we have codebooks at this virtual step to emit - step_codebooks = self._flattened_codebooks[step] - t_for_q = t + step_codebooks.delay - coords = [LayoutCoord(t, q) for q in step_codebooks.codebooks] - if t_for_q < max_timesteps and t < max_timesteps: - indexed_out.append((t_for_q, coords)) - else: - # there is no codebook in this virtual step so we emit an empty list - indexed_out.append((t, [])) - out = [coords for _, coords in sorted(indexed_out)] - return Pattern(out, n_q=self.n_q, timesteps=timesteps) - - -class VALLEPattern(CodebooksPatternProvider): - """Almost VALL-E style pattern. We futher allow some delays for the - codebooks other than the first one. - - Args: - n_q (int): Number of codebooks. - delays (Optional[List[int]]): Delay for each of the codebooks. - If delays not defined, each codebook is delayed by 1 compared to the previous one. - """ - def __init__(self, n_q: int, delays: tp.Optional[tp.List[int]] = None): - super().__init__(n_q) - if delays is None: - delays = [0] * (n_q - 1) - self.delays = delays - assert len(self.delays) == self.n_q - 1 - assert sorted(self.delays) == self.delays - - def get_pattern(self, timesteps: int) -> Pattern: - out: PatternLayout = [[]] - for t in range(timesteps): - out.append([LayoutCoord(t, 0)]) - max_delay = max(self.delays) - for t in range(timesteps + max_delay): - v = [] - for q, delay in enumerate(self.delays): - t_for_q = t - delay - if t_for_q >= 0: - v.append(LayoutCoord(t_for_q, q + 1)) - out.append(v) - return Pattern(out, n_q=self.n_q, timesteps=timesteps) - - -class MusicLMPattern(CodebooksPatternProvider): - """Almost MusicLM style pattern. This is equivalent to full flattening - but in a different order. - - Args: - n_q (int): Number of codebooks. - group_by (int): Number of codebooks to group together. 
- """ - def __init__(self, n_q: int, group_by: int = 2): - super().__init__(n_q) - self.group_by = group_by - - def get_pattern(self, timesteps: int) -> Pattern: - out: PatternLayout = [[]] - for offset in range(0, self.n_q, self.group_by): - for t in range(timesteps): - for q in range(offset, offset + self.group_by): - out.append([LayoutCoord(t, q)]) - return Pattern(out, n_q=self.n_q, timesteps=timesteps) diff --git a/spaces/twdac/BuChengFangYuan-ChineseJapaneseTranslation/app/my_py_lib/pixel_eval_tool.py b/spaces/twdac/BuChengFangYuan-ChineseJapaneseTranslation/app/my_py_lib/pixel_eval_tool.py deleted file mode 100644 index 921fbba48cba7368d4ee345c3d7aad4b3eaa68cd..0000000000000000000000000000000000000000 --- a/spaces/twdac/BuChengFangYuan-ChineseJapaneseTranslation/app/my_py_lib/pixel_eval_tool.py +++ /dev/null @@ -1,144 +0,0 @@ -''' -像素点评分工具 -''' - -import numpy as np -from .score_tool import calc_score_f05_f1_f2_prec_recall - - -def calc_pixel_score(pred_hms, label_hms, cls_list): - ''' - 通用像素评估 - 将会返回一个分数字典 - 结构为 - 类别-权重-X - X: - found_pred 真阳性,预测正确的数量 - fakefound_pred 假阳性,预测失败的数量 - found_label 真阳性,标签被正确匹配到的数量 - nofound_label 假阴性,没有任何成功匹配的标签数量 - - :param pred_hms: 预测热图 - :param label_hms: 标签热图 - :param cls_list: 要评估的类别列表 - :return: - ''' - score_table = {} - - assert len(pred_hms) == len(label_hms) - - pred_hms = np.bool_(pred_hms >= 0.5) - label_hms = np.bool_(label_hms >= 0.5) - - for cls in cls_list: - score_table[cls] = {} - - cur_pred_hms = pred_hms[..., cls] - cur_label_hms = label_hms[..., cls] - - found_pred = np.logical_and(cur_pred_hms, cur_label_hms) - fakefound_pred = np.logical_and(cur_pred_hms, np.logical_not(cur_label_hms)) - found_label = found_pred - nofound_label = np.logical_and(np.logical_not(found_label), cur_label_hms) - - found_pred = int(np.sum(found_pred, dtype=np.int64)) - fakefound_pred = int(np.sum(fakefound_pred, dtype=np.int64)) - found_label = found_pred - nofound_label = int(np.sum(nofound_label, dtype=np.int64)) - - f05, f1, f2, prec, recall = calc_score_f05_f1_f2_prec_recall(found_label, nofound_label, found_pred, fakefound_pred) - - score_table[cls]['found_pred'] = found_pred - score_table[cls]['fakefound_pred'] = fakefound_pred - score_table[cls]['found_label'] = found_label - score_table[cls]['nofound_label'] = nofound_label - score_table[cls]['f05'] = float(f05) - score_table[cls]['f1'] = float(f1) - score_table[cls]['f2'] = float(f2) - score_table[cls]['prec'] = float(prec) - score_table[cls]['recall'] = float(recall) - - return score_table - - -def accumulate_pixel_score(scores, cls_list): - ''' - 对多个分数表进行合算,得到统计分数表 - 其中 found_pred, fakefound_pred, found_label, nofound_label 将会被累加 - 其中 f1, prec, recall 将会基于以上累加的值重新计算 - :param scores: 多个分数表 - :param cls_list: 要检查的分类 - :return: - ''' - score_table = {} - for cls in cls_list: - score_table[cls] = { - 'found_pred': 0, - 'fakefound_pred': 0, - 'found_label': 0, - 'nofound_label': 0, - 'f05': 0., - 'f1': 0., - 'f2': 0., - 'prec': 0., - 'recall': 0., - } - - for score in scores: - for cls in cls_list: - score_table[cls]['found_pred'] += score[cls]['found_pred'] - score_table[cls]['fakefound_pred'] += score[cls]['fakefound_pred'] - score_table[cls]['found_label'] += score[cls]['found_label'] - score_table[cls]['nofound_label'] += score[cls]['nofound_label'] - - for cls in cls_list: - f05, f1, f2, prec, recall = calc_score_f05_f1_f2_prec_recall(score_table[cls]['found_label'], - score_table[cls]['nofound_label'], - score_table[cls]['found_pred'], - score_table[cls]['fakefound_pred']) - - 
score_table[cls]['f05'] = f05 - score_table[cls]['f1'] = f1 - score_table[cls]['f2'] = f2 - score_table[cls]['prec'] = prec - score_table[cls]['recall'] = recall - - return score_table - - -def summary_pixel_score(scores, cls_list): - ''' - 对多个分数表进行合算,得到统计分数表 - 其中 found_pred, fakefound_pred, found_label, nofound_label, pred_repeat, label_repeat 将会被累加 - 其中 f1, prec, recall 将会被求平均 - :param scores: 多个分数表 - :param cls_list: 要检查的分类 - :return: - ''' - score_table = {} - for cls in cls_list: - score_table[cls] = { - 'found_pred': 0, - 'fakefound_pred': 0, - 'found_label': 0, - 'nofound_label': 0, - 'f05': 0., - 'f1': 0., - 'f2': 0., - 'prec': 0., - 'recall': 0., - } - - for score in scores: - for cls in cls_list: - score_table[cls]['found_pred'] += score[cls]['found_pred'] - score_table[cls]['fakefound_pred'] += score[cls]['fakefound_pred'] - score_table[cls]['found_label'] += score[cls]['found_label'] - score_table[cls]['nofound_label'] += score[cls]['nofound_label'] - score_table[cls]['f05'] += score[cls]['f05'] / len(scores) - score_table[cls]['f1'] += score[cls]['f1'] / len(scores) - score_table[cls]['f2'] += score[cls]['f2'] / len(scores) - score_table[cls]['prec'] += score[cls]['prec'] / len(scores) - score_table[cls]['recall'] += score[cls]['recall'] / len(scores) - - return score_table diff --git a/spaces/ulasdilek/gpt_claude_dialogue/README.md b/spaces/ulasdilek/gpt_claude_dialogue/README.md deleted file mode 100644 index 6cf45325adb77f9993f9ef4b8e7e884e7b5168aa..0000000000000000000000000000000000000000 --- a/spaces/ulasdilek/gpt_claude_dialogue/README.md +++ /dev/null @@ -1,48 +0,0 @@ ---- -title: Gpt Claude Dialogue -emoji: 📚 -colorFrom: purple -colorTo: blue -sdk: gradio -sdk_version: 3.28.2 -app_file: app.py -pinned: false -license: mit ---- - -# GPT-3.5 Turbo and Claude-v1.3 Dialogue - -This app uses the OpenAI API and the Anthropic API to generate responses between the two AI models. The user can type in a context and see GPT-3.5 Turbo and Claude-v1.3 have a generated conversation with each other. - -Enjoy the chaos! Let me know if you have any issues running the app. - -To run this app locally: - -1. Clone this repository - -2. Install the requirements: - -```bash -pip install -r requirements.txt -``` - -3. Obtain API keys for: - - OpenAI's API - - Anthropic's API - -4. Add the API keys to a file called `.env` with the variables: - -``` -OPENAI_API_KEY="YOUR_OPENAI_API_KEY" -CLAUDE_API_KEY="YOUR_CLAUDE_API_KEY" -``` - -5. Run the gradio app: - -```bash -gradio run app.py -``` - -6. The gradio app will launch in your browser. Type a context to start a conversation between GPT-3.5 Turbo and Claude-v1.3! - -7. To close the app, stop the process in your terminal. diff --git a/spaces/upGradGPT/GPT_Interview_beta/components.py b/spaces/upGradGPT/GPT_Interview_beta/components.py deleted file mode 100644 index 90af804809141d0a567e547e8f4d711de730b55f..0000000000000000000000000000000000000000 --- a/spaces/upGradGPT/GPT_Interview_beta/components.py +++ /dev/null @@ -1,3 +0,0 @@ -# Heading of the webpage -# Todo: style the element -heading_one = '''# GPT Interview Assistant by upGrad''' diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/Dialogys 3.50 Full Version UPD.md b/spaces/usbethFlerru/sovits-modelsV2/example/Dialogys 3.50 Full Version UPD.md deleted file mode 100644 index 59eda4d1fe831a86b65967cde43bef956a0c93f9..0000000000000000000000000000000000000000 --- a/spaces/usbethFlerru/sovits-modelsV2/example/Dialogys 3.50 Full Version UPD.md +++ /dev/null @@ -1,22 +0,0 @@ -

          Dialogys 3.50 full version


              Download: https://urlcod.com/2uyVWY
    



          -
          -reinstalling, (clean, uninstall, install again) I need to get a clean installation and try it again. Any ideas on why it isn't working? is it maybe b/c I have no SDR? The only person I have that knows anything about this is Abit, and he's on holiday this week. - -EDIT: It's actually for my iPod nano. - -Last edited by vavacav (5/7/07 9:43 PM) - -When man created something, God was present. When man un-creeped something, God was present. When man obliterated something, God was present. When man wiped his ass with something, God was present. If you're not sure whether something was created, creeped, obliterated, or wiped your ass with, you're probably the baddest mutha in the game. - -This is weird. I can't find anything on this board about this, I would expect it to be around a ton of topics, but I can't find it. - -I have the DMO and I was under the impression that it had no issues with iPods, so I used it and it works great. One thing I did notice though was that it's case sensitive. In addition to the installation guide, you can also download the manual from the DMO page here. - -It should have a.exe in the executable folder. What happens when you run it? If it doesn't launch, go to the directory where the program is, right click on the executable, select "Run As Administrator" and try again. - -I can't open the installer at all, everytime I try to launch it, I get a dialog box saying: "The action could not be completed because an open file or directory is in use. Close any programs you have that don't use this file or folder, and then try again. - -I get an error too: "Windows can not access the directory file \\data\\ 4fefd39f24
          -
          -
          -
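    A minimal usage sketch for the `pixel_eval_tool.py` module deleted above may help clarify how its per-image score tables are meant to be combined: `calc_pixel_score` produces raw true/false-positive pixel counts plus metrics per class, and `accumulate_pixel_score` sums those counts before recomputing precision, recall, and the F-scores. The import path, heatmap shapes, and class list below are illustrative assumptions, not something the deleted code prescribes.

    ```python
    # Sketch only: assumes the package is importable as `my_py_lib` and that
    # heatmaps have shape [N, H, W, num_classes] with values in [0, 1].
    import numpy as np
    from my_py_lib.pixel_eval_tool import calc_pixel_score, accumulate_pixel_score

    cls_list = [0, 1]
    per_image_scores = []
    for _ in range(3):  # pretend we evaluate three images
        pred_hms = np.random.rand(1, 64, 64, 2)                               # predicted heatmaps
        label_hms = (np.random.rand(1, 64, 64, 2) > 0.5).astype(np.float32)   # label heatmaps
        per_image_scores.append(calc_pixel_score(pred_hms, label_hms, cls_list))

    # Sum the raw counts across images, then recompute precision/recall/F-scores from
    # the totals (summary_pixel_score, by contrast, averages the per-image metrics).
    totals = accumulate_pixel_score(per_image_scores, cls_list)
    print(totals[0]['prec'], totals[0]['recall'], totals[0]['f1'])
    ```
    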

          diff --git a/spaces/user238921933/stable-diffusion-webui/extensions/deforum/style.css b/spaces/user238921933/stable-diffusion-webui/extensions/deforum/style.css deleted file mode 100644 index 5b3615d207357b2b00c1ba32a737e213e1bdd5ce..0000000000000000000000000000000000000000 --- a/spaces/user238921933/stable-diffusion-webui/extensions/deforum/style.css +++ /dev/null @@ -1,36 +0,0 @@ -#vid_to_interpolate_chosen_file .w-full, #vid_to_upscale_chosen_file .w-full, #controlnet_input_video_chosen_file .w-full, #controlnet_input_video_mask_chosen_file .w-full { - display: flex !important; - align-items: flex-start !important; - justify-content: center !important; -} - -#vid_to_interpolate_chosen_file, #vid_to_upscale_chosen_file, #controlnet_input_video_chosen_file, #controlnet_input_video_mask_chosen_file { - height: 85px !important; -} - -#save_zip_deforum, #save_deforum { - display: none; -} - -#extra_schedules::before { - content: "Schedules:"; - font-size: 10px !important; -} - -#hybrid_msg_html { - color: Tomato !important; - margin-top: 5px !important; - text-align: center !important; - font-size: 20px !important; - font-weight: bold !important; -} - -#deforum_results .flex #image_buttons_deforum #img2img_tab, -#deforum_results .flex #image_buttons_deforum #inpaint_tab, -#deforum_results .flex #image_buttons_deforum #extras_tab { - display: none !important; -} - -#controlnet_not_found_html_msg { - color: Tomato; -} diff --git a/spaces/user238921933/stable-diffusion-webui/test/basic_features/__init__.py b/spaces/user238921933/stable-diffusion-webui/test/basic_features/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/ultralytics/nn/modules/transformer.py b/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/ultralytics/nn/modules/transformer.py deleted file mode 100644 index b3304cc8d8f8ba7f60493bc1d800f496053ed434..0000000000000000000000000000000000000000 --- a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/ultralytics/nn/modules/transformer.py +++ /dev/null @@ -1,378 +0,0 @@ -# Ultralytics YOLO 🚀, AGPL-3.0 license -""" -Transformer modules -""" - -import math - -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch.nn.init import constant_, xavier_uniform_ - -from .conv import Conv -from .utils import _get_clones, inverse_sigmoid, multi_scale_deformable_attn_pytorch - -__all__ = ('TransformerEncoderLayer', 'TransformerLayer', 'TransformerBlock', 'MLPBlock', 'LayerNorm2d', 'AIFI', - 'DeformableTransformerDecoder', 'DeformableTransformerDecoderLayer', 'MSDeformAttn', 'MLP') - - -class TransformerEncoderLayer(nn.Module): - """Transformer Encoder.""" - - def __init__(self, c1, cm=2048, num_heads=8, dropout=0.0, act=nn.GELU(), normalize_before=False): - super().__init__() - self.ma = nn.MultiheadAttention(c1, num_heads, dropout=dropout, batch_first=True) - # Implementation of Feedforward model - self.fc1 = nn.Linear(c1, cm) - self.fc2 = nn.Linear(cm, c1) - - self.norm1 = nn.LayerNorm(c1) - self.norm2 = nn.LayerNorm(c1) - self.dropout = nn.Dropout(dropout) - self.dropout1 = nn.Dropout(dropout) - self.dropout2 = nn.Dropout(dropout) - - self.act = act - self.normalize_before = normalize_before - - def with_pos_embed(self, tensor, pos=None): - """Add position embeddings if given.""" - return tensor if pos is None else tensor + pos - - def forward_post(self, src, src_mask=None, 
src_key_padding_mask=None, pos=None): - q = k = self.with_pos_embed(src, pos) - src2 = self.ma(q, k, value=src, attn_mask=src_mask, key_padding_mask=src_key_padding_mask)[0] - src = src + self.dropout1(src2) - src = self.norm1(src) - src2 = self.fc2(self.dropout(self.act(self.fc1(src)))) - src = src + self.dropout2(src2) - src = self.norm2(src) - return src - - def forward_pre(self, src, src_mask=None, src_key_padding_mask=None, pos=None): - src2 = self.norm1(src) - q = k = self.with_pos_embed(src2, pos) - src2 = self.ma(q, k, value=src2, attn_mask=src_mask, key_padding_mask=src_key_padding_mask)[0] - src = src + self.dropout1(src2) - src2 = self.norm2(src) - src2 = self.fc2(self.dropout(self.act(self.fc1(src2)))) - src = src + self.dropout2(src2) - return src - - def forward(self, src, src_mask=None, src_key_padding_mask=None, pos=None): - """Forward propagates the input through the encoder module.""" - if self.normalize_before: - return self.forward_pre(src, src_mask, src_key_padding_mask, pos) - return self.forward_post(src, src_mask, src_key_padding_mask, pos) - - -class AIFI(TransformerEncoderLayer): - - def __init__(self, c1, cm=2048, num_heads=8, dropout=0, act=nn.GELU(), normalize_before=False): - super().__init__(c1, cm, num_heads, dropout, act, normalize_before) - - def forward(self, x): - c, h, w = x.shape[1:] - pos_embed = self.build_2d_sincos_position_embedding(w, h, c) - # flatten [B, C, H, W] to [B, HxW, C] - x = super().forward(x.flatten(2).permute(0, 2, 1), pos=pos_embed.to(device=x.device, dtype=x.dtype)) - return x.permute(0, 2, 1).view([-1, c, h, w]).contiguous() - - @staticmethod - def build_2d_sincos_position_embedding(w, h, embed_dim=256, temperature=10000.): - grid_w = torch.arange(int(w), dtype=torch.float32) - grid_h = torch.arange(int(h), dtype=torch.float32) - grid_w, grid_h = torch.meshgrid(grid_w, grid_h, indexing='ij') - assert embed_dim % 4 == 0, \ - 'Embed dimension must be divisible by 4 for 2D sin-cos position embedding' - pos_dim = embed_dim // 4 - omega = torch.arange(pos_dim, dtype=torch.float32) / pos_dim - omega = 1. 
/ (temperature ** omega) - - out_w = grid_w.flatten()[..., None] @ omega[None] - out_h = grid_h.flatten()[..., None] @ omega[None] - - return torch.concat([torch.sin(out_w), torch.cos(out_w), - torch.sin(out_h), torch.cos(out_h)], axis=1)[None, :, :] - - -class TransformerLayer(nn.Module): - """Transformer layer https://arxiv.org/abs/2010.11929 (LayerNorm layers removed for better performance).""" - - def __init__(self, c, num_heads): - """Initializes a self-attention mechanism using linear transformations and multi-head attention.""" - super().__init__() - self.q = nn.Linear(c, c, bias=False) - self.k = nn.Linear(c, c, bias=False) - self.v = nn.Linear(c, c, bias=False) - self.ma = nn.MultiheadAttention(embed_dim=c, num_heads=num_heads) - self.fc1 = nn.Linear(c, c, bias=False) - self.fc2 = nn.Linear(c, c, bias=False) - - def forward(self, x): - """Apply a transformer block to the input x and return the output.""" - x = self.ma(self.q(x), self.k(x), self.v(x))[0] + x - x = self.fc2(self.fc1(x)) + x - return x - - -class TransformerBlock(nn.Module): - """Vision Transformer https://arxiv.org/abs/2010.11929.""" - - def __init__(self, c1, c2, num_heads, num_layers): - """Initialize a Transformer module with position embedding and specified number of heads and layers.""" - super().__init__() - self.conv = None - if c1 != c2: - self.conv = Conv(c1, c2) - self.linear = nn.Linear(c2, c2) # learnable position embedding - self.tr = nn.Sequential(*(TransformerLayer(c2, num_heads) for _ in range(num_layers))) - self.c2 = c2 - - def forward(self, x): - """Forward propagates the input through the bottleneck module.""" - if self.conv is not None: - x = self.conv(x) - b, _, w, h = x.shape - p = x.flatten(2).permute(2, 0, 1) - return self.tr(p + self.linear(p)).permute(1, 2, 0).reshape(b, self.c2, w, h) - - -class MLPBlock(nn.Module): - - def __init__(self, embedding_dim, mlp_dim, act=nn.GELU): - super().__init__() - self.lin1 = nn.Linear(embedding_dim, mlp_dim) - self.lin2 = nn.Linear(mlp_dim, embedding_dim) - self.act = act() - - def forward(self, x: torch.Tensor) -> torch.Tensor: - return self.lin2(self.act(self.lin1(x))) - - -class MLP(nn.Module): - """ Very simple multi-layer perceptron (also called FFN)""" - - def __init__(self, input_dim, hidden_dim, output_dim, num_layers): - super().__init__() - self.num_layers = num_layers - h = [hidden_dim] * (num_layers - 1) - self.layers = nn.ModuleList(nn.Linear(n, k) for n, k in zip([input_dim] + h, h + [output_dim])) - - def forward(self, x): - for i, layer in enumerate(self.layers): - x = F.relu(layer(x)) if i < self.num_layers - 1 else layer(x) - return x - - -# From https://github.com/facebookresearch/detectron2/blob/main/detectron2/layers/batch_norm.py # noqa -# Itself from https://github.com/facebookresearch/ConvNeXt/blob/d1fa8f6fef0a165b27399986cc2bdacc92777e40/models/convnext.py#L119 # noqa -class LayerNorm2d(nn.Module): - - def __init__(self, num_channels, eps=1e-6): - super().__init__() - self.weight = nn.Parameter(torch.ones(num_channels)) - self.bias = nn.Parameter(torch.zeros(num_channels)) - self.eps = eps - - def forward(self, x): - u = x.mean(1, keepdim=True) - s = (x - u).pow(2).mean(1, keepdim=True) - x = (x - u) / torch.sqrt(s + self.eps) - x = self.weight[:, None, None] * x + self.bias[:, None, None] - return x - - -class MSDeformAttn(nn.Module): - """ - Original Multi-Scale Deformable Attention Module. 
- https://github.com/fundamentalvision/Deformable-DETR/blob/main/models/ops/modules/ms_deform_attn.py - """ - - def __init__(self, d_model=256, n_levels=4, n_heads=8, n_points=4): - super().__init__() - if d_model % n_heads != 0: - raise ValueError(f'd_model must be divisible by n_heads, but got {d_model} and {n_heads}') - _d_per_head = d_model // n_heads - # you'd better set _d_per_head to a power of 2 which is more efficient in our CUDA implementation - assert _d_per_head * n_heads == d_model, '`d_model` must be divisible by `n_heads`' - - self.im2col_step = 64 - - self.d_model = d_model - self.n_levels = n_levels - self.n_heads = n_heads - self.n_points = n_points - - self.sampling_offsets = nn.Linear(d_model, n_heads * n_levels * n_points * 2) - self.attention_weights = nn.Linear(d_model, n_heads * n_levels * n_points) - self.value_proj = nn.Linear(d_model, d_model) - self.output_proj = nn.Linear(d_model, d_model) - - self._reset_parameters() - - def _reset_parameters(self): - constant_(self.sampling_offsets.weight.data, 0.) - thetas = torch.arange(self.n_heads, dtype=torch.float32) * (2.0 * math.pi / self.n_heads) - grid_init = torch.stack([thetas.cos(), thetas.sin()], -1) - grid_init = (grid_init / grid_init.abs().max(-1, keepdim=True)[0]).view(self.n_heads, 1, 1, 2).repeat( - 1, self.n_levels, self.n_points, 1) - for i in range(self.n_points): - grid_init[:, :, i, :] *= i + 1 - with torch.no_grad(): - self.sampling_offsets.bias = nn.Parameter(grid_init.view(-1)) - constant_(self.attention_weights.weight.data, 0.) - constant_(self.attention_weights.bias.data, 0.) - xavier_uniform_(self.value_proj.weight.data) - constant_(self.value_proj.bias.data, 0.) - xavier_uniform_(self.output_proj.weight.data) - constant_(self.output_proj.bias.data, 0.) - - def forward(self, query, refer_bbox, value, value_shapes, value_mask=None): - """ - https://github.com/PaddlePaddle/PaddleDetection/blob/develop/ppdet/modeling/transformers/deformable_transformer.py - Args: - query (torch.Tensor): [bs, query_length, C] - refer_bbox (torch.Tensor): [bs, query_length, n_levels, 2], range in [0, 1], top-left (0,0), - bottom-right (1, 1), including padding area - value (torch.Tensor): [bs, value_length, C] - value_shapes (List): [n_levels, 2], [(H_0, W_0), (H_1, W_1), ..., (H_{L-1}, W_{L-1})] - value_mask (Tensor): [bs, value_length], True for non-padding elements, False for padding elements - - Returns: - output (Tensor): [bs, Length_{query}, C] - """ - bs, len_q = query.shape[:2] - len_v = value.shape[1] - assert sum(s[0] * s[1] for s in value_shapes) == len_v - - value = self.value_proj(value) - if value_mask is not None: - value = value.masked_fill(value_mask[..., None], float(0)) - value = value.view(bs, len_v, self.n_heads, self.d_model // self.n_heads) - sampling_offsets = self.sampling_offsets(query).view(bs, len_q, self.n_heads, self.n_levels, self.n_points, 2) - attention_weights = self.attention_weights(query).view(bs, len_q, self.n_heads, self.n_levels * self.n_points) - attention_weights = F.softmax(attention_weights, -1).view(bs, len_q, self.n_heads, self.n_levels, self.n_points) - # N, Len_q, n_heads, n_levels, n_points, 2 - num_points = refer_bbox.shape[-1] - if num_points == 2: - offset_normalizer = torch.as_tensor(value_shapes, dtype=query.dtype, device=query.device).flip(-1) - add = sampling_offsets / offset_normalizer[None, None, None, :, None, :] - sampling_locations = refer_bbox[:, :, None, :, None, :] + add - elif num_points == 4: - add = sampling_offsets / self.n_points * refer_bbox[:, :, 
None, :, None, 2:] * 0.5 - sampling_locations = refer_bbox[:, :, None, :, None, :2] + add - else: - raise ValueError(f'Last dim of reference_points must be 2 or 4, but got {num_points}.') - output = multi_scale_deformable_attn_pytorch(value, value_shapes, sampling_locations, attention_weights) - output = self.output_proj(output) - return output - - -class DeformableTransformerDecoderLayer(nn.Module): - """ - https://github.com/PaddlePaddle/PaddleDetection/blob/develop/ppdet/modeling/transformers/deformable_transformer.py - https://github.com/fundamentalvision/Deformable-DETR/blob/main/models/deformable_transformer.py - """ - - def __init__(self, d_model=256, n_heads=8, d_ffn=1024, dropout=0., act=nn.ReLU(), n_levels=4, n_points=4): - super().__init__() - - # self attention - self.self_attn = nn.MultiheadAttention(d_model, n_heads, dropout=dropout) - self.dropout1 = nn.Dropout(dropout) - self.norm1 = nn.LayerNorm(d_model) - - # cross attention - self.cross_attn = MSDeformAttn(d_model, n_levels, n_heads, n_points) - self.dropout2 = nn.Dropout(dropout) - self.norm2 = nn.LayerNorm(d_model) - - # ffn - self.linear1 = nn.Linear(d_model, d_ffn) - self.act = act - self.dropout3 = nn.Dropout(dropout) - self.linear2 = nn.Linear(d_ffn, d_model) - self.dropout4 = nn.Dropout(dropout) - self.norm3 = nn.LayerNorm(d_model) - - @staticmethod - def with_pos_embed(tensor, pos): - return tensor if pos is None else tensor + pos - - def forward_ffn(self, tgt): - tgt2 = self.linear2(self.dropout3(self.act(self.linear1(tgt)))) - tgt = tgt + self.dropout4(tgt2) - tgt = self.norm3(tgt) - return tgt - - def forward(self, embed, refer_bbox, feats, shapes, padding_mask=None, attn_mask=None, query_pos=None): - # self attention - q = k = self.with_pos_embed(embed, query_pos) - tgt = self.self_attn(q.transpose(0, 1), k.transpose(0, 1), embed.transpose(0, 1), - attn_mask=attn_mask)[0].transpose(0, 1) - embed = embed + self.dropout1(tgt) - embed = self.norm1(embed) - - # cross attention - tgt = self.cross_attn(self.with_pos_embed(embed, query_pos), refer_bbox.unsqueeze(2), feats, shapes, - padding_mask) - embed = embed + self.dropout2(tgt) - embed = self.norm2(embed) - - # ffn - embed = self.forward_ffn(embed) - - return embed - - -class DeformableTransformerDecoder(nn.Module): - """ - https://github.com/PaddlePaddle/PaddleDetection/blob/develop/ppdet/modeling/transformers/deformable_transformer.py - """ - - def __init__(self, hidden_dim, decoder_layer, num_layers, eval_idx=-1): - super().__init__() - self.layers = _get_clones(decoder_layer, num_layers) - self.num_layers = num_layers - self.hidden_dim = hidden_dim - self.eval_idx = eval_idx if eval_idx >= 0 else num_layers + eval_idx - - def forward( - self, - embed, # decoder embeddings - refer_bbox, # anchor - feats, # image features - shapes, # feature shapes - bbox_head, - score_head, - pos_mlp, - attn_mask=None, - padding_mask=None): - output = embed - dec_bboxes = [] - dec_cls = [] - last_refined_bbox = None - refer_bbox = refer_bbox.sigmoid() - for i, layer in enumerate(self.layers): - output = layer(output, refer_bbox, feats, shapes, padding_mask, attn_mask, pos_mlp(refer_bbox)) - - # refine bboxes, (bs, num_queries+num_denoising, 4) - refined_bbox = torch.sigmoid(bbox_head[i](output) + inverse_sigmoid(refer_bbox)) - - if self.training: - dec_cls.append(score_head[i](output)) - if i == 0: - dec_bboxes.append(refined_bbox) - else: - dec_bboxes.append(torch.sigmoid(bbox_head[i](output) + inverse_sigmoid(last_refined_bbox))) - elif i == self.eval_idx: - 
dec_cls.append(score_head[i](output)) - dec_bboxes.append(refined_bbox) - break - - last_refined_bbox = refined_bbox - refer_bbox = refined_bbox.detach() if self.training else refined_bbox - - return torch.stack(dec_bboxes), torch.stack(dec_cls) diff --git a/spaces/vumichien/Generate_human_motion/pyrender/setup.py b/spaces/vumichien/Generate_human_motion/pyrender/setup.py deleted file mode 100644 index c3b5ba0da2b0f17b759e5556597981096a80bda8..0000000000000000000000000000000000000000 --- a/spaces/vumichien/Generate_human_motion/pyrender/setup.py +++ /dev/null @@ -1,76 +0,0 @@ -""" -Setup of pyrender Python codebase. - -Author: Matthew Matl -""" -import sys -from setuptools import setup - -# load __version__ -exec(open('pyrender/version.py').read()) - -def get_imageio_dep(): - if sys.version[0] == "2": - return 'imageio<=2.6.1' - return 'imageio' - -requirements = [ - 'freetype-py', # For font loading - get_imageio_dep(), # For Image I/O - 'networkx', # For the scene graph - 'numpy', # Numpy - 'Pillow', # For Trimesh texture conversions - 'pyglet>=1.4.10', # For the pyglet viewer - 'PyOpenGL~=3.1.0', # For OpenGL -# 'PyOpenGL_accelerate~=3.1.0', # For OpenGL - 'scipy', # Because of trimesh missing dep - 'six', # For Python 2/3 interop - 'trimesh', # For meshes -] - -dev_requirements = [ - 'flake8', # Code formatting checker - 'pre-commit', # Pre-commit hooks - 'pytest', # Code testing - 'pytest-cov', # Coverage testing - 'tox', # Automatic virtualenv testing -] - -docs_requirements = [ - 'sphinx', # General doc library - 'sphinx_rtd_theme', # RTD theme for sphinx - 'sphinx-automodapi' # For generating nice tables -] - - -setup( - name = 'pyrender', - version=__version__, - description='Easy-to-use Python renderer for 3D visualization', - long_description='A simple implementation of Physically-Based Rendering ' - '(PBR) in Python. 
Compliant with the glTF 2.0 standard.', - author='Matthew Matl', - author_email='matthewcmatl@gmail.com', - license='MIT License', - url = 'https://github.com/mmatl/pyrender', - classifiers = [ - 'Development Status :: 4 - Beta', - 'License :: OSI Approved :: MIT License', - 'Operating System :: POSIX :: Linux', - 'Operating System :: MacOS :: MacOS X', - 'Programming Language :: Python :: 2.7', - 'Programming Language :: Python :: 3.5', - 'Programming Language :: Python :: 3.6', - 'Natural Language :: English', - 'Topic :: Scientific/Engineering' - ], - keywords = 'rendering graphics opengl 3d visualization pbr gltf', - packages = ['pyrender', 'pyrender.platforms'], - setup_requires = requirements, - install_requires = requirements, - extras_require={ - 'dev': dev_requirements, - 'docs': docs_requirements, - }, - include_package_data=True -) diff --git a/spaces/wanghuoto/gogoai/src/components/theme-toggle.tsx b/spaces/wanghuoto/gogoai/src/components/theme-toggle.tsx deleted file mode 100644 index 67d3f1a2c163ccbeb52c40a7e42f107190237154..0000000000000000000000000000000000000000 --- a/spaces/wanghuoto/gogoai/src/components/theme-toggle.tsx +++ /dev/null @@ -1,31 +0,0 @@ -'use client' - -import * as React from 'react' -import { useTheme } from 'next-themes' - -import { Button } from '@/components/ui/button' -import { IconMoon, IconSun } from '@/components/ui/icons' - -export function ThemeToggle() { - const { setTheme, theme } = useTheme() - const [_, startTransition] = React.useTransition() - - return ( - - ) -} diff --git a/spaces/wanghuoto/gogoai/src/components/welcome-screen.tsx b/spaces/wanghuoto/gogoai/src/components/welcome-screen.tsx deleted file mode 100644 index f7449fcbb6c621875e235db98f2790bf7894fb0a..0000000000000000000000000000000000000000 --- a/spaces/wanghuoto/gogoai/src/components/welcome-screen.tsx +++ /dev/null @@ -1,34 +0,0 @@ -import { useBing } from '@/lib/hooks/use-bing' - -const exampleMessages = [ - { - heading: '🧐 提出复杂问题', - message: `我可以为我挑剔的只吃橙色食物的孩子做什么饭?` - }, - { - heading: '🙌 获取更好的答案', - message: '销量最高的 3 种宠物吸尘器有哪些优点和缺点?' - }, - { - heading: '🎨 获得创意灵感', - message: `以海盗的口吻写一首关于外太空鳄鱼的俳句` - } -] - -export function WelcomeScreen({ setInput }: Pick, 'setInput'>) { - return ( -
          - {exampleMessages.map(example => ( - - ))} -
          - ) -} diff --git a/spaces/wendys-llc/panoptic-segment-anything/segment_anything/segment_anything/predictor.py b/spaces/wendys-llc/panoptic-segment-anything/segment_anything/segment_anything/predictor.py deleted file mode 100644 index 57c089d1fc4a6bbf5786e1ef62c59e22d582f5aa..0000000000000000000000000000000000000000 --- a/spaces/wendys-llc/panoptic-segment-anything/segment_anything/segment_anything/predictor.py +++ /dev/null @@ -1,269 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import numpy as np -import torch - -from segment_anything.modeling import Sam - -from typing import Optional, Tuple - -from .utils.transforms import ResizeLongestSide - - -class SamPredictor: - def __init__( - self, - sam_model: Sam, - ) -> None: - """ - Uses SAM to calculate the image embedding for an image, and then - allow repeated, efficient mask prediction given prompts. - - Arguments: - sam_model (Sam): The model to use for mask prediction. - """ - super().__init__() - self.model = sam_model - self.transform = ResizeLongestSide(sam_model.image_encoder.img_size) - self.reset_image() - - def set_image( - self, - image: np.ndarray, - image_format: str = "RGB", - ) -> None: - """ - Calculates the image embeddings for the provided image, allowing - masks to be predicted with the 'predict' method. - - Arguments: - image (np.ndarray): The image for calculating masks. Expects an - image in HWC uint8 format, with pixel values in [0, 255]. - image_format (str): The color format of the image, in ['RGB', 'BGR']. - """ - assert image_format in [ - "RGB", - "BGR", - ], f"image_format must be in ['RGB', 'BGR'], is {image_format}." - if image_format != self.model.image_format: - image = image[..., ::-1] - - # Transform the image to the form expected by the model - input_image = self.transform.apply_image(image) - input_image_torch = torch.as_tensor(input_image, device=self.device) - input_image_torch = input_image_torch.permute(2, 0, 1).contiguous()[None, :, :, :] - - self.set_torch_image(input_image_torch, image.shape[:2]) - - @torch.no_grad() - def set_torch_image( - self, - transformed_image: torch.Tensor, - original_image_size: Tuple[int, ...], - ) -> None: - """ - Calculates the image embeddings for the provided image, allowing - masks to be predicted with the 'predict' method. Expects the input - image to be already transformed to the format expected by the model. - - Arguments: - transformed_image (torch.Tensor): The input image, with shape - 1x3xHxW, which has been transformed with ResizeLongestSide. - original_image_size (tuple(int, int)): The size of the image - before transformation, in (H, W) format. - """ - assert ( - len(transformed_image.shape) == 4 - and transformed_image.shape[1] == 3 - and max(*transformed_image.shape[2:]) == self.model.image_encoder.img_size - ), f"set_torch_image input must be BCHW with long side {self.model.image_encoder.img_size}." 
- self.reset_image() - - self.original_size = original_image_size - self.input_size = tuple(transformed_image.shape[-2:]) - input_image = self.model.preprocess(transformed_image) - self.features = self.model.image_encoder(input_image) - self.is_image_set = True - - def predict( - self, - point_coords: Optional[np.ndarray] = None, - point_labels: Optional[np.ndarray] = None, - box: Optional[np.ndarray] = None, - mask_input: Optional[np.ndarray] = None, - multimask_output: bool = True, - return_logits: bool = False, - ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]: - """ - Predict masks for the given input prompts, using the currently set image. - - Arguments: - point_coords (np.ndarray or None): A Nx2 array of point prompts to the - model. Each point is in (X,Y) in pixels. - point_labels (np.ndarray or None): A length N array of labels for the - point prompts. 1 indicates a foreground point and 0 indicates a - background point. - box (np.ndarray or None): A length 4 array given a box prompt to the - model, in XYXY format. - mask_input (np.ndarray): A low resolution mask input to the model, typically - coming from a previous prediction iteration. Has form 1xHxW, where - for SAM, H=W=256. - multimask_output (bool): If true, the model will return three masks. - For ambiguous input prompts (such as a single click), this will often - produce better masks than a single prediction. If only a single - mask is needed, the model's predicted quality score can be used - to select the best mask. For non-ambiguous prompts, such as multiple - input prompts, multimask_output=False can give better results. - return_logits (bool): If true, returns un-thresholded masks logits - instead of a binary mask. - - Returns: - (np.ndarray): The output masks in CxHxW format, where C is the - number of masks, and (H, W) is the original image size. - (np.ndarray): An array of length C containing the model's - predictions for the quality of each mask. - (np.ndarray): An array of shape CxHxW, where C is the number - of masks and H=W=256. These low resolution logits can be passed to - a subsequent iteration as mask input. - """ - if not self.is_image_set: - raise RuntimeError("An image must be set with .set_image(...) before mask prediction.") - - # Transform input prompts - coords_torch, labels_torch, box_torch, mask_input_torch = None, None, None, None - if point_coords is not None: - assert ( - point_labels is not None - ), "point_labels must be supplied if point_coords is supplied." 
- point_coords = self.transform.apply_coords(point_coords, self.original_size) - coords_torch = torch.as_tensor(point_coords, dtype=torch.float, device=self.device) - labels_torch = torch.as_tensor(point_labels, dtype=torch.int, device=self.device) - coords_torch, labels_torch = coords_torch[None, :, :], labels_torch[None, :] - if box is not None: - box = self.transform.apply_boxes(box, self.original_size) - box_torch = torch.as_tensor(box, dtype=torch.float, device=self.device) - box_torch = box_torch[None, :] - if mask_input is not None: - mask_input_torch = torch.as_tensor(mask_input, dtype=torch.float, device=self.device) - mask_input_torch = mask_input_torch[None, :, :, :] - - masks, iou_predictions, low_res_masks = self.predict_torch( - coords_torch, - labels_torch, - box_torch, - mask_input_torch, - multimask_output, - return_logits=return_logits, - ) - - masks = masks[0].detach().cpu().numpy() - iou_predictions = iou_predictions[0].detach().cpu().numpy() - low_res_masks = low_res_masks[0].detach().cpu().numpy() - return masks, iou_predictions, low_res_masks - - @torch.no_grad() - def predict_torch( - self, - point_coords: Optional[torch.Tensor], - point_labels: Optional[torch.Tensor], - boxes: Optional[torch.Tensor] = None, - mask_input: Optional[torch.Tensor] = None, - multimask_output: bool = True, - return_logits: bool = False, - ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]: - """ - Predict masks for the given input prompts, using the currently set image. - Input prompts are batched torch tensors and are expected to already be - transformed to the input frame using ResizeLongestSide. - - Arguments: - point_coords (torch.Tensor or None): A BxNx2 array of point prompts to the - model. Each point is in (X,Y) in pixels. - point_labels (torch.Tensor or None): A BxN array of labels for the - point prompts. 1 indicates a foreground point and 0 indicates a - background point. - box (np.ndarray or None): A Bx4 array given a box prompt to the - model, in XYXY format. - mask_input (np.ndarray): A low resolution mask input to the model, typically - coming from a previous prediction iteration. Has form Bx1xHxW, where - for SAM, H=W=256. Masks returned by a previous iteration of the - predict method do not need further transformation. - multimask_output (bool): If true, the model will return three masks. - For ambiguous input prompts (such as a single click), this will often - produce better masks than a single prediction. If only a single - mask is needed, the model's predicted quality score can be used - to select the best mask. For non-ambiguous prompts, such as multiple - input prompts, multimask_output=False can give better results. - return_logits (bool): If true, returns un-thresholded masks logits - instead of a binary mask. - - Returns: - (torch.Tensor): The output masks in BxCxHxW format, where C is the - number of masks, and (H, W) is the original image size. - (torch.Tensor): An array of shape BxC containing the model's - predictions for the quality of each mask. - (torch.Tensor): An array of shape BxCxHxW, where C is the number - of masks and H=W=256. These low res logits can be passed to - a subsequent iteration as mask input. - """ - if not self.is_image_set: - raise RuntimeError("An image must be set with .set_image(...) 
before mask prediction.") - - if point_coords is not None: - points = (point_coords, point_labels) - else: - points = None - - # Embed prompts - sparse_embeddings, dense_embeddings = self.model.prompt_encoder( - points=points, - boxes=boxes, - masks=mask_input, - ) - - # Predict masks - low_res_masks, iou_predictions = self.model.mask_decoder( - image_embeddings=self.features, - image_pe=self.model.prompt_encoder.get_dense_pe(), - sparse_prompt_embeddings=sparse_embeddings, - dense_prompt_embeddings=dense_embeddings, - multimask_output=multimask_output, - ) - - # Upscale the masks to the original image resolution - masks = self.model.postprocess_masks(low_res_masks, self.input_size, self.original_size) - - if not return_logits: - masks = masks > self.model.mask_threshold - - return masks, iou_predictions, low_res_masks - - def get_image_embedding(self) -> torch.Tensor: - """ - Returns the image embeddings for the currently set image, with - shape 1xCxHxW, where C is the embedding dimension and (H,W) are - the embedding spatial dimension of SAM (typically C=256, H=W=64). - """ - if not self.is_image_set: - raise RuntimeError( - "An image must be set with .set_image(...) to generate an embedding." - ) - assert self.features is not None, "Features must exist if an image has been set." - return self.features - - @property - def device(self) -> torch.device: - return self.model.device - - def reset_image(self) -> None: - """Resets the currently set image.""" - self.is_image_set = False - self.features = None - self.orig_h = None - self.orig_w = None - self.input_h = None - self.input_w = None diff --git a/spaces/wffcyrus/MetaGPT-v1/metagpt/utils/singleton.py b/spaces/wffcyrus/MetaGPT-v1/metagpt/utils/singleton.py deleted file mode 100644 index a9e0862c050777981a753fa3f6449578f07e737c..0000000000000000000000000000000000000000 --- a/spaces/wffcyrus/MetaGPT-v1/metagpt/utils/singleton.py +++ /dev/null @@ -1,22 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -""" -@Time : 2023/5/11 16:15 -@Author : alexanderwu -@File : singleton.py -""" -import abc - - -class Singleton(abc.ABCMeta, type): - """ - Singleton metaclass for ensuring only one instance of a class. 
- """ - - _instances = {} - - def __call__(cls, *args, **kwargs): - """Call method for the singleton metaclass.""" - if cls not in cls._instances: - cls._instances[cls] = super(Singleton, cls).__call__(*args, **kwargs) - return cls._instances[cls] diff --git a/spaces/xianbao/sd-to-diffusers/README.md b/spaces/xianbao/sd-to-diffusers/README.md deleted file mode 100644 index cf0fc4e3ffe497d86fdac649080da45552fd7376..0000000000000000000000000000000000000000 --- a/spaces/xianbao/sd-to-diffusers/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: SD To Diffusers -emoji: 🎨➡️🧨 -colorFrom: indigo -colorTo: blue -sdk: gradio -sdk_version: 3.9.1 -app_file: app.py -pinned: false -license: mit -duplicated_from: diffusers/sd-to-diffusers ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/xxccc/gpt-academic/config.py b/spaces/xxccc/gpt-academic/config.py deleted file mode 100644 index 9aa93a8952526335777c3b5b8f844d79f5ca161b..0000000000000000000000000000000000000000 --- a/spaces/xxccc/gpt-academic/config.py +++ /dev/null @@ -1,82 +0,0 @@ -# [step 1]>> 例如: API_KEY = "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" (此key无效) -API_KEY = "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" # 可同时填写多个API-KEY,用英文逗号分割,例如API_KEY = "sk-openaikey1,sk-openaikey2,fkxxxx-api2dkey1,fkxxxx-api2dkey2" - -# [step 2]>> 改为True应用代理,如果直接在海外服务器部署,此处不修改 -USE_PROXY = False -if USE_PROXY: - # 填写格式是 [协议]:// [地址] :[端口],填写之前不要忘记把USE_PROXY改成True,如果直接在海外服务器部署,此处不修改 - # 例如 "socks5h://localhost:11284" - # [协议] 常见协议无非socks5h/http; 例如 v2**y 和 ss* 的默认本地协议是socks5h; 而cl**h 的默认本地协议是http - # [地址] 懂的都懂,不懂就填localhost或者127.0.0.1肯定错不了(localhost意思是代理软件安装在本机上) - # [端口] 在代理软件的设置里找。虽然不同的代理软件界面不一样,但端口号都应该在最显眼的位置上 - - # 代理网络的地址,打开你的*学*网软件查看代理的协议(socks5/http)、地址(localhost)和端口(11284) - proxies = { - # [协议]:// [地址] :[端口] - "http": "socks5h://localhost:11284", # 再例如 "http": "http://127.0.0.1:7890", - "https": "socks5h://localhost:11284", # 再例如 "https": "http://127.0.0.1:7890", - } -else: - proxies = None - -# [step 3]>> 多线程函数插件中,默认允许多少路线程同时访问OpenAI。Free trial users的限制是每分钟3次,Pay-as-you-go users的限制是每分钟3500次 -# 一言以蔽之:免费用户填3,OpenAI绑了信用卡的用户可以填 16 或者更高。提高限制请查询:https://platform.openai.com/docs/guides/rate-limits/overview -DEFAULT_WORKER_NUM = 3 - - -# [step 4]>> 以下配置可以优化体验,但大部分场合下并不需要修改 -# 对话窗的高度 -CHATBOT_HEIGHT = 1115 - -# 代码高亮 -CODE_HIGHLIGHT = True - -# 窗口布局 -LAYOUT = "LEFT-RIGHT" # "LEFT-RIGHT"(左右布局) # "TOP-DOWN"(上下布局) -DARK_MODE = True # "LEFT-RIGHT"(左右布局) # "TOP-DOWN"(上下布局) - -# 发送请求到OpenAI后,等待多久判定为超时 -TIMEOUT_SECONDS = 30 - -# 网页的端口, -1代表随机端口 -WEB_PORT = -1 - -# 如果OpenAI不响应(网络卡顿、代理失败、KEY失效),重试的次数限制 -MAX_RETRY = 2 - -# OpenAI模型选择是(gpt4现在只对申请成功的人开放) -LLM_MODEL = "gpt-3.5-turbo" # 可选 "chatglm" -AVAIL_LLM_MODELS = ["newbing-free", "gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "api2d-gpt-3.5-turbo"] - -# 本地LLM模型如ChatGLM的执行方式 CPU/GPU -LOCAL_MODEL_DEVICE = "cpu" # 可选 "cuda" - -# 设置gradio的并行线程数(不需要修改) -CONCURRENT_COUNT = 100 - -# 加一个live2d装饰 -ADD_WAIFU = False - -# 设置用户名和密码(不需要修改)(相关功能不稳定,与gradio版本和网络都相关,如果本地使用不建议加这个) -# [("username", "password"), ("username2", "password2"), ...] -AUTHENTICATION = [] - -# 重新URL重新定向,实现更换API_URL的作用(常规情况下,不要修改!!) -# (高危设置!通过修改此设置,您将把您的API-KEY和对话隐私完全暴露给您设定的中间人!) -# 格式 {"https://api.openai.com/v1/chat/completions": "在这里填写重定向的api.openai.com的URL"} -# 例如 API_URL_REDIRECT = {"https://api.openai.com/v1/chat/completions": "https://ai.open.com/api/conversation"} -API_URL_REDIRECT = {} - -# 如果需要在二级路径下运行(常规情况下,不要修改!!)(需要配合修改main.py才能生效!) 
-CUSTOM_PATH = "/" - -# 如果需要使用newbing,把newbing的长长的cookie放到这里 -NEWBING_STYLE = "creative" # ["creative", "balanced", "precise"] -# 从现在起,如果您调用"newbing-free"模型,则无需填写NEWBING_COOKIES -NEWBING_COOKIES = """ -your bing cookies here -""" - -# 如果需要使用Slack Claude,使用教程详情见 request_llm/README.md -SLACK_CLAUDE_BOT_ID = '' -SLACK_CLAUDE_USER_TOKEN = '' diff --git a/spaces/yangheng/Super-Resolution-Anime-Diffusion/RealESRGANv030/docs/Training_CN.md b/spaces/yangheng/Super-Resolution-Anime-Diffusion/RealESRGANv030/docs/Training_CN.md deleted file mode 100644 index dabc3c5d97e134a2d551157c2dd03a629ec661bc..0000000000000000000000000000000000000000 --- a/spaces/yangheng/Super-Resolution-Anime-Diffusion/RealESRGANv030/docs/Training_CN.md +++ /dev/null @@ -1,271 +0,0 @@ -# :computer: 如何训练/微调 Real-ESRGAN - -- [训练 Real-ESRGAN](#训练-real-esrgan) - - [概述](#概述) - - [准备数据集](#准备数据集) - - [训练 Real-ESRNet 模型](#训练-real-esrnet-模型) - - [训练 Real-ESRGAN 模型](#训练-real-esrgan-模型) -- [用自己的数据集微调 Real-ESRGAN](#用自己的数据集微调-real-esrgan) - - [动态生成降级图像](#动态生成降级图像) - - [使用已配对的数据](#使用已配对的数据) - -[English](Training.md) **|** [简体中文](Training_CN.md) - -## 训练 Real-ESRGAN - -### 概述 - -训练分为两个步骤。除了 loss 函数外,这两个步骤拥有相同数据合成以及训练的一条龙流程。具体点说: - -1. 首先使用 L1 loss 训练 Real-ESRNet 模型,其中 L1 loss 来自预先训练的 ESRGAN 模型。 - -2. 然后我们将 Real-ESRNet 模型作为生成器初始化,结合L1 loss、感知 loss、GAN loss 三者的参数对 Real-ESRGAN 进行训练。 - -### 准备数据集 - -我们使用 DF2K ( DIV2K 和 Flickr2K ) + OST 数据集进行训练。只需要HR图像!
          -下面是网站链接: -1. DIV2K: http://data.vision.ee.ethz.ch/cvl/DIV2K/DIV2K_train_HR.zip -2. Flickr2K: https://cv.snu.ac.kr/research/EDSR/Flickr2K.tar -3. OST: https://openmmlab.oss-cn-hangzhou.aliyuncs.com/datasets/OST_dataset.zip - -以下是数据的准备步骤。 - -#### 第1步:【可选】生成多尺寸图片 - -针对 DF2K 数据集,我们使用多尺寸缩放策略,*换言之*,我们对 HR 图像进行下采样,就能获得多尺寸的标准参考(Ground-Truth)图像。
          -您可以使用这个 [scripts/generate_multiscale_DF2K.py](scripts/generate_multiscale_DF2K.py) 脚本快速生成多尺寸的图像。
          -注意:如果您只想简单试试,那么可以跳过此步骤。 - -```bash -python scripts/generate_multiscale_DF2K.py --input datasets/DF2K/DF2K_HR --output datasets/DF2K/DF2K_multiscale -``` - -#### 第2步:【可选】裁切为子图像 - -我们可以将 DF2K 图像裁切为子图像,以加快 IO 和处理速度。
          -如果你的 IO 够好或储存空间有限,那么此步骤是可选的。
          - -您可以使用脚本 [scripts/extract_subimages.py](scripts/extract_subimages.py)。这是使用示例: - -```bash - python scripts/extract_subimages.py --input datasets/DF2K/DF2K_multiscale --output datasets/DF2K/DF2K_multiscale_sub --crop_size 400 --step 200 -``` - -#### 第3步:准备元信息 txt - -您需要准备一个包含图像路径的 txt 文件。下面是 `meta_info_DF2Kmultiscale+OST_sub.txt` 中的部分展示(由于各个用户可能有截然不同的子图像划分,这个文件不适合你的需求,你得准备自己的 txt 文件): - -```txt -DF2K_HR_sub/000001_s001.png -DF2K_HR_sub/000001_s002.png -DF2K_HR_sub/000001_s003.png -... -``` - -你可以使用该脚本 [scripts/generate_meta_info.py](scripts/generate_meta_info.py) 生成包含图像路径的 txt 文件。
          -你还可以合并多个文件夹的图像路径到一个元信息(meta_info)txt。这是使用示例: - -```bash - python scripts/generate_meta_info.py --input datasets/DF2K/DF2K_HR, datasets/DF2K/DF2K_multiscale --root datasets/DF2K, datasets/DF2K --meta_info datasets/DF2K/meta_info/meta_info_DF2Kmultiscale.txt -``` - -### 训练 Real-ESRNet 模型 - -1. 下载预先训练的模型 [ESRGAN](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.1/ESRGAN_SRx4_DF2KOST_official-ff704c30.pth),放到 `experiments/pretrained_models`目录下。 - ```bash - wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.1/ESRGAN_SRx4_DF2KOST_official-ff704c30.pth -P experiments/pretrained_models - ``` -2. 相应地修改选项文件 `options/train_realesrnet_x4plus.yml` 中的内容: - ```yml - train: - name: DF2K+OST - type: RealESRGANDataset - dataroot_gt: datasets/DF2K # 修改为你的数据集文件夹根目录 - meta_info: realesrgan/meta_info/meta_info_DF2Kmultiscale+OST_sub.txt # 修改为你自己生成的元信息txt - io_backend: - type: disk - ``` -3. 如果你想在训练过程中执行验证,就取消注释这些内容并进行相应的修改: - ```yml - # 取消注释这些以进行验证 - # val: - # name: validation - # type: PairedImageDataset - # dataroot_gt: path_to_gt - # dataroot_lq: path_to_lq - # io_backend: - # type: disk - - ... - - # 取消注释这些以进行验证 - # 验证设置 - # val: - # val_freq: !!float 5e3 - # save_img: True - - # metrics: - # psnr: # 指标名称,可以是任意的 - # type: calculate_psnr - # crop_border: 4 - # test_y_channel: false - ``` -4. 正式训练之前,你可以用 `--debug` 模式检查是否正常运行。我们用了4个GPU进行训练: - ```bash - CUDA_VISIBLE_DEVICES=0,1,2,3 \ - python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 realesrgan/train.py -opt options/train_realesrnet_x4plus.yml --launcher pytorch --debug - ``` - - 用 **1个GPU** 训练的 debug 模式示例: - ```bash - python realesrgan/train.py -opt options/train_realesrnet_x4plus.yml --debug - ``` -5. 正式训练开始。我们用了4个GPU进行训练。还可以使用参数 `--auto_resume` 在必要时自动恢复训练。 - ```bash - CUDA_VISIBLE_DEVICES=0,1,2,3 \ - python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 realesrgan/train.py -opt options/train_realesrnet_x4plus.yml --launcher pytorch --auto_resume - ``` - - 用 **1个GPU** 训练: - ```bash - python realesrgan/train.py -opt options/train_realesrnet_x4plus.yml --auto_resume - ``` - -### 训练 Real-ESRGAN 模型 - -1. 训练 Real-ESRNet 模型后,您得到了这个 `experiments/train_RealESRNetx4plus_1000k_B12G4_fromESRGAN/model/net_g_1000000.pth` 文件。如果需要指定预训练路径到其他文件,请修改选项文件 `train_realesrgan_x4plus.yml` 中 `pretrain_network_g` 的值。 -1. 修改选项文件 `train_realesrgan_x4plus.yml` 的内容。大多数修改与上节提到的类似。 -1. 正式训练之前,你可以以 `--debug` 模式检查是否正常运行。我们使用了4个GPU进行训练: - ```bash - CUDA_VISIBLE_DEVICES=0,1,2,3 \ - python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 realesrgan/train.py -opt options/train_realesrgan_x4plus.yml --launcher pytorch --debug - ``` - - 用 **1个GPU** 训练的 debug 模式示例: - ```bash - python realesrgan/train.py -opt options/train_realesrgan_x4plus.yml --debug - ``` -1. 正式训练开始。我们使用4个GPU进行训练。还可以使用参数 `--auto_resume` 在必要时自动恢复训练。 - ```bash - CUDA_VISIBLE_DEVICES=0,1,2,3 \ - python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 realesrgan/train.py -opt options/train_realesrgan_x4plus.yml --launcher pytorch --auto_resume - ``` - - 用 **1个GPU** 训练: - ```bash - python realesrgan/train.py -opt options/train_realesrgan_x4plus.yml --auto_resume - ``` - -## 用自己的数据集微调 Real-ESRGAN - -你可以用自己的数据集微调 Real-ESRGAN。一般地,微调(Fine-Tune)程序可以分为两种类型: - -1. [动态生成降级图像](#动态生成降级图像) -2. [使用**已配对**的数据](#使用已配对的数据) - -### 动态生成降级图像 - -只需要高分辨率图像。在训练过程中,使用 Real-ESRGAN 描述的降级模型生成低质量图像。 - -**1. 准备数据集** - -完整信息请参见[本节](#准备数据集)。 - -**2. 
下载预训练模型** - -下载预先训练的模型到 `experiments/pretrained_models` 目录下。 - -- *RealESRGAN_x4plus.pth*: - ```bash - wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth -P experiments/pretrained_models - ``` - -- *RealESRGAN_x4plus_netD.pth*: - ```bash - wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.3/RealESRGAN_x4plus_netD.pth -P experiments/pretrained_models - ``` - -**3. 微调** - -修改选项文件 [options/finetune_realesrgan_x4plus.yml](options/finetune_realesrgan_x4plus.yml) ,特别是 `datasets` 部分: - -```yml -train: - name: DF2K+OST - type: RealESRGANDataset - dataroot_gt: datasets/DF2K # 修改为你的数据集文件夹根目录 - meta_info: realesrgan/meta_info/meta_info_DF2Kmultiscale+OST_sub.txt # 修改为你自己生成的元信息txt - io_backend: - type: disk -``` - -我们使用4个GPU进行训练。还可以使用参数 `--auto_resume` 在必要时自动恢复训练。 - -```bash -CUDA_VISIBLE_DEVICES=0,1,2,3 \ -python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 realesrgan/train.py -opt options/finetune_realesrgan_x4plus.yml --launcher pytorch --auto_resume -``` - -用 **1个GPU** 训练: -```bash -python realesrgan/train.py -opt options/finetune_realesrgan_x4plus.yml --auto_resume -``` - -### 使用已配对的数据 - -你还可以用自己已经配对的数据微调 RealESRGAN。这个过程更类似于微调 ESRGAN。 - -**1. 准备数据集** - -假设你已经有两个文件夹(folder): - -- **gt folder**(标准参考,高分辨率图像):*datasets/DF2K/DIV2K_train_HR_sub* -- **lq folder**(低质量,低分辨率图像):*datasets/DF2K/DIV2K_train_LR_bicubic_X4_sub* - -然后,您可以使用脚本 [scripts/generate_meta_info_pairdata.py](scripts/generate_meta_info_pairdata.py) 生成元信息(meta_info)txt 文件。 - -```bash -python scripts/generate_meta_info_pairdata.py --input datasets/DF2K/DIV2K_train_HR_sub datasets/DF2K/DIV2K_train_LR_bicubic_X4_sub --meta_info datasets/DF2K/meta_info/meta_info_DIV2K_sub_pair.txt -``` - -**2. 下载预训练模型** - -下载预先训练的模型到 `experiments/pretrained_models` 目录下。 - -- *RealESRGAN_x4plus.pth*: - ```bash - wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth -P experiments/pretrained_models - ``` - -- *RealESRGAN_x4plus_netD.pth*: - ```bash - wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.3/RealESRGAN_x4plus_netD.pth -P experiments/pretrained_models - ``` - -**3. 
微调** - -修改选项文件 [options/finetune_realesrgan_x4plus_pairdata.yml](options/finetune_realesrgan_x4plus_pairdata.yml) ,特别是 `datasets` 部分: - -```yml -train: - name: DIV2K - type: RealESRGANPairedDataset - dataroot_gt: datasets/DF2K # 修改为你的 gt folder 文件夹根目录 - dataroot_lq: datasets/DF2K # 修改为你的 lq folder 文件夹根目录 - meta_info: datasets/DF2K/meta_info/meta_info_DIV2K_sub_pair.txt # 修改为你自己生成的元信息txt - io_backend: - type: disk -``` - -我们使用4个GPU进行训练。还可以使用参数 `--auto_resume` 在必要时自动恢复训练。 - -```bash -CUDA_VISIBLE_DEVICES=0,1,2,3 \ -python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 realesrgan/train.py -opt options/finetune_realesrgan_x4plus_pairdata.yml --launcher pytorch --auto_resume -``` - -用 **1个GPU** 训练: -```bash -python realesrgan/train.py -opt options/finetune_realesrgan_x4plus_pairdata.yml --auto_resume -``` diff --git a/spaces/yipinggan/Predict_progressive_collapse_resistance_with_DCN/pipe.py b/spaces/yipinggan/Predict_progressive_collapse_resistance_with_DCN/pipe.py deleted file mode 100644 index d3e34e4cc6ae1ff30a5ee0b036e53f56a36842f4..0000000000000000000000000000000000000000 --- a/spaces/yipinggan/Predict_progressive_collapse_resistance_with_DCN/pipe.py +++ /dev/null @@ -1,165 +0,0 @@ -import numpy as np -import matplotlib.pyplot as plt -import pandas as pd -import tensorflow as tf -import pickle -from scipy.interpolate import make_interp_spline as spline -from tensorflow.keras import regularizers -from tensorflow.keras import layers -from tensorflow.keras.layers import Input,Dense,Embedding,Reshape,Add,Flatten,Lambda,concatenate -from tensorflow.keras.optimizers import Adam -from tensorflow.keras.models import Model -from tensorflow.keras.utils import plot_model, to_categorical -from tensorflow.keras.models import load_model - - - -class CrossLayer(layers.Layer): - def __init__(self, output_dim, num_layer, **kwargs): - self.output_dim = output_dim - self.num_layer = num_layer - super(CrossLayer, self).__init__(**kwargs) - def build(self, input_shape): - self.input_dim = input_shape[1] - print(input_shape[1]) - self.W = [] - self.bias = [] - for i in range(self.num_layer): - self.W.append( - self.add_weight(shape=(self.input_dim,1), initializer='glorot_uniform', name='w_{}'.format(i), - trainable=True)) - self.bias.append( - self.add_weight(shape=(self.input_dim,1), initializer='zeros', name='b_{}'.format(i), trainable=True)) - self.built = True - - def call(self, input1): - x_0=tf.expand_dims(input1, axis=2) - x_l=x_0 - for i in range(self.num_layer): - xl_w = tf.tensordot(x_l,self.W[i],axes=(1,0)) - dot_ = tf.matmul(x_0,xl_w) - x_l = dot_ + self.bias[i] + x_l - x_l = tf.squeeze(x_l, axis=2) - return x_l - - def compute_output_shape(self, input_shape): - return (None, self.output_dim) - - def get_config(self): - config = super(CrossLayer, self).get_config() - config.update( - { - "output_dim": self.output_dim, - "num_layer": self.num_layer - } - ) - return config - -def DCN(c_layer,d_layer,dim_output): - input0 = Input(15,) - out1 = Lambda(lambda x: tf.one_hot(tf.cast(x[:, 0:1], dtype='int32'), 2))(input0) - out1 = Reshape(target_shape=(2,))(out1) - out2 = Lambda(lambda x: tf.one_hot(tf.cast(x[:, 1:2], dtype='int32'), 2))(input0) - out2 = Reshape(target_shape=(2,))(out2) - out3 = Lambda(lambda x: tf.one_hot(tf.cast(x[:, 2:3], dtype='int32'), 3))(input0) - out3 = Reshape(target_shape=(3,))(out3) - out4 = tf.keras.layers.Lambda(lambda x: x[:, 3:])(input0) - inp = concatenate([out1, out2, out3, out4]) - ## deep layer - for i in range(d_layer): - if i ==0: - deep = 
Dense(20,activation='relu', kernel_regularizer=regularizers.l2(0.01))(Flatten()(inp)) - else: - deep = Dense(30,activation='relu', kernel_regularizer=regularizers.l2(0.01))(deep) - ## cross layer - cross = CrossLayer(output_dim = inp.shape[1],num_layer=c_layer,name = "cross_layer")(inp) - ## concat both layers - output0 = concatenate([deep,cross],axis=-1) - output1 = Dense(20,activation='relu', kernel_regularizer=regularizers.l2(0.01))(output0) - output2 = Dense(30,activation='relu', kernel_regularizer=regularizers.l2(0.01))(output1) - output = Dense(dim_output)(output2) - model = Model(input0,output) - model.compile() - return model - -def transform(data): - - dim_input = 15 - x_new = ['' for i in range(dim_input)] - # - if data[0] == "EC": - x_new[0] = 1 - else: - x_new[0] = 2 - # - if data[1] == "B": - x_new[1] = 0 - else: - x_new[1] = 1 - # - if data[2] == "L1": - x_new[2] = 1 - elif data[2] == "L2": - x_new[2] = 2 - else: - x_new[2] = 3 - # - x_new[3:] = data[3:] - # - df = [] - df.append(x_new) - df = pd.DataFrame(df, columns=['S', 'T', 'B','Ln_x','Ln_y','H_x','H_y','B_x','B_y','rbt_x','rbb_x','rbt_y','rbb_y','fy_B','fc']) - return df - -def predict_I(data): - - df = transform(data) - features_name = df.columns - features_x = df[features_name] - features_name_x_n = features_name[3:] - - with open('z_score_DCNI.pickle', 'rb') as f: - z_score = pickle.load(f) - features_x[features_name_x_n] -= z_score[:len(z_score) // 2] - features_x[features_name_x_n] /= z_score[len(z_score) // 2:] - - model = DCN(3, 2,1) - model.load_weights('model_DCNI.h5') - Fcaa = np.squeeze(model.predict(features_x)) - - return str(np.round(Fcaa,2)) - -def predict_II(data): - - df = transform(data) - features_name = df.columns - features_x = df[features_name] - features_name_x_n = features_name[3:] - - with open('z_score_DCNII.pickle', 'rb') as f: - z_score = pickle.load(f) - features_x[features_name_x_n] -= z_score[:len(z_score) // 2] - features_x[features_name_x_n] /= z_score[len(z_score) // 2:] - - model = DCN(3, 2, 15) - model.load_weights('model_DCNII.h5') - FD = model.predict(features_x) - - return plot(FD) - -def plot(FD): - - fig, ax = plt.subplots(figsize=(6, 4), dpi=300, tight_layout=False) - x = np.arange(0, 0.16, 0.01) - x_new = np.linspace(min(x), max(x), 16) - y = np.pad(np.squeeze(FD), (1, 0)) - y_smooth = spline(x, y)(x_new) - ax.set_xlabel('${u}$/${L}$_n') - ax.set_ylabel('${F_D}$ (kN)') - ax.set_xlim([0, 0.15]) - ax.set_ylim([0, np.max(y) * 1.3]) - ax.grid(axis='both', linestyle='--') - ax.plot(x, y, '*-.', color='r') - fig.savefig('sample_FD.png', dpi=300) - - return fig diff --git a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/docs/Makefile b/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/docs/Makefile deleted file mode 100644 index 718eddce170fe13b67216baf9d4d25b20e860506..0000000000000000000000000000000000000000 --- a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/docs/Makefile +++ /dev/null @@ -1,19 +0,0 @@ -# Minimal makefile for Sphinx documentation -# Copyright (c) Facebook, Inc. and its affiliates. - -# You can set these variables from the command line. -SPHINXOPTS = -SPHINXBUILD = sphinx-build -SOURCEDIR = . -BUILDDIR = _build - -# Put it first so that "make" without argument is like "make help". -help: - @$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) - -.PHONY: help Makefile - -# Catch-all target: route all unknown targets to Sphinx using the new -# "make mode" option. 
$(O) is meant as a shortcut for $(SPHINXOPTS). -%: Makefile - @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) diff --git a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/meta_arch/centernet_detector.py b/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/meta_arch/centernet_detector.py deleted file mode 100644 index b7525c7b31cbbca504442e9a0dc8fb5005ea91b3..0000000000000000000000000000000000000000 --- a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/meta_arch/centernet_detector.py +++ /dev/null @@ -1,69 +0,0 @@ -import math -import json -import numpy as np -import torch -from torch import nn - -from detectron2.modeling.meta_arch.build import META_ARCH_REGISTRY -from detectron2.modeling import build_backbone, build_proposal_generator -from detectron2.modeling import detector_postprocess -from detectron2.structures import ImageList - -@META_ARCH_REGISTRY.register() -class CenterNetDetector(nn.Module): - def __init__(self, cfg): - super().__init__() - self.mean, self.std = cfg.MODEL.PIXEL_MEAN, cfg.MODEL.PIXEL_STD - self.register_buffer("pixel_mean", torch.Tensor(cfg.MODEL.PIXEL_MEAN).view(-1, 1, 1)) - self.register_buffer("pixel_std", torch.Tensor(cfg.MODEL.PIXEL_STD).view(-1, 1, 1)) - - self.backbone = build_backbone(cfg) - self.proposal_generator = build_proposal_generator( - cfg, self.backbone.output_shape()) # TODO: change to a more precise name - - - def forward(self, batched_inputs): - if not self.training: - return self.inference(batched_inputs) - images = self.preprocess_image(batched_inputs) - features = self.backbone(images.tensor) - gt_instances = [x["instances"].to(self.device) for x in batched_inputs] - - _, proposal_losses = self.proposal_generator( - images, features, gt_instances) - return proposal_losses - - - @property - def device(self): - return self.pixel_mean.device - - - @torch.no_grad() - def inference(self, batched_inputs, do_postprocess=True): - images = self.preprocess_image(batched_inputs) - inp = images.tensor - features = self.backbone(inp) - proposals, _ = self.proposal_generator(images, features, None) - - processed_results = [] - for results_per_image, input_per_image, image_size in zip( - proposals, batched_inputs, images.image_sizes): - if do_postprocess: - height = input_per_image.get("height", image_size[0]) - width = input_per_image.get("width", image_size[1]) - r = detector_postprocess(results_per_image, height, width) - processed_results.append({"instances": r}) - else: - r = results_per_image - processed_results.append(r) - return processed_results - - def preprocess_image(self, batched_inputs): - """ - Normalize, pad and batch the input images. 
- """ - images = [x["image"].to(self.device) for x in batched_inputs] - images = [(x - self.pixel_mean) / self.pixel_std for x in images] - images = ImageList.from_tensors(images, self.backbone.size_divisibility) - return images diff --git a/spaces/ysharma/text-to-image-to-video/app.py b/spaces/ysharma/text-to-image-to-video/app.py deleted file mode 100644 index 0425807884973df0d68d8c9dc2c3f5d99fb0f9f0..0000000000000000000000000000000000000000 --- a/spaces/ysharma/text-to-image-to-video/app.py +++ /dev/null @@ -1,237 +0,0 @@ -## **** below codelines are borrowed from multimodalart space -from pydoc import describe -import gradio as gr -import torch -from omegaconf import OmegaConf -import sys -sys.path.append(".") -sys.path.append('./taming-transformers') -#sys.path.append('./latent-diffusion') -from taming.models import vqgan -from util import instantiate_from_config -from huggingface_hub import hf_hub_download - -model_path_e = hf_hub_download(repo_id="multimodalart/compvis-latent-diffusion-text2img-large", filename="txt2img-f8-large.ckpt") - -#@title Import stuff -import argparse, os, sys, glob -import numpy as np -from PIL import Image -from einops import rearrange -from torchvision.utils import make_grid -import transformers -import gc -from util import instantiate_from_config -from ddim import DDIMSampler -from plms import PLMSSampler -from open_clip import tokenizer -import open_clip - -def load_model_from_config(config, ckpt, verbose=False): - print(f"Loading model from {ckpt}") - #pl_sd = torch.load(ckpt, map_location="cuda") - #please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU. - pl_sd = torch.load(ckpt, map_location=torch.device('cpu')) - sd = pl_sd["state_dict"] - model = instantiate_from_config(config.model) - m, u = model.load_state_dict(sd, strict=False) - if len(m) > 0 and verbose: - print("missing keys:") - print(m) - if len(u) > 0 and verbose: - print("unexpected keys:") - print(u) - - #model = model.half() #.cuda() - model.eval() - return model - -def load_safety_model(clip_model): - """load the safety model""" - import autokeras as ak # pylint: disable=import-outside-toplevel - from tensorflow.keras.models import load_model # pylint: disable=import-outside-toplevel - from os.path import expanduser # pylint: disable=import-outside-toplevel - - home = expanduser("~") - - cache_folder = home + "/.cache/clip_retrieval/" + clip_model.replace("/", "_") - if clip_model == "ViT-L/14": - model_dir = cache_folder + "/clip_autokeras_binary_nsfw" - dim = 768 - elif clip_model == "ViT-B/32": - model_dir = cache_folder + "/clip_autokeras_nsfw_b32" - dim = 512 - else: - raise ValueError("Unknown clip model") - if not os.path.exists(model_dir): - os.makedirs(cache_folder, exist_ok=True) - - from urllib.request import urlretrieve # pylint: disable=import-outside-toplevel - - path_to_zip_file = cache_folder + "/clip_autokeras_binary_nsfw.zip" - if clip_model == "ViT-L/14": - url_model = "https://raw.githubusercontent.com/LAION-AI/CLIP-based-NSFW-Detector/main/clip_autokeras_binary_nsfw.zip" - elif clip_model == "ViT-B/32": - url_model = ( - "https://raw.githubusercontent.com/LAION-AI/CLIP-based-NSFW-Detector/main/clip_autokeras_nsfw_b32.zip" - ) - else: - raise ValueError("Unknown model {}".format(clip_model)) - urlretrieve(url_model, path_to_zip_file) - import zipfile # pylint: disable=import-outside-toplevel - - with zipfile.ZipFile(path_to_zip_file, "r") as zip_ref: - zip_ref.extractall(cache_folder) - - loaded_model = load_model(model_dir, 
custom_objects=ak.CUSTOM_OBJECTS) - loaded_model.predict(np.random.rand(10 ** 3, dim).astype("float32"), batch_size=10 ** 3) - - return loaded_model - -def is_unsafe(safety_model, embeddings, threshold=0.5): - """find unsafe embeddings""" - nsfw_values = safety_model.predict(embeddings, batch_size=embeddings.shape[0]) - x = np.array([e[0] for e in nsfw_values]) - return True if x > threshold else False - -config = OmegaConf.load("./txt2img-1p4B-eval.yaml") -model = load_model_from_config(config,model_path_e) -device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu") -model = model.to(device) - -#NSFW CLIP Filter -safety_model = load_safety_model("ViT-B/32") -clip_model, _, preprocess = open_clip.create_model_and_transforms('ViT-B-32', pretrained='openai') - - -def run(prompt, steps, width, height, images, scale): - opt = argparse.Namespace( - prompt = prompt, - ###outdir='./outputs', - ddim_steps = int(steps), - ddim_eta = 1, - n_iter = 1, - W=int(width), - H=int(height), - n_samples=int(images), - scale=scale, - plms=False - ) - - if opt.plms: - opt.ddim_eta = 0 - sampler = PLMSSampler(model) - else: - sampler = DDIMSampler(model) - - ###os.makedirs(opt.outdir, exist_ok=True) - ###outpath = opt.outdir - - prompt = opt.prompt - - - ###sample_path = os.path.join(outpath, "samples") - ###os.makedirs(sample_path, exist_ok=True) - ###base_count = len(os.listdir(sample_path)) - - all_samples=list() - all_samples_images=list() - with torch.no_grad(): - with torch.cuda.amp.autocast(): - with model.ema_scope(): - uc = None - if opt.scale > 0: - uc = model.get_learned_conditioning(opt.n_samples * [""]) - for n in range(opt.n_iter): - c = model.get_learned_conditioning(opt.n_samples * [prompt]) - shape = [4, opt.H//8, opt.W//8] - samples_ddim, _ = sampler.sample(S=opt.ddim_steps, - conditioning=c, - batch_size=opt.n_samples, - shape=shape, - verbose=False, - unconditional_guidance_scale=opt.scale, - unconditional_conditioning=uc, - eta=opt.ddim_eta) - - x_samples_ddim = model.decode_first_stage(samples_ddim) - x_samples_ddim = torch.clamp((x_samples_ddim+1.0)/2.0, min=0.0, max=1.0) - - for x_sample in x_samples_ddim: - x_sample = 255. * rearrange(x_sample.cpu().numpy(), 'c h w -> h w c') - image_vector = Image.fromarray(x_sample.astype(np.uint8)) - image_preprocess = preprocess(image_vector).unsqueeze(0) - with torch.no_grad(): - image_features = clip_model.encode_image(image_preprocess) - image_features /= image_features.norm(dim=-1, keepdim=True) - query = image_features.cpu().detach().numpy().astype("float32") - unsafe = is_unsafe(safety_model,query,0.5) - if(not unsafe): - all_samples_images.append(image_vector) - else: - return(None,None,"Sorry, potential NSFW content was detected on your outputs by our NSFW detection model. Try again with different prompts. If you feel your prompt was not supposed to give NSFW outputs, this may be due to a bias in the model. Read more about biases in the Biases Acknowledgment section below.") - #Image.fromarray(x_sample.astype(np.uint8)).save(os.path.join(sample_path, f"{base_count:04}.png")) - ###base_count += 1 - all_samples.append(x_samples_ddim) - - - # additionally, save as grid - grid = torch.stack(all_samples, 0) - grid = rearrange(grid, 'n b c h w -> (n b) c h w') - grid = make_grid(grid, nrow=2) - # to image - grid = 255. 
* rearrange(grid, 'c h w -> h w c').cpu().numpy() - - ###Image.fromarray(grid.astype(np.uint8)).save(os.path.join(outpath, f'{prompt.replace(" ", "-")}.png')) - #return(Image.fromarray(grid.astype(np.uint8)),all_samples_images,None) - return Image.fromarray(grid.astype(np.uint8)) - -## **** above codelines are borrowed from multimodalart space - -import gradio as gr - -fastspeech = gr.Interface.load("huggingface/facebook/fastspeech2-en-ljspeech") - -def text2speech(text): - return fastspeech(text) - -def engine(text_input): - #ner = gr.Interface.load("huggingface/flair/ner-english-ontonotes-large") - #entities = ner(text_input) - #entities = [tupl for tupl in entities if None not in tupl] - #entities_num = len(entities) - - img = run(text_input,'50','256','256','1',10) #entities[0][0] - - #img_intfc = gr.Interface.load("spaces/multimodalart/latentdiffusion") - #img_intfc = gr.Interface.load("spaces/multimodalart/latentdiffusion", inputs=[gr.inputs.Textbox(lines=1, label="Input Text"), gr.inputs.Textbox(lines=1, label="Input Text"), gr.inputs.Textbox(lines=1, label="Input Text"), gr.inputs.Textbox(lines=1, label="Input Text"), gr.inputs.Textbox(lines=1, label="Input Text"), gr.inputs.Textbox(lines=1, label="Input Text")], - #outputs=[gr.outputs.Image(type="pil", label="output image"),gr.outputs.Carousel(label="Individual images",components=["image"]),gr.outputs.Textbox(label="Error")], ) - #title="Convert text to image") - #img = img_intfc[0] - #img = img_intfc('George','50','256','256','1','10') - #img = img[0] - #inputs=['George',50,256,256,1,10] - #run(prompt, steps, width, height, images, scale) - - #speech = text2speech(text_input) - return img #entities, speech, img - -app = gr.Interface(fn=engine, - inputs=gr.inputs.Textbox(lines=5, label="Input Text"), - #live=True, - description="Takes a text as input and reads it out to you.", - outputs=[#gr.outputs.Textbox(type="auto", label="Text"),gr.outputs.Audio(type="file", label="Speech Answer"), - gr.outputs.Image(type="pil", label="output image")], - examples = ['Apple'] - #examples=["On April 17th Sunday George celebrated Easter. He is staying at Empire State building with his parents. He is a citizen of Canada and speaks English and French fluently. His role model is former president Obama. He got 1000 dollar from his mother to visit Disney World and to buy new iPhone mobile. George likes watching Game of Thrones."] - ).launch(enable_queue=True) #(debug=True) - - - #get_audio = gr.Button("generate audio") - #get_audio.click(text2speech, inputs=text, outputs=speech) - -#def greet(name): -# return "Hello " + name + "!!" 
- -#iface = gr.Interface(fn=greet, inputs="text", outputs="text") -#iface.launch() \ No newline at end of file diff --git a/spaces/yuan1615/EmpathyVC/utils.py b/spaces/yuan1615/EmpathyVC/utils.py deleted file mode 100644 index 810d31a97eabb4a2fc8e7761de4ad2297676854c..0000000000000000000000000000000000000000 --- a/spaces/yuan1615/EmpathyVC/utils.py +++ /dev/null @@ -1,258 +0,0 @@ -import os -import glob -import sys -import argparse -import logging -import json -import subprocess -import numpy as np -import torch -from scipy.io.wavfile import read - -MATPLOTLIB_FLAG = False - -logging.basicConfig(stream=sys.stdout, level=logging.DEBUG) -logger = logging - - -def load_checkpoint(checkpoint_path, model, optimizer=None): - assert os.path.isfile(checkpoint_path) - checkpoint_dict = torch.load(checkpoint_path, map_location='cpu') - iteration = checkpoint_dict['iteration'] - learning_rate = checkpoint_dict['learning_rate'] - if optimizer is not None: - optimizer.load_state_dict(checkpoint_dict['optimizer']) - saved_state_dict = checkpoint_dict['model'] - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - new_state_dict= {} - for k, v in state_dict.items(): - try: - new_state_dict[k] = saved_state_dict[k] - except: - logger.info("%s is not in the checkpoint" % k) - new_state_dict[k] = v - if hasattr(model, 'module'): - model.module.load_state_dict(new_state_dict) - else: - model.load_state_dict(new_state_dict) - logger.info("Loaded checkpoint '{}' (iteration {})" .format( - checkpoint_path, iteration)) - return model, optimizer, learning_rate, iteration - - -def save_checkpoint(model, optimizer, learning_rate, iteration, checkpoint_path): - logger.info("Saving model and optimizer state at iteration {} to {}".format( - iteration, checkpoint_path)) - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - torch.save({'model': state_dict, - 'iteration': iteration, - 'optimizer': optimizer.state_dict(), - 'learning_rate': learning_rate}, checkpoint_path) - - -def summarize(writer, global_step, scalars={}, histograms={}, images={}, audios={}, audio_sampling_rate=22050): - for k, v in scalars.items(): - writer.add_scalar(k, v, global_step) - for k, v in histograms.items(): - writer.add_histogram(k, v, global_step) - for k, v in images.items(): - writer.add_image(k, v, global_step, dataformats='HWC') - for k, v in audios.items(): - writer.add_audio(k, v, global_step, audio_sampling_rate) - - -def latest_checkpoint_path(dir_path, regex="G_*.pth"): - f_list = glob.glob(os.path.join(dir_path, regex)) - f_list.sort(key=lambda f: int("".join(filter(str.isdigit, f)))) - x = f_list[-1] - print(x) - return x - - -def plot_spectrogram_to_numpy(spectrogram): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(10,2)) - im = ax.imshow(spectrogram, aspect="auto", origin="lower", - interpolation='none') - plt.colorbar(im, ax=ax) - plt.xlabel("Frames") - plt.ylabel("Channels") - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def plot_alignment_to_numpy(alignment, info=None): - global 
MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(6, 4)) - im = ax.imshow(alignment.transpose(), aspect='auto', origin='lower', - interpolation='none') - fig.colorbar(im, ax=ax) - xlabel = 'Decoder timestep' - if info is not None: - xlabel += '\n\n' + info - plt.xlabel(xlabel) - plt.ylabel('Encoder timestep') - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def load_wav_to_torch(full_path): - sampling_rate, data = read(full_path) - return torch.FloatTensor(data.astype(np.float32)), sampling_rate - - -def load_filepaths_and_text(filename, split="|"): - with open(filename, encoding='utf-8') as f: - filepaths_and_text = [line.strip().split(split) for line in f] - return filepaths_and_text - - -def get_hparams(init=True): - parser = argparse.ArgumentParser() - parser.add_argument('-c', '--config', type=str, default="./configs/base.json", - help='JSON file for configuration') - parser.add_argument('-m', '--model', type=str, required=True, - help='Model name') - - args = parser.parse_args() - model_dir = os.path.join("./logs", args.model) - - if not os.path.exists(model_dir): - os.makedirs(model_dir) - - config_path = args.config - config_save_path = os.path.join(model_dir, "config.json") - if init: - with open(config_path, "r") as f: - data = f.read() - with open(config_save_path, "w") as f: - f.write(data) - else: - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_dir(model_dir): - config_save_path = os.path.join(model_dir, "config.json") - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams =HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_file(config_path): - with open(config_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams =HParams(**config) - return hparams - - -def check_git_hash(model_dir): - source_dir = os.path.dirname(os.path.realpath(__file__)) - if not os.path.exists(os.path.join(source_dir, ".git")): - logger.warn("{} is not a git repository, therefore hash value comparison will be ignored.".format( - source_dir - )) - return - - cur_hash = subprocess.getoutput("git rev-parse HEAD") - - path = os.path.join(model_dir, "githash") - if os.path.exists(path): - saved_hash = open(path).read() - if saved_hash != cur_hash: - logger.warn("git hash values are different. 
{}(saved) != {}(current)".format( - saved_hash[:8], cur_hash[:8])) - else: - open(path, "w").write(cur_hash) - - -def get_logger(model_dir, filename="train.log"): - global logger - logger = logging.getLogger(os.path.basename(model_dir)) - logger.setLevel(logging.DEBUG) - - formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s") - if not os.path.exists(model_dir): - os.makedirs(model_dir) - h = logging.FileHandler(os.path.join(model_dir, filename)) - h.setLevel(logging.DEBUG) - h.setFormatter(formatter) - logger.addHandler(h) - return logger - - -class HParams(): - def __init__(self, **kwargs): - for k, v in kwargs.items(): - if type(v) == dict: - v = HParams(**v) - self[k] = v - - def keys(self): - return self.__dict__.keys() - - def items(self): - return self.__dict__.items() - - def values(self): - return self.__dict__.values() - - def __len__(self): - return len(self.__dict__) - - def __getitem__(self, key): - return getattr(self, key) - - def __setitem__(self, key, value): - return setattr(self, key, value) - - def __contains__(self, key): - return key in self.__dict__ - - def __repr__(self): - return self.__dict__.__repr__() diff --git a/spaces/zhang-wei-jian/docker/node_modules/simple-update-notifier/node_modules/semver/functions/rcompare.js b/spaces/zhang-wei-jian/docker/node_modules/simple-update-notifier/node_modules/semver/functions/rcompare.js deleted file mode 100644 index 0ac509e79dc8cfcde46be9d8247b91dd301fbde8..0000000000000000000000000000000000000000 --- a/spaces/zhang-wei-jian/docker/node_modules/simple-update-notifier/node_modules/semver/functions/rcompare.js +++ /dev/null @@ -1,3 +0,0 @@ -const compare = require('./compare') -const rcompare = (a, b, loose) => compare(b, a, loose) -module.exports = rcompare diff --git a/spaces/zhiyin123/MyBingAI6/app.py b/spaces/zhiyin123/MyBingAI6/app.py deleted file mode 100644 index 8ac48515f96e95c3caf650ae6d84a20e3d35be19..0000000000000000000000000000000000000000 --- a/spaces/zhiyin123/MyBingAI6/app.py +++ /dev/null @@ -1,34 +0,0 @@ -# Build Stage -# 使用 golang:alpine 作为构建阶段的基础镜像 -FROM golang:alpine AS builder - -# 添加 git,以便之后能从GitHub克隆项目 -RUN apk --no-cache add git - -# 从 GitHub 克隆 go-proxy-bingai 项目到 /workspace/app 目录下 -RUN git clone https://github.com/Harry-zklcdc/go-proxy-bingai.git /workspace/app - -# 设置工作目录为之前克隆的项目目录 -WORKDIR /workspace/app - -# 编译 go 项目。-ldflags="-s -w" 是为了减少编译后的二进制大小 -RUN go build -ldflags="-s -w" -tags netgo -trimpath -o go-proxy-bingai main.go - -# Runtime Stage -# 使用轻量级的 alpine 镜像作为运行时的基础镜像 -FROM alpine - -# 设置工作目录 -WORKDIR /workspace/app - -# 从构建阶段复制编译后的二进制文件到运行时镜像中 -COPY --from=builder /workspace/app/go-proxy-bingai . 
- -# 设置环境变量,此处为随机字符 -ENV Go_Proxy_BingAI_USER_TOKEN_1="kJs8hD42ncMzLaoQWYtX5rG6bE3fZ4iO" - -# 暴露8080端口 -EXPOSE 8080 - -# 容器启动时运行的命令 -CMD ["/workspace/app/go-proxy-bingai"] \ No newline at end of file diff --git a/spaces/zhoupin30/zhoupin30/src/pages/api/image.ts b/spaces/zhoupin30/zhoupin30/src/pages/api/image.ts deleted file mode 100644 index 26fdb31076a9c71e70d1725a630844b27f5a3221..0000000000000000000000000000000000000000 --- a/spaces/zhoupin30/zhoupin30/src/pages/api/image.ts +++ /dev/null @@ -1,38 +0,0 @@ -'use server' - -import { NextApiRequest, NextApiResponse } from 'next' -import { debug } from '@/lib/isomorphic' -import { createHeaders } from '@/lib/utils' -import { createImage } from '@/lib/bots/bing/utils' - -export default async function handler(req: NextApiRequest, res: NextApiResponse) { - const { prompt, id } = req.query - if (!prompt) { - return res.json({ - result: { - value: 'Image', - message: 'No Prompt' - } - }) - } - try { - const headers = createHeaders(req.cookies, 'image') - - debug('headers', headers) - const response = await createImage(String(prompt), String(id), { - ...headers, - 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32', - }) - res.writeHead(200, { - 'Content-Type': 'text/plain; charset=UTF-8', - }) - return res.end(response) - } catch (e) { - return res.json({ - result: { - value: 'Error', - message: `${e}` - } - }) - } -} diff --git a/spaces/zhsso/roop/roop/__init__.py b/spaces/zhsso/roop/roop/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/zomehwh/sovits-tannhauser/inference/slicer.py b/spaces/zomehwh/sovits-tannhauser/inference/slicer.py deleted file mode 100644 index b05840bcf6bdced0b6e2adbecb1a1dd5b3dee462..0000000000000000000000000000000000000000 --- a/spaces/zomehwh/sovits-tannhauser/inference/slicer.py +++ /dev/null @@ -1,142 +0,0 @@ -import librosa -import torch -import torchaudio - - -class Slicer: - def __init__(self, - sr: int, - threshold: float = -40., - min_length: int = 5000, - min_interval: int = 300, - hop_size: int = 20, - max_sil_kept: int = 5000): - if not min_length >= min_interval >= hop_size: - raise ValueError('The following condition must be satisfied: min_length >= min_interval >= hop_size') - if not max_sil_kept >= hop_size: - raise ValueError('The following condition must be satisfied: max_sil_kept >= hop_size') - min_interval = sr * min_interval / 1000 - self.threshold = 10 ** (threshold / 20.) 
- self.hop_size = round(sr * hop_size / 1000) - self.win_size = min(round(min_interval), 4 * self.hop_size) - self.min_length = round(sr * min_length / 1000 / self.hop_size) - self.min_interval = round(min_interval / self.hop_size) - self.max_sil_kept = round(sr * max_sil_kept / 1000 / self.hop_size) - - def _apply_slice(self, waveform, begin, end): - if len(waveform.shape) > 1: - return waveform[:, begin * self.hop_size: min(waveform.shape[1], end * self.hop_size)] - else: - return waveform[begin * self.hop_size: min(waveform.shape[0], end * self.hop_size)] - - # @timeit - def slice(self, waveform): - if len(waveform.shape) > 1: - samples = librosa.to_mono(waveform) - else: - samples = waveform - if samples.shape[0] <= self.min_length: - return {"0": {"slice": False, "split_time": f"0,{len(waveform)}"}} - rms_list = librosa.feature.rms(y=samples, frame_length=self.win_size, hop_length=self.hop_size).squeeze(0) - sil_tags = [] - silence_start = None - clip_start = 0 - for i, rms in enumerate(rms_list): - # Keep looping while frame is silent. - if rms < self.threshold: - # Record start of silent frames. - if silence_start is None: - silence_start = i - continue - # Keep looping while frame is not silent and silence start has not been recorded. - if silence_start is None: - continue - # Clear recorded silence start if interval is not enough or clip is too short - is_leading_silence = silence_start == 0 and i > self.max_sil_kept - need_slice_middle = i - silence_start >= self.min_interval and i - clip_start >= self.min_length - if not is_leading_silence and not need_slice_middle: - silence_start = None - continue - # Need slicing. Record the range of silent frames to be removed. - if i - silence_start <= self.max_sil_kept: - pos = rms_list[silence_start: i + 1].argmin() + silence_start - if silence_start == 0: - sil_tags.append((0, pos)) - else: - sil_tags.append((pos, pos)) - clip_start = pos - elif i - silence_start <= self.max_sil_kept * 2: - pos = rms_list[i - self.max_sil_kept: silence_start + self.max_sil_kept + 1].argmin() - pos += i - self.max_sil_kept - pos_l = rms_list[silence_start: silence_start + self.max_sil_kept + 1].argmin() + silence_start - pos_r = rms_list[i - self.max_sil_kept: i + 1].argmin() + i - self.max_sil_kept - if silence_start == 0: - sil_tags.append((0, pos_r)) - clip_start = pos_r - else: - sil_tags.append((min(pos_l, pos), max(pos_r, pos))) - clip_start = max(pos_r, pos) - else: - pos_l = rms_list[silence_start: silence_start + self.max_sil_kept + 1].argmin() + silence_start - pos_r = rms_list[i - self.max_sil_kept: i + 1].argmin() + i - self.max_sil_kept - if silence_start == 0: - sil_tags.append((0, pos_r)) - else: - sil_tags.append((pos_l, pos_r)) - clip_start = pos_r - silence_start = None - # Deal with trailing silence. - total_frames = rms_list.shape[0] - if silence_start is not None and total_frames - silence_start >= self.min_interval: - silence_end = min(total_frames, silence_start + self.max_sil_kept) - pos = rms_list[silence_start: silence_end + 1].argmin() + silence_start - sil_tags.append((pos, total_frames + 1)) - # Apply and return slices. 
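-        # note (added comment, not in the original file): `threshold` is given in dB; 10 ** (dB / 20) converts it to a linear amplitude so it can be compared against librosa's RMS envelope computed below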
- if len(sil_tags) == 0: - return {"0": {"slice": False, "split_time": f"0,{len(waveform)}"}} - else: - chunks = [] - # 第一段静音并非从头开始,补上有声片段 - if sil_tags[0][0]: - chunks.append( - {"slice": False, "split_time": f"0,{min(waveform.shape[0], sil_tags[0][0] * self.hop_size)}"}) - for i in range(0, len(sil_tags)): - # 标识有声片段(跳过第一段) - if i: - chunks.append({"slice": False, - "split_time": f"{sil_tags[i - 1][1] * self.hop_size},{min(waveform.shape[0], sil_tags[i][0] * self.hop_size)}"}) - # 标识所有静音片段 - chunks.append({"slice": True, - "split_time": f"{sil_tags[i][0] * self.hop_size},{min(waveform.shape[0], sil_tags[i][1] * self.hop_size)}"}) - # 最后一段静音并非结尾,补上结尾片段 - if sil_tags[-1][1] * self.hop_size < len(waveform): - chunks.append({"slice": False, "split_time": f"{sil_tags[-1][1] * self.hop_size},{len(waveform)}"}) - chunk_dict = {} - for i in range(len(chunks)): - chunk_dict[str(i)] = chunks[i] - return chunk_dict - - -def cut(audio_path, db_thresh=-30, min_len=5000): - audio, sr = librosa.load(audio_path, sr=None) - slicer = Slicer( - sr=sr, - threshold=db_thresh, - min_length=min_len - ) - chunks = slicer.slice(audio) - return chunks - - -def chunks2audio(audio_path, chunks): - chunks = dict(chunks) - audio, sr = torchaudio.load(audio_path) - if len(audio.shape) == 2 and audio.shape[1] >= 2: - audio = torch.mean(audio, dim=0).unsqueeze(0) - audio = audio.cpu().numpy()[0] - result = [] - for k, v in chunks.items(): - tag = v["split_time"].split(",") - if tag[0] != tag[1]: - result.append((v["slice"], audio[int(tag[0]):int(tag[1])])) - return result, sr