diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/A Bugs Life Movie Free [UPDATED] Download In Hindi.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/A Bugs Life Movie Free [UPDATED] Download In Hindi.md deleted file mode 100644 index ff398048771c03b35476df61fce889a4e74e3be3..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/A Bugs Life Movie Free [UPDATED] Download In Hindi.md +++ /dev/null @@ -1,23 +0,0 @@ -
-

How to Watch A Bug's Life Movie Free Download In Hindi Online

-

A Bug's Life is a 1998 animated comedy film produced by Pixar Animation Studios and distributed by Walt Disney Pictures. The film tells the story of an ant colony that is oppressed by a gang of grasshoppers and how a misfit ant named Flik tries to save the day with the help of a circus troupe of bugs.

-

A Bug's Life Movie Free Download In Hindi


Download Zip: https://byltly.com/2uKyKz



-

If you are looking for a way to watch A Bug's Life movie free download in Hindi online, you have come to the right place. In this article, we will show you how to stream or download the movie legally and safely without any hassle.

-

Why You Should Watch A Bug's Life Movie Free Download In Hindi Online

-

A Bug's Life is a classic animated film that has won many awards and accolades, including an Academy Award nomination for Best Original Score. The film features a talented voice cast, including Dave Foley, Kevin Spacey, Julia Louis-Dreyfus, Hayden Panettiere, Phyllis Diller, David Hyde Pierce, Denis Leary, and more.

-

The film is also full of humor, adventure, and heartwarming messages about courage, friendship, and teamwork. It is a great movie for kids and adults alike, as it offers a lot of fun and entertainment for the whole family.

-

Moreover, watching A Bug's Life movie free download in Hindi online can help you enjoy the film in your native language and understand the cultural references better. You can also learn some new words and phrases in Hindi while watching the movie.

-

How to Watch A Bug's Life Movie Free Download In Hindi Online

-

There are many websites and apps that claim to offer A Bug's Life movie free download in Hindi online, but not all of them are reliable or legal. Some of them may contain viruses, malware, or spyware that can harm your device or compromise your privacy. Some of them may also violate the copyright laws and put you at risk of legal trouble.

-

-

Therefore, we recommend that you watch A Bug's Life movie free download in Hindi online only from trusted and authorized sources. Here are some of the best options that you can choose from:

- -

Conclusion

-

A Bug's Life is a wonderful animated film that you should not miss out on. You can watch A Bug's Life movie free download in Hindi online from any of the above-mentioned sources and enjoy the movie in your preferred language.

-

We hope this article has helped you find the best way to watch A Bug's Life movie free download in Hindi online. If you have any questions or feedback, please feel free to leave a comment below.

cec2833e83
-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Dr. Folder 2.7.0.1 full key change icons for all folders on Windows A complete guide to customize your folders.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Dr. Folder 2.7.0.1 full key change icons for all folders on Windows A complete guide to customize your folders.md deleted file mode 100644 index 1b534a6bb0ce568af83bfe177359f0a7e8dd163d..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Dr. Folder 2.7.0.1 full key change icons for all folders on Windows A complete guide to customize your folders.md +++ /dev/null @@ -1,87 +0,0 @@ -
-

Dr. Folder 2.7.0.1 full key change icons for all folders on Windows update 6 13 2019

-

Introduction

-

Are you bored with the same old folder icons on your Windows computer? Do you want to customize your folders with different colors, shapes, and images? If yes, then you need Dr. Folder, a powerful and easy-to-use program that lets you change icons for all folders on Windows in just a few clicks.

-

Dr. Folder 2.7.0.1 full key change icons for all folders on Windows update 6 13 2019


DOWNLOAD ✓✓✓ https://byltly.com/2uKyqf



-

What is Dr. Folder?

-

Dr. Folder is a folder icon changer software that can replace the default folder icon with any icon you want. It has a large collection of icons for various categories, such as animals, cartoons, games, movies, music, nature, sports, etc. You can also use your own icons or download more from the internet.

-

Why use Dr. Folder?

-

Dr. Folder can help you organize your folders better by making them more recognizable and attractive. You can also use different icons to indicate the status or priority of your folders, such as important, private, locked, shared, etc. Moreover, Dr. Folder can protect your folders from being deleted or modified by hiding them or making them read-only.

-

How to download and install Dr. Folder?

-

You can download Dr. Folder from its official website or from other trusted sources online. The latest version is 2.7.0.1 and it was updated on June 13, 2019. The file size is about 8 MB and it supports Windows XP/Vista/7/8/10 (32-bit and 64-bit). To install Dr. Folder, you need to run the setup file and follow the instructions on the screen.

-

How to change icons for all folders on Windows with Dr. Folder?

-

Changing icons for all folders on Windows with Dr. Folder is very simple and fast. Here are the steps you need to follow:

-

Step 1: Launch Dr. Folder

-

After installing Dr. Folder, you can find it in your Start menu or on your desktop. Double-click on its icon to open it.

-

Step 2: Select the folders you want to change icons for

-

You can select one or more folders from your computer by using the Add Folders button or by dragging and dropping them into the main window of Dr. Folder.

-

Step 3: Choose an icon from the built-in library or your own collection

-

You can browse through the built-in library of icons by clicking on the Icons button at the top of the window. You can also use the Search function to find a specific icon by its name or keyword.

-

How to customize folder icons with Dr. Folder 2.7.0.1 full key
-Dr. Folder 2.7.0.1 full key download link and installation guide
-Best folder icon changer software for Windows: Dr. Folder 2.7.0.1 full key
-Dr. Folder 2.7.0.1 full key features and benefits
-Dr. Folder 2.7.0.1 full key review and rating
-How to get Dr. Folder 2.7.0.1 full key for free
-Dr. Folder 2.7.0.1 full key vs other folder icon changer tools
-How to update Dr. Folder 2.7.0.1 full key to the latest version
-How to use Dr. Folder 2.7.0.1 full key to change icons for multiple folders
-How to fix Dr. Folder 2.7.0.1 full key errors and issues
-How to uninstall Dr. Folder 2.7.0.1 full key from Windows
-How to backup and restore folder icons with Dr. Folder 2.7.0.1 full key
-How to create custom folder icons with Dr. Folder 2.7.0.1 full key
-How to apply different folder icons for different file types with Dr. Folder 2.7.0.1 full key
-How to change folder icons according to themes with Dr. Folder 2.7.0.1 full key
-How to change folder icons in Windows Explorer and Desktop with Dr. Folder 2.7.0.1 full key
-How to change folder icons in OneDrive and Dropbox with Dr. Folder
-How to change folder icons in network drives and external devices with Dr. Folder
-How to change folder icons in Windows Start Menu and Taskbar with Dr. Folder
-How to change folder icons in Windows Registry and System Files with Dr. Folder
-How to change folder icons for hidden and protected folders with Dr. Folder
-How to change folder icons for shortcuts and links with Dr. Folder
-How to change folder icons for compressed and encrypted folders with Dr. Folder
-How to change folder icons for shared and synced folders with Dr. Folder
-How to change folder icons for special and system folders with Dr. Folder
-How to batch change folder icons with Dr. Folder
-How to preview folder icons before changing them with Dr. Folder
-How to revert folder icons back to default with Dr. Folder
-How to find and replace folder icons with Dr. Folder
-How to sort and filter folder icons with Dr. Folder
-How to export and import folder icons with Dr. Folder

-

If you want to use your own icons, you can click on the Add Icons button and select them from your computer.

-

Step 4: Apply the changes and enjoy your new folder icons

-

Once you have chosen an icon for your folders, you can click on the Apply button at the bottom of the window to make the changes effective.

-

You can also preview how your folders will look before applying the changes by clicking on the Preview button.

-

Congratulations! You have successfully changed icons for all folders on Windows with Dr. Folder.
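As background for the curious: Windows itself stores a folder's custom icon in a hidden desktop.ini file inside that folder, which is the standard mechanism that folder icon changers rely on. The snippet below is only a rough, hypothetical Python sketch of that mechanism (not Dr. Folder's actual code), and the folder and icon paths in it are placeholders.

```python
import ctypes
from pathlib import Path

FILE_ATTRIBUTE_READONLY = 0x01
FILE_ATTRIBUTE_HIDDEN = 0x02
FILE_ATTRIBUTE_SYSTEM = 0x04

def set_folder_icon(folder: str, icon_path: str, icon_index: int = 0) -> None:
    """Write a desktop.ini so Explorer shows a custom icon for the folder."""
    folder_p = Path(folder)
    ini = folder_p / "desktop.ini"
    ini.write_text(
        "[.ShellClassInfo]\n"
        f"IconResource={icon_path},{icon_index}\n",
        encoding="utf-8",
    )
    # desktop.ini is normally hidden + system, and the folder itself needs the
    # read-only bit so Explorer knows to look for a customization.
    ctypes.windll.kernel32.SetFileAttributesW(
        str(ini), FILE_ATTRIBUTE_HIDDEN | FILE_ATTRIBUTE_SYSTEM
    )
    attrs = ctypes.windll.kernel32.GetFileAttributesW(str(folder_p))
    ctypes.windll.kernel32.SetFileAttributesW(
        str(folder_p), attrs | FILE_ATTRIBUTE_READONLY
    )

# Hypothetical usage:
# set_folder_icon(r"C:\Users\me\Documents\Projects", r"C:\Icons\projects.ico")
```

Explorer caches folder icons, so a refresh (F5) or signing out and back in may be needed before the new icon appears.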

-

How to restore the default icons for all folders on Windows with Dr. Folder?

-

If you want to go back to the original folder icons on Windows, you can easily do that with Dr. Folder as well. Here are the steps you need to follow:

-

Step 1: Launch Dr. Folder

-

Open Dr. Folder as described in step 1 above.

-

Step 2: Select the folders you want to restore icons for

-

Select one or more folders from your computer by using the Add Folders button or by dragging and dropping them into the main window of Dr. Folder.

-

Step 3: Click on the Restore Default Icon button

-

You can find this button at the top of the window next to the Icons button.

-

Step 4: Confirm the changes and revert to the original folder icons

-

A pop-up window will ask you if you are sure you want to restore the default icon for your folders.

-

Click Yes to confirm or No to cancel.

-

You have successfully restored the default icons for all folders on Windows with Dr. Folder.
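For completeness, here is the matching rough sketch of the reverse operation in the same hypothetical Python style (again, not Dr. Folder's own code): restoring the default icon essentially means deleting desktop.ini and clearing the folder's read-only bit.

```python
import ctypes
from pathlib import Path

FILE_ATTRIBUTE_READONLY = 0x01
FILE_ATTRIBUTE_NORMAL = 0x80

def restore_default_icon(folder: str) -> None:
    """Remove the desktop.ini customization so Explorer falls back to the stock icon."""
    folder_p = Path(folder)
    ini = folder_p / "desktop.ini"
    if ini.exists():
        # Clear hidden/system/read-only so the file can be deleted, then remove it.
        ctypes.windll.kernel32.SetFileAttributesW(str(ini), FILE_ATTRIBUTE_NORMAL)
        ini.unlink()
    # Drop the read-only bit from the folder itself.
    attrs = ctypes.windll.kernel32.GetFileAttributesW(str(folder_p))
    ctypes.windll.kernel32.SetFileAttributesW(
        str(folder_p), attrs & ~FILE_ATTRIBUTE_READONLY
    )
```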

-

Conclusion

-

In this article, we have shown you how to change icons for all folders on Windows with Dr. Folder, a folder icon changer software that can make your folders more personalized and organized.

-

We have also shown you how to restore the default icons for all folders on Windows with Dr. Folder in case you want to undo your changes.

-

We hope you have enjoyed this article and found it useful.

-

FAQs

- -

0a6ba089eb
-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Audiorealism Bassline 2 Abl2 Crackl [2021].md b/spaces/1gistliPinn/ChatGPT4/Examples/Audiorealism Bassline 2 Abl2 Crackl [2021].md deleted file mode 100644 index 4a77486d5a6bc305a667bc37746d20176aa64f80..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Audiorealism Bassline 2 Abl2 Crackl [2021].md +++ /dev/null @@ -1,6 +0,0 @@ -

Audiorealism Bassline 2 Abl2 Crackl


Download Zip: https://imgfil.com/2uxZSt



- -Then click the Open button. Installation on Windows 10 or later: sometimes Windows will give you a warning when you run the installer. If the .NET Framework software is not installed, you will be prompted to run the .NET Framework installer. Once you start it, the installation usually takes one to two minutes, and when it is complete you can proceed to the next step. Once installed, you can use the Close button to exit the installer. 8a78ff9644
-
-
-

diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Baixarsonar85completoportuguestorrent WORK.md b/spaces/1gistliPinn/ChatGPT4/Examples/Baixarsonar85completoportuguestorrent WORK.md deleted file mode 100644 index 36bb64c5e08c49f55d593db652fb6cec3b34da19..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Baixarsonar85completoportuguestorrent WORK.md +++ /dev/null @@ -1,6 +0,0 @@ -

baixarsonar85completoportuguestorrent


Download File ☆☆☆☆☆ https://imgfil.com/2uxZWZ



- - 3cee63e6c2
-
-
-

diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/El Diabolico Inconsciente Pdf Download [TOP].md b/spaces/1gistliPinn/ChatGPT4/Examples/El Diabolico Inconsciente Pdf Download [TOP].md deleted file mode 100644 index 371c38a615a55a4a9fa531db219ac7f9cffc6bcf..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/El Diabolico Inconsciente Pdf Download [TOP].md +++ /dev/null @@ -1,5 +0,0 @@ -
-

babahiddenz. el diabolico, 1939. Consciousness of the most ancient prophets, a book of. Diabola, diabolico inconsciente, life of uranus diabolo, culture of the antichrist, diabolico incansante. Diablo inconsciente - Autobiography - Second edition and e-mail - PDF under license. El Diabolo Inconsciente, or Diabolo Insane. Inconsciente, el diabolo y el propio, were the presenters of the war. El Diabolo Inconsciente. El Diabolo Insane.

-

el diabolico inconsciente pdf download


DOWNLOADhttps://imgfil.com/2uy1oQ



899543212b
-
-
\ No newline at end of file diff --git a/spaces/1line/AutoGPT/autogpt/workspace.py b/spaces/1line/AutoGPT/autogpt/workspace.py deleted file mode 100644 index 6fb0e3113eb2c1338edf7f86c6e162fc27c61e50..0000000000000000000000000000000000000000 --- a/spaces/1line/AutoGPT/autogpt/workspace.py +++ /dev/null @@ -1,47 +0,0 @@ -from __future__ import annotations - -import os -from pathlib import Path - -from autogpt.config import Config - -CFG = Config() - -# Set a dedicated folder for file I/O -WORKSPACE_PATH = Path(os.getcwd()) / "auto_gpt_workspace" - -# Create the directory if it doesn't exist -if not os.path.exists(WORKSPACE_PATH): - os.makedirs(WORKSPACE_PATH) - - -def path_in_workspace(relative_path: str | Path) -> Path: - """Get full path for item in workspace - - Parameters: - relative_path (str | Path): Path to translate into the workspace - - Returns: - Path: Absolute path for the given path in the workspace - """ - return safe_path_join(WORKSPACE_PATH, relative_path) - - -def safe_path_join(base: Path, *paths: str | Path) -> Path: - """Join one or more path components, asserting the resulting path is within the workspace. - - Args: - base (Path): The base path - *paths (str): The paths to join to the base path - - Returns: - Path: The joined path - """ - joined_path = base.joinpath(*paths).resolve() - - if CFG.restrict_to_workspace and not joined_path.is_relative_to(base): - raise ValueError( - f"Attempted to access path '{joined_path}' outside of workspace '{base}'." - ) - - return joined_path diff --git a/spaces/1phancelerku/anime-remove-background/Enjoy the Best Soccer Experience with Real Football 2009 - Download Now.md b/spaces/1phancelerku/anime-remove-background/Enjoy the Best Soccer Experience with Real Football 2009 - Download Now.md deleted file mode 100644 index c4d109dd44cbb36ed7604b51dea6101d4c607346..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Enjoy the Best Soccer Experience with Real Football 2009 - Download Now.md +++ /dev/null @@ -1,139 +0,0 @@ -
-

Download 2009 Real Football: A Review of the Best Soccer Game for Nintendo DS

-

If you are a fan of soccer, you probably know that there are many games available for different platforms. But if you own a Nintendo DS, you might be wondering which one is the best. Well, look no further than 2009 Real Football, also known as Real Soccer 2009 in some regions. This game is widely considered as the definitive soccer title for the DS, offering a never-before-seen experience with its unique features and capabilities. In this article, we will review 2009 Real Football and show you how to download it for your device.

-

download 2009 real football


Download Zip: https://jinyurl.com/2uNUqC



-

Introduction

-

What is 2009 Real Football?

-

2009 Real Football is a soccer game developed by Gameloft and published by Ubisoft for the Nintendo DS in 2008. It is part of the long-running Real Football series, which started in 2004 for mobile phones. The game features over 200 teams and players from around the world, including licensed ones from FIFA. It also boasts realistic physics, animations, and sound effects that make you feel like you are on the pitch.

-

Why should you download it?

-

There are many reasons why you should download 2009 Real Football for your Nintendo DS. Here are some of them:

- -

Features of 2009 Real Football

-

Realistic graphics and animations

-

One of the most impressive aspects of 2009 Real Football is its graphics and animations. The game uses a 3D engine that renders the players, stadiums, and crowds in high detail. The players have realistic faces, expressions, movements, and reactions that match their real-life counterparts. The stadiums are based on real ones from different countries, such as Wembley, Camp Nou, or Maracana. The crowds are also lively and responsive, cheering or booing depending on the situation.

-

Various game modes and options

-

Another great feature of 2009 Real Football is its variety of game modes and options. You can choose from different types of matches, such as exhibition, league, cup, penalty shootout, training, or custom. You can also adjust the difficulty level, time limit, weather conditions, camera angles, and rules to suit your preference. You can also create your own team and player with the editor mode, where you can customize their name, appearance, skills, nationality, and position.

-

download 2009 real football java game
-download 2009 real football for android
-download 2009 real football apk
-download 2009 real football jar
-download 2009 real football mobile game
-download 2009 real football 3d
-download 2009 real football hd
-download 2009 real football for nokia
-download 2009 real football for samsung
-download 2009 real football for pc
-download 2009 real football mod apk
-download 2009 real football gameloft
-download 2009 real football free
-download 2009 real football full version
-download 2009 real football offline
-download 2009 real football online
-download 2009 real football multiplayer
-download 2009 real football cheats
-download 2009 real football hack
-download 2009 real football unlimited money
-download 2009 real football latest version
-download 2009 real football old version
-download 2009 real football update
-download 2009 real football patch
-download 2009 real football crack
-download 2009 real football serial key
-download 2009 real football license key
-download 2009 real football activation key
-download 2009 real football registration key
-download 2009 real football product key
-download 2009 real football review
-download 2009 real football gameplay
-download 2009 real football trailer
-download 2009 real football video
-download 2009 real football tips and tricks
-download 2009 real football guide and walkthrough
-download 2009 real football best players and teams
-download 2009 real football skills and goals
-download 2009 real football tournaments and leagues
-download 2009 real football stadiums and kits
-how to download 2009 real football on android phone or tablet?
-how to install and play 2009 real football on pc or laptop?
-how to run and enjoy 2009 real football on java or symbian device?
-where to find and get the link to download 2009 real football?
-what are the requirements and specifications to download and play 2009 real football?

-

Online multiplayer and community

-

The last but not least feature of 2009 Real Football is its online multiplayer and community. You can connect your Nintendo DS to the internet via Wi-Fi or Bluetooth and play with or against other players around the world. You can also join the online community and chat with other fans, share your scores, and download new content. You can also access the official website of the game and get updates, news, tips, tricks, and more.

-

How to download 2009 Real Football

-

Requirements and compatibility

-

Before you download 2009 Real Football, you need to make sure that your Nintendo DS meets the requirements and compatibility of the game. Here are some of them:

- -

Sources and links

-

There are different sources and links where you can download 2009 Real Football for your Nintendo DS. Here are some of them:

- - - - - - - - - - - - - - - - - - - - - - - - - -
Source | Link
Nintendo eShop | https://www.nintendo.com/games/detail/real-soccer-2009-ds/
Gameloft official website | https://www.gameloft.com/en/game/real-football-2009-ds/
Ubisoft official website | https://www.ubisoft.com/en-us/game/real-football-2009-ds/
Amazon | https://www.amazon.com/Real-Soccer-2009-Nintendo-DS/dp/B001E27DLM/
eBay | https://www.ebay.com/sch/i.html?_nkw=real+football+2009+ds
-

Installation and setup

-

After you download 2009 Real Football from one of the sources and links above, you need to install and set up the game on your Nintendo DS. Here are the steps:

-
    -
  1. Insert the game card into the slot of your DS device and turn it on.
-
  2. Select the game icon from the menu and press A to start.
-
  3. Follow the instructions on the screen to create your profile, choose your language, and adjust your settings.
-
  4. Enjoy playing 2009 Real Football on your Nintendo DS!
-
-

Conclusion

Summary of the main points

-

In this article, we have reviewed 2009 Real Football, the best soccer game for Nintendo DS. We have discussed its features, such as realistic graphics and animations, various game modes and options, and online multiplayer and community. We have also shown you how to download it from different sources and links, and how to install and set it up on your device.

-

Recommendations and ratings

-

We highly recommend 2009 Real Football to anyone who loves soccer and owns a Nintendo DS. It is a fun, challenging, and immersive game that will keep you entertained for hours. It is also a great way to connect with other players and fans around the world. We give it a rating of 4.5 out of 5 stars, based on its gameplay, graphics, sound, and online features.

-

Call to action

-

If you are interested in downloading 2009 Real Football for your Nintendo DS, don't hesitate to do so. You can find it on the Nintendo eShop, Gameloft official website, Ubisoft official website, Amazon, or eBay. You can also visit the official website of the game for more information and updates. Don't miss this opportunity to enjoy the best soccer game for Nintendo DS!

-

FAQs

-

Here are some frequently asked questions about 2009 Real Football:

-

401be4b1e0
-
-
\ No newline at end of file diff --git a/spaces/1toTree/lora_test/ppdiffusers/models/__init__.py b/spaces/1toTree/lora_test/ppdiffusers/models/__init__.py deleted file mode 100644 index 3208e987f694fabf7569ff9e586bb5eacb0d912f..0000000000000000000000000000000000000000 --- a/spaces/1toTree/lora_test/ppdiffusers/models/__init__.py +++ /dev/null @@ -1,25 +0,0 @@ -# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. -# Copyright 2022 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# flake8: noqa - -from ..utils import is_paddle_available - -if is_paddle_available(): - from .attention import Transformer2DModel - from .prior_transformer import PriorTransformer - from .unet_1d import UNet1DModel - from .unet_2d import UNet2DModel - from .unet_2d_condition import UNet2DConditionModel - from .vae import AutoencoderKL, VQModel diff --git a/spaces/1toTree/lora_test/ppdiffusers/pipelines/score_sde_ve/pipeline_score_sde_ve.py b/spaces/1toTree/lora_test/ppdiffusers/pipelines/score_sde_ve/pipeline_score_sde_ve.py deleted file mode 100644 index 58f340bf2b849005a49efebbfb8bed4d56d694d6..0000000000000000000000000000000000000000 --- a/spaces/1toTree/lora_test/ppdiffusers/pipelines/score_sde_ve/pipeline_score_sde_ve.py +++ /dev/null @@ -1,100 +0,0 @@ -# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. -# Copyright 2022 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -from typing import List, Optional, Tuple, Union - -import paddle - -from ...models import UNet2DModel -from ...pipeline_utils import DiffusionPipeline, ImagePipelineOutput -from ...schedulers import ScoreSdeVeScheduler - - -class ScoreSdeVePipeline(DiffusionPipeline): - r""" - Parameters: - This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the - library implements for all the pipelines (such as downloading or saving, running on a particular xxxx, etc.) - unet ([`UNet2DModel`]): U-Net architecture to denoise the encoded image. scheduler ([`SchedulerMixin`]): - The [`ScoreSdeVeScheduler`] scheduler to be used in combination with `unet` to denoise the encoded image. 
- """ - unet: UNet2DModel - scheduler: ScoreSdeVeScheduler - - def __init__(self, unet: UNet2DModel, scheduler: DiffusionPipeline): - super().__init__() - self.register_modules(unet=unet, scheduler=scheduler) - - @paddle.no_grad() - def __call__( - self, - batch_size: int = 1, - num_inference_steps: int = 2000, - generator: Optional[Union[paddle.Generator, List[paddle.Generator]]] = None, - output_type: Optional[str] = "pil", - return_dict: bool = True, - **kwargs, - ) -> Union[ImagePipelineOutput, Tuple]: - r""" - Args: - batch_size (`int`, *optional*, defaults to 1): - The number of images to generate. - generator (`paddle.Generator`, *optional*): - One or a list of paddle generator(s) to make generation deterministic. - output_type (`str`, *optional*, defaults to `"pil"`): - The output format of the generate image. Choose between - [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipeline_utils.ImagePipelineOutput`] instead of a plain tuple. - - Returns: - [`~pipeline_utils.ImagePipelineOutput`] or `tuple`: [`~pipelines.utils.ImagePipelineOutput`] if - `return_dict` is True, otherwise a `tuple. When returning a tuple, the first element is a list with the - generated images. - """ - - img_size = self.unet.config.sample_size - shape = (batch_size, 3, img_size, img_size) - - model = self.unet - - sample = paddle.randn(shape, generator=generator) * self.scheduler.init_noise_sigma - - self.scheduler.set_timesteps(num_inference_steps) - self.scheduler.set_sigmas(num_inference_steps) - - for i, t in enumerate(self.progress_bar(self.scheduler.timesteps)): - sigma_t = self.scheduler.sigmas[i] * paddle.ones((shape[0],)) - - # correction step - for _ in range(self.scheduler.config.correct_steps): - model_output = self.unet(sample, sigma_t).sample - sample = self.scheduler.step_correct(model_output, sample, generator=generator).prev_sample - - # prediction step - model_output = model(sample, sigma_t).sample - output = self.scheduler.step_pred(model_output, t, sample, generator=generator) - - sample, sample_mean = output.prev_sample, output.prev_sample_mean - - sample = sample_mean.clip(0, 1) - sample = sample.transpose([0, 2, 3, 1]).numpy() - if output_type == "pil": - sample = self.numpy_to_pil(sample) - - if not return_dict: - return (sample,) - - return ImagePipelineOutput(images=sample) diff --git a/spaces/1toTree/lora_test/ppdiffusers/utils/outputs.py b/spaces/1toTree/lora_test/ppdiffusers/utils/outputs.py deleted file mode 100644 index c2682001b7d3cb78139f914ba346f73ba9ff8fc8..0000000000000000000000000000000000000000 --- a/spaces/1toTree/lora_test/ppdiffusers/utils/outputs.py +++ /dev/null @@ -1,117 +0,0 @@ -# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. -# Copyright 2022 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-""" -Generic utilities -""" - -from collections import OrderedDict -from dataclasses import fields -from typing import Any, Tuple - -import numpy as np - -from .import_utils import is_paddle_available - - -def is_tensor(x): - """ - Tests if `x` is a `paddle.Tensor` or `np.ndarray`. - """ - if is_paddle_available(): - import paddle - - return paddle.is_tensor(x) - - return isinstance(x, np.ndarray) - - -class BaseOutput(OrderedDict): - """ - Base class for all model outputs as dataclass. Has a `__getitem__` that allows indexing by integer or slice (like a - tuple) or strings (like a dictionary) that will ignore the `None` attributes. Otherwise behaves like a regular - python dictionary. - - - - You can't unpack a `BaseOutput` directly. Use the [`~utils.BaseOutput.to_tuple`] method to convert it to a tuple - before. - - - """ - - def __post_init__(self): - class_fields = fields(self) - - # Safety and consistency checks - if not len(class_fields): - raise ValueError(f"{self.__class__.__name__} has no fields.") - - first_field = getattr(self, class_fields[0].name) - other_fields_are_none = all(getattr(self, field.name) is None for field in class_fields[1:]) - - if other_fields_are_none and isinstance(first_field, dict): - for key, value in first_field.items(): - self[key] = value - else: - for field in class_fields: - v = getattr(self, field.name) - if v is not None: - self[field.name] = v - - def __delitem__(self, *args, **kwargs): - raise Exception(f"You cannot use ``__delitem__`` on a {self.__class__.__name__} instance.") - - def setdefault(self, *args, **kwargs): - raise Exception(f"You cannot use ``setdefault`` on a {self.__class__.__name__} instance.") - - def pop(self, *args, **kwargs): - raise Exception(f"You cannot use ``pop`` on a {self.__class__.__name__} instance.") - - def update(self, *args, **kwargs): - raise Exception(f"You cannot use ``update`` on a {self.__class__.__name__} instance.") - - def __getitem__(self, k): - if isinstance(k, str): - inner_dict = {k: v for (k, v) in self.items()} - return inner_dict[k] - else: - return self.to_tuple()[k] - - def __setattr__(self, name, value): - if name in self.keys() and value is not None: - # Don't call self.__setitem__ to avoid recursion errors - super().__setitem__(name, value) - super().__setattr__(name, value) - - def __setitem__(self, key, value): - # Will raise a KeyException if needed - super().__setitem__(key, value) - # Don't call self.__setattr__ to avoid recursion errors - super().__setattr__(key, value) - - def to_tuple(self) -> Tuple[Any]: - """ - Convert self to a tuple containing all the attributes/keys that are not `None`. - """ - # try to fix: https://github.com/PaddlePaddle/PaddleNLP/issues/3355 - # when trying to get the keys of `OrderedDict`, `keys` method return empty values. 
- # TODO(wj-Mcat): this bug should be fixed in Paddle framework - tuples = () - for field in fields(self): - if getattr(self, field.name, None) is None: - continue - tuples = tuples + (getattr(self, field.name),) - - return tuples diff --git a/spaces/AI-Hobbyist/Hoyo-RVC/Changelog_CN.md b/spaces/AI-Hobbyist/Hoyo-RVC/Changelog_CN.md deleted file mode 100644 index 42a71ee366a0c21afc0c8e05a42cd8508aa2db0a..0000000000000000000000000000000000000000 --- a/spaces/AI-Hobbyist/Hoyo-RVC/Changelog_CN.md +++ /dev/null @@ -1,80 +0,0 @@ -### 20230618更新 -- v2增加32k和48k两个新预训练模型 -- 修复非f0模型推理报错 -- 对于超过一小时的训练集的索引建立环节,自动kmeans缩小特征处理以加速索引训练、加入和查询 -- 附送一个人声转吉他玩具仓库 -- 数据处理剔除异常值切片 -- onnx导出选项卡 - -失败的实验: -- ~~特征检索增加时序维度:寄,没啥效果~~ -- ~~特征检索增加PCAR降维可选项:寄,数据大用kmeans缩小数据量,数据小降维操作耗时比省下的匹配耗时还多~~ -- ~~支持onnx推理(附带仅推理的小压缩包):寄,生成nsf还是需要pytorch~~ -- ~~训练时在音高、gender、eq、噪声等方面对输入进行随机增强:寄,没啥效果~~ - -todolist: -- 接入小型声码器调研 -- 训练集音高识别支持crepe -- crepe的精度支持和RVC-config同步 -- 对接F0编辑器 - - -### 20230528更新 -- 增加v2的jupyter notebook,韩文changelog,增加一些环境依赖 -- 增加呼吸、清辅音、齿音保护模式 -- 支持crepe-full推理 -- UVR5人声伴奏分离加上3个去延迟模型和MDX-Net去混响模型,增加HP3人声提取模型 -- 索引名称增加版本和实验名称 -- 人声伴奏分离、推理批量导出增加音频导出格式选项 -- 废弃32k模型的训练 - -### 20230513更新 -- 清除一键包内部老版本runtime内残留的infer_pack和uvr5_pack -- 修复训练集预处理伪多进程的bug -- 增加harvest识别音高可选通过中值滤波削弱哑音现象,可调整中值滤波半径 -- 导出音频增加后处理重采样 -- 训练n_cpu进程数从"仅调整f0提取"改为"调整数据预处理和f0提取" -- 自动检测logs文件夹下的index路径,提供下拉列表功能 -- tab页增加"常见问题解答"(也可参考github-rvc-wiki) -- 相同路径的输入音频推理增加了音高缓存(用途:使用harvest音高提取,整个pipeline会经历漫长且重复的音高提取过程,如果不使用缓存,实验不同音色、索引、音高中值滤波半径参数的用户在第一次测试后的等待结果会非常痛苦) - -### 20230514更新 -- 音量包络对齐输入混合(可以缓解“输入静音输出小幅度噪声”的问题。如果输入音频背景底噪大则不建议开启,默认不开启(值为1可视为不开启)) -- 支持按照指定频率保存提取的小模型(假如你想尝试不同epoch下的推理效果,但是不想保存所有大checkpoint并且每次都要ckpt手工处理提取小模型,这项功能会非常实用) -- 通过设置环境变量解决服务端开了系统全局代理导致浏览器连接错误的问题 -- 支持v2预训练模型(目前只公开了40k版本进行测试,另外2个采样率还没有训练完全) -- 推理前限制超过1的过大音量 -- 微调数据预处理参数 - - -### 20230409更新 -- 修正训练参数,提升显卡平均利用率,A100最高从25%提升至90%左右,V100:50%->90%左右,2060S:60%->85%左右,P40:25%->95%左右,训练速度显著提升 -- 修正参数:总batch_size改为每张卡的batch_size -- 修正total_epoch:最大限制100解锁至1000;默认10提升至默认20 -- 修复ckpt提取识别是否带音高错误导致推理异常的问题 -- 修复分布式训练每个rank都保存一次ckpt的问题 -- 特征提取进行nan特征过滤 -- 修复静音输入输出随机辅音or噪声的问题(老版模型需要重做训练集重训) - -### 20230416更新 -- 新增本地实时变声迷你GUI,双击go-realtime-gui.bat启动 -- 训练推理均对<50Hz的频段进行滤波过滤 -- 训练推理音高提取pyworld最低音高从默认80下降至50,50-80hz间的男声低音不会哑 -- WebUI支持根据系统区域变更语言(现支持en_US,ja_JP,zh_CN,zh_HK,zh_SG,zh_TW,不支持的默认en_US) -- 修正部分显卡识别(例如V100-16G识别失败,P4识别失败) - -### 20230428更新 -- 升级faiss索引设置,速度更快,质量更高 -- 取消total_npy依赖,后续分享模型不再需要填写total_npy -- 解锁16系限制。4G显存GPU给到4G的推理设置。 -- 修复部分音频格式下UVR5人声伴奏分离的bug -- 实时变声迷你gui增加对非40k与不懈怠音高模型的支持 - -### 后续计划: -功能: -- 支持多人训练选项卡(至多4人) - -底模: -- 收集呼吸wav加入训练集修正呼吸变声电音的问题 -- 我们正在训练增加了歌声训练集的底模,未来会公开 - diff --git a/spaces/AIFILMS/ControlNet-Video/model.py b/spaces/AIFILMS/ControlNet-Video/model.py deleted file mode 100644 index f426fd606e4c1fd4fc23785218aaa0b0fa6a5279..0000000000000000000000000000000000000000 --- a/spaces/AIFILMS/ControlNet-Video/model.py +++ /dev/null @@ -1,760 +0,0 @@ -# This file is adapted from gradio_*.py in https://github.com/lllyasviel/ControlNet/tree/f4748e3630d8141d7765e2bd9b1e348f47847707 -# The original license file is LICENSE.ControlNet in this repo. 
-from __future__ import annotations - -import pathlib -import random -import shlex -import subprocess -import sys - -import cv2 -import einops -import numpy as np -import torch -from huggingface_hub import hf_hub_url -from pytorch_lightning import seed_everything - -sys.path.append('ControlNet') - -import config -from annotator.canny import apply_canny -from annotator.hed import apply_hed, nms -from annotator.midas import apply_midas -from annotator.mlsd import apply_mlsd -from annotator.openpose import apply_openpose -from annotator.uniformer import apply_uniformer -from annotator.util import HWC3, resize_image -from cldm.model import create_model, load_state_dict -from ldm.models.diffusion.ddim import DDIMSampler -from share import * - - -MODEL_NAMES = { - 'canny': 'control_canny-fp16.safetensors', - 'hough': 'control_mlsd-fp16.safetensors', - 'hed': 'control_hed-fp16.safetensors', - 'scribble': 'control_scribble-fp16.safetensors', - 'pose': 'control_openpose-fp16.safetensors', - 'seg': 'control_seg-fp16.safetensors', - 'depth': 'control_depth-fp16.safetensors', - 'normal': 'control_normal-fp16.safetensors', -} - -MODEL_REPO = 'webui/ControlNet-modules-safetensors' - -DEFAULT_BASE_MODEL_REPO = 'runwayml/stable-diffusion-v1-5' -DEFAULT_BASE_MODEL_FILENAME = 'v1-5-pruned-emaonly.safetensors' -DEFAULT_BASE_MODEL_URL = 'https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.safetensors' - -class Model: - def __init__(self, - model_config_path: str = 'ControlNet/models/cldm_v15.yaml', - model_dir: str = 'models'): - self.device = torch.device( - 'cuda:0' if torch.cuda.is_available() else 'cpu') - self.model = create_model(model_config_path).to(self.device) - self.ddim_sampler = DDIMSampler(self.model) - self.task_name = '' - - self.base_model_url = '' - - self.model_dir = pathlib.Path(model_dir) - self.model_dir.mkdir(exist_ok=True, parents=True) - - self.download_models() - self.set_base_model(DEFAULT_BASE_MODEL_REPO, - DEFAULT_BASE_MODEL_FILENAME) - - def set_base_model(self, model_id: str, filename: str) -> str: - if not model_id or not filename: - return self.base_model_url - base_model_url = hf_hub_url(model_id, filename) - if base_model_url != self.base_model_url: - self.load_base_model(base_model_url) - self.base_model_url = base_model_url - return self.base_model_url - - - def download_base_model(self, model_url: str) -> pathlib.Path: - self.model_dir.mkdir(exist_ok=True, parents=True) - model_name = model_url.split('/')[-1] - out_path = self.model_dir / model_name - if not out_path.exists(): - subprocess.run(shlex.split(f'wget {model_url} -O {out_path}')) - return out_path - - def load_base_model(self, model_url: str) -> None: - model_path = self.download_base_model(model_url) - self.model.load_state_dict(load_state_dict(model_path, - location=self.device.type), - strict=False) - - def load_weight(self, task_name: str) -> None: - if task_name == self.task_name: - return - weight_path = self.get_weight_path(task_name) - self.model.control_model.load_state_dict( - load_state_dict(weight_path, location=self.device.type)) - self.task_name = task_name - - def get_weight_path(self, task_name: str) -> str: - if 'scribble' in task_name: - task_name = 'scribble' - return f'{self.model_dir}/{MODEL_NAMES[task_name]}' - - def download_models(self) -> None: - self.model_dir.mkdir(exist_ok=True, parents=True) - for name in MODEL_NAMES.values(): - out_path = self.model_dir / name - if out_path.exists(): - continue - model_url = hf_hub_url(MODEL_REPO, name) - 
subprocess.run(shlex.split(f'wget {model_url} -O {out_path}')) - - @torch.inference_mode() - def process_canny(self, input_image, prompt, a_prompt, n_prompt, - num_samples, image_resolution, ddim_steps, scale, seed, - eta, low_threshold, high_threshold): - self.load_weight('canny') - - img = resize_image(HWC3(input_image), image_resolution) - H, W, C = img.shape - - detected_map = apply_canny(img, low_threshold, high_threshold) - detected_map = HWC3(detected_map) - - control = torch.from_numpy(detected_map.copy()).float().cuda() / 255.0 - control = torch.stack([control for _ in range(num_samples)], dim=0) - control = einops.rearrange(control, 'b h w c -> b c h w').clone() - - if seed == -1: - seed = random.randint(0, 65535) - seed_everything(seed) - - if config.save_memory: - self.model.low_vram_shift(is_diffusing=False) - - cond = { - 'c_concat': [control], - 'c_crossattn': [ - self.model.get_learned_conditioning( - [prompt + ', ' + a_prompt] * num_samples) - ] - } - un_cond = { - 'c_concat': [control], - 'c_crossattn': - [self.model.get_learned_conditioning([n_prompt] * num_samples)] - } - shape = (4, H // 8, W // 8) - - if config.save_memory: - self.model.low_vram_shift(is_diffusing=True) - - samples, intermediates = self.ddim_sampler.sample( - ddim_steps, - num_samples, - shape, - cond, - verbose=False, - eta=eta, - unconditional_guidance_scale=scale, - unconditional_conditioning=un_cond) - - if config.save_memory: - self.model.low_vram_shift(is_diffusing=False) - - x_samples = self.model.decode_first_stage(samples) - x_samples = ( - einops.rearrange(x_samples, 'b c h w -> b h w c') * 127.5 + - 127.5).cpu().numpy().clip(0, 255).astype(np.uint8) - - results = [x_samples[i] for i in range(num_samples)] - return [255 - detected_map] + results - - @torch.inference_mode() - def process_hough(self, input_image, prompt, a_prompt, n_prompt, - num_samples, image_resolution, detect_resolution, - ddim_steps, scale, seed, eta, value_threshold, - distance_threshold): - self.load_weight('hough') - - input_image = HWC3(input_image) - detected_map = apply_mlsd(resize_image(input_image, detect_resolution), - value_threshold, distance_threshold) - detected_map = HWC3(detected_map) - img = resize_image(input_image, image_resolution) - H, W, C = img.shape - - detected_map = cv2.resize(detected_map, (W, H), - interpolation=cv2.INTER_NEAREST) - - control = torch.from_numpy(detected_map.copy()).float().cuda() / 255.0 - control = torch.stack([control for _ in range(num_samples)], dim=0) - control = einops.rearrange(control, 'b h w c -> b c h w').clone() - - if seed == -1: - seed = random.randint(0, 65535) - seed_everything(seed) - - if config.save_memory: - self.model.low_vram_shift(is_diffusing=False) - - cond = { - 'c_concat': [control], - 'c_crossattn': [ - self.model.get_learned_conditioning( - [prompt + ', ' + a_prompt] * num_samples) - ] - } - un_cond = { - 'c_concat': [control], - 'c_crossattn': - [self.model.get_learned_conditioning([n_prompt] * num_samples)] - } - shape = (4, H // 8, W // 8) - - if config.save_memory: - self.model.low_vram_shift(is_diffusing=True) - - samples, intermediates = self.ddim_sampler.sample( - ddim_steps, - num_samples, - shape, - cond, - verbose=False, - eta=eta, - unconditional_guidance_scale=scale, - unconditional_conditioning=un_cond) - - if config.save_memory: - self.model.low_vram_shift(is_diffusing=False) - - x_samples = self.model.decode_first_stage(samples) - x_samples = ( - einops.rearrange(x_samples, 'b c h w -> b h w c') * 127.5 + - 127.5).cpu().numpy().clip(0, 
255).astype(np.uint8) - - results = [x_samples[i] for i in range(num_samples)] - return [ - 255 - cv2.dilate(detected_map, - np.ones(shape=(3, 3), dtype=np.uint8), - iterations=1) - ] + results - - @torch.inference_mode() - def process_hed(self, input_image, prompt, a_prompt, n_prompt, num_samples, - image_resolution, detect_resolution, ddim_steps, scale, - seed, eta): - self.load_weight('hed') - - input_image = HWC3(input_image) - detected_map = apply_hed(resize_image(input_image, detect_resolution)) - detected_map = HWC3(detected_map) - img = resize_image(input_image, image_resolution) - H, W, C = img.shape - - detected_map = cv2.resize(detected_map, (W, H), - interpolation=cv2.INTER_LINEAR) - - control = torch.from_numpy(detected_map.copy()).float().cuda() / 255.0 - control = torch.stack([control for _ in range(num_samples)], dim=0) - control = einops.rearrange(control, 'b h w c -> b c h w').clone() - - if seed == -1: - seed = random.randint(0, 65535) - seed_everything(seed) - - if config.save_memory: - self.model.low_vram_shift(is_diffusing=False) - - cond = { - 'c_concat': [control], - 'c_crossattn': [ - self.model.get_learned_conditioning( - [prompt + ', ' + a_prompt] * num_samples) - ] - } - un_cond = { - 'c_concat': [control], - 'c_crossattn': - [self.model.get_learned_conditioning([n_prompt] * num_samples)] - } - shape = (4, H // 8, W // 8) - - if config.save_memory: - self.model.low_vram_shift(is_diffusing=True) - - samples, intermediates = self.ddim_sampler.sample( - ddim_steps, - num_samples, - shape, - cond, - verbose=False, - eta=eta, - unconditional_guidance_scale=scale, - unconditional_conditioning=un_cond) - - if config.save_memory: - self.model.low_vram_shift(is_diffusing=False) - - x_samples = self.model.decode_first_stage(samples) - x_samples = ( - einops.rearrange(x_samples, 'b c h w -> b h w c') * 127.5 + - 127.5).cpu().numpy().clip(0, 255).astype(np.uint8) - - results = [x_samples[i] for i in range(num_samples)] - return [detected_map] + results - - @torch.inference_mode() - def process_scribble(self, input_image, prompt, a_prompt, n_prompt, - num_samples, image_resolution, ddim_steps, scale, - seed, eta): - self.load_weight('scribble') - - img = resize_image(HWC3(input_image), image_resolution) - H, W, C = img.shape - - detected_map = np.zeros_like(img, dtype=np.uint8) - detected_map[np.min(img, axis=2) < 127] = 255 - - control = torch.from_numpy(detected_map.copy()).float().cuda() / 255.0 - control = torch.stack([control for _ in range(num_samples)], dim=0) - control = einops.rearrange(control, 'b h w c -> b c h w').clone() - - if seed == -1: - seed = random.randint(0, 65535) - seed_everything(seed) - - if config.save_memory: - self.model.low_vram_shift(is_diffusing=False) - - cond = { - 'c_concat': [control], - 'c_crossattn': [ - self.model.get_learned_conditioning( - [prompt + ', ' + a_prompt] * num_samples) - ] - } - un_cond = { - 'c_concat': [control], - 'c_crossattn': - [self.model.get_learned_conditioning([n_prompt] * num_samples)] - } - shape = (4, H // 8, W // 8) - - if config.save_memory: - self.model.low_vram_shift(is_diffusing=True) - - samples, intermediates = self.ddim_sampler.sample( - ddim_steps, - num_samples, - shape, - cond, - verbose=False, - eta=eta, - unconditional_guidance_scale=scale, - unconditional_conditioning=un_cond) - - if config.save_memory: - self.model.low_vram_shift(is_diffusing=False) - - x_samples = self.model.decode_first_stage(samples) - x_samples = ( - einops.rearrange(x_samples, 'b c h w -> b h w c') * 127.5 + - 
127.5).cpu().numpy().clip(0, 255).astype(np.uint8) - - results = [x_samples[i] for i in range(num_samples)] - return [255 - detected_map] + results - - @torch.inference_mode() - def process_scribble_interactive(self, input_image, prompt, a_prompt, - n_prompt, num_samples, image_resolution, - ddim_steps, scale, seed, eta): - self.load_weight('scribble') - - img = resize_image(HWC3(input_image['mask'][:, :, 0]), - image_resolution) - H, W, C = img.shape - - detected_map = np.zeros_like(img, dtype=np.uint8) - detected_map[np.min(img, axis=2) > 127] = 255 - - control = torch.from_numpy(detected_map.copy()).float().cuda() / 255.0 - control = torch.stack([control for _ in range(num_samples)], dim=0) - control = einops.rearrange(control, 'b h w c -> b c h w').clone() - - if seed == -1: - seed = random.randint(0, 65535) - seed_everything(seed) - - if config.save_memory: - self.model.low_vram_shift(is_diffusing=False) - - cond = { - 'c_concat': [control], - 'c_crossattn': [ - self.model.get_learned_conditioning( - [prompt + ', ' + a_prompt] * num_samples) - ] - } - un_cond = { - 'c_concat': [control], - 'c_crossattn': - [self.model.get_learned_conditioning([n_prompt] * num_samples)] - } - shape = (4, H // 8, W // 8) - - if config.save_memory: - self.model.low_vram_shift(is_diffusing=True) - - samples, intermediates = self.ddim_sampler.sample( - ddim_steps, - num_samples, - shape, - cond, - verbose=False, - eta=eta, - unconditional_guidance_scale=scale, - unconditional_conditioning=un_cond) - - if config.save_memory: - self.model.low_vram_shift(is_diffusing=False) - - x_samples = self.model.decode_first_stage(samples) - x_samples = ( - einops.rearrange(x_samples, 'b c h w -> b h w c') * 127.5 + - 127.5).cpu().numpy().clip(0, 255).astype(np.uint8) - - results = [x_samples[i] for i in range(num_samples)] - return [255 - detected_map] + results - - @torch.inference_mode() - def process_fake_scribble(self, input_image, prompt, a_prompt, n_prompt, - num_samples, image_resolution, detect_resolution, - ddim_steps, scale, seed, eta): - self.load_weight('scribble') - - input_image = HWC3(input_image) - detected_map = apply_hed(resize_image(input_image, detect_resolution)) - detected_map = HWC3(detected_map) - img = resize_image(input_image, image_resolution) - H, W, C = img.shape - - detected_map = cv2.resize(detected_map, (W, H), - interpolation=cv2.INTER_LINEAR) - detected_map = nms(detected_map, 127, 3.0) - detected_map = cv2.GaussianBlur(detected_map, (0, 0), 3.0) - detected_map[detected_map > 4] = 255 - detected_map[detected_map < 255] = 0 - - control = torch.from_numpy(detected_map.copy()).float().cuda() / 255.0 - control = torch.stack([control for _ in range(num_samples)], dim=0) - control = einops.rearrange(control, 'b h w c -> b c h w').clone() - - if seed == -1: - seed = random.randint(0, 65535) - seed_everything(seed) - - if config.save_memory: - self.model.low_vram_shift(is_diffusing=False) - - cond = { - 'c_concat': [control], - 'c_crossattn': [ - self.model.get_learned_conditioning( - [prompt + ', ' + a_prompt] * num_samples) - ] - } - un_cond = { - 'c_concat': [control], - 'c_crossattn': - [self.model.get_learned_conditioning([n_prompt] * num_samples)] - } - shape = (4, H // 8, W // 8) - - if config.save_memory: - self.model.low_vram_shift(is_diffusing=True) - - samples, intermediates = self.ddim_sampler.sample( - ddim_steps, - num_samples, - shape, - cond, - verbose=False, - eta=eta, - unconditional_guidance_scale=scale, - unconditional_conditioning=un_cond) - - if config.save_memory: - 
self.model.low_vram_shift(is_diffusing=False) - - x_samples = self.model.decode_first_stage(samples) - x_samples = ( - einops.rearrange(x_samples, 'b c h w -> b h w c') * 127.5 + - 127.5).cpu().numpy().clip(0, 255).astype(np.uint8) - - results = [x_samples[i] for i in range(num_samples)] - return [255 - detected_map] + results - - @torch.inference_mode() - def process_pose(self, input_image, prompt, a_prompt, n_prompt, - num_samples, image_resolution, detect_resolution, - ddim_steps, scale, seed, eta): - self.load_weight('pose') - - input_image = HWC3(input_image) - detected_map, _ = apply_openpose( - resize_image(input_image, detect_resolution)) - detected_map = HWC3(detected_map) - img = resize_image(input_image, image_resolution) - H, W, C = img.shape - - detected_map = cv2.resize(detected_map, (W, H), - interpolation=cv2.INTER_NEAREST) - - control = torch.from_numpy(detected_map.copy()).float().cuda() / 255.0 - control = torch.stack([control for _ in range(num_samples)], dim=0) - control = einops.rearrange(control, 'b h w c -> b c h w').clone() - - if seed == -1: - seed = random.randint(0, 65535) - seed_everything(seed) - - if config.save_memory: - self.model.low_vram_shift(is_diffusing=False) - - cond = { - 'c_concat': [control], - 'c_crossattn': [ - self.model.get_learned_conditioning( - [prompt + ', ' + a_prompt] * num_samples) - ] - } - un_cond = { - 'c_concat': [control], - 'c_crossattn': - [self.model.get_learned_conditioning([n_prompt] * num_samples)] - } - shape = (4, H // 8, W // 8) - - if config.save_memory: - self.model.low_vram_shift(is_diffusing=True) - - samples, intermediates = self.ddim_sampler.sample( - ddim_steps, - num_samples, - shape, - cond, - verbose=False, - eta=eta, - unconditional_guidance_scale=scale, - unconditional_conditioning=un_cond) - - if config.save_memory: - self.model.low_vram_shift(is_diffusing=False) - - x_samples = self.model.decode_first_stage(samples) - x_samples = ( - einops.rearrange(x_samples, 'b c h w -> b h w c') * 127.5 + - 127.5).cpu().numpy().clip(0, 255).astype(np.uint8) - - results = [x_samples[i] for i in range(num_samples)] - return [detected_map] + results - - @torch.inference_mode() - def process_seg(self, input_image, prompt, a_prompt, n_prompt, num_samples, - image_resolution, detect_resolution, ddim_steps, scale, - seed, eta): - self.load_weight('seg') - - input_image = HWC3(input_image) - detected_map = apply_uniformer( - resize_image(input_image, detect_resolution)) - img = resize_image(input_image, image_resolution) - H, W, C = img.shape - - detected_map = cv2.resize(detected_map, (W, H), - interpolation=cv2.INTER_NEAREST) - - control = torch.from_numpy(detected_map.copy()).float().cuda() / 255.0 - control = torch.stack([control for _ in range(num_samples)], dim=0) - control = einops.rearrange(control, 'b h w c -> b c h w').clone() - - if seed == -1: - seed = random.randint(0, 65535) - seed_everything(seed) - - if config.save_memory: - self.model.low_vram_shift(is_diffusing=False) - - cond = { - 'c_concat': [control], - 'c_crossattn': [ - self.model.get_learned_conditioning( - [prompt + ', ' + a_prompt] * num_samples) - ] - } - un_cond = { - 'c_concat': [control], - 'c_crossattn': - [self.model.get_learned_conditioning([n_prompt] * num_samples)] - } - shape = (4, H // 8, W // 8) - - if config.save_memory: - self.model.low_vram_shift(is_diffusing=True) - - samples, intermediates = self.ddim_sampler.sample( - ddim_steps, - num_samples, - shape, - cond, - verbose=False, - eta=eta, - unconditional_guidance_scale=scale, - 
unconditional_conditioning=un_cond) - - if config.save_memory: - self.model.low_vram_shift(is_diffusing=False) - - x_samples = self.model.decode_first_stage(samples) - x_samples = ( - einops.rearrange(x_samples, 'b c h w -> b h w c') * 127.5 + - 127.5).cpu().numpy().clip(0, 255).astype(np.uint8) - - results = [x_samples[i] for i in range(num_samples)] - return [detected_map] + results - - @torch.inference_mode() - def process_depth(self, input_image, prompt, a_prompt, n_prompt, - num_samples, image_resolution, detect_resolution, - ddim_steps, scale, seed, eta): - self.load_weight('depth') - - input_image = HWC3(input_image) - detected_map, _ = apply_midas( - resize_image(input_image, detect_resolution)) - detected_map = HWC3(detected_map) - img = resize_image(input_image, image_resolution) - H, W, C = img.shape - - detected_map = cv2.resize(detected_map, (W, H), - interpolation=cv2.INTER_LINEAR) - - control = torch.from_numpy(detected_map.copy()).float().cuda() / 255.0 - control = torch.stack([control for _ in range(num_samples)], dim=0) - control = einops.rearrange(control, 'b h w c -> b c h w').clone() - - if seed == -1: - seed = random.randint(0, 65535) - seed_everything(seed) - - if config.save_memory: - self.model.low_vram_shift(is_diffusing=False) - - cond = { - 'c_concat': [control], - 'c_crossattn': [ - self.model.get_learned_conditioning( - [prompt + ', ' + a_prompt] * num_samples) - ] - } - un_cond = { - 'c_concat': [control], - 'c_crossattn': - [self.model.get_learned_conditioning([n_prompt] * num_samples)] - } - shape = (4, H // 8, W // 8) - - if config.save_memory: - self.model.low_vram_shift(is_diffusing=True) - - samples, intermediates = self.ddim_sampler.sample( - ddim_steps, - num_samples, - shape, - cond, - verbose=False, - eta=eta, - unconditional_guidance_scale=scale, - unconditional_conditioning=un_cond) - - if config.save_memory: - self.model.low_vram_shift(is_diffusing=False) - - x_samples = self.model.decode_first_stage(samples) - x_samples = ( - einops.rearrange(x_samples, 'b c h w -> b h w c') * 127.5 + - 127.5).cpu().numpy().clip(0, 255).astype(np.uint8) - - results = [x_samples[i] for i in range(num_samples)] - return [detected_map] + results - - @torch.inference_mode() - def process_normal(self, input_image, prompt, a_prompt, n_prompt, - num_samples, image_resolution, detect_resolution, - ddim_steps, scale, seed, eta, bg_threshold): - self.load_weight('normal') - - input_image = HWC3(input_image) - _, detected_map = apply_midas(resize_image(input_image, - detect_resolution), - bg_th=bg_threshold) - detected_map = HWC3(detected_map) - img = resize_image(input_image, image_resolution) - H, W, C = img.shape - - detected_map = cv2.resize(detected_map, (W, H), - interpolation=cv2.INTER_LINEAR) - - control = torch.from_numpy( - detected_map[:, :, ::-1].copy()).float().cuda() / 255.0 - control = torch.stack([control for _ in range(num_samples)], dim=0) - control = einops.rearrange(control, 'b h w c -> b c h w').clone() - - if seed == -1: - seed = random.randint(0, 65535) - seed_everything(seed) - - if config.save_memory: - self.model.low_vram_shift(is_diffusing=False) - - cond = { - 'c_concat': [control], - 'c_crossattn': [ - self.model.get_learned_conditioning( - [prompt + ', ' + a_prompt] * num_samples) - ] - } - un_cond = { - 'c_concat': [control], - 'c_crossattn': - [self.model.get_learned_conditioning([n_prompt] * num_samples)] - } - shape = (4, H // 8, W // 8) - - if config.save_memory: - self.model.low_vram_shift(is_diffusing=True) - - samples, intermediates = 
self.ddim_sampler.sample( - ddim_steps, - num_samples, - shape, - cond, - verbose=False, - eta=eta, - unconditional_guidance_scale=scale, - unconditional_conditioning=un_cond) - - if config.save_memory: - self.model.low_vram_shift(is_diffusing=False) - - x_samples = self.model.decode_first_stage(samples) - x_samples = ( - einops.rearrange(x_samples, 'b c h w -> b h w c') * 127.5 + - 127.5).cpu().numpy().clip(0, 255).astype(np.uint8) - - results = [x_samples[i] for i in range(num_samples)] - return [detected_map] + results \ No newline at end of file diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/models/autoencoder_multi.py b/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/models/autoencoder_multi.py deleted file mode 100644 index cc4f830e24e99950f5ff412e8c5776e6a3489bf2..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/models/autoencoder_multi.py +++ /dev/null @@ -1,201 +0,0 @@ -""" -与autoencoder.py的区别在于,autoencoder.py计算loss时只有一个discriminator,而此处又多了个multiwindowDiscriminator,所以优化器 -优化的参数改为: -opt_disc = torch.optim.Adam(list(self.loss.discriminator.parameters()) + list(self.loss.discriminator_multi.parameters()), - lr=lr, betas=(0.5, 0.9)) -""" - -import os -import torch -import pytorch_lightning as pl -import torch.nn.functional as F -from contextlib import contextmanager - -from packaging import version -import numpy as np -from ldm.modules.diffusionmodules.model import Encoder, Decoder -from ldm.modules.distributions.distributions import DiagonalGaussianDistribution -from torch.optim.lr_scheduler import LambdaLR -from ldm.util import instantiate_from_config - - - -class AutoencoderKL(pl.LightningModule): - def __init__(self, - ddconfig, - lossconfig, - embed_dim, - ckpt_path=None, - ignore_keys=[], - image_key="image", - colorize_nlabels=None, - monitor=None, - ): - super().__init__() - self.image_key = image_key - self.encoder = Encoder(**ddconfig) - self.decoder = Decoder(**ddconfig) - self.loss = instantiate_from_config(lossconfig) - assert ddconfig["double_z"] - self.quant_conv = torch.nn.Conv2d(2*ddconfig["z_channels"], 2*embed_dim, 1) - self.post_quant_conv = torch.nn.Conv2d(embed_dim, ddconfig["z_channels"], 1) - self.embed_dim = embed_dim - if colorize_nlabels is not None: - assert type(colorize_nlabels)==int - self.register_buffer("colorize", torch.randn(3, colorize_nlabels, 1, 1)) - if monitor is not None: - self.monitor = monitor - if ckpt_path is not None: - self.init_from_ckpt(ckpt_path, ignore_keys=ignore_keys) - - def init_from_ckpt(self, path, ignore_keys=list()): - sd = torch.load(path, map_location="cpu")["state_dict"] - keys = list(sd.keys()) - for k in keys: - for ik in ignore_keys: - if k.startswith(ik): - print("Deleting key {} from state_dict.".format(k)) - del sd[k] - self.load_state_dict(sd, strict=False) - print(f"Restored from {path}") - - def encode(self, x): - h = self.encoder(x) - moments = self.quant_conv(h) - posterior = DiagonalGaussianDistribution(moments) - return posterior - - def decode(self, z): - z = self.post_quant_conv(z) - dec = self.decoder(z) - return dec - - def forward(self, input, sample_posterior=True): - posterior = self.encode(input) - if sample_posterior: - z = posterior.sample() - else: - z = posterior.mode() - dec = self.decode(z) - return dec, posterior - - def get_input(self, batch, k): - x = batch[k] - if len(x.shape) == 3: - x = x[..., None] - x = x.permute(0, 3, 1, 2).to(memory_format=torch.contiguous_format).float() - return x - - def 
training_step(self, batch, batch_idx, optimizer_idx): - inputs = self.get_input(batch, self.image_key) - reconstructions, posterior = self(inputs) - - if optimizer_idx == 0: - # train encoder+decoder+logvar - aeloss, log_dict_ae = self.loss(inputs, reconstructions, posterior, optimizer_idx, self.global_step, - last_layer=self.get_last_layer(), split="train") - self.log("aeloss", aeloss, prog_bar=True, logger=True, on_step=True, on_epoch=True) - self.log_dict(log_dict_ae, prog_bar=False, logger=True, on_step=True, on_epoch=False) - return aeloss - - if optimizer_idx == 1: - # train the discriminator - discloss, log_dict_disc = self.loss(inputs, reconstructions, posterior, optimizer_idx, self.global_step, - last_layer=self.get_last_layer(), split="train") - - self.log("discloss", discloss, prog_bar=True, logger=True, on_step=True, on_epoch=True) - self.log_dict(log_dict_disc, prog_bar=False, logger=True, on_step=True, on_epoch=False) - return discloss - - def validation_step(self, batch, batch_idx): - inputs = self.get_input(batch, self.image_key) - reconstructions, posterior = self(inputs) - aeloss, log_dict_ae = self.loss(inputs, reconstructions, posterior, 0, self.global_step, - last_layer=self.get_last_layer(), split="val") - - discloss, log_dict_disc = self.loss(inputs, reconstructions, posterior, 1, self.global_step, - last_layer=self.get_last_layer(), split="val") - - self.log("val/rec_loss", log_dict_ae["val/rec_loss"]) - self.log_dict(log_dict_ae) - self.log_dict(log_dict_disc) - return self.log_dict - - def test_step(self, batch, batch_idx): - inputs = self.get_input(batch, self.image_key)# inputs shape:(b,c,mel_len,T) or (b,c,h,w) - reconstructions, posterior = self(inputs)# reconstructions:(b,c,mel_len,T) or (b,c,h,w) - reconstructions = (reconstructions + 1)/2 # to mel scale - test_ckpt_path = os.path.basename(self.trainer.tested_ckpt_path) - savedir = os.path.join(self.trainer.log_dir,f'output_imgs_{test_ckpt_path}','fake_class') - if not os.path.exists(savedir): - os.makedirs(savedir) - - file_names = batch['f_name'] - # print(f"reconstructions.shape:{reconstructions.shape}",file_names) - reconstructions = reconstructions.cpu().numpy().squeeze(1) # squuze channel dim - for b in range(reconstructions.shape[0]): - vname_num_split_index = file_names[b].rfind('_')# file_names[b]:video_name+'_'+num - v_n,num = file_names[b][:vname_num_split_index],file_names[b][vname_num_split_index+1:] - save_img_path = os.path.join(savedir,f'{v_n}_sample_{num}.npy') - np.save(save_img_path,reconstructions[b]) - - return None - - def configure_optimizers(self): - lr = self.learning_rate - opt_ae = torch.optim.Adam(list(self.encoder.parameters())+ - list(self.decoder.parameters())+ - list(self.quant_conv.parameters())+ - list(self.post_quant_conv.parameters()), - lr=lr, betas=(0.5, 0.9)) - opt_disc = torch.optim.Adam(list(self.loss.discriminator.parameters()) + list(self.loss.discriminator_multi.parameters()), - lr=lr, betas=(0.5, 0.9)) - return [opt_ae, opt_disc], [] - - def get_last_layer(self): - return self.decoder.conv_out.weight - - @torch.no_grad() - def log_images(self, batch, only_inputs=False, **kwargs): - log = dict() - x = self.get_input(batch, self.image_key) - x = x.to(self.device) - if not only_inputs: - xrec, posterior = self(x) - if x.shape[1] > 3: - # colorize with random projection - assert xrec.shape[1] > 3 - x = self.to_rgb(x) - xrec = self.to_rgb(xrec) - log["samples"] = self.decode(torch.randn_like(posterior.sample())) - log["reconstructions"] = xrec - log["inputs"] = x - 
return log - - def to_rgb(self, x): - assert self.image_key == "segmentation" - if not hasattr(self, "colorize"): - self.register_buffer("colorize", torch.randn(3, x.shape[1], 1, 1).to(x)) - x = F.conv2d(x, weight=self.colorize) - x = 2.*(x-x.min())/(x.max()-x.min()) - 1. - return x - - -class IdentityFirstStage(torch.nn.Module): - def __init__(self, *args, vq_interface=False, **kwargs): - self.vq_interface = vq_interface # TODO: Should be true by default but check to not break older stuff - super().__init__() - - def encode(self, x, *args, **kwargs): - return x - - def decode(self, x, *args, **kwargs): - return x - - def quantize(self, x, *args, **kwargs): - if self.vq_interface: - return x, None, [None, None, None] - return x - - def forward(self, x, *args, **kwargs): - return x \ No newline at end of file diff --git a/spaces/AILab-CVC/SEED-LLaMA/scripts/start_backend_14b.sh b/spaces/AILab-CVC/SEED-LLaMA/scripts/start_backend_14b.sh deleted file mode 100644 index 358a5d956ae2064a3641297273c7121a5ca4e28b..0000000000000000000000000000000000000000 --- a/spaces/AILab-CVC/SEED-LLaMA/scripts/start_backend_14b.sh +++ /dev/null @@ -1,10 +0,0 @@ - -python3 gradio_demo/seed_llama_flask.py \ - --image_transform configs/transform/clip_transform.yaml \ - --tokenizer configs/tokenizer/seed_llama_tokenizer.yaml \ - --model configs/llm/seed_llama_14b_8bit.yaml \ - --port 7890 \ - --llm_device cuda:0 \ - --tokenizer_device cuda:0 \ - --offload_encoder \ - --offload_decoder diff --git a/spaces/Adapter/CoAdapter/ldm/data/dataset_depth.py b/spaces/Adapter/CoAdapter/ldm/data/dataset_depth.py deleted file mode 100644 index e3afe28da237c62795625574b89b60072da79cd2..0000000000000000000000000000000000000000 --- a/spaces/Adapter/CoAdapter/ldm/data/dataset_depth.py +++ /dev/null @@ -1,35 +0,0 @@ -import json -import cv2 -import os -from basicsr.utils import img2tensor - - -class DepthDataset(): - def __init__(self, meta_file): - super(DepthDataset, self).__init__() - - self.files = [] - with open(meta_file, 'r') as f: - lines = f.readlines() - for line in lines: - img_path = line.strip() - depth_img_path = img_path.rsplit('.', 1)[0] + '.depth.png' - txt_path = img_path.rsplit('.', 1)[0] + '.txt' - self.files.append({'img_path': img_path, 'depth_img_path': depth_img_path, 'txt_path': txt_path}) - - def __getitem__(self, idx): - file = self.files[idx] - - im = cv2.imread(file['img_path']) - im = img2tensor(im, bgr2rgb=True, float32=True) / 255. - - depth = cv2.imread(file['depth_img_path']) # [:,:,0] - depth = img2tensor(depth, bgr2rgb=True, float32=True) / 255. # [0].unsqueeze(0)#/255. 
- - with open(file['txt_path'], 'r') as fs: - sentence = fs.readline().strip() - - return {'im': im, 'depth': depth, 'sentence': sentence} - - def __len__(self): - return len(self.files) diff --git a/spaces/AiMimicry/sovits-models/cluster/train_cluster.py b/spaces/AiMimicry/sovits-models/cluster/train_cluster.py deleted file mode 100644 index 4ac025d400414226e66849407f477ae786c3d5d3..0000000000000000000000000000000000000000 --- a/spaces/AiMimicry/sovits-models/cluster/train_cluster.py +++ /dev/null @@ -1,89 +0,0 @@ -import os -from glob import glob -from pathlib import Path -import torch -import logging -import argparse -import torch -import numpy as np -from sklearn.cluster import KMeans, MiniBatchKMeans -import tqdm -logging.basicConfig(level=logging.INFO) -logger = logging.getLogger(__name__) -import time -import random - -def train_cluster(in_dir, n_clusters, use_minibatch=True, verbose=False): - - logger.info(f"Loading features from {in_dir}") - features = [] - nums = 0 - for path in tqdm.tqdm(in_dir.glob("*.soft.pt")): - features.append(torch.load(path).squeeze(0).numpy().T) - # print(features[-1].shape) - features = np.concatenate(features, axis=0) - print(nums, features.nbytes/ 1024**2, "MB , shape:",features.shape, features.dtype) - features = features.astype(np.float32) - logger.info(f"Clustering features of shape: {features.shape}") - t = time.time() - if use_minibatch: - kmeans = MiniBatchKMeans(n_clusters=n_clusters,verbose=verbose, batch_size=4096, max_iter=80).fit(features) - else: - kmeans = KMeans(n_clusters=n_clusters,verbose=verbose).fit(features) - print(time.time()-t, "s") - - x = { - "n_features_in_": kmeans.n_features_in_, - "_n_threads": kmeans._n_threads, - "cluster_centers_": kmeans.cluster_centers_, - } - print("end") - - return x - - -if __name__ == "__main__": - - parser = argparse.ArgumentParser() - parser.add_argument('--dataset', type=Path, default="./dataset/44k", - help='path of training data directory') - parser.add_argument('--output', type=Path, default="logs/44k", - help='path of model output directory') - - args = parser.parse_args() - - checkpoint_dir = args.output - dataset = args.dataset - n_clusters = 10000 - - ckpt = {} - for spk in os.listdir(dataset): - if os.path.isdir(dataset/spk): - print(f"train kmeans for {spk}...") - in_dir = dataset/spk - x = train_cluster(in_dir, n_clusters, verbose=False) - ckpt[spk] = x - - checkpoint_path = checkpoint_dir / f"kmeans_{n_clusters}.pt" - checkpoint_path.parent.mkdir(exist_ok=True, parents=True) - torch.save( - ckpt, - checkpoint_path, - ) - - - # import cluster - # for spk in tqdm.tqdm(os.listdir("dataset")): - # if os.path.isdir(f"dataset/{spk}"): - # print(f"start kmeans inference for {spk}...") - # for feature_path in tqdm.tqdm(glob(f"dataset/{spk}/*.discrete.npy", recursive=True)): - # mel_path = feature_path.replace(".discrete.npy",".mel.npy") - # mel_spectrogram = np.load(mel_path) - # feature_len = mel_spectrogram.shape[-1] - # c = np.load(feature_path) - # c = utils.tools.repeat_expand_2d(torch.FloatTensor(c), feature_len).numpy() - # feature = c.T - # feature_class = cluster.get_cluster_result(feature, spk) - # np.save(feature_path.replace(".discrete.npy", ".discrete_class.npy"), feature_class) - - diff --git a/spaces/AkiKagura/Marco-Generation-Img2img/app.py b/spaces/AkiKagura/Marco-Generation-Img2img/app.py deleted file mode 100644 index 7e6f4461e9fe79da0e255e3e641fe20bda701ed0..0000000000000000000000000000000000000000 --- a/spaces/AkiKagura/Marco-Generation-Img2img/app.py +++ /dev/null @@ 
-1,74 +0,0 @@ -import gradio as gr -import torch -#from torch import autocast // only for GPU - -from PIL import Image -import numpy as np -from io import BytesIO -import os -MY_SECRET_TOKEN=os.environ.get('HF_TOKEN_SD') - -#from diffusers import StableDiffusionPipeline -from diffusers import StableDiffusionImg2ImgPipeline - -def empty_checker(images, **kwargs): return images, False - -print("hello") - -YOUR_TOKEN=MY_SECRET_TOKEN - -device="cpu" - -# img2img pipeline -img_pipe = StableDiffusionImg2ImgPipeline.from_pretrained("AkiKagura/mkgen-diffusion", duse_auth_token=YOUR_TOKEN) -img_pipe.safety_checker = empty_checker -img_pipe.to(device) - -source_img = gr.Image(source="upload", type="filepath", label="init_img") -gallery = gr.Gallery(label="Generated images", show_label=False, elem_id="gallery").style(grid=[1], height="auto") - -def resize(img): - #baseheight = value - img = Image.open(img) - #hpercent = (baseheight/float(img.size[1])) - #wsize = int((float(img.size[0])*float(hpercent))) - #img = img.resize((wsize,baseheight), Image.Resampling.LANCZOS) - hsize = img.size[1] - wsize = img.size[0] - if 6*wsize <= 5*hsize: - wsize = 512 - hsize = 768 - elif 4*wsize >= 5*hsize: - wsize = 768 - hsize = 512 - else: - wsize = 512 - hsize = 512 - img = img.resize((wsize,hsize), Image.Resampling.LANCZOS) - return img, wsize, hsize - - -def infer(source_img, prompt, guide, steps, seed, strength): - generator = torch.Generator('cpu').manual_seed(seed) - - source_image, img_w, img_h = resize(source_img) - source_image.save('source.png') - images_list = img_pipe([prompt] * 1, init_image=source_image, strength=strength, guidance_scale=guide, num_inference_steps=steps, width=img_w, height=img_h) - images = [] - - for i, image in enumerate(images_list["images"]): - images.append(image) - return images - -print("done") - -title="Marco Generation Img2img" -description="

Upload your image and input 'mkmk woman' to get Marco image. Warning: Slow process... about 10 min inference time.
" - -gr.Interface(fn=infer, inputs=[source_img, - "text", - gr.Slider(2, 15, value = 7, label = 'Guidence Scale'), - gr.Slider(10, 50, value = 25, step = 1, label = 'Number of Iterations'), - gr.Slider(label = "Seed", minimum = 0, maximum = 2147483647, step = 1, randomize = True), - gr.Slider(label='Strength', minimum = 0, maximum = 1, step = .05, value = .75)], - outputs=gallery,title=title,description=description, allow_flagging="manual", flagging_dir="flagged").queue(max_size=100).launch(enable_queue=True) \ No newline at end of file diff --git a/spaces/Alesteba/NeRF_ficus-pxl/README.md b/spaces/Alesteba/NeRF_ficus-pxl/README.md deleted file mode 100644 index bd7421d48539a14bb7be26890a7411e859863da1..0000000000000000000000000000000000000000 --- a/spaces/Alesteba/NeRF_ficus-pxl/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: NeRF Ficus-pxl -emoji: 🐠 -colorFrom: indigo -colorTo: blue -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Andy1621/uniformer_image_detection/configs/detectors/cascade_rcnn_r50_sac_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/detectors/cascade_rcnn_r50_sac_1x_coco.py deleted file mode 100644 index ccd9319b2d1badebf3b891c8e3bdd55a435a4b7c..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/detectors/cascade_rcnn_r50_sac_1x_coco.py +++ /dev/null @@ -1,12 +0,0 @@ -_base_ = [ - '../_base_/models/cascade_rcnn_r50_fpn.py', - '../_base_/datasets/coco_detection.py', - '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py' -] - -model = dict( - backbone=dict( - type='DetectoRS_ResNet', - conv_cfg=dict(type='ConvAWS'), - sac=dict(type='SAC', use_deform=True), - stage_with_sac=(False, True, True, True))) diff --git a/spaces/Andy1621/uniformer_image_detection/configs/lvis/mask_rcnn_x101_32x4d_fpn_sample1e-3_mstrain_2x_lvis_v0.5.py b/spaces/Andy1621/uniformer_image_detection/configs/lvis/mask_rcnn_x101_32x4d_fpn_sample1e-3_mstrain_2x_lvis_v0.5.py deleted file mode 100644 index 439c39a93a8a12119ffa408987c8cea6d8cb313a..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/lvis/mask_rcnn_x101_32x4d_fpn_sample1e-3_mstrain_2x_lvis_v0.5.py +++ /dev/null @@ -1,13 +0,0 @@ -_base_ = './mask_rcnn_r50_fpn_sample1e-3_mstrain_2x_lvis_v0.5.py' -model = dict( - pretrained='open-mmlab://resnext101_32x4d', - backbone=dict( - type='ResNeXt', - depth=101, - groups=32, - base_width=4, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - style='pytorch')) diff --git a/spaces/Andy1621/uniformer_image_detection/configs/retinanet/retinanet_x101_32x4d_fpn_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/retinanet/retinanet_x101_32x4d_fpn_1x_coco.py deleted file mode 100644 index 9927f8f07510b2bc6d1c92f397bc2075e38c104c..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/retinanet/retinanet_x101_32x4d_fpn_1x_coco.py +++ /dev/null @@ -1,13 +0,0 @@ -_base_ = './retinanet_r50_fpn_1x_coco.py' -model = dict( - pretrained='open-mmlab://resnext101_32x4d', - backbone=dict( - type='ResNeXt', - depth=101, - groups=32, - base_width=4, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - style='pytorch')) diff --git a/spaces/AngoHF/ANGO-Leaderboard/assets/color.py 
b/spaces/AngoHF/ANGO-Leaderboard/assets/color.py deleted file mode 100644 index f1e19ec55269efd40919dcf6042b6a8af31ee689..0000000000000000000000000000000000000000 --- a/spaces/AngoHF/ANGO-Leaderboard/assets/color.py +++ /dev/null @@ -1,8 +0,0 @@ -color_dict = { - '言语理解与表达': '#B22222', - '数量关系': '#CC6600', - '判断推理': '#CC9900', - '资料分析': '#228B22', - '常识判断': '#0077BE', - '': '#9400D3' -} \ No newline at end of file diff --git a/spaces/Anthony7906/MengHuiMXD_GPT/modules/models.py b/spaces/Anthony7906/MengHuiMXD_GPT/modules/models.py deleted file mode 100644 index 25b18b1904910e183a997a763008403d960868d6..0000000000000000000000000000000000000000 --- a/spaces/Anthony7906/MengHuiMXD_GPT/modules/models.py +++ /dev/null @@ -1,625 +0,0 @@ -from __future__ import annotations -from typing import TYPE_CHECKING, List - -import logging -import json -import commentjson as cjson -import os -import sys -import requests -import urllib3 -import platform -import base64 -from io import BytesIO -from PIL import Image - -from tqdm import tqdm -import colorama -from duckduckgo_search import ddg -import asyncio -import aiohttp -from enum import Enum -import uuid - -from .presets import * -from .llama_func import * -from .utils import * -from . import shared -from .config import retrieve_proxy -from modules import config -from .base_model import BaseLLMModel, ModelType - - -class OpenAIClient(BaseLLMModel): - def __init__( - self, - model_name, - api_key, - system_prompt=INITIAL_SYSTEM_PROMPT, - temperature=1.0, - top_p=1.0, - ) -> None: - super().__init__( - model_name=model_name, - temperature=temperature, - top_p=top_p, - system_prompt=system_prompt, - ) - self.api_key = api_key - self.need_api_key = True - self._refresh_header() - - def get_answer_stream_iter(self): - response = self._get_response(stream=True) - if response is not None: - iter = self._decode_chat_response(response) - partial_text = "" - for i in iter: - partial_text += i - yield partial_text - else: - yield STANDARD_ERROR_MSG + GENERAL_ERROR_MSG - - def get_answer_at_once(self): - response = self._get_response() - response = json.loads(response.text) - content = response["choices"][0]["message"]["content"] - total_token_count = response["usage"]["total_tokens"] - return content, total_token_count - - def count_token(self, user_input): - input_token_count = count_token(construct_user(user_input)) - if self.system_prompt is not None and len(self.all_token_counts) == 0: - system_prompt_token_count = count_token( - construct_system(self.system_prompt) - ) - return input_token_count + system_prompt_token_count - return input_token_count - - def billing_info(self): - try: - curr_time = datetime.datetime.now() - last_day_of_month = get_last_day_of_month( - curr_time).strftime("%Y-%m-%d") - first_day_of_month = curr_time.replace(day=1).strftime("%Y-%m-%d") - usage_url = f"{shared.state.usage_api_url}?start_date={first_day_of_month}&end_date={last_day_of_month}" - try: - usage_data = self._get_billing_data(usage_url) - except Exception as e: - logging.error(f"获取API使用情况失败:" + str(e)) - return i18n("**获取API使用情况失败**") - rounded_usage = "{:.5f}".format(usage_data["total_usage"] / 100) - return i18n("**本月使用金额** ") + f"\u3000 ${rounded_usage}" - except requests.exceptions.ConnectTimeout: - status_text = ( - STANDARD_ERROR_MSG + CONNECTION_TIMEOUT_MSG + ERROR_RETRIEVE_MSG - ) - return status_text - except requests.exceptions.ReadTimeout: - status_text = STANDARD_ERROR_MSG + READ_TIMEOUT_MSG + ERROR_RETRIEVE_MSG - return status_text - except Exception as e: - 
import traceback - traceback.print_exc() - logging.error(i18n("获取API使用情况失败:") + str(e)) - return STANDARD_ERROR_MSG + ERROR_RETRIEVE_MSG - - def set_token_upper_limit(self, new_upper_limit): - pass - - @shared.state.switching_api_key # 在不开启多账号模式的时候,这个装饰器不会起作用 - def _get_response(self, stream=False): - openai_api_key = self.api_key - system_prompt = self.system_prompt - history = self.history - logging.debug(colorama.Fore.YELLOW + - f"{history}" + colorama.Fore.RESET) - headers = { - "Content-Type": "application/json", - "Authorization": f"Bearer {openai_api_key}", - } - - if system_prompt is not None: - history = [construct_system(system_prompt), *history] - - payload = { - "model": self.model_name, - "messages": history, - "temperature": self.temperature, - "top_p": self.top_p, - "n": self.n_choices, - "stream": stream, - "presence_penalty": self.presence_penalty, - "frequency_penalty": self.frequency_penalty, - } - - if self.max_generation_token is not None: - payload["max_tokens"] = self.max_generation_token - if self.stop_sequence is not None: - payload["stop"] = self.stop_sequence - if self.logit_bias is not None: - payload["logit_bias"] = self.logit_bias - if self.user_identifier is not None: - payload["user"] = self.user_identifier - - if stream: - timeout = TIMEOUT_STREAMING - else: - timeout = TIMEOUT_ALL - - # 如果有自定义的api-host,使用自定义host发送请求,否则使用默认设置发送请求 - if shared.state.completion_url != COMPLETION_URL: - logging.info(f"使用自定义API URL: {shared.state.completion_url}") - - with retrieve_proxy(): - try: - response = requests.post( - shared.state.completion_url, - headers=headers, - json=payload, - stream=stream, - timeout=timeout, - ) - except: - return None - return response - - def _refresh_header(self): - self.headers = { - "Content-Type": "application/json", - "Authorization": f"Bearer {self.api_key}", - } - - def _get_billing_data(self, billing_url): - with retrieve_proxy(): - response = requests.get( - billing_url, - headers=self.headers, - timeout=TIMEOUT_ALL, - ) - - if response.status_code == 200: - data = response.json() - return data - else: - raise Exception( - f"API request failed with status code {response.status_code}: {response.text}" - ) - - def _decode_chat_response(self, response): - error_msg = "" - for chunk in response.iter_lines(): - if chunk: - chunk = chunk.decode() - chunk_length = len(chunk) - try: - chunk = json.loads(chunk[6:]) - except json.JSONDecodeError: - print(i18n("JSON解析错误,收到的内容: ") + f"{chunk}") - error_msg += chunk - continue - if chunk_length > 6 and "delta" in chunk["choices"][0]: - if chunk["choices"][0]["finish_reason"] == "stop": - break - try: - yield chunk["choices"][0]["delta"]["content"] - except Exception as e: - # logging.error(f"Error: {e}") - continue - if error_msg: - raise Exception(error_msg) - - def set_key(self, new_access_key): - ret = super().set_key(new_access_key) - self._refresh_header() - return ret - - -class ChatGLM_Client(BaseLLMModel): - def __init__(self, model_name) -> None: - super().__init__(model_name=model_name) - from transformers import AutoTokenizer, AutoModel - import torch - global CHATGLM_TOKENIZER, CHATGLM_MODEL - if CHATGLM_TOKENIZER is None or CHATGLM_MODEL is None: - system_name = platform.system() - model_path = None - if os.path.exists("models"): - model_dirs = os.listdir("models") - if model_name in model_dirs: - model_path = f"models/{model_name}" - if model_path is not None: - model_source = model_path - else: - model_source = f"THUDM/{model_name}" - CHATGLM_TOKENIZER = 
AutoTokenizer.from_pretrained( - model_source, trust_remote_code=True - ) - quantified = False - if "int4" in model_name: - quantified = True - model = AutoModel.from_pretrained( - model_source, trust_remote_code=True - ) - if torch.cuda.is_available(): - # run on CUDA - logging.info("CUDA is available, using CUDA") - model = model.half().cuda() - # mps加速还存在一些问题,暂时不使用 - elif system_name == "Darwin" and model_path is not None and not quantified: - logging.info("Running on macOS, using MPS") - # running on macOS and model already downloaded - model = model.half().to("mps") - else: - logging.info("GPU is not available, using CPU") - model = model.float() - model = model.eval() - CHATGLM_MODEL = model - - def _get_glm_style_input(self): - history = [x["content"] for x in self.history] - query = history.pop() - logging.debug(colorama.Fore.YELLOW + - f"{history}" + colorama.Fore.RESET) - assert ( - len(history) % 2 == 0 - ), f"History should be even length. current history is: {history}" - history = [[history[i], history[i + 1]] - for i in range(0, len(history), 2)] - return history, query - - def get_answer_at_once(self): - history, query = self._get_glm_style_input() - response, _ = CHATGLM_MODEL.chat( - CHATGLM_TOKENIZER, query, history=history) - return response, len(response) - - def get_answer_stream_iter(self): - history, query = self._get_glm_style_input() - for response, history in CHATGLM_MODEL.stream_chat( - CHATGLM_TOKENIZER, - query, - history, - max_length=self.token_upper_limit, - top_p=self.top_p, - temperature=self.temperature, - ): - yield response - - -class LLaMA_Client(BaseLLMModel): - def __init__( - self, - model_name, - lora_path=None, - ) -> None: - super().__init__(model_name=model_name) - from lmflow.datasets.dataset import Dataset - from lmflow.pipeline.auto_pipeline import AutoPipeline - from lmflow.models.auto_model import AutoModel - from lmflow.args import ModelArguments, DatasetArguments, InferencerArguments - - self.max_generation_token = 1000 - self.end_string = "\n\n" - # We don't need input data - data_args = DatasetArguments(dataset_path=None) - self.dataset = Dataset(data_args) - self.system_prompt = "" - - global LLAMA_MODEL, LLAMA_INFERENCER - if LLAMA_MODEL is None or LLAMA_INFERENCER is None: - model_path = None - if os.path.exists("models"): - model_dirs = os.listdir("models") - if model_name in model_dirs: - model_path = f"models/{model_name}" - if model_path is not None: - model_source = model_path - else: - model_source = f"decapoda-research/{model_name}" - # raise Exception(f"models目录下没有这个模型: {model_name}") - if lora_path is not None: - lora_path = f"lora/{lora_path}" - model_args = ModelArguments(model_name_or_path=model_source, lora_model_path=lora_path, model_type=None, config_overrides=None, config_name=None, tokenizer_name=None, cache_dir=None, - use_fast_tokenizer=True, model_revision='main', use_auth_token=False, torch_dtype=None, use_lora=False, lora_r=8, lora_alpha=32, lora_dropout=0.1, use_ram_optimized_load=True) - pipeline_args = InferencerArguments( - local_rank=0, random_seed=1, deepspeed='configs/ds_config_chatbot.json', mixed_precision='bf16') - - with open(pipeline_args.deepspeed, "r") as f: - ds_config = json.load(f) - LLAMA_MODEL = AutoModel.get_model( - model_args, - tune_strategy="none", - ds_config=ds_config, - ) - LLAMA_INFERENCER = AutoPipeline.get_pipeline( - pipeline_name="inferencer", - model_args=model_args, - data_args=data_args, - pipeline_args=pipeline_args, - ) - - def _get_llama_style_input(self): - history = [] - 
instruction = "" - if self.system_prompt: - instruction = (f"Instruction: {self.system_prompt}\n") - for x in self.history: - if x["role"] == "user": - history.append(f"{instruction}Input: {x['content']}") - else: - history.append(f"Output: {x['content']}") - context = "\n\n".join(history) - context += "\n\nOutput: " - return context - - def get_answer_at_once(self): - context = self._get_llama_style_input() - - input_dataset = self.dataset.from_dict( - {"type": "text_only", "instances": [{"text": context}]} - ) - - output_dataset = LLAMA_INFERENCER.inference( - model=LLAMA_MODEL, - dataset=input_dataset, - max_new_tokens=self.max_generation_token, - temperature=self.temperature, - ) - - response = output_dataset.to_dict()["instances"][0]["text"] - return response, len(response) - - def get_answer_stream_iter(self): - context = self._get_llama_style_input() - partial_text = "" - step = 1 - for _ in range(0, self.max_generation_token, step): - input_dataset = self.dataset.from_dict( - {"type": "text_only", "instances": [ - {"text": context + partial_text}]} - ) - output_dataset = LLAMA_INFERENCER.inference( - model=LLAMA_MODEL, - dataset=input_dataset, - max_new_tokens=step, - temperature=self.temperature, - ) - response = output_dataset.to_dict()["instances"][0]["text"] - if response == "" or response == self.end_string: - break - partial_text += response - yield partial_text - - -class XMChat(BaseLLMModel): - def __init__(self, api_key): - super().__init__(model_name="xmchat") - self.api_key = api_key - self.session_id = None - self.reset() - self.image_bytes = None - self.image_path = None - self.xm_history = [] - self.url = "https://xmbot.net/web" - self.last_conv_id = None - - def reset(self): - self.session_id = str(uuid.uuid4()) - self.last_conv_id = None - return [], "已重置" - - def image_to_base64(self, image_path): - # 打开并加载图片 - img = Image.open(image_path) - - # 获取图片的宽度和高度 - width, height = img.size - - # 计算压缩比例,以确保最长边小于4096像素 - max_dimension = 2048 - scale_ratio = min(max_dimension / width, max_dimension / height) - - if scale_ratio < 1: - # 按压缩比例调整图片大小 - new_width = int(width * scale_ratio) - new_height = int(height * scale_ratio) - img = img.resize((new_width, new_height), Image.ANTIALIAS) - - # 将图片转换为jpg格式的二进制数据 - buffer = BytesIO() - if img.mode == "RGBA": - img = img.convert("RGB") - img.save(buffer, format='JPEG') - binary_image = buffer.getvalue() - - # 对二进制数据进行Base64编码 - base64_image = base64.b64encode(binary_image).decode('utf-8') - - return base64_image - - def try_read_image(self, filepath): - def is_image_file(filepath): - # 判断文件是否为图片 - valid_image_extensions = [".jpg", ".jpeg", ".png", ".bmp", ".gif", ".tiff"] - file_extension = os.path.splitext(filepath)[1].lower() - return file_extension in valid_image_extensions - - if is_image_file(filepath): - logging.info(f"读取图片文件: {filepath}") - self.image_bytes = self.image_to_base64(filepath) - self.image_path = filepath - else: - self.image_bytes = None - self.image_path = None - - def like(self): - if self.last_conv_id is None: - return "点赞失败,你还没发送过消息" - data = { - "uuid": self.last_conv_id, - "appraise": "good" - } - response = requests.post(self.url, json=data) - return "👍点赞成功,,感谢反馈~" - - def dislike(self): - if self.last_conv_id is None: - return "点踩失败,你还没发送过消息" - data = { - "uuid": self.last_conv_id, - "appraise": "bad" - } - response = requests.post(self.url, json=data) - return "👎点踩成功,感谢反馈~" - - def prepare_inputs(self, real_inputs, use_websearch, files, reply_language, chatbot): - fake_inputs = real_inputs - 
display_append = "" - limited_context = False - return limited_context, fake_inputs, display_append, real_inputs, chatbot - - def handle_file_upload(self, files, chatbot): - """if the model accepts multi modal input, implement this function""" - if files: - for file in files: - if file.name: - logging.info(f"尝试读取图像: {file.name}") - self.try_read_image(file.name) - if self.image_path is not None: - chatbot = chatbot + [((self.image_path,), None)] - if self.image_bytes is not None: - logging.info("使用图片作为输入") - # XMChat的一轮对话中实际上只能处理一张图片 - self.reset() - conv_id = str(uuid.uuid4()) - data = { - "user_id": self.api_key, - "session_id": self.session_id, - "uuid": conv_id, - "data_type": "imgbase64", - "data": self.image_bytes - } - response = requests.post(self.url, json=data) - response = json.loads(response.text) - logging.info(f"图片回复: {response['data']}") - return None, chatbot, None - - def get_answer_at_once(self): - question = self.history[-1]["content"] - conv_id = str(uuid.uuid4()) - self.last_conv_id = conv_id - data = { - "user_id": self.api_key, - "session_id": self.session_id, - "uuid": conv_id, - "data_type": "text", - "data": question - } - response = requests.post(self.url, json=data) - try: - response = json.loads(response.text) - return response["data"], len(response["data"]) - except Exception as e: - return response.text, len(response.text) - - - - -def get_model( - model_name, - lora_model_path=None, - access_key=None, - temperature=None, - top_p=None, - system_prompt=None, -) -> BaseLLMModel: - msg = i18n("模型设置为了:") + f" {model_name}" - model_type = ModelType.get_type(model_name) - lora_selector_visibility = False - lora_choices = [] - dont_change_lora_selector = False - if model_type != ModelType.OpenAI: - config.local_embedding = True - # del current_model.model - model = None - try: - if model_type == ModelType.OpenAI: - logging.info(f"正在加载OpenAI模型: {model_name}") - model = OpenAIClient( - model_name=model_name, - api_key=access_key, - system_prompt=system_prompt, - temperature=temperature, - top_p=top_p, - ) - elif model_type == ModelType.ChatGLM: - logging.info(f"正在加载ChatGLM模型: {model_name}") - model = ChatGLM_Client(model_name) - elif model_type == ModelType.LLaMA and lora_model_path == "": - msg = f"现在请为 {model_name} 选择LoRA模型" - logging.info(msg) - lora_selector_visibility = True - if os.path.isdir("lora"): - lora_choices = get_file_names( - "lora", plain=True, filetypes=[""]) - lora_choices = ["No LoRA"] + lora_choices - elif model_type == ModelType.LLaMA and lora_model_path != "": - logging.info(f"正在加载LLaMA模型: {model_name} + {lora_model_path}") - dont_change_lora_selector = True - if lora_model_path == "No LoRA": - lora_model_path = None - msg += " + No LoRA" - else: - msg += f" + {lora_model_path}" - model = LLaMA_Client(model_name, lora_model_path) - elif model_type == ModelType.XMChat: - if os.environ.get("XMCHAT_API_KEY") != "": - access_key = os.environ.get("XMCHAT_API_KEY") - model = XMChat(api_key=access_key) - elif model_type == ModelType.Unknown: - raise ValueError(f"未知模型: {model_name}") - logging.info(msg) - except Exception as e: - logging.error(e) - msg = f"{STANDARD_ERROR_MSG}: {e}" - if dont_change_lora_selector: - return model, msg - else: - return model, msg, gr.Dropdown.update(choices=lora_choices, visible=lora_selector_visibility) - - -if __name__ == "__main__": - with open("config.json", "r") as f: - openai_api_key = cjson.load(f)["openai_api_key"] - # set logging level to debug - logging.basicConfig(level=logging.DEBUG) - # client = 
ModelManager(model_name="gpt-3.5-turbo", access_key=openai_api_key) - client = get_model(model_name="chatglm-6b-int4") - chatbot = [] - stream = False - # 测试账单功能 - logging.info(colorama.Back.GREEN + "测试账单功能" + colorama.Back.RESET) - logging.info(client.billing_info()) - # 测试问答 - logging.info(colorama.Back.GREEN + "测试问答" + colorama.Back.RESET) - question = "巴黎是中国的首都吗?" - for i in client.predict(inputs=question, chatbot=chatbot, stream=stream): - logging.info(i) - logging.info(f"测试问答后history : {client.history}") - # 测试记忆力 - logging.info(colorama.Back.GREEN + "测试记忆力" + colorama.Back.RESET) - question = "我刚刚问了你什么问题?" - for i in client.predict(inputs=question, chatbot=chatbot, stream=stream): - logging.info(i) - logging.info(f"测试记忆力后history : {client.history}") - # 测试重试功能 - logging.info(colorama.Back.GREEN + "测试重试功能" + colorama.Back.RESET) - for i in client.retry(chatbot=chatbot, stream=stream): - logging.info(i) - logging.info(f"重试后history : {client.history}") - # # 测试总结功能 - # print(colorama.Back.GREEN + "测试总结功能" + colorama.Back.RESET) - # chatbot, msg = client.reduce_token_size(chatbot=chatbot) - # print(chatbot, msg) - # print(f"总结后history: {client.history}") diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/_ratio.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/_ratio.py deleted file mode 100644 index e8a3a674e0070159b956c29c5092b0f72abc969d..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/_ratio.py +++ /dev/null @@ -1,160 +0,0 @@ -import sys -from fractions import Fraction -from math import ceil -from typing import cast, List, Optional, Sequence - -if sys.version_info >= (3, 8): - from typing import Protocol -else: - from pip._vendor.typing_extensions import Protocol # pragma: no cover - - -class Edge(Protocol): - """Any object that defines an edge (such as Layout).""" - - size: Optional[int] = None - ratio: int = 1 - minimum_size: int = 1 - - -def ratio_resolve(total: int, edges: Sequence[Edge]) -> List[int]: - """Divide total space to satisfy size, ratio, and minimum_size, constraints. - - The returned list of integers should add up to total in most cases, unless it is - impossible to satisfy all the constraints. For instance, if there are two edges - with a minimum size of 20 each and `total` is 30 then the returned list will be - greater than total. In practice, this would mean that a Layout object would - clip the rows that would overflow the screen height. - - Args: - total (int): Total number of characters. - edges (List[Edge]): Edges within total space. - - Returns: - List[int]: Number of characters for each edge. 
- """ - # Size of edge or None for yet to be determined - sizes = [(edge.size or None) for edge in edges] - - _Fraction = Fraction - - # While any edges haven't been calculated - while None in sizes: - # Get flexible edges and index to map these back on to sizes list - flexible_edges = [ - (index, edge) - for index, (size, edge) in enumerate(zip(sizes, edges)) - if size is None - ] - # Remaining space in total - remaining = total - sum(size or 0 for size in sizes) - if remaining <= 0: - # No room for flexible edges - return [ - ((edge.minimum_size or 1) if size is None else size) - for size, edge in zip(sizes, edges) - ] - # Calculate number of characters in a ratio portion - portion = _Fraction( - remaining, sum((edge.ratio or 1) for _, edge in flexible_edges) - ) - - # If any edges will be less than their minimum, replace size with the minimum - for index, edge in flexible_edges: - if portion * edge.ratio <= edge.minimum_size: - sizes[index] = edge.minimum_size - # New fixed size will invalidate calculations, so we need to repeat the process - break - else: - # Distribute flexible space and compensate for rounding error - # Since edge sizes can only be integers we need to add the remainder - # to the following line - remainder = _Fraction(0) - for index, edge in flexible_edges: - size, remainder = divmod(portion * edge.ratio + remainder, 1) - sizes[index] = size - break - # Sizes now contains integers only - return cast(List[int], sizes) - - -def ratio_reduce( - total: int, ratios: List[int], maximums: List[int], values: List[int] -) -> List[int]: - """Divide an integer total in to parts based on ratios. - - Args: - total (int): The total to divide. - ratios (List[int]): A list of integer ratios. - maximums (List[int]): List of maximums values for each slot. - values (List[int]): List of values - - Returns: - List[int]: A list of integers guaranteed to sum to total. - """ - ratios = [ratio if _max else 0 for ratio, _max in zip(ratios, maximums)] - total_ratio = sum(ratios) - if not total_ratio: - return values[:] - total_remaining = total - result: List[int] = [] - append = result.append - for ratio, maximum, value in zip(ratios, maximums, values): - if ratio and total_ratio > 0: - distributed = min(maximum, round(ratio * total_remaining / total_ratio)) - append(value - distributed) - total_remaining -= distributed - total_ratio -= ratio - else: - append(value) - return result - - -def ratio_distribute( - total: int, ratios: List[int], minimums: Optional[List[int]] = None -) -> List[int]: - """Distribute an integer total in to parts based on ratios. - - Args: - total (int): The total to divide. - ratios (List[int]): A list of integer ratios. - minimums (List[int]): List of minimum values for each slot. - - Returns: - List[int]: A list of integers guaranteed to sum to total. 
- """ - if minimums: - ratios = [ratio if _min else 0 for ratio, _min in zip(ratios, minimums)] - total_ratio = sum(ratios) - assert total_ratio > 0, "Sum of ratios must be > 0" - - total_remaining = total - distributed_total: List[int] = [] - append = distributed_total.append - if minimums is None: - _minimums = [0] * len(ratios) - else: - _minimums = minimums - for ratio, minimum in zip(ratios, _minimums): - if total_ratio > 0: - distributed = max(minimum, ceil(ratio * total_remaining / total_ratio)) - else: - distributed = total_remaining - append(distributed) - total_ratio -= ratio - total_remaining -= distributed - return distributed_total - - -if __name__ == "__main__": - from dataclasses import dataclass - - @dataclass - class E: - - size: Optional[int] = None - ratio: int = 1 - minimum_size: int = 1 - - resolved = ratio_resolve(110, [E(None, 1, 1), E(None, 1, 1), E(None, 1, 1)]) - print(sum(resolved)) diff --git a/spaces/AttendAndExcite/Attend-and-Excite/README.md b/spaces/AttendAndExcite/Attend-and-Excite/README.md deleted file mode 100644 index e71657b13be44764b1699c34515da6c8b9401f0b..0000000000000000000000000000000000000000 --- a/spaces/AttendAndExcite/Attend-and-Excite/README.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -title: Attend And Excite -emoji: 💻 -colorFrom: gray -colorTo: pink -sdk: gradio -sdk_version: 3.47.1 -python_version: 3.10.13 -app_file: app.py -pinned: false -license: mit -suggested_hardware: a10g-small ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/BAAI/AltDiffusion-m9/header.html b/spaces/BAAI/AltDiffusion-m9/header.html deleted file mode 100644 index c199e399bc8f2dba71d03af4885c920f3b1d227f..0000000000000000000000000000000000000000 --- a/spaces/BAAI/AltDiffusion-m9/header.html +++ /dev/null @@ -1,43 +0,0 @@ -
- FlagAI - FlagStudio -
- FlagStudio 项目致力于贡献优秀AI生成艺术作品。此九语文生图模型项目基于 stable diffusion,由BAAI旗下的FlagAI团队提供支持,相关代码和模型权重在AltDiffusion中进行开源。 -
- FlagStudio aims to provide high-quality AI-generated artwork. Our current multilingual model is based on the original stable diffusion model and is capable of generating images from both Chinese and English text. FlagStudio is developed and supported by the FlagAI team. Relevant code and model weights are released in AltDiffusion-m9. (open.platform@baai.ac.cn) -
- AltDiffusion has been added to 🧨 Diffusers; see the documentation page: 🧨 Pipeline doc -
- 我们在colab设置了一个脚本,你可以在colab试用我们的模型!(We have a script on Colab; you can try our models on Colab. Enjoy it!) - Open In Colab -
\ No newline at end of file diff --git a/spaces/Badaleeloveashley/badaleeloveashley/app.py b/spaces/Badaleeloveashley/badaleeloveashley/app.py deleted file mode 100644 index f0c7c4b5ad52db038bbb93ffb085f49243bc1495..0000000000000000000000000000000000000000 --- a/spaces/Badaleeloveashley/badaleeloveashley/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load('huggingface/gpt2').Iaunch() \ No newline at end of file diff --git a/spaces/Bart92/RVC_HF/infer/lib/uvr5_pack/lib_v5/nets_123821KB.py b/spaces/Bart92/RVC_HF/infer/lib/uvr5_pack/lib_v5/nets_123821KB.py deleted file mode 100644 index 167d4cb2198863cf43e93440f7e63c5342fc7605..0000000000000000000000000000000000000000 --- a/spaces/Bart92/RVC_HF/infer/lib/uvr5_pack/lib_v5/nets_123821KB.py +++ /dev/null @@ -1,122 +0,0 @@ -import torch -import torch.nn.functional as F -from torch import nn - -from . import layers_123821KB as layers - - -class BaseASPPNet(nn.Module): - def __init__(self, nin, ch, dilations=(4, 8, 16)): - super(BaseASPPNet, self).__init__() - self.enc1 = layers.Encoder(nin, ch, 3, 2, 1) - self.enc2 = layers.Encoder(ch, ch * 2, 3, 2, 1) - self.enc3 = layers.Encoder(ch * 2, ch * 4, 3, 2, 1) - self.enc4 = layers.Encoder(ch * 4, ch * 8, 3, 2, 1) - - self.aspp = layers.ASPPModule(ch * 8, ch * 16, dilations) - - self.dec4 = layers.Decoder(ch * (8 + 16), ch * 8, 3, 1, 1) - self.dec3 = layers.Decoder(ch * (4 + 8), ch * 4, 3, 1, 1) - self.dec2 = layers.Decoder(ch * (2 + 4), ch * 2, 3, 1, 1) - self.dec1 = layers.Decoder(ch * (1 + 2), ch, 3, 1, 1) - - def __call__(self, x): - h, e1 = self.enc1(x) - h, e2 = self.enc2(h) - h, e3 = self.enc3(h) - h, e4 = self.enc4(h) - - h = self.aspp(h) - - h = self.dec4(h, e4) - h = self.dec3(h, e3) - h = self.dec2(h, e2) - h = self.dec1(h, e1) - - return h - - -class CascadedASPPNet(nn.Module): - def __init__(self, n_fft): - super(CascadedASPPNet, self).__init__() - self.stg1_low_band_net = BaseASPPNet(2, 32) - self.stg1_high_band_net = BaseASPPNet(2, 32) - - self.stg2_bridge = layers.Conv2DBNActiv(34, 16, 1, 1, 0) - self.stg2_full_band_net = BaseASPPNet(16, 32) - - self.stg3_bridge = layers.Conv2DBNActiv(66, 32, 1, 1, 0) - self.stg3_full_band_net = BaseASPPNet(32, 64) - - self.out = nn.Conv2d(64, 2, 1, bias=False) - self.aux1_out = nn.Conv2d(32, 2, 1, bias=False) - self.aux2_out = nn.Conv2d(32, 2, 1, bias=False) - - self.max_bin = n_fft // 2 - self.output_bin = n_fft // 2 + 1 - - self.offset = 128 - - def forward(self, x, aggressiveness=None): - mix = x.detach() - x = x.clone() - - x = x[:, :, : self.max_bin] - - bandw = x.size()[2] // 2 - aux1 = torch.cat( - [ - self.stg1_low_band_net(x[:, :, :bandw]), - self.stg1_high_band_net(x[:, :, bandw:]), - ], - dim=2, - ) - - h = torch.cat([x, aux1], dim=1) - aux2 = self.stg2_full_band_net(self.stg2_bridge(h)) - - h = torch.cat([x, aux1, aux2], dim=1) - h = self.stg3_full_band_net(self.stg3_bridge(h)) - - mask = torch.sigmoid(self.out(h)) - mask = F.pad( - input=mask, - pad=(0, 0, 0, self.output_bin - mask.size()[2]), - mode="replicate", - ) - - if self.training: - aux1 = torch.sigmoid(self.aux1_out(aux1)) - aux1 = F.pad( - input=aux1, - pad=(0, 0, 0, self.output_bin - aux1.size()[2]), - mode="replicate", - ) - aux2 = torch.sigmoid(self.aux2_out(aux2)) - aux2 = F.pad( - input=aux2, - pad=(0, 0, 0, self.output_bin - aux2.size()[2]), - mode="replicate", - ) - return mask * mix, aux1 * mix, aux2 * mix - else: - if aggressiveness: - mask[:, :, : aggressiveness["split_bin"]] = torch.pow( - mask[:, :, : aggressiveness["split_bin"]], - 1 + 
aggressiveness["value"] / 3, - ) - mask[:, :, aggressiveness["split_bin"] :] = torch.pow( - mask[:, :, aggressiveness["split_bin"] :], - 1 + aggressiveness["value"], - ) - - return mask * mix - - def predict(self, x_mag, aggressiveness=None): - h = self.forward(x_mag, aggressiveness) - - if self.offset > 0: - h = h[:, :, :, self.offset : -self.offset] - assert h.size()[3] > 0 - - return h diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/chardet/escsm.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/chardet/escsm.py deleted file mode 100644 index 11d4adf771f3f90bb5f1cc11043599b48e955c22..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/chardet/escsm.py +++ /dev/null @@ -1,261 +0,0 @@ -######################## BEGIN LICENSE BLOCK ######################## -# The Original Code is mozilla.org code. -# -# The Initial Developer of the Original Code is -# Netscape Communications Corporation. -# Portions created by the Initial Developer are Copyright (C) 1998 -# the Initial Developer. All Rights Reserved. -# -# Contributor(s): -# Mark Pilgrim - port to Python -# -# This library is free software; you can redistribute it and/or -# modify it under the terms of the GNU Lesser General Public -# License as published by the Free Software Foundation; either -# version 2.1 of the License, or (at your option) any later version. -# -# This library is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU -# Lesser General Public License for more details. -# -# You should have received a copy of the GNU Lesser General Public -# License along with this library; if not, write to the Free Software -# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA -# 02110-1301 USA -######################### END LICENSE BLOCK ######################### - -from .codingstatemachinedict import CodingStateMachineDict -from .enums import MachineState - -# fmt: off -HZ_CLS = ( - 1, 0, 0, 0, 0, 0, 0, 0, # 00 - 07 - 0, 0, 0, 0, 0, 0, 0, 0, # 08 - 0f - 0, 0, 0, 0, 0, 0, 0, 0, # 10 - 17 - 0, 0, 0, 1, 0, 0, 0, 0, # 18 - 1f - 0, 0, 0, 0, 0, 0, 0, 0, # 20 - 27 - 0, 0, 0, 0, 0, 0, 0, 0, # 28 - 2f - 0, 0, 0, 0, 0, 0, 0, 0, # 30 - 37 - 0, 0, 0, 0, 0, 0, 0, 0, # 38 - 3f - 0, 0, 0, 0, 0, 0, 0, 0, # 40 - 47 - 0, 0, 0, 0, 0, 0, 0, 0, # 48 - 4f - 0, 0, 0, 0, 0, 0, 0, 0, # 50 - 57 - 0, 0, 0, 0, 0, 0, 0, 0, # 58 - 5f - 0, 0, 0, 0, 0, 0, 0, 0, # 60 - 67 - 0, 0, 0, 0, 0, 0, 0, 0, # 68 - 6f - 0, 0, 0, 0, 0, 0, 0, 0, # 70 - 77 - 0, 0, 0, 4, 0, 5, 2, 0, # 78 - 7f - 1, 1, 1, 1, 1, 1, 1, 1, # 80 - 87 - 1, 1, 1, 1, 1, 1, 1, 1, # 88 - 8f - 1, 1, 1, 1, 1, 1, 1, 1, # 90 - 97 - 1, 1, 1, 1, 1, 1, 1, 1, # 98 - 9f - 1, 1, 1, 1, 1, 1, 1, 1, # a0 - a7 - 1, 1, 1, 1, 1, 1, 1, 1, # a8 - af - 1, 1, 1, 1, 1, 1, 1, 1, # b0 - b7 - 1, 1, 1, 1, 1, 1, 1, 1, # b8 - bf - 1, 1, 1, 1, 1, 1, 1, 1, # c0 - c7 - 1, 1, 1, 1, 1, 1, 1, 1, # c8 - cf - 1, 1, 1, 1, 1, 1, 1, 1, # d0 - d7 - 1, 1, 1, 1, 1, 1, 1, 1, # d8 - df - 1, 1, 1, 1, 1, 1, 1, 1, # e0 - e7 - 1, 1, 1, 1, 1, 1, 1, 1, # e8 - ef - 1, 1, 1, 1, 1, 1, 1, 1, # f0 - f7 - 1, 1, 1, 1, 1, 1, 1, 1, # f8 - ff -) - -HZ_ST = ( -MachineState.START, MachineState.ERROR, 3, MachineState.START, MachineState.START, MachineState.START, MachineState.ERROR, MachineState.ERROR, # 00-07 -MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, MachineState.ITS_ME, MachineState.ITS_ME, MachineState.ITS_ME, 
MachineState.ITS_ME, # 08-0f -MachineState.ITS_ME, MachineState.ITS_ME, MachineState.ERROR, MachineState.ERROR, MachineState.START, MachineState.START, 4, MachineState.ERROR, # 10-17 - 5, MachineState.ERROR, 6, MachineState.ERROR, 5, 5, 4, MachineState.ERROR, # 18-1f - 4, MachineState.ERROR, 4, 4, 4, MachineState.ERROR, 4, MachineState.ERROR, # 20-27 - 4, MachineState.ITS_ME, MachineState.START, MachineState.START, MachineState.START, MachineState.START, MachineState.START, MachineState.START, # 28-2f -) -# fmt: on - -HZ_CHAR_LEN_TABLE = (0, 0, 0, 0, 0, 0) - -HZ_SM_MODEL: CodingStateMachineDict = { - "class_table": HZ_CLS, - "class_factor": 6, - "state_table": HZ_ST, - "char_len_table": HZ_CHAR_LEN_TABLE, - "name": "HZ-GB-2312", - "language": "Chinese", -} - -# fmt: off -ISO2022CN_CLS = ( - 2, 0, 0, 0, 0, 0, 0, 0, # 00 - 07 - 0, 0, 0, 0, 0, 0, 0, 0, # 08 - 0f - 0, 0, 0, 0, 0, 0, 0, 0, # 10 - 17 - 0, 0, 0, 1, 0, 0, 0, 0, # 18 - 1f - 0, 0, 0, 0, 0, 0, 0, 0, # 20 - 27 - 0, 3, 0, 0, 0, 0, 0, 0, # 28 - 2f - 0, 0, 0, 0, 0, 0, 0, 0, # 30 - 37 - 0, 0, 0, 0, 0, 0, 0, 0, # 38 - 3f - 0, 0, 0, 4, 0, 0, 0, 0, # 40 - 47 - 0, 0, 0, 0, 0, 0, 0, 0, # 48 - 4f - 0, 0, 0, 0, 0, 0, 0, 0, # 50 - 57 - 0, 0, 0, 0, 0, 0, 0, 0, # 58 - 5f - 0, 0, 0, 0, 0, 0, 0, 0, # 60 - 67 - 0, 0, 0, 0, 0, 0, 0, 0, # 68 - 6f - 0, 0, 0, 0, 0, 0, 0, 0, # 70 - 77 - 0, 0, 0, 0, 0, 0, 0, 0, # 78 - 7f - 2, 2, 2, 2, 2, 2, 2, 2, # 80 - 87 - 2, 2, 2, 2, 2, 2, 2, 2, # 88 - 8f - 2, 2, 2, 2, 2, 2, 2, 2, # 90 - 97 - 2, 2, 2, 2, 2, 2, 2, 2, # 98 - 9f - 2, 2, 2, 2, 2, 2, 2, 2, # a0 - a7 - 2, 2, 2, 2, 2, 2, 2, 2, # a8 - af - 2, 2, 2, 2, 2, 2, 2, 2, # b0 - b7 - 2, 2, 2, 2, 2, 2, 2, 2, # b8 - bf - 2, 2, 2, 2, 2, 2, 2, 2, # c0 - c7 - 2, 2, 2, 2, 2, 2, 2, 2, # c8 - cf - 2, 2, 2, 2, 2, 2, 2, 2, # d0 - d7 - 2, 2, 2, 2, 2, 2, 2, 2, # d8 - df - 2, 2, 2, 2, 2, 2, 2, 2, # e0 - e7 - 2, 2, 2, 2, 2, 2, 2, 2, # e8 - ef - 2, 2, 2, 2, 2, 2, 2, 2, # f0 - f7 - 2, 2, 2, 2, 2, 2, 2, 2, # f8 - ff -) - -ISO2022CN_ST = ( - MachineState.START, 3, MachineState.ERROR, MachineState.START, MachineState.START, MachineState.START, MachineState.START, MachineState.START, # 00-07 - MachineState.START, MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, # 08-0f - MachineState.ERROR, MachineState.ERROR, MachineState.ITS_ME, MachineState.ITS_ME, MachineState.ITS_ME, MachineState.ITS_ME, MachineState.ITS_ME, MachineState.ITS_ME, # 10-17 - MachineState.ITS_ME, MachineState.ITS_ME, MachineState.ITS_ME, MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, 4, MachineState.ERROR, # 18-1f - MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, MachineState.ITS_ME, MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, # 20-27 - 5, 6, MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, # 28-2f - MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, MachineState.ITS_ME, MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, # 30-37 - MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, MachineState.ITS_ME, MachineState.ERROR, MachineState.START, # 38-3f -) -# fmt: on - -ISO2022CN_CHAR_LEN_TABLE = (0, 0, 0, 0, 0, 0, 0, 0, 0) - -ISO2022CN_SM_MODEL: CodingStateMachineDict = { - "class_table": ISO2022CN_CLS, - "class_factor": 9, - "state_table": ISO2022CN_ST, - "char_len_table": ISO2022CN_CHAR_LEN_TABLE, - "name": 
"ISO-2022-CN", - "language": "Chinese", -} - -# fmt: off -ISO2022JP_CLS = ( - 2, 0, 0, 0, 0, 0, 0, 0, # 00 - 07 - 0, 0, 0, 0, 0, 0, 2, 2, # 08 - 0f - 0, 0, 0, 0, 0, 0, 0, 0, # 10 - 17 - 0, 0, 0, 1, 0, 0, 0, 0, # 18 - 1f - 0, 0, 0, 0, 7, 0, 0, 0, # 20 - 27 - 3, 0, 0, 0, 0, 0, 0, 0, # 28 - 2f - 0, 0, 0, 0, 0, 0, 0, 0, # 30 - 37 - 0, 0, 0, 0, 0, 0, 0, 0, # 38 - 3f - 6, 0, 4, 0, 8, 0, 0, 0, # 40 - 47 - 0, 9, 5, 0, 0, 0, 0, 0, # 48 - 4f - 0, 0, 0, 0, 0, 0, 0, 0, # 50 - 57 - 0, 0, 0, 0, 0, 0, 0, 0, # 58 - 5f - 0, 0, 0, 0, 0, 0, 0, 0, # 60 - 67 - 0, 0, 0, 0, 0, 0, 0, 0, # 68 - 6f - 0, 0, 0, 0, 0, 0, 0, 0, # 70 - 77 - 0, 0, 0, 0, 0, 0, 0, 0, # 78 - 7f - 2, 2, 2, 2, 2, 2, 2, 2, # 80 - 87 - 2, 2, 2, 2, 2, 2, 2, 2, # 88 - 8f - 2, 2, 2, 2, 2, 2, 2, 2, # 90 - 97 - 2, 2, 2, 2, 2, 2, 2, 2, # 98 - 9f - 2, 2, 2, 2, 2, 2, 2, 2, # a0 - a7 - 2, 2, 2, 2, 2, 2, 2, 2, # a8 - af - 2, 2, 2, 2, 2, 2, 2, 2, # b0 - b7 - 2, 2, 2, 2, 2, 2, 2, 2, # b8 - bf - 2, 2, 2, 2, 2, 2, 2, 2, # c0 - c7 - 2, 2, 2, 2, 2, 2, 2, 2, # c8 - cf - 2, 2, 2, 2, 2, 2, 2, 2, # d0 - d7 - 2, 2, 2, 2, 2, 2, 2, 2, # d8 - df - 2, 2, 2, 2, 2, 2, 2, 2, # e0 - e7 - 2, 2, 2, 2, 2, 2, 2, 2, # e8 - ef - 2, 2, 2, 2, 2, 2, 2, 2, # f0 - f7 - 2, 2, 2, 2, 2, 2, 2, 2, # f8 - ff -) - -ISO2022JP_ST = ( - MachineState.START, 3, MachineState.ERROR, MachineState.START, MachineState.START, MachineState.START, MachineState.START, MachineState.START, # 00-07 - MachineState.START, MachineState.START, MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, # 08-0f - MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, MachineState.ITS_ME, MachineState.ITS_ME, MachineState.ITS_ME, MachineState.ITS_ME, # 10-17 - MachineState.ITS_ME, MachineState.ITS_ME, MachineState.ITS_ME, MachineState.ITS_ME, MachineState.ITS_ME, MachineState.ITS_ME, MachineState.ERROR, MachineState.ERROR, # 18-1f - MachineState.ERROR, 5, MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, 4, MachineState.ERROR, MachineState.ERROR, # 20-27 - MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, 6, MachineState.ITS_ME, MachineState.ERROR, MachineState.ITS_ME, MachineState.ERROR, # 28-2f - MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, MachineState.ITS_ME, MachineState.ITS_ME, # 30-37 - MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, MachineState.ITS_ME, MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, # 38-3f - MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, MachineState.ITS_ME, MachineState.ERROR, MachineState.START, MachineState.START, # 40-47 -) -# fmt: on - -ISO2022JP_CHAR_LEN_TABLE = (0, 0, 0, 0, 0, 0, 0, 0, 0, 0) - -ISO2022JP_SM_MODEL: CodingStateMachineDict = { - "class_table": ISO2022JP_CLS, - "class_factor": 10, - "state_table": ISO2022JP_ST, - "char_len_table": ISO2022JP_CHAR_LEN_TABLE, - "name": "ISO-2022-JP", - "language": "Japanese", -} - -# fmt: off -ISO2022KR_CLS = ( - 2, 0, 0, 0, 0, 0, 0, 0, # 00 - 07 - 0, 0, 0, 0, 0, 0, 0, 0, # 08 - 0f - 0, 0, 0, 0, 0, 0, 0, 0, # 10 - 17 - 0, 0, 0, 1, 0, 0, 0, 0, # 18 - 1f - 0, 0, 0, 0, 3, 0, 0, 0, # 20 - 27 - 0, 4, 0, 0, 0, 0, 0, 0, # 28 - 2f - 0, 0, 0, 0, 0, 0, 0, 0, # 30 - 37 - 0, 0, 0, 0, 0, 0, 0, 0, # 38 - 3f - 0, 0, 0, 5, 0, 0, 0, 0, # 40 - 47 - 0, 0, 0, 0, 0, 0, 0, 0, # 48 - 4f - 0, 0, 0, 0, 0, 0, 0, 0, # 50 - 57 - 0, 0, 0, 0, 0, 0, 0, 0, # 58 - 5f - 0, 0, 0, 0, 0, 0, 0, 0, # 60 - 67 - 0, 0, 0, 0, 
0, 0, 0, 0, # 68 - 6f - 0, 0, 0, 0, 0, 0, 0, 0, # 70 - 77 - 0, 0, 0, 0, 0, 0, 0, 0, # 78 - 7f - 2, 2, 2, 2, 2, 2, 2, 2, # 80 - 87 - 2, 2, 2, 2, 2, 2, 2, 2, # 88 - 8f - 2, 2, 2, 2, 2, 2, 2, 2, # 90 - 97 - 2, 2, 2, 2, 2, 2, 2, 2, # 98 - 9f - 2, 2, 2, 2, 2, 2, 2, 2, # a0 - a7 - 2, 2, 2, 2, 2, 2, 2, 2, # a8 - af - 2, 2, 2, 2, 2, 2, 2, 2, # b0 - b7 - 2, 2, 2, 2, 2, 2, 2, 2, # b8 - bf - 2, 2, 2, 2, 2, 2, 2, 2, # c0 - c7 - 2, 2, 2, 2, 2, 2, 2, 2, # c8 - cf - 2, 2, 2, 2, 2, 2, 2, 2, # d0 - d7 - 2, 2, 2, 2, 2, 2, 2, 2, # d8 - df - 2, 2, 2, 2, 2, 2, 2, 2, # e0 - e7 - 2, 2, 2, 2, 2, 2, 2, 2, # e8 - ef - 2, 2, 2, 2, 2, 2, 2, 2, # f0 - f7 - 2, 2, 2, 2, 2, 2, 2, 2, # f8 - ff -) - -ISO2022KR_ST = ( - MachineState.START, 3, MachineState.ERROR, MachineState.START, MachineState.START, MachineState.START, MachineState.ERROR, MachineState.ERROR, # 00-07 - MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, MachineState.ITS_ME, MachineState.ITS_ME, MachineState.ITS_ME, MachineState.ITS_ME, # 08-0f - MachineState.ITS_ME, MachineState.ITS_ME, MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, 4, MachineState.ERROR, MachineState.ERROR, # 10-17 - MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, 5, MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, # 18-1f - MachineState.ERROR, MachineState.ERROR, MachineState.ERROR, MachineState.ITS_ME, MachineState.START, MachineState.START, MachineState.START, MachineState.START, # 20-27 -) -# fmt: on - -ISO2022KR_CHAR_LEN_TABLE = (0, 0, 0, 0, 0, 0) - -ISO2022KR_SM_MODEL: CodingStateMachineDict = { - "class_table": ISO2022KR_CLS, - "class_factor": 6, - "state_table": ISO2022KR_ST, - "char_len_table": ISO2022KR_CHAR_LEN_TABLE, - "name": "ISO-2022-KR", - "language": "Korean", -} diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/pygments/formatters/svg.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/pygments/formatters/svg.py deleted file mode 100644 index 075150a4b586d668c1666513fbf90463cdbb11ab..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/pygments/formatters/svg.py +++ /dev/null @@ -1,188 +0,0 @@ -""" - pygments.formatters.svg - ~~~~~~~~~~~~~~~~~~~~~~~ - - Formatter for SVG output. - - :copyright: Copyright 2006-2022 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. -""" - -from pip._vendor.pygments.formatter import Formatter -from pip._vendor.pygments.token import Comment -from pip._vendor.pygments.util import get_bool_opt, get_int_opt - -__all__ = ['SvgFormatter'] - - -def escape_html(text): - """Escape &, <, > as well as single and double quotes for HTML.""" - return text.replace('&', '&'). \ - replace('<', '<'). \ - replace('>', '>'). \ - replace('"', '"'). \ - replace("'", ''') - - -class2style = {} - -class SvgFormatter(Formatter): - """ - Format tokens as an SVG graphics file. This formatter is still experimental. - Each line of code is a ```` element with explicit ``x`` and ``y`` - coordinates containing ```` elements with the individual token styles. - - By default, this formatter outputs a full SVG document including doctype - declaration and the ```` root element. - - .. versionadded:: 0.9 - - Additional options accepted: - - `nowrap` - Don't wrap the SVG ```` elements in ```` elements and - don't add a XML declaration and a doctype. If true, the `fontfamily` - and `fontsize` options are ignored. Defaults to ``False``. 
- - `fontfamily` - The value to give the wrapping ```` element's ``font-family`` - attribute, defaults to ``"monospace"``. - - `fontsize` - The value to give the wrapping ```` element's ``font-size`` - attribute, defaults to ``"14px"``. - - `linenos` - If ``True``, add line numbers (default: ``False``). - - `linenostart` - The line number for the first line (default: ``1``). - - `linenostep` - If set to a number n > 1, only every nth line number is printed. - - `linenowidth` - Maximum width devoted to line numbers (default: ``3*ystep``, sufficient - for up to 4-digit line numbers. Increase width for longer code blocks). - - `xoffset` - Starting offset in X direction, defaults to ``0``. - - `yoffset` - Starting offset in Y direction, defaults to the font size if it is given - in pixels, or ``20`` else. (This is necessary since text coordinates - refer to the text baseline, not the top edge.) - - `ystep` - Offset to add to the Y coordinate for each subsequent line. This should - roughly be the text size plus 5. It defaults to that value if the text - size is given in pixels, or ``25`` else. - - `spacehack` - Convert spaces in the source to `` ``, which are non-breaking - spaces. SVG provides the ``xml:space`` attribute to control how - whitespace inside tags is handled, in theory, the ``preserve`` value - could be used to keep all whitespace as-is. However, many current SVG - viewers don't obey that rule, so this option is provided as a workaround - and defaults to ``True``. - """ - name = 'SVG' - aliases = ['svg'] - filenames = ['*.svg'] - - def __init__(self, **options): - Formatter.__init__(self, **options) - self.nowrap = get_bool_opt(options, 'nowrap', False) - self.fontfamily = options.get('fontfamily', 'monospace') - self.fontsize = options.get('fontsize', '14px') - self.xoffset = get_int_opt(options, 'xoffset', 0) - fs = self.fontsize.strip() - if fs.endswith('px'): fs = fs[:-2].strip() - try: - int_fs = int(fs) - except: - int_fs = 20 - self.yoffset = get_int_opt(options, 'yoffset', int_fs) - self.ystep = get_int_opt(options, 'ystep', int_fs + 5) - self.spacehack = get_bool_opt(options, 'spacehack', True) - self.linenos = get_bool_opt(options,'linenos',False) - self.linenostart = get_int_opt(options,'linenostart',1) - self.linenostep = get_int_opt(options,'linenostep',1) - self.linenowidth = get_int_opt(options,'linenowidth', 3*self.ystep) - self._stylecache = {} - - def format_unencoded(self, tokensource, outfile): - """ - Format ``tokensource``, an iterable of ``(tokentype, tokenstring)`` - tuples and write it into ``outfile``. - - For our implementation we put all lines in their own 'line group'. 
- """ - x = self.xoffset - y = self.yoffset - if not self.nowrap: - if self.encoding: - outfile.write('\n' % - self.encoding) - else: - outfile.write('\n') - outfile.write('\n') - outfile.write('\n') - outfile.write('\n' % - (self.fontfamily, self.fontsize)) - - counter = self.linenostart - counter_step = self.linenostep - counter_style = self._get_style(Comment) - line_x = x - - if self.linenos: - if counter % counter_step == 0: - outfile.write('%s' % - (x+self.linenowidth,y,counter_style,counter)) - line_x += self.linenowidth + self.ystep - counter += 1 - - outfile.write('' % (line_x, y)) - for ttype, value in tokensource: - style = self._get_style(ttype) - tspan = style and '' or '' - tspanend = tspan and '' or '' - value = escape_html(value) - if self.spacehack: - value = value.expandtabs().replace(' ', ' ') - parts = value.split('\n') - for part in parts[:-1]: - outfile.write(tspan + part + tspanend) - y += self.ystep - outfile.write('\n') - if self.linenos and counter % counter_step == 0: - outfile.write('%s' % - (x+self.linenowidth,y,counter_style,counter)) - - counter += 1 - outfile.write('' % (line_x,y)) - outfile.write(tspan + parts[-1] + tspanend) - outfile.write('') - - if not self.nowrap: - outfile.write('\n') - - def _get_style(self, tokentype): - if tokentype in self._stylecache: - return self._stylecache[tokentype] - otokentype = tokentype - while not self.style.styles_token(tokentype): - tokentype = tokentype.parent - value = self.style.style_for_token(tokentype) - result = '' - if value['color']: - result = ' fill="#' + value['color'] + '"' - if value['bold']: - result += ' font-weight="bold"' - if value['italic']: - result += ' font-style="italic"' - self._stylecache[otokentype] = result - return result diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/urllib3/util/request.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/urllib3/util/request.py deleted file mode 100644 index 330766ef4f3403e05a6ad8ec30f25fe05fdbc199..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/urllib3/util/request.py +++ /dev/null @@ -1,137 +0,0 @@ -from __future__ import absolute_import - -from base64 import b64encode - -from ..exceptions import UnrewindableBodyError -from ..packages.six import b, integer_types - -# Pass as a value within ``headers`` to skip -# emitting some HTTP headers that are added automatically. -# The only headers that are supported are ``Accept-Encoding``, -# ``Host``, and ``User-Agent``. -SKIP_HEADER = "@@@SKIP_HEADER@@@" -SKIPPABLE_HEADERS = frozenset(["accept-encoding", "host", "user-agent"]) - -ACCEPT_ENCODING = "gzip,deflate" - -_FAILEDTELL = object() - - -def make_headers( - keep_alive=None, - accept_encoding=None, - user_agent=None, - basic_auth=None, - proxy_basic_auth=None, - disable_cache=None, -): - """ - Shortcuts for generating request headers. - - :param keep_alive: - If ``True``, adds 'connection: keep-alive' header. - - :param accept_encoding: - Can be a boolean, list, or string. - ``True`` translates to 'gzip,deflate'. - List will get joined by comma. - String will be used as provided. - - :param user_agent: - String representing the user-agent you want, such as - "python-urllib3/0.6" - - :param basic_auth: - Colon-separated username:password string for 'authorization: basic ...' - auth header. - - :param proxy_basic_auth: - Colon-separated username:password string for 'proxy-authorization: basic ...' - auth header. 
- - :param disable_cache: - If ``True``, adds 'cache-control: no-cache' header. - - Example:: - - >>> make_headers(keep_alive=True, user_agent="Batman/1.0") - {'connection': 'keep-alive', 'user-agent': 'Batman/1.0'} - >>> make_headers(accept_encoding=True) - {'accept-encoding': 'gzip,deflate'} - """ - headers = {} - if accept_encoding: - if isinstance(accept_encoding, str): - pass - elif isinstance(accept_encoding, list): - accept_encoding = ",".join(accept_encoding) - else: - accept_encoding = ACCEPT_ENCODING - headers["accept-encoding"] = accept_encoding - - if user_agent: - headers["user-agent"] = user_agent - - if keep_alive: - headers["connection"] = "keep-alive" - - if basic_auth: - headers["authorization"] = "Basic " + b64encode(b(basic_auth)).decode("utf-8") - - if proxy_basic_auth: - headers["proxy-authorization"] = "Basic " + b64encode( - b(proxy_basic_auth) - ).decode("utf-8") - - if disable_cache: - headers["cache-control"] = "no-cache" - - return headers - - -def set_file_position(body, pos): - """ - If a position is provided, move file to that point. - Otherwise, we'll attempt to record a position for future use. - """ - if pos is not None: - rewind_body(body, pos) - elif getattr(body, "tell", None) is not None: - try: - pos = body.tell() - except (IOError, OSError): - # This differentiates from None, allowing us to catch - # a failed `tell()` later when trying to rewind the body. - pos = _FAILEDTELL - - return pos - - -def rewind_body(body, body_pos): - """ - Attempt to rewind body to a certain position. - Primarily used for request redirects and retries. - - :param body: - File-like object that supports seek. - - :param int pos: - Position to seek to in file. - """ - body_seek = getattr(body, "seek", None) - if body_seek is not None and isinstance(body_pos, integer_types): - try: - body_seek(body_pos) - except (IOError, OSError): - raise UnrewindableBodyError( - "An error occurred when rewinding request body for redirect/retry." - ) - elif body_pos is _FAILEDTELL: - raise UnrewindableBodyError( - "Unable to record file position for rewinding " - "request body during a redirect/retry." - ) - else: - raise ValueError( - "body_pos must be of type integer, instead it was %s." % type(body_pos) - ) diff --git a/spaces/Boadiwaa/Recipes/openai/__init__.py b/spaces/Boadiwaa/Recipes/openai/__init__.py deleted file mode 100644 index 86aa9e2c34b9bb209b39bcd8cfcaa5417fc97c0c..0000000000000000000000000000000000000000 --- a/spaces/Boadiwaa/Recipes/openai/__init__.py +++ /dev/null @@ -1,73 +0,0 @@ -# OpenAI Python bindings. -# -# Originally forked from the MIT-licensed Stripe Python bindings. - -import os -from typing import Optional - -from openai.api_resources import ( - Answer, - Classification, - Completion, - Customer, - Edit, - Deployment, - Embedding, - Engine, - ErrorObject, - File, - FineTune, - Model, - Search, -) -from openai.error import APIError, InvalidRequestError, OpenAIError - -api_key = os.environ.get("OPENAI_API_KEY") -# Path of a file with an API key, whose contents can change. Supercedes -# `api_key` if set. The main use case is volume-mounted Kubernetes secrets, -# which are updated automatically. -api_key_path: Optional[str] = os.environ.get("OPENAI_API_KEY_PATH") - -organization = os.environ.get("OPENAI_ORGANIZATION") -api_base = os.environ.get("OPENAI_API_BASE", "https://api.openai.com/v1") -api_type = os.environ.get("OPENAI_API_TYPE", "open_ai") -api_version = "2021-11-01-preview" if api_type == "azure" else None -verify_ssl_certs = True # No effect. 
Certificates are always verified. -proxy = None -app_info = None -enable_telemetry = False # Ignored; the telemetry feature was removed. -ca_bundle_path = None # No longer used, feature was removed -debug = False -log = None # Set to either 'debug' or 'info', controls console logging - -__all__ = [ - "APIError", - "Answer", - "Classification", - "Completion", - "Customer", - "Edit", - "Deployment", - "Embedding", - "Engine", - "ErrorObject", - "File", - "FineTune", - "InvalidRequestError", - "Model", - "OpenAIError", - "Search", - "api_base", - "api_key", - "api_type", - "api_key_path", - "api_version", - "app_info", - "ca_bundle_path", - "debug", - "enable_elemetry", - "log", - "organization", - "proxy", - "verify_ssl_certs", -] diff --git a/spaces/Boadiwaa/Recipes/openai/embeddings_utils.py b/spaces/Boadiwaa/Recipes/openai/embeddings_utils.py deleted file mode 100644 index 2db5514c648734a60730e200cb50fed099b9a7e2..0000000000000000000000000000000000000000 --- a/spaces/Boadiwaa/Recipes/openai/embeddings_utils.py +++ /dev/null @@ -1,227 +0,0 @@ -import textwrap as tr -from typing import List, Optional - -import matplotlib.pyplot as plt -import numpy as np -import pandas as pd -import plotly.express as px -from scipy import spatial -from sklearn.decomposition import PCA -from sklearn.manifold import TSNE -from sklearn.metrics import average_precision_score, precision_recall_curve -from tenacity import retry, stop_after_attempt, wait_random_exponential - -import openai - - -@retry(wait=wait_random_exponential(min=1, max=20), stop=stop_after_attempt(6)) -def get_embedding(text: str, engine="text-similarity-davinci-001") -> List[float]: - - # replace newlines, which can negatively affect performance. - text = text.replace("\n", " ") - - return openai.Embedding.create(input=[text], engine=engine)["data"][0]["embedding"] - - -@retry(wait=wait_random_exponential(min=1, max=20), stop=stop_after_attempt(6)) -def get_embeddings( - list_of_text: List[str], engine="text-similarity-babbage-001" -) -> List[List[float]]: - assert len(list_of_text) < 2048, "The batch size should not be larger than 2048." - - # replace newlines, which can negatively affect performance. - list_of_text = [text.replace("\n", " ") for text in list_of_text] - - data = openai.Embedding.create(input=list_of_text, engine=engine).data - data = sorted(data, key=lambda x: x["index"]) # maintain the same order as input. - return [d["embedding"] for d in data] - - -def cosine_similarity(a, b): - return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)) - - -def plot_multiclass_precision_recall( - y_score, y_true_untransformed, class_list, classifier_name -): - """ - Precision-Recall plotting for a multiclass problem. It plots average precision-recall, per class precision recall and reference f1 contours. 
- - Code slightly modified, but heavily based on https://scikit-learn.org/stable/auto_examples/model_selection/plot_precision_recall.html - """ - n_classes = len(class_list) - y_true = pd.concat( - [(y_true_untransformed == class_list[i]) for i in range(n_classes)], axis=1 - ).values - - # For each class - precision = dict() - recall = dict() - average_precision = dict() - for i in range(n_classes): - precision[i], recall[i], _ = precision_recall_curve(y_true[:, i], y_score[:, i]) - average_precision[i] = average_precision_score(y_true[:, i], y_score[:, i]) - - # A "micro-average": quantifying score on all classes jointly - precision_micro, recall_micro, _ = precision_recall_curve( - y_true.ravel(), y_score.ravel() - ) - average_precision_micro = average_precision_score(y_true, y_score, average="micro") - print( - str(classifier_name) - + " - Average precision score over all classes: {0:0.2f}".format( - average_precision_micro - ) - ) - - # setup plot details - plt.figure(figsize=(9, 10)) - f_scores = np.linspace(0.2, 0.8, num=4) - lines = [] - labels = [] - for f_score in f_scores: - x = np.linspace(0.01, 1) - y = f_score * x / (2 * x - f_score) - (l,) = plt.plot(x[y >= 0], y[y >= 0], color="gray", alpha=0.2) - plt.annotate("f1={0:0.1f}".format(f_score), xy=(0.9, y[45] + 0.02)) - - lines.append(l) - labels.append("iso-f1 curves") - (l,) = plt.plot(recall_micro, precision_micro, color="gold", lw=2) - lines.append(l) - labels.append( - "average Precision-recall (auprc = {0:0.2f})" "".format(average_precision_micro) - ) - - for i in range(n_classes): - (l,) = plt.plot(recall[i], precision[i], lw=2) - lines.append(l) - labels.append( - "Precision-recall for class `{0}` (auprc = {1:0.2f})" - "".format(class_list[i], average_precision[i]) - ) - - fig = plt.gcf() - fig.subplots_adjust(bottom=0.25) - plt.xlim([0.0, 1.0]) - plt.ylim([0.0, 1.05]) - plt.xlabel("Recall") - plt.ylabel("Precision") - plt.title(f"{classifier_name}: Precision-Recall curve for each class") - plt.legend(lines, labels) - - -def distances_from_embeddings( - query_embedding: List[float], - embeddings: List[List[float]], - distance_metric="cosine", -) -> List[List]: - """Return the distances between a query embedding and a list of embeddings.""" - distance_metrics = { - "cosine": spatial.distance.cosine, - "L1": spatial.distance.cityblock, - "L2": spatial.distance.euclidean, - "Linf": spatial.distance.chebyshev, - } - distances = [ - distance_metrics[distance_metric](query_embedding, embedding) - for embedding in embeddings - ] - return distances - - -def indices_of_nearest_neighbors_from_distances(distances) -> np.ndarray: - """Return a list of indices of nearest neighbors from a list of distances.""" - return np.argsort(distances) - - -def pca_components_from_embeddings( - embeddings: List[List[float]], n_components=2 -) -> np.ndarray: - """Return the PCA components of a list of embeddings.""" - pca = PCA(n_components=n_components) - array_of_embeddings = np.array(embeddings) - return pca.fit_transform(array_of_embeddings) - - -def tsne_components_from_embeddings( - embeddings: List[List[float]], n_components=2, **kwargs -) -> np.ndarray: - """Returns t-SNE components of a list of embeddings.""" - # use better defaults if not specified - if "init" not in kwargs.keys(): - kwargs["init"] = "pca" - if "learning_rate" not in kwargs.keys(): - kwargs["learning_rate"] = "auto" - tsne = TSNE(n_components=n_components, **kwargs) - array_of_embeddings = np.array(embeddings) - return tsne.fit_transform(array_of_embeddings) - - -def 
chart_from_components( - components: np.ndarray, - labels: Optional[List[str]] = None, - strings: Optional[List[str]] = None, - x_title="Component 0", - y_title="Component 1", - mark_size=5, - **kwargs, -): - """Return an interactive 2D chart of embedding components.""" - empty_list = ["" for _ in components] - data = pd.DataFrame( - { - x_title: components[:, 0], - y_title: components[:, 1], - "label": labels if labels else empty_list, - "string": ["
".join(tr.wrap(string, width=30)) for string in strings] - if strings - else empty_list, - } - ) - chart = px.scatter( - data, - x=x_title, - y=y_title, - color="label" if labels else None, - symbol="label" if labels else None, - hover_data=["string"] if strings else None, - **kwargs, - ).update_traces(marker=dict(size=mark_size)) - return chart - - -def chart_from_components_3D( - components: np.ndarray, - labels: Optional[List[str]] = None, - strings: Optional[List[str]] = None, - x_title: str = "Component 0", - y_title: str = "Component 1", - z_title: str = "Compontent 2", - mark_size: int = 5, - **kwargs, -): - """Return an interactive 3D chart of embedding components.""" - empty_list = ["" for _ in components] - data = pd.DataFrame( - { - x_title: components[:, 0], - y_title: components[:, 1], - z_title: components[:, 2], - "label": labels if labels else empty_list, - "string": ["
".join(tr.wrap(string, width=30)) for string in strings] - if strings - else empty_list, - } - ) - chart = px.scatter_3d( - data, - x=x_title, - y=y_title, - z=z_title, - color="label" if labels else None, - symbol="label" if labels else None, - hover_data=["string"] if strings else None, - **kwargs, - ).update_traces(marker=dict(size=mark_size)) - return chart diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/omp/detail/reduce_intervals.h b/spaces/CVPR/LIVE/thrust/thrust/system/omp/detail/reduce_intervals.h deleted file mode 100644 index 44551e6452d7992c13412184687dbea906797aec..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/omp/detail/reduce_intervals.h +++ /dev/null @@ -1,53 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - - -/*! \file reduce_intervals.h - * \brief OpenMP implementations of reduce_intervals algorithms. - */ - -#pragma once - -#include -#include - -namespace thrust -{ -namespace system -{ -namespace omp -{ -namespace detail -{ - -template -void reduce_intervals(execution_policy &exec, - InputIterator input, - OutputIterator output, - BinaryFunction binary_op, - Decomposition decomp); - -} // end namespace detail -} // end namespace omp -} // end namespace system -} // end namespace thrust - -#include - diff --git a/spaces/CVPR/WALT/configs/_base_/models/mask_rcnn_swin_fpn.py b/spaces/CVPR/WALT/configs/_base_/models/mask_rcnn_swin_fpn.py deleted file mode 100644 index e3d42197f4646cd9ecafac2095d3f8e079f0a729..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/configs/_base_/models/mask_rcnn_swin_fpn.py +++ /dev/null @@ -1,127 +0,0 @@ -# model settings -model = dict( - type='MaskRCNN', - pretrained=None, - backbone=dict( - type='SwinTransformer', - embed_dim=96, - depths=[2, 2, 6, 2], - num_heads=[3, 6, 12, 24], - window_size=7, - mlp_ratio=4., - qkv_bias=True, - qk_scale=None, - drop_rate=0., - attn_drop_rate=0., - drop_path_rate=0.2, - ape=False, - patch_norm=True, - out_indices=(0, 1, 2, 3), - use_checkpoint=False), - neck=dict( - type='FPN', - in_channels=[96, 192, 384, 768], - out_channels=256, - num_outs=5), - rpn_head=dict( - type='RPNHead', - in_channels=256, - feat_channels=256, - anchor_generator=dict( - type='AnchorGenerator', - scales=[8], - ratios=[0.5, 1.0, 2.0], - strides=[4, 8, 16, 32, 64]), - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - target_stds=[1.0, 1.0, 1.0, 1.0]), - loss_cls=dict( - type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0), - loss_bbox=dict(type='L1Loss', loss_weight=1.0)), - roi_head=dict( - type='StandardRoIHead', - bbox_roi_extractor=dict( - type='SingleRoIExtractor', - roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0), - out_channels=256, - featmap_strides=[4, 8, 16, 32]), - bbox_head=dict( - type='Shared2FCBBoxHead', - in_channels=256, - fc_out_channels=1024, - roi_feat_size=7, - num_classes=80, - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - 
target_means=[0., 0., 0., 0.], - target_stds=[0.1, 0.1, 0.2, 0.2]), - reg_class_agnostic=False, - loss_cls=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0), - loss_bbox=dict(type='L1Loss', loss_weight=1.0)), - mask_roi_extractor=dict( - type='SingleRoIExtractor', - roi_layer=dict(type='RoIAlign', output_size=14, sampling_ratio=0), - out_channels=256, - featmap_strides=[4, 8, 16, 32]), - mask_head=dict( - type='FCNMaskHead', - num_convs=4, - in_channels=256, - conv_out_channels=256, - num_classes=80, - loss_mask=dict( - type='CrossEntropyLoss', use_mask=True, loss_weight=1.0))), - # model training and testing settings - train_cfg=dict( - rpn=dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.7, - neg_iou_thr=0.3, - min_pos_iou=0.3, - match_low_quality=True, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=256, - pos_fraction=0.5, - neg_pos_ub=-1, - add_gt_as_proposals=False), - allowed_border=-1, - pos_weight=-1, - debug=False), - rpn_proposal=dict( - nms_pre=2000, - max_per_img=1000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0), - rcnn=dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.5, - neg_iou_thr=0.5, - min_pos_iou=0.5, - match_low_quality=True, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=512, - pos_fraction=0.25, - neg_pos_ub=-1, - add_gt_as_proposals=True), - mask_size=28, - pos_weight=-1, - debug=False)), - test_cfg=dict( - rpn=dict( - nms_pre=1000, - max_per_img=1000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0), - rcnn=dict( - score_thr=0.05, - nms=dict(type='nms', iou_threshold=0.5), - max_per_img=100, - mask_thr_binary=0.5))) diff --git a/spaces/CVPR/v-doc_abstractive_mac/app.py b/spaces/CVPR/v-doc_abstractive_mac/app.py deleted file mode 100644 index 3701cbccd12758a2b6c6ec35dbe7a77ffd9be811..0000000000000000000000000000000000000000 --- a/spaces/CVPR/v-doc_abstractive_mac/app.py +++ /dev/null @@ -1,12 +0,0 @@ -import gradio as gr - -description = "Story generation with GPT-2" -title = "Generate your own story" -examples = [["Adventurer is approached by a mysterious stranger in the tavern for a new quest."]] - -interface = gr.Interface.load("huggingface/ydin0771/vdoc-demo-mac", - description=description, - examples=examples -) - -interface.launch() \ No newline at end of file diff --git a/spaces/CikeyQI/meme-api/meme_generator/memes/lim_x_0/__init__.py b/spaces/CikeyQI/meme-api/meme_generator/memes/lim_x_0/__init__.py deleted file mode 100644 index 0eafec0ed477caa4924ce8a4ee31b0f9732a522f..0000000000000000000000000000000000000000 --- a/spaces/CikeyQI/meme-api/meme_generator/memes/lim_x_0/__init__.py +++ /dev/null @@ -1,35 +0,0 @@ -from pathlib import Path -from typing import List - -from pil_utils import BuildImage - -from meme_generator import add_meme - -img_dir = Path(__file__).parent / "images" - - -def lim_x_0(images: List[BuildImage], texts, args): - img = images[0] - frame = BuildImage.open(img_dir / "0.png") - img_c = img.convert("RGBA").circle().resize((72, 72)) - img_tp = img.convert("RGBA").circle().resize((51, 51)) - frame.paste(img_tp, (948, 247), alpha=True) - # fmt: off - locs = [ - (143, 32), (155, 148), (334, 149), (275, 266), (486, 266), - (258, 383), (439, 382), (343, 539), (577, 487), (296, 717), - (535, 717), (64, 896), (340, 896), (578, 897), (210, 1038), - (644, 1039), (64, 1192), (460, 1192), (698, 1192), (1036, 141), - (1217, 141), (1243, 263), (1140, 378), (1321, 378), (929, 531), - (1325, 531), (1592, 531), (1007, 687), (1390, 
687), (1631, 686), - (1036, 840), (1209, 839), (1447, 839), (1141, 1018), (1309, 1019), - (1546, 1019), (1037, 1197), (1317, 1198), (1555, 1197), - ] - # fmt: on - for i in range(39): - x, y = locs[i] - frame.paste(img_c, (x, y), alpha=True) - return frame.save_jpg() - - -add_meme("lim_x_0", lim_x_0, min_images=1, max_images=1, keywords=["等价无穷小"]) diff --git a/spaces/CofAI/chat/g4f/Provider/Providers/hteyun.py b/spaces/CofAI/chat/g4f/Provider/Providers/hteyun.py deleted file mode 100644 index a6eba7c00331d720afb47215e818f5900d4aedcf..0000000000000000000000000000000000000000 --- a/spaces/CofAI/chat/g4f/Provider/Providers/hteyun.py +++ /dev/null @@ -1,34 +0,0 @@ -import requests -import os -import json -from ...typing import sha256, Dict, get_type_hints - -url = 'https://hteyun.com' -model = ['gpt-3.5-turbo', 'gpt-3.5-turbo-16k', 'gpt-3.5-turbo-16k-0613', 'gpt-3.5-turbo-0613'] -supports_stream = True -needs_auth = False - -def _create_completion(model: str, messages: list, stream: bool, temperature: float = 0.7, **kwargs): - headers = { - 'Content-Type': 'application/json', - 'Accept': 'application/json, text/plain, */*', - 'Accept-Language': 'ru-RU,ru;q=0.9,en-US;q=0.8,en;q=0.7,ja;q=0.6,zh-TW;q=0.5,zh;q=0.4', - 'Origin': 'https://hteyun.com', - 'Referer': 'https://hteyun.com/chat/', - } - data = { - 'messages': messages, - 'model': model, - 'systemMessage': 'You are ChatGPT, a large language model trained by OpenAI. Follow the user\'s instructions carefully. Respond using russian language.', - 'temperature': 0.7, - 'presence_penalty': 0, - } - response = requests.post(url + '/api/chat-stream', json=data, headers=headers, stream=True) - print(response.json()) - - # Извлечение текста из response - return response.json()['text'] - - -params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \ - '(%s)' % ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]]) diff --git a/spaces/Colbe/basketball/README.md b/spaces/Colbe/basketball/README.md deleted file mode 100644 index 6fa878878062ba95e7909dfd9e326687ad6b1d28..0000000000000000000000000000000000000000 --- a/spaces/Colbe/basketball/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Basketball -emoji: 📊 -colorFrom: pink -colorTo: red -sdk: gradio -sdk_version: 3.5 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Cran-May/yugangVI/model.py b/spaces/Cran-May/yugangVI/model.py deleted file mode 100644 index b885254d7b5570733acb2d0af039b9baed51bb94..0000000000000000000000000000000000000000 --- a/spaces/Cran-May/yugangVI/model.py +++ /dev/null @@ -1,34 +0,0 @@ - -from typing import Iterator - - - -model_id = 'xuqinyang/baichuan-13b-chat-ggml-int4' - -from huggingface_hub import snapshot_download,hf_hub_download -#旧 -#snapshot_download(model_id, local_dir="./",revision="7f71a8abefa7b2eede3f74ce0564abe5fbe6874a") -snapshot_download(model_id, local_dir="./",revision="b2414a0ceee68fe09c99ace44446cfc9a1c52b08") -hf_hub_download(repo_id="baichuan-inc/Baichuan-13B-Chat",local_dir="./", filename="tokenizer.model") -from llama_cpp import Llama -llm = Llama(model_path="./ggml-model-q4_0.bin", n_ctx=4096,seed=-1) - -def run(message: str, - chat_history: list[tuple[str, str]], - system_prompt: str, - max_new_tokens: int = 1024, - temperature: float = 0.3, - top_p: float = 0.85, - top_k: int = 5) -> 
Iterator[str]: - history = [] - print(chat_history) - result="" - for i in chat_history: - history.append({"role": "user", "content": i[0]}) - history.append({"role": "assistant", "content": i[1]}) - print(history) - history.append({"role": "user", "content": message}) - for response in llm.create_chat_completion(history,stop=[""],stream=True,max_tokens=-1,temperature=temperature,top_k=top_k,top_p=top_p,repeat_penalty=1.1): - if "content" in response["choices"][0]["delta"]: - result = result + response["choices"][0]["delta"]["content"] - yield result \ No newline at end of file diff --git a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/utils/registry.py b/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/utils/registry.py deleted file mode 100644 index c3204e14148fe3341307c5d24ba9154c07449511..0000000000000000000000000000000000000000 --- a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/utils/registry.py +++ /dev/null @@ -1,45 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. - - -def _register_generic(module_dict, module_name, module): - assert module_name not in module_dict - module_dict[module_name] = module - - -class Registry(dict): - ''' - A helper class for managing registering modules, it extends a dictionary - and provides a register functions. - - Eg. creeting a registry: - some_registry = Registry({"default": default_module}) - - There're two ways of registering new modules: - 1): normal way is just calling register function: - def foo(): - ... - some_registry.register("foo_module", foo) - 2): used as decorator when declaring the module: - @some_registry.register("foo_module") - @some_registry.register("foo_modeul_nickname") - def foo(): - ... - - Access of module is just like using a dictionary, eg: - f = some_registry["foo_modeul"] - ''' - def __init__(self, *args, **kwargs): - super(Registry, self).__init__(*args, **kwargs) - - def register(self, module_name, module=None): - # used as function call - if module is not None: - _register_generic(self, module_name, module) - return - - # used as decorator - def register_fn(fn): - _register_generic(self, module_name, fn) - return fn - - return register_fn diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/aiohttp/helpers.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/aiohttp/helpers.py deleted file mode 100644 index 874ab1ac076bc311d8853f08bb5fe454b650099f..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/aiohttp/helpers.py +++ /dev/null @@ -1,878 +0,0 @@ -"""Various helper functions""" - -import asyncio -import base64 -import binascii -import datetime -import functools -import inspect -import netrc -import os -import platform -import re -import sys -import time -import warnings -import weakref -from collections import namedtuple -from contextlib import suppress -from email.parser import HeaderParser -from email.utils import parsedate -from math import ceil -from pathlib import Path -from types import TracebackType -from typing import ( - Any, - Callable, - ContextManager, - Dict, - Generator, - Generic, - Iterable, - Iterator, - List, - Mapping, - Optional, - Pattern, - Set, - Tuple, - Type, - TypeVar, - Union, - cast, -) -from urllib.parse import quote -from urllib.request import getproxies, proxy_bypass - -import async_timeout -import attr -from multidict import MultiDict, MultiDictProxy -from yarl import URL - -from . 
import hdrs -from .log import client_logger, internal_logger -from .typedefs import PathLike, Protocol # noqa - -__all__ = ("BasicAuth", "ChainMapProxy", "ETag") - -IS_MACOS = platform.system() == "Darwin" -IS_WINDOWS = platform.system() == "Windows" - -PY_36 = sys.version_info >= (3, 6) -PY_37 = sys.version_info >= (3, 7) -PY_38 = sys.version_info >= (3, 8) -PY_310 = sys.version_info >= (3, 10) -PY_311 = sys.version_info >= (3, 11) - -if sys.version_info < (3, 7): - import idna_ssl - - idna_ssl.patch_match_hostname() - - def all_tasks( - loop: Optional[asyncio.AbstractEventLoop] = None, - ) -> Set["asyncio.Task[Any]"]: - tasks = list(asyncio.Task.all_tasks(loop)) - return {t for t in tasks if not t.done()} - -else: - all_tasks = asyncio.all_tasks - - -_T = TypeVar("_T") -_S = TypeVar("_S") - - -sentinel: Any = object() -NO_EXTENSIONS: bool = bool(os.environ.get("AIOHTTP_NO_EXTENSIONS")) - -# N.B. sys.flags.dev_mode is available on Python 3.7+, use getattr -# for compatibility with older versions -DEBUG: bool = getattr(sys.flags, "dev_mode", False) or ( - not sys.flags.ignore_environment and bool(os.environ.get("PYTHONASYNCIODEBUG")) -) - - -CHAR = {chr(i) for i in range(0, 128)} -CTL = {chr(i) for i in range(0, 32)} | { - chr(127), -} -SEPARATORS = { - "(", - ")", - "<", - ">", - "@", - ",", - ";", - ":", - "\\", - '"', - "/", - "[", - "]", - "?", - "=", - "{", - "}", - " ", - chr(9), -} -TOKEN = CHAR ^ CTL ^ SEPARATORS - - -class noop: - def __await__(self) -> Generator[None, None, None]: - yield - - -class BasicAuth(namedtuple("BasicAuth", ["login", "password", "encoding"])): - """Http basic authentication helper.""" - - def __new__( - cls, login: str, password: str = "", encoding: str = "latin1" - ) -> "BasicAuth": - if login is None: - raise ValueError("None is not allowed as login value") - - if password is None: - raise ValueError("None is not allowed as password value") - - if ":" in login: - raise ValueError('A ":" is not allowed in login (RFC 1945#section-11.1)') - - return super().__new__(cls, login, password, encoding) - - @classmethod - def decode(cls, auth_header: str, encoding: str = "latin1") -> "BasicAuth": - """Create a BasicAuth object from an Authorization HTTP header.""" - try: - auth_type, encoded_credentials = auth_header.split(" ", 1) - except ValueError: - raise ValueError("Could not parse authorization header.") - - if auth_type.lower() != "basic": - raise ValueError("Unknown authorization method %s" % auth_type) - - try: - decoded = base64.b64decode( - encoded_credentials.encode("ascii"), validate=True - ).decode(encoding) - except binascii.Error: - raise ValueError("Invalid base64 encoding.") - - try: - # RFC 2617 HTTP Authentication - # https://www.ietf.org/rfc/rfc2617.txt - # the colon must be present, but the username and password may be - # otherwise blank. 
- username, password = decoded.split(":", 1) - except ValueError: - raise ValueError("Invalid credentials.") - - return cls(username, password, encoding=encoding) - - @classmethod - def from_url(cls, url: URL, *, encoding: str = "latin1") -> Optional["BasicAuth"]: - """Create BasicAuth from url.""" - if not isinstance(url, URL): - raise TypeError("url should be yarl.URL instance") - if url.user is None: - return None - return cls(url.user, url.password or "", encoding=encoding) - - def encode(self) -> str: - """Encode credentials.""" - creds = (f"{self.login}:{self.password}").encode(self.encoding) - return "Basic %s" % base64.b64encode(creds).decode(self.encoding) - - -def strip_auth_from_url(url: URL) -> Tuple[URL, Optional[BasicAuth]]: - auth = BasicAuth.from_url(url) - if auth is None: - return url, None - else: - return url.with_user(None), auth - - -def netrc_from_env() -> Optional[netrc.netrc]: - """Load netrc from file. - - Attempt to load it from the path specified by the env-var - NETRC or in the default location in the user's home directory. - - Returns None if it couldn't be found or fails to parse. - """ - netrc_env = os.environ.get("NETRC") - - if netrc_env is not None: - netrc_path = Path(netrc_env) - else: - try: - home_dir = Path.home() - except RuntimeError as e: # pragma: no cover - # if pathlib can't resolve home, it may raise a RuntimeError - client_logger.debug( - "Could not resolve home directory when " - "trying to look for .netrc file: %s", - e, - ) - return None - - netrc_path = home_dir / ("_netrc" if IS_WINDOWS else ".netrc") - - try: - return netrc.netrc(str(netrc_path)) - except netrc.NetrcParseError as e: - client_logger.warning("Could not parse .netrc file: %s", e) - except OSError as e: - # we couldn't read the file (doesn't exist, permissions, etc.) 
- if netrc_env or netrc_path.is_file(): - # only warn if the environment wanted us to load it, - # or it appears like the default file does actually exist - client_logger.warning("Could not read .netrc file: %s", e) - - return None - - -@attr.s(auto_attribs=True, frozen=True, slots=True) -class ProxyInfo: - proxy: URL - proxy_auth: Optional[BasicAuth] - - -def proxies_from_env() -> Dict[str, ProxyInfo]: - proxy_urls = { - k: URL(v) - for k, v in getproxies().items() - if k in ("http", "https", "ws", "wss") - } - netrc_obj = netrc_from_env() - stripped = {k: strip_auth_from_url(v) for k, v in proxy_urls.items()} - ret = {} - for proto, val in stripped.items(): - proxy, auth = val - if proxy.scheme in ("https", "wss"): - client_logger.warning( - "%s proxies %s are not supported, ignoring", proxy.scheme.upper(), proxy - ) - continue - if netrc_obj and auth is None: - auth_from_netrc = None - if proxy.host is not None: - auth_from_netrc = netrc_obj.authenticators(proxy.host) - if auth_from_netrc is not None: - # auth_from_netrc is a (`user`, `account`, `password`) tuple, - # `user` and `account` both can be username, - # if `user` is None, use `account` - *logins, password = auth_from_netrc - login = logins[0] if logins[0] else logins[-1] - auth = BasicAuth(cast(str, login), cast(str, password)) - ret[proto] = ProxyInfo(proxy, auth) - return ret - - -def current_task( - loop: Optional[asyncio.AbstractEventLoop] = None, -) -> "Optional[asyncio.Task[Any]]": - if sys.version_info >= (3, 7): - return asyncio.current_task(loop=loop) - else: - return asyncio.Task.current_task(loop=loop) - - -def get_running_loop( - loop: Optional[asyncio.AbstractEventLoop] = None, -) -> asyncio.AbstractEventLoop: - if loop is None: - loop = asyncio.get_event_loop() - if not loop.is_running(): - warnings.warn( - "The object should be created within an async function", - DeprecationWarning, - stacklevel=3, - ) - if loop.get_debug(): - internal_logger.warning( - "The object should be created within an async function", stack_info=True - ) - return loop - - -def isasyncgenfunction(obj: Any) -> bool: - func = getattr(inspect, "isasyncgenfunction", None) - if func is not None: - return func(obj) # type: ignore[no-any-return] - else: - return False - - -def get_env_proxy_for_url(url: URL) -> Tuple[URL, Optional[BasicAuth]]: - """Get a permitted proxy for the given URL from the env.""" - if url.host is not None and proxy_bypass(url.host): - raise LookupError(f"Proxying is disallowed for `{url.host!r}`") - - proxies_in_env = proxies_from_env() - try: - proxy_info = proxies_in_env[url.scheme] - except KeyError: - raise LookupError(f"No proxies found for `{url!s}` in the env") - else: - return proxy_info.proxy, proxy_info.proxy_auth - - -@attr.s(auto_attribs=True, frozen=True, slots=True) -class MimeType: - type: str - subtype: str - suffix: str - parameters: "MultiDictProxy[str]" - - -@functools.lru_cache(maxsize=56) -def parse_mimetype(mimetype: str) -> MimeType: - """Parses a MIME type into its components. - - mimetype is a MIME type string. - - Returns a MimeType object. 
- - Example: - - >>> parse_mimetype('text/html; charset=utf-8') - MimeType(type='text', subtype='html', suffix='', - parameters={'charset': 'utf-8'}) - - """ - if not mimetype: - return MimeType( - type="", subtype="", suffix="", parameters=MultiDictProxy(MultiDict()) - ) - - parts = mimetype.split(";") - params: MultiDict[str] = MultiDict() - for item in parts[1:]: - if not item: - continue - key, value = cast( - Tuple[str, str], item.split("=", 1) if "=" in item else (item, "") - ) - params.add(key.lower().strip(), value.strip(' "')) - - fulltype = parts[0].strip().lower() - if fulltype == "*": - fulltype = "*/*" - - mtype, stype = ( - cast(Tuple[str, str], fulltype.split("/", 1)) - if "/" in fulltype - else (fulltype, "") - ) - stype, suffix = ( - cast(Tuple[str, str], stype.split("+", 1)) if "+" in stype else (stype, "") - ) - - return MimeType( - type=mtype, subtype=stype, suffix=suffix, parameters=MultiDictProxy(params) - ) - - -def guess_filename(obj: Any, default: Optional[str] = None) -> Optional[str]: - name = getattr(obj, "name", None) - if name and isinstance(name, str) and name[0] != "<" and name[-1] != ">": - return Path(name).name - return default - - -not_qtext_re = re.compile(r"[^\041\043-\133\135-\176]") -QCONTENT = {chr(i) for i in range(0x20, 0x7F)} | {"\t"} - - -def quoted_string(content: str) -> str: - """Return 7-bit content as quoted-string. - - Format content into a quoted-string as defined in RFC5322 for - Internet Message Format. Notice that this is not the 8-bit HTTP - format, but the 7-bit email format. Content must be in usascii or - a ValueError is raised. - """ - if not (QCONTENT > set(content)): - raise ValueError(f"bad content for quoted-string {content!r}") - return not_qtext_re.sub(lambda x: "\\" + x.group(0), content) - - -def content_disposition_header( - disptype: str, quote_fields: bool = True, _charset: str = "utf-8", **params: str -) -> str: - """Sets ``Content-Disposition`` header for MIME. - - This is the MIME payload Content-Disposition header from RFC 2183 - and RFC 7579 section 4.2, not the HTTP Content-Disposition from - RFC 6266. - - disptype is a disposition type: inline, attachment, form-data. - Should be valid extension token (see RFC 2183) - - quote_fields performs value quoting to 7-bit MIME headers - according to RFC 7578. Set to quote_fields to False if recipient - can take 8-bit file names and field values. - - _charset specifies the charset to use when quote_fields is True. - - params is a dict with disposition params. 
- """ - if not disptype or not (TOKEN > set(disptype)): - raise ValueError("bad content disposition type {!r}" "".format(disptype)) - - value = disptype - if params: - lparams = [] - for key, val in params.items(): - if not key or not (TOKEN > set(key)): - raise ValueError( - "bad content disposition parameter" " {!r}={!r}".format(key, val) - ) - if quote_fields: - if key.lower() == "filename": - qval = quote(val, "", encoding=_charset) - lparams.append((key, '"%s"' % qval)) - else: - try: - qval = quoted_string(val) - except ValueError: - qval = "".join( - (_charset, "''", quote(val, "", encoding=_charset)) - ) - lparams.append((key + "*", qval)) - else: - lparams.append((key, '"%s"' % qval)) - else: - qval = val.replace("\\", "\\\\").replace('"', '\\"') - lparams.append((key, '"%s"' % qval)) - sparams = "; ".join("=".join(pair) for pair in lparams) - value = "; ".join((value, sparams)) - return value - - -class _TSelf(Protocol, Generic[_T]): - _cache: Dict[str, _T] - - -class reify(Generic[_T]): - """Use as a class method decorator. - - It operates almost exactly like - the Python `@property` decorator, but it puts the result of the - method it decorates into the instance dict after the first call, - effectively replacing the function it decorates with an instance - variable. It is, in Python parlance, a data descriptor. - """ - - def __init__(self, wrapped: Callable[..., _T]) -> None: - self.wrapped = wrapped - self.__doc__ = wrapped.__doc__ - self.name = wrapped.__name__ - - def __get__(self, inst: _TSelf[_T], owner: Optional[Type[Any]] = None) -> _T: - try: - try: - return inst._cache[self.name] - except KeyError: - val = self.wrapped(inst) - inst._cache[self.name] = val - return val - except AttributeError: - if inst is None: - return self - raise - - def __set__(self, inst: _TSelf[_T], value: _T) -> None: - raise AttributeError("reified property is read-only") - - -reify_py = reify - -try: - from ._helpers import reify as reify_c - - if not NO_EXTENSIONS: - reify = reify_c # type: ignore[misc,assignment] -except ImportError: - pass - -_ipv4_pattern = ( - r"^(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}" - r"(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)$" -) -_ipv6_pattern = ( - r"^(?:(?:(?:[A-F0-9]{1,4}:){6}|(?=(?:[A-F0-9]{0,4}:){0,6}" - r"(?:[0-9]{1,3}\.){3}[0-9]{1,3}$)(([0-9A-F]{1,4}:){0,5}|:)" - r"((:[0-9A-F]{1,4}){1,5}:|:)|::(?:[A-F0-9]{1,4}:){5})" - r"(?:(?:25[0-5]|2[0-4][0-9]|1[0-9][0-9]|[1-9]?[0-9])\.){3}" - r"(?:25[0-5]|2[0-4][0-9]|1[0-9][0-9]|[1-9]?[0-9])|(?:[A-F0-9]{1,4}:){7}" - r"[A-F0-9]{1,4}|(?=(?:[A-F0-9]{0,4}:){0,7}[A-F0-9]{0,4}$)" - r"(([0-9A-F]{1,4}:){1,7}|:)((:[0-9A-F]{1,4}){1,7}|:)|(?:[A-F0-9]{1,4}:){7}" - r":|:(:[A-F0-9]{1,4}){7})$" -) -_ipv4_regex = re.compile(_ipv4_pattern) -_ipv6_regex = re.compile(_ipv6_pattern, flags=re.IGNORECASE) -_ipv4_regexb = re.compile(_ipv4_pattern.encode("ascii")) -_ipv6_regexb = re.compile(_ipv6_pattern.encode("ascii"), flags=re.IGNORECASE) - - -def _is_ip_address( - regex: Pattern[str], regexb: Pattern[bytes], host: Optional[Union[str, bytes]] -) -> bool: - if host is None: - return False - if isinstance(host, str): - return bool(regex.match(host)) - elif isinstance(host, (bytes, bytearray, memoryview)): - return bool(regexb.match(host)) - else: - raise TypeError(f"{host} [{type(host)}] is not a str or bytes") - - -is_ipv4_address = functools.partial(_is_ip_address, _ipv4_regex, _ipv4_regexb) -is_ipv6_address = functools.partial(_is_ip_address, _ipv6_regex, _ipv6_regexb) - - -def is_ip_address(host: Optional[Union[str, bytes, 
bytearray, memoryview]]) -> bool: - return is_ipv4_address(host) or is_ipv6_address(host) - - -def next_whole_second() -> datetime.datetime: - """Return current time rounded up to the next whole second.""" - return datetime.datetime.now(datetime.timezone.utc).replace( - microsecond=0 - ) + datetime.timedelta(seconds=0) - - -_cached_current_datetime: Optional[int] = None -_cached_formatted_datetime = "" - - -def rfc822_formatted_time() -> str: - global _cached_current_datetime - global _cached_formatted_datetime - - now = int(time.time()) - if now != _cached_current_datetime: - # Weekday and month names for HTTP date/time formatting; - # always English! - # Tuples are constants stored in codeobject! - _weekdayname = ("Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun") - _monthname = ( - "", # Dummy so we can use 1-based month numbers - "Jan", - "Feb", - "Mar", - "Apr", - "May", - "Jun", - "Jul", - "Aug", - "Sep", - "Oct", - "Nov", - "Dec", - ) - - year, month, day, hh, mm, ss, wd, *tail = time.gmtime(now) - _cached_formatted_datetime = "%s, %02d %3s %4d %02d:%02d:%02d GMT" % ( - _weekdayname[wd], - day, - _monthname[month], - year, - hh, - mm, - ss, - ) - _cached_current_datetime = now - return _cached_formatted_datetime - - -def _weakref_handle(info: "Tuple[weakref.ref[object], str]") -> None: - ref, name = info - ob = ref() - if ob is not None: - with suppress(Exception): - getattr(ob, name)() - - -def weakref_handle( - ob: object, name: str, timeout: float, loop: asyncio.AbstractEventLoop -) -> Optional[asyncio.TimerHandle]: - if timeout is not None and timeout > 0: - when = loop.time() + timeout - if timeout >= 5: - when = ceil(when) - - return loop.call_at(when, _weakref_handle, (weakref.ref(ob), name)) - return None - - -def call_later( - cb: Callable[[], Any], timeout: float, loop: asyncio.AbstractEventLoop -) -> Optional[asyncio.TimerHandle]: - if timeout is not None and timeout > 0: - when = loop.time() + timeout - if timeout > 5: - when = ceil(when) - return loop.call_at(when, cb) - return None - - -class TimeoutHandle: - """Timeout handle""" - - def __init__( - self, loop: asyncio.AbstractEventLoop, timeout: Optional[float] - ) -> None: - self._timeout = timeout - self._loop = loop - self._callbacks: List[ - Tuple[Callable[..., None], Tuple[Any, ...], Dict[str, Any]] - ] = [] - - def register( - self, callback: Callable[..., None], *args: Any, **kwargs: Any - ) -> None: - self._callbacks.append((callback, args, kwargs)) - - def close(self) -> None: - self._callbacks.clear() - - def start(self) -> Optional[asyncio.Handle]: - timeout = self._timeout - if timeout is not None and timeout > 0: - when = self._loop.time() + timeout - if timeout >= 5: - when = ceil(when) - return self._loop.call_at(when, self.__call__) - else: - return None - - def timer(self) -> "BaseTimerContext": - if self._timeout is not None and self._timeout > 0: - timer = TimerContext(self._loop) - self.register(timer.timeout) - return timer - else: - return TimerNoop() - - def __call__(self) -> None: - for cb, args, kwargs in self._callbacks: - with suppress(Exception): - cb(*args, **kwargs) - - self._callbacks.clear() - - -class BaseTimerContext(ContextManager["BaseTimerContext"]): - pass - - -class TimerNoop(BaseTimerContext): - def __enter__(self) -> BaseTimerContext: - return self - - def __exit__( - self, - exc_type: Optional[Type[BaseException]], - exc_val: Optional[BaseException], - exc_tb: Optional[TracebackType], - ) -> None: - return - - -class TimerContext(BaseTimerContext): - """Low resolution timeout 
context manager""" - - def __init__(self, loop: asyncio.AbstractEventLoop) -> None: - self._loop = loop - self._tasks: List[asyncio.Task[Any]] = [] - self._cancelled = False - - def __enter__(self) -> BaseTimerContext: - task = current_task(loop=self._loop) - - if task is None: - raise RuntimeError( - "Timeout context manager should be used " "inside a task" - ) - - if self._cancelled: - raise asyncio.TimeoutError from None - - self._tasks.append(task) - return self - - def __exit__( - self, - exc_type: Optional[Type[BaseException]], - exc_val: Optional[BaseException], - exc_tb: Optional[TracebackType], - ) -> Optional[bool]: - if self._tasks: - self._tasks.pop() - - if exc_type is asyncio.CancelledError and self._cancelled: - raise asyncio.TimeoutError from None - return None - - def timeout(self) -> None: - if not self._cancelled: - for task in set(self._tasks): - task.cancel() - - self._cancelled = True - - -def ceil_timeout(delay: Optional[float]) -> async_timeout.Timeout: - if delay is None or delay <= 0: - return async_timeout.timeout(None) - - loop = get_running_loop() - now = loop.time() - when = now + delay - if delay > 5: - when = ceil(when) - return async_timeout.timeout_at(when) - - -class HeadersMixin: - - ATTRS = frozenset(["_content_type", "_content_dict", "_stored_content_type"]) - - _content_type: Optional[str] = None - _content_dict: Optional[Dict[str, str]] = None - _stored_content_type = sentinel - - def _parse_content_type(self, raw: str) -> None: - self._stored_content_type = raw - if raw is None: - # default value according to RFC 2616 - self._content_type = "application/octet-stream" - self._content_dict = {} - else: - msg = HeaderParser().parsestr("Content-Type: " + raw) - self._content_type = msg.get_content_type() - params = msg.get_params() - self._content_dict = dict(params[1:]) # First element is content type again - - @property - def content_type(self) -> str: - """The value of content part for Content-Type HTTP header.""" - raw = self._headers.get(hdrs.CONTENT_TYPE) # type: ignore[attr-defined] - if self._stored_content_type != raw: - self._parse_content_type(raw) - return self._content_type # type: ignore[return-value] - - @property - def charset(self) -> Optional[str]: - """The value of charset part for Content-Type HTTP header.""" - raw = self._headers.get(hdrs.CONTENT_TYPE) # type: ignore[attr-defined] - if self._stored_content_type != raw: - self._parse_content_type(raw) - return self._content_dict.get("charset") # type: ignore[union-attr] - - @property - def content_length(self) -> Optional[int]: - """The value of Content-Length HTTP header.""" - content_length = self._headers.get( # type: ignore[attr-defined] - hdrs.CONTENT_LENGTH - ) - - if content_length is not None: - return int(content_length) - else: - return None - - -def set_result(fut: "asyncio.Future[_T]", result: _T) -> None: - if not fut.done(): - fut.set_result(result) - - -def set_exception(fut: "asyncio.Future[_T]", exc: BaseException) -> None: - if not fut.done(): - fut.set_exception(exc) - - -class ChainMapProxy(Mapping[str, Any]): - __slots__ = ("_maps",) - - def __init__(self, maps: Iterable[Mapping[str, Any]]) -> None: - self._maps = tuple(maps) - - def __init_subclass__(cls) -> None: - raise TypeError( - "Inheritance class {} from ChainMapProxy " - "is forbidden".format(cls.__name__) - ) - - def __getitem__(self, key: str) -> Any: - for mapping in self._maps: - try: - return mapping[key] - except KeyError: - pass - raise KeyError(key) - - def get(self, key: str, default: Any = 
None) -> Any: - return self[key] if key in self else default - - def __len__(self) -> int: - # reuses stored hash values if possible - return len(set().union(*self._maps)) # type: ignore[arg-type] - - def __iter__(self) -> Iterator[str]: - d: Dict[str, Any] = {} - for mapping in reversed(self._maps): - # reuses stored hash values if possible - d.update(mapping) - return iter(d) - - def __contains__(self, key: object) -> bool: - return any(key in m for m in self._maps) - - def __bool__(self) -> bool: - return any(self._maps) - - def __repr__(self) -> str: - content = ", ".join(map(repr, self._maps)) - return f"ChainMapProxy({content})" - - -# https://tools.ietf.org/html/rfc7232#section-2.3 -_ETAGC = r"[!#-}\x80-\xff]+" -_ETAGC_RE = re.compile(_ETAGC) -_QUOTED_ETAG = rf'(W/)?"({_ETAGC})"' -QUOTED_ETAG_RE = re.compile(_QUOTED_ETAG) -LIST_QUOTED_ETAG_RE = re.compile(rf"({_QUOTED_ETAG})(?:\s*,\s*|$)|(.)") - -ETAG_ANY = "*" - - -@attr.s(auto_attribs=True, frozen=True, slots=True) -class ETag: - value: str - is_weak: bool = False - - -def validate_etag_value(value: str) -> None: - if value != ETAG_ANY and not _ETAGC_RE.fullmatch(value): - raise ValueError( - f"Value {value!r} is not a valid etag. Maybe it contains '\"'?" - ) - - -def parse_http_date(date_str: Optional[str]) -> Optional[datetime.datetime]: - """Process a date string, return a datetime object""" - if date_str is not None: - timetuple = parsedate(date_str) - if timetuple is not None: - with suppress(ValueError): - return datetime.datetime(*timetuple[:6], tzinfo=datetime.timezone.utc) - return None diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/annotated_types/__init__.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/annotated_types/__init__.py deleted file mode 100644 index 644db6f3fa037c2114a31dd461d432f5c06dc44f..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/annotated_types/__init__.py +++ /dev/null @@ -1,319 +0,0 @@ -import sys -from dataclasses import dataclass -from datetime import timezone -from typing import TYPE_CHECKING, Any, Callable, Iterator, Optional, TypeVar, Union - -if sys.version_info < (3, 8): - from typing_extensions import Protocol, runtime_checkable -else: - from typing import Protocol, runtime_checkable - -if sys.version_info < (3, 9): - from typing_extensions import Annotated, Literal -else: - from typing import Annotated, Literal - -if sys.version_info < (3, 10): - EllipsisType = type(Ellipsis) - KW_ONLY = {} - SLOTS = {} -else: - from types import EllipsisType - - KW_ONLY = {"kw_only": True} - SLOTS = {"slots": True} - - -__all__ = ( - 'BaseMetadata', - 'GroupedMetadata', - 'Gt', - 'Ge', - 'Lt', - 'Le', - 'Interval', - 'MultipleOf', - 'MinLen', - 'MaxLen', - 'Len', - 'Timezone', - 'Predicate', - 'LowerCase', - 'UpperCase', - 'IsDigits', - '__version__', -) - -__version__ = '0.5.0' - - -T = TypeVar('T') - - -# arguments that start with __ are considered -# positional only -# see https://peps.python.org/pep-0484/#positional-only-arguments - - -class SupportsGt(Protocol): - def __gt__(self: T, __other: T) -> bool: - ... - - -class SupportsGe(Protocol): - def __ge__(self: T, __other: T) -> bool: - ... - - -class SupportsLt(Protocol): - def __lt__(self: T, __other: T) -> bool: - ... - - -class SupportsLe(Protocol): - def __le__(self: T, __other: T) -> bool: - ... - - -class SupportsMod(Protocol): - def __mod__(self: T, __other: T) -> T: - ... 
- - -class SupportsDiv(Protocol): - def __div__(self: T, __other: T) -> T: - ... - - -class BaseMetadata: - """Base class for all metadata. - - This exists mainly so that implementers - can do `isinstance(..., BaseMetadata)` while traversing field annotations. - """ - - __slots__ = () - - -@dataclass(frozen=True, **SLOTS) -class Gt(BaseMetadata): - """Gt(gt=x) implies that the value must be greater than x. - - It can be used with any type that supports the ``>`` operator, - including numbers, dates and times, strings, sets, and so on. - """ - - gt: SupportsGt - - -@dataclass(frozen=True, **SLOTS) -class Ge(BaseMetadata): - """Ge(ge=x) implies that the value must be greater than or equal to x. - - It can be used with any type that supports the ``>=`` operator, - including numbers, dates and times, strings, sets, and so on. - """ - - ge: SupportsGe - - -@dataclass(frozen=True, **SLOTS) -class Lt(BaseMetadata): - """Lt(lt=x) implies that the value must be less than x. - - It can be used with any type that supports the ``<`` operator, - including numbers, dates and times, strings, sets, and so on. - """ - - lt: SupportsLt - - -@dataclass(frozen=True, **SLOTS) -class Le(BaseMetadata): - """Le(le=x) implies that the value must be less than or equal to x. - - It can be used with any type that supports the ``<=`` operator, - including numbers, dates and times, strings, sets, and so on. - """ - - le: SupportsLe - - -@runtime_checkable -class GroupedMetadata(Protocol): - """A grouping of multiple BaseMetadata objects. - - `GroupedMetadata` on its own is not metadata and has no meaning. - All it the the constraint and metadata should be fully expressable - in terms of the `BaseMetadata`'s returned by `GroupedMetadata.__iter__()`. - - Concrete implementations should override `GroupedMetadata.__iter__()` - to add their own metadata. - For example: - - >>> @dataclass - >>> class Field(GroupedMetadata): - >>> gt: float | None = None - >>> description: str | None = None - ... - >>> def __iter__(self) -> Iterable[BaseMetadata]: - >>> if self.gt is not None: - >>> yield Gt(self.gt) - >>> if self.description is not None: - >>> yield Description(self.gt) - - Also see the implementation of `Interval` below for an example. - - Parsers should recognize this and unpack it so that it can be used - both with and without unpacking: - - - `Annotated[int, Field(...)]` (parser must unpack Field) - - `Annotated[int, *Field(...)]` (PEP-646) - """ # noqa: trailing-whitespace - - @property - def __is_annotated_types_grouped_metadata__(self) -> Literal[True]: - return True - - def __iter__(self) -> Iterator[BaseMetadata]: - ... - - if not TYPE_CHECKING: - __slots__ = () # allow subclasses to use slots - - def __init_subclass__(cls, *args: Any, **kwargs: Any) -> None: - # Basic ABC like functionality without the complexity of an ABC - super().__init_subclass__(*args, **kwargs) - if cls.__iter__ is GroupedMetadata.__iter__: - raise TypeError("Can't subclass GroupedMetadata without implementing __iter__") - - def __iter__(self) -> Iterator[BaseMetadata]: # noqa: F811 - raise NotImplementedError # more helpful than "None has no attribute..." type errors - - -@dataclass(frozen=True, **KW_ONLY, **SLOTS) -class Interval(GroupedMetadata): - """Interval can express inclusive or exclusive bounds with a single object. - - It accepts keyword arguments ``gt``, ``ge``, ``lt``, and/or ``le``, which - are interpreted the same way as the single-bound constraints. 
- """ - - gt: Union[SupportsGt, None] = None - ge: Union[SupportsGe, None] = None - lt: Union[SupportsLt, None] = None - le: Union[SupportsLe, None] = None - - def __iter__(self) -> Iterator[BaseMetadata]: - """Unpack an Interval into zero or more single-bounds.""" - if self.gt is not None: - yield Gt(self.gt) - if self.ge is not None: - yield Ge(self.ge) - if self.lt is not None: - yield Lt(self.lt) - if self.le is not None: - yield Le(self.le) - - -@dataclass(frozen=True, **SLOTS) -class MultipleOf(BaseMetadata): - """MultipleOf(multiple_of=x) might be interpreted in two ways: - - 1. Python semantics, implying ``value % multiple_of == 0``, or - 2. JSONschema semantics, where ``int(value / multiple_of) == value / multiple_of`` - - We encourage users to be aware of these two common interpretations, - and libraries to carefully document which they implement. - """ - - multiple_of: Union[SupportsDiv, SupportsMod] - - -@dataclass(frozen=True, **SLOTS) -class MinLen(BaseMetadata): - """ - MinLen() implies minimum inclusive length, - e.g. ``len(value) >= min_length``. - """ - - min_length: Annotated[int, Ge(0)] - - -@dataclass(frozen=True, **SLOTS) -class MaxLen(BaseMetadata): - """ - MaxLen() implies maximum inclusive length, - e.g. ``len(value) <= max_length``. - """ - - max_length: Annotated[int, Ge(0)] - - -@dataclass(frozen=True, **SLOTS) -class Len(GroupedMetadata): - """ - Len() implies that ``min_length <= len(value) <= max_length``. - - Upper bound may be omitted or ``None`` to indicate no upper length bound. - """ - - min_length: Annotated[int, Ge(0)] = 0 - max_length: Optional[Annotated[int, Ge(0)]] = None - - def __iter__(self) -> Iterator[BaseMetadata]: - """Unpack a Len into zone or more single-bounds.""" - if self.min_length > 0: - yield MinLen(self.min_length) - if self.max_length is not None: - yield MaxLen(self.max_length) - - -@dataclass(frozen=True, **SLOTS) -class Timezone(BaseMetadata): - """Timezone(tz=...) requires a datetime to be aware (or ``tz=None``, naive). - - ``Annotated[datetime, Timezone(None)]`` must be a naive datetime. - ``Timezone[...]`` (the ellipsis literal) expresses that the datetime must be - tz-aware but any timezone is allowed. - - You may also pass a specific timezone string or timezone object such as - ``Timezone(timezone.utc)`` or ``Timezone("Africa/Abidjan")`` to express that - you only allow a specific timezone, though we note that this is often - a symptom of poor design. - """ - - tz: Union[str, timezone, EllipsisType, None] - - -@dataclass(frozen=True, **SLOTS) -class Predicate(BaseMetadata): - """``Predicate(func: Callable)`` implies `func(value)` is truthy for valid values. - - Users should prefer statically inspectable metadata, but if you need the full - power and flexibility of arbitrary runtime predicates... here it is. - - We provide a few predefined predicates for common string constraints: - ``IsLower = Predicate(str.islower)``, ``IsUpper = Predicate(str.isupper)``, and - ``IsDigit = Predicate(str.isdigit)``. Users are encouraged to use methods which - can be given special handling, and avoid indirection like ``lambda s: s.lower()``. - - Some libraries might have special logic to handle certain predicates, e.g. by - checking for `str.isdigit` and using its presence to both call custom logic to - enforce digit-only strings, and customise some generated external schema. - - We do not specify what behaviour should be expected for predicates that raise - an exception. 
For example `Annotated[int, Predicate(str.isdigit)]` might silently - skip invalid constraints, or statically raise an error; or it might try calling it - and then propogate or discard the resulting exception. - """ - - func: Callable[[Any], bool] - - -StrType = TypeVar("StrType", bound=str) - -LowerCase = Annotated[StrType, Predicate(str.islower)] -UpperCase = Annotated[StrType, Predicate(str.isupper)] -IsDigits = Annotated[StrType, Predicate(str.isdigit)] -IsAscii = Annotated[StrType, Predicate(str.isascii)] diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/feaLib/variableScalar.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/feaLib/variableScalar.py deleted file mode 100644 index c97b4354298d7c933fa812084a71a4b6c1ac32b8..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/feaLib/variableScalar.py +++ /dev/null @@ -1,112 +0,0 @@ -from fontTools.varLib.models import VariationModel, normalizeValue, piecewiseLinearMap - - -def Location(loc): - return tuple(sorted(loc.items())) - - -class VariableScalar: - """A scalar with different values at different points in the designspace.""" - - def __init__(self, location_value={}): - self.values = {} - self.axes = {} - for location, value in location_value.items(): - self.add_value(location, value) - - def __repr__(self): - items = [] - for location, value in self.values.items(): - loc = ",".join(["%s=%i" % (ax, loc) for ax, loc in location]) - items.append("%s:%i" % (loc, value)) - return "(" + (" ".join(items)) + ")" - - @property - def does_vary(self): - values = list(self.values.values()) - return any(v != values[0] for v in values[1:]) - - @property - def axes_dict(self): - if not self.axes: - raise ValueError( - ".axes must be defined on variable scalar before interpolating" - ) - return {ax.axisTag: ax for ax in self.axes} - - def _normalized_location(self, location): - location = self.fix_location(location) - normalized_location = {} - for axtag in location.keys(): - if axtag not in self.axes_dict: - raise ValueError("Unknown axis %s in %s" % (axtag, location)) - axis = self.axes_dict[axtag] - normalized_location[axtag] = normalizeValue( - location[axtag], (axis.minValue, axis.defaultValue, axis.maxValue) - ) - - return Location(normalized_location) - - def fix_location(self, location): - location = dict(location) - for tag, axis in self.axes_dict.items(): - if tag not in location: - location[tag] = axis.defaultValue - return location - - def add_value(self, location, value): - if self.axes: - location = self.fix_location(location) - - self.values[Location(location)] = value - - def fix_all_locations(self): - self.values = { - Location(self.fix_location(l)): v for l, v in self.values.items() - } - - @property - def default(self): - self.fix_all_locations() - key = Location({ax.axisTag: ax.defaultValue for ax in self.axes}) - if key not in self.values: - raise ValueError("Default value could not be found") - # I *guess* we could interpolate one, but I don't know how. 
- return self.values[key] - - def value_at_location(self, location, model_cache=None, avar=None): - loc = location - if loc in self.values.keys(): - return self.values[loc] - values = list(self.values.values()) - return self.model(model_cache, avar).interpolateFromMasters(loc, values) - - def model(self, model_cache=None, avar=None): - if model_cache is not None: - key = tuple(self.values.keys()) - if key in model_cache: - return model_cache[key] - locations = [dict(self._normalized_location(k)) for k in self.values.keys()] - if avar is not None: - mapping = avar.segments - locations = [ - { - k: piecewiseLinearMap(v, mapping[k]) if k in mapping else v - for k, v in location.items() - } - for location in locations - ] - m = VariationModel(locations) - if model_cache is not None: - model_cache[key] = m - return m - - def get_deltas_and_supports(self, model_cache=None, avar=None): - values = list(self.values.values()) - return self.model(model_cache, avar).getDeltasAndSupports(values) - - def add_to_variation_store(self, store_builder, model_cache=None, avar=None): - deltas, supports = self.get_deltas_and_supports(model_cache, avar) - store_builder.setSupports(supports) - index = store_builder.storeDeltas(deltas) - return int(self.default), index diff --git a/spaces/DaFujaTyping/hf-Chat-ui/src/lib/stores/errors.ts b/spaces/DaFujaTyping/hf-Chat-ui/src/lib/stores/errors.ts deleted file mode 100644 index c7dd124ff03c1845237213b6c22ec7afefcd18e8..0000000000000000000000000000000000000000 --- a/spaces/DaFujaTyping/hf-Chat-ui/src/lib/stores/errors.ts +++ /dev/null @@ -1,7 +0,0 @@ -import { writable } from "svelte/store"; - -export const ERROR_MESSAGES = { - default: "Oops, something went wrong.", -}; - -export const error = writable(null); diff --git a/spaces/DaleChen/AutoGPT/tests.py b/spaces/DaleChen/AutoGPT/tests.py deleted file mode 100644 index 62f76da8ac4925ef6cdfcce0484612cf70959862..0000000000000000000000000000000000000000 --- a/spaces/DaleChen/AutoGPT/tests.py +++ /dev/null @@ -1,21 +0,0 @@ -import unittest - -import coverage - -if __name__ == "__main__": - # Start coverage collection - cov = coverage.Coverage() - cov.start() - - # Load all tests from the 'autogpt/tests' package - suite = unittest.defaultTestLoader.discover("./tests") - - # Run the tests - unittest.TextTestRunner().run(suite) - - # Stop coverage collection - cov.stop() - cov.save() - - # Report the coverage - cov.report(show_missing=True) diff --git a/spaces/Dao3/chatwithdocs/__init__.py b/spaces/Dao3/chatwithdocs/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/utils/height.py b/spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/utils/height.py deleted file mode 100644 index 9abbffcef5d4e1baf3614d6fd2902d0bd4337e60..0000000000000000000000000000000000000000 --- a/spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/utils/height.py +++ /dev/null @@ -1,131 +0,0 @@ -""" -@date: 2021/6/30 -@description: -""" -import numpy as np -from typing import List - -from utils.boundary import * -from scipy.optimize import least_squares -from functools import partial - - -def lsq_fit(ceil_norm, floor_norm): - """ - Least Squares - :param ceil_norm: - :param floor_norm: - :return: - """ - - def error_fun(ratio, ceil_norm, floor_norm): - error = np.abs(ratio * ceil_norm - floor_norm) - return error - - init_ratio = np.mean(floor_norm / ceil_norm, axis=-1) - error_func = partial(error_fun, 
ceil_norm=ceil_norm, floor_norm=floor_norm) - ret = least_squares(error_func, init_ratio, verbose=0) - ratio = ret.x[0] - return ratio - - -def mean_percentile_fit(ceil_norm, floor_norm, p1=25, p2=75): - """ - :param ceil_norm: - :param floor_norm: - :param p1: - :param p2: - :return: - """ - ratio = floor_norm / ceil_norm - r_min = np.percentile(ratio, p1) - r_max = np.percentile(ratio, p2) - return ratio[(r_min <= ratio) & (ratio <= r_max)].mean() - - -def calc_ceil_ratio(boundaries: List[np.array], mode='lsq'): - """ - :param boundaries: [ [[cu1, cv1], [cu2, cv2], ...], [[fu1, fv1], [fu2, fv2], ...] ] - :param mode: 'lsq' or 'mean' - :return: - """ - assert len(boundaries[0].shape) < 4 and len(boundaries[1].shape) < 4, 'error shape' - if not is_normal_layout(boundaries): - return 0 - - ceil_boundary = boundaries[0] - floor_boundary = boundaries[1] - assert ceil_boundary.shape[-2] == floor_boundary.shape[-2], "boundary need same length" - - ceil_xyz = uv2xyz(ceil_boundary, -1) - floor_xyz = uv2xyz(floor_boundary, 1) - - ceil_xz = ceil_xyz[..., ::2] - floor_xz = floor_xyz[..., ::2] - - ceil_norm = np.linalg.norm(ceil_xz, axis=-1) - floor_norm = np.linalg.norm(floor_xz, axis=-1) - - if mode == "lsq": - if len(ceil_norm.shape) == 2: - ratio = np.array([lsq_fit(ceil_norm[i], floor_norm[i]) for i in range(ceil_norm.shape[0])]) - else: - ratio = lsq_fit(ceil_norm, floor_norm) - else: - if len(ceil_norm.shape) == 2: - ratio = np.array([mean_percentile_fit(ceil_norm[i], floor_norm[i]) for i in range(ceil_norm.shape[0])]) - else: - ratio = mean_percentile_fit(ceil_norm, floor_norm) - - return ratio - - -def calc_ceil_height(boundaries: List[np.array], camera_height=1.6, mode='lsq') -> float: - """ - :param boundaries: [ [[cu1, cv1], [cu2, cv2], ...], [[fu1, fv1], [fu2, fv2], ...] ] - :param camera_height: - :param mode: - :return: - """ - ratio = calc_ceil_ratio(boundaries, mode) - ceil_height = camera_height * ratio - return ceil_height - - -def calc_room_height(boundaries: List[np.array], camera_height=1.6, mode='lsq') -> float: - """ - :param boundaries: also can corners,format: [ [[cu1, cv1], [cu2, cv2], ...], [[fu1, fv1], [fu2, fv2], ...] 
], - 0 denotes ceil, 1 denotes floor - :param camera_height: actual camera height determines the scale - :param mode: fitting method lsq or mean - :return: - """ - ceil_height = calc_ceil_height(boundaries, camera_height, mode) - room_height = camera_height + ceil_height - return room_height - - -def height2ratio(height, camera_height=1.6): - ceil_height = height - camera_height - ratio = ceil_height / camera_height - return ratio - - -def ratio2height(ratio, camera_height=1.6): - ceil_height = camera_height * ratio - room_height = camera_height + ceil_height - return room_height - - -if __name__ == '__main__': - from dataset.mp3d_dataset import MP3DDataset - - dataset = MP3DDataset(root_dir="../src/dataset/mp3d", mode="train") - for data in dataset: - ceil_corners = data['corners'][::2] - floor_corners = data['corners'][1::2] - # ceil_boundary = corners2boundary(ceil_corners, length=1024) - # floor_boundary = corners2boundary(floor_corners, length=1024) - room_height1 = calc_room_height([ceil_corners, floor_corners], camera_height=1.6, mode='mean') - room_height2 = calc_room_height([ceil_corners, floor_corners], camera_height=1.6, mode='lsq') - print(room_height1, room_height2, data['cameraCeilingHeight'] + 1.6) diff --git a/spaces/Datasculptor/DescriptionGPT/tools/remove_lvis_rare.py b/spaces/Datasculptor/DescriptionGPT/tools/remove_lvis_rare.py deleted file mode 100644 index 06e4e881bfa50e2cd74747511a3ad2e8676e0c70..0000000000000000000000000000000000000000 --- a/spaces/Datasculptor/DescriptionGPT/tools/remove_lvis_rare.py +++ /dev/null @@ -1,20 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import argparse -import json - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--ann', default='datasets/lvis/lvis_v1_train.json') - args = parser.parse_args() - - print('Loading', args.ann) - data = json.load(open(args.ann, 'r')) - catid2freq = {x['id']: x['frequency'] for x in data['categories']} - print('ori #anns', len(data['annotations'])) - exclude = ['r'] - data['annotations'] = [x for x in data['annotations'] \ - if catid2freq[x['category_id']] not in exclude] - print('filtered #anns', len(data['annotations'])) - out_path = args.ann[:-5] + '_norare.json' - print('Saving to', out_path) - json.dump(data, open(out_path, 'w')) diff --git a/spaces/EllaTsoi/text_generator/app.py b/spaces/EllaTsoi/text_generator/app.py deleted file mode 100644 index dfeee68b613591d7bb15e66ec1aa30ad4b229e4e..0000000000000000000000000000000000000000 --- a/spaces/EllaTsoi/text_generator/app.py +++ /dev/null @@ -1,10 +0,0 @@ -import gradio as gr -from gradio.mix import Parallel - -title = "My First Text Generator" -description = "input text and submit." 
- -model1=gr.Interface.load("huggingface/EleutherAI/gpt-neo-1.3B") -model3=gr.Interface.load("huggingface/EleutherAI/gpt-neo-2.7B") - -gr.Parallel(model1 , model3, title=title,description=description).launch() diff --git a/spaces/Enigma007/Normalizer-Dashboard/README.md b/spaces/Enigma007/Normalizer-Dashboard/README.md deleted file mode 100644 index 2daded6a0013bddb44a86433fc1874a8c31e3f5a..0000000000000000000000000000000000000000 --- a/spaces/Enigma007/Normalizer-Dashboard/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Healthcare-Dashboard-Generator -emoji: 🏃 -colorFrom: pink -colorTo: red -sdk: streamlit -sdk_version: 1.25.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/EnigmaOfTheWorld/Power_AI_Point/app.py b/spaces/EnigmaOfTheWorld/Power_AI_Point/app.py deleted file mode 100644 index 4e11f949d87453423dda90423e1be542c6914b35..0000000000000000000000000000000000000000 --- a/spaces/EnigmaOfTheWorld/Power_AI_Point/app.py +++ /dev/null @@ -1,390 +0,0 @@ -import os -import time -import re -import pathlib - -import requests -import openai -from embedchain import App -from serpapi import GoogleSearch -from pptx import Presentation -from pptx.util import Inches - -from pptx import Presentation -from pptx.util import Inches, Pt -import gradio as gr - -import torch - -from PIL import Image -import qrcode -from pathlib import Path -from multiprocessing import cpu_count -import requests -import io -import os -from PIL import Image - - -from diffusers import ( - StableDiffusionControlNetPipeline, - ControlNetModel, - DDIMScheduler, - DPMSolverMultistepScheduler, - DEISMultistepScheduler, - HeunDiscreteScheduler, - EulerDiscreteScheduler, - EulerAncestralDiscreteScheduler, -) - - -openai.api_key = os.environ['OPENAI_API_KEY'] -def gpt(user_prompt: str) -> str: - response = openai.Completion.create( - model="text-davinci-003", - prompt=user_prompt, - temperature=0, - max_tokens=200, - top_p=1, - frequency_penalty=0, - presence_penalty=0) - return response["choices"][0]["text"] - -def get_results(query:str, topic:str,index=0)->list[str]: - combined_q = gpt(f'combine these "{query}" + "{topic}" words and generate one heading') - print(f'{query = }, {topic = }, {combined_q = }') - - try: - params = { - "engine": "google", - "q": combined_q, - "api_key": os.environ[f'SERPAPI_API_KEY{index}'] - } - search = GoogleSearch(params) - results = search.get_dict() - except Exception as e: - print(e) - get_results(query, topic,index=index+1) - - - - organic_results = results["organic_results"] - return organic_results - -def extract_points(query:str, topic:str)->list[str]: - # print('--Sleep--') - time.sleep(60) - organic_results = get_results(query, topic) - embd_chain = App() - for index, dct in enumerate(organic_results): - try: - embd_chain.add('web_page',dct['link']) - except requests.exceptions.SSLError: - continue - except openai.error.RateLimitError: - break - print('--sleep--') - time.sleep(60) - embd_chain_q = embd_chain.query(f'highlight 7 important points') - - return -# Add the title slide - -def add_slide(prs, title, content, title_font_size=Pt(36), content_font_size=Pt(18)): - slide_layout = prs.slide_layouts[1] # Use the layout for "Title and Content" - slide = prs.slides.add_slide(slide_layout) - - # Set the title and content text - slide.shapes.title.text = title - text_box = slide.placeholders[1] - text_box.text = content - - # Change the 
font size for title and content text - title_text_frame = slide.shapes.title.text_frame - content_text_frame = text_box.text_frame - for paragraph in title_text_frame.paragraphs: - for run in paragraph.runs: - run.font.size = title_font_size - - for paragraph in content_text_frame.paragraphs: - for run in paragraph.runs: - run.font.size = content_font_size - - -def add_title_slide(prs, title, title_font_size=Pt(44)): - slide_layout = prs.slide_layouts[0] # Use the layout for "Title Slide" - slide = prs.slides.add_slide(slide_layout) - - # Set the title and subtitle text - slide.shapes.title.text = title - - - # Change the font size for title and subtitle text - title_text_frame = slide.shapes.title.text_frame - - for paragraph in title_text_frame.paragraphs: - for run in paragraph.runs: - run.font.size = title_font_size - - -def main(user_query:str)->dict[str, str]: - res = gpt(f'You are assisting me in creating a presentation on "{user_query}" Please generate 5 informative side headings for the slides. Each heading should be concise and reflect a key aspect of the topic.') - topics = re.sub(r'[\d.]','',res.strip()).split('\n') - print(f'{topics = }') - ppt_points = { topic: extract_points(topic, user_query) - for topic in topics} - prs = Presentation() - add_title_slide(prs,user_query, title_font_size=Pt(44)) - - # Data for content slides - - # Adding each key-value pair as a slide in the presentation with custom font sizes - for key, value in ppt_points.items(): - add_slide(prs, key, value, title_font_size=Pt(36), content_font_size=Pt(18)) - - # Save the presentation - prs.save(f'{user_query}.pptx') - - return f'{user_query}.pptx' - -controlnet = ControlNetModel.from_pretrained( - "monster-labs/control_v1p_sd15_qrcode_monster", - torch_dtype=torch.float16 - -).to('cpu') - -pipe = StableDiffusionControlNetPipeline.from_pretrained( - "runwayml/stable-diffusion-v1-5", - controlnet=controlnet, - safety_checker=None, - torch_dtype=torch.float16 - - -) - - -SAMPLER_MAP = { - "DPM++ Karras SDE": lambda config: DPMSolverMultistepScheduler.from_config(config, use_karras=True, algorithm_type="sde-dpmsolver++"), - "DPM++ Karras": lambda config: DPMSolverMultistepScheduler.from_config(config, use_karras=True), - "Heun": lambda config: HeunDiscreteScheduler.from_config(config), - "Euler a": lambda config: EulerAncestralDiscreteScheduler.from_config(config), - "Euler": lambda config: EulerDiscreteScheduler.from_config(config), - "DDIM": lambda config: DDIMScheduler.from_config(config), - "DEIS": lambda config: DEISMultistepScheduler.from_config(config), -} - - -def create_code(content: str): - qr = qrcode.QRCode( - version=1, - error_correction=qrcode.constants.ERROR_CORRECT_H, - box_size=16, - border=0, - ) - qr.add_data(content) - qr.make(fit=True) - img = qr.make_image(fill_color="black", back_color="white") - - # find smallest image size multiple of 256 that can fit qr - offset_min = 8 * 16 - w, h = img.size - w = (w + 255 + offset_min) // 256 * 256 - h = (h + 255 + offset_min) // 256 * 256 - if w > 1024: - raise gr.Error("QR code is too large, please use a shorter content") - bg = Image.new('L', (w, h), 128) - - # align on 16px grid - coords = ((w - img.size[0]) // 2 // 16 * 16, - (h - img.size[1]) // 2 // 16 * 16) - bg.paste(img, coords) - return bg - - -def inference( - qr_code_content: str, - prompt: str, - negative_prompt: str, - guidance_scale: float = 10.0, - controlnet_conditioning_scale: float = 2.0, - seed: int = -1, - sampler="Euler a", -): - - - pipe.scheduler = 
SAMPLER_MAP[sampler](pipe.scheduler.config) - - generator = torch.manual_seed(seed) if seed != -1 else torch.Generator() - - print("Generating QR Code from content") - qrcode_image = create_code(qr_code_content) - - # hack due to gradio examples - init_image = qrcode_image - - out = pipe( - prompt=prompt, - negative_prompt=negative_prompt, - image=qrcode_image, - width=qrcode_image.width, - height=qrcode_image.height, - guidance_scale=float(guidance_scale), - controlnet_conditioning_scale=float(controlnet_conditioning_scale), - - num_inference_steps=40, - ) - return out.images[0] - -import gradio as gr - - -with gr.Blocks() as demo: - with gr.Tab('Presentation'): - with gr.Row(): - with gr.Column(): - txt = gr.Textbox(label="Your Query") - with gr.Column(): - file = gr.File() - - btn = gr.Button('Create Presentation') - - - btn.click(main, txt, file) - with gr.Tab('QR Code'): - gr.Markdown('This feature needs GPU to run') - with gr.Row(): - with gr.Column(): - qr_code_content = gr.Textbox( - label="QR Code Content or URL", - info="The text you want to encode into the QR code", - value="", - ) - - prompt = gr.Textbox( - label="Prompt", - info="Prompt that guides the generation towards", - ) - negative_prompt = gr.Textbox( - label="Negative Prompt", - value="ugly, disfigured, low quality, blurry, nsfw", - info="Prompt that guides the generation away from", - ) - - with gr.Accordion( - label="Params: The generated QR Code functionality is largely influenced by the parameters detailed below", - open=True, - ): - controlnet_conditioning_scale = gr.Slider( - minimum=0.5, - maximum=2.5, - step=0.01, - value=1.5, - label="Controlnet Conditioning Scale", - info="""Controls the readability/creativity of the QR code. - High values: The generated QR code will be more readable. - Low values: The generated QR code will be more creative. - """ - ) - guidance_scale = gr.Slider( - minimum=0.0, - maximum=25.0, - step=0.25, - value=7, - label="Guidance Scale", - info="Controls the amount of guidance the text prompt guides the image generation" - ) - sampler = gr.Dropdown(choices=list( - SAMPLER_MAP.keys()), value="Euler a", label="Sampler") - seed = gr.Number( - minimum=-1, - maximum=9999999999, - step=1, - value=2313123, - label="Seed", - randomize=True, - info="Seed for the random number generator. 
Set to -1 for a random seed" - ) - with gr.Row(): - run_btn = gr.Button("Run") - with gr.Column(): - result_image = gr.Image(label="Result Image", elem_id="result_image") - run_btn.click( - inference, - inputs=[ - qr_code_content, - prompt, - negative_prompt, - guidance_scale, - controlnet_conditioning_scale, - seed, - sampler, - ], - outputs=[result_image], - ) - - gr.Examples( - examples=[ - [ - "test", - "Baroque rococo architecture, architectural photography, post apocalyptic New York, hyperrealism, [roots], hyperrealistic, octane render, cinematic, hyper detailed, 8K", - "", - 7, - 1.6, - 2592353769, - "Euler a", - ], - [ - "https://qrcodemonster.art", - "a centered render of an ancient tree covered in bio - organic micro organisms growing in a mystical setting, cinematic, beautifully lit, by tomasz alen kopera and peter mohrbacher and craig mullins, 3d, trending on artstation, octane render, 8k", - "", - 7, - 1.57, - 259235398, - "Euler a", - ], - [ - "test", - "3 cups of coffee with coffee beans around", - "", - 7, - 1.95, - 1889601353, - "Euler a", - ], - [ - "https://huggingface.co", - "A top view picture of a sandy beach with a sand castle, beautiful lighting, 8k, highly detailed", - "sky", - 7, - 1.15, - 46200, - "Euler a", - ], - [ - "test", - "A top view picture of a sandy beach, organic shapes, beautiful lighting, bumps and shadows, 8k, highly detailed", - "sky, water, squares", - 7, - 1.25, - 46220, - "Euler a", - ], - ], - fn=inference, - inputs=[ - qr_code_content, - prompt, - negative_prompt, - guidance_scale, - controlnet_conditioning_scale, - seed, - sampler, - ], - outputs=[result_image], - - ) - - - -demo.launch(debug=True) - diff --git a/spaces/EuroPython2022/clickbaitonator/fudge/topic_data/README.md b/spaces/EuroPython2022/clickbaitonator/fudge/topic_data/README.md deleted file mode 100644 index 4b0083335b0e65fcc17843e990259255d40ac6c2..0000000000000000000000000000000000000000 --- a/spaces/EuroPython2022/clickbaitonator/fudge/topic_data/README.md +++ /dev/null @@ -1,3 +0,0 @@ -`topic_prefixes.txt` contains the 20 prefixes used at test time for starting the generations. - -`wordlists/` contains the wordlists for each of the 7 topics used during testing. The heldout bags used to evaluate the generalization of the topic words to other related words are in `test_wordlists/`. `val_wordlists/` contains just one extra wordlist used for tuning. 
\ No newline at end of file diff --git a/spaces/Fantasy-Studio/Paint-by-Example/README.md b/spaces/Fantasy-Studio/Paint-by-Example/README.md deleted file mode 100644 index 42ad8694fdb05b4c0eb65d3ed348af0ec1da4b31..0000000000000000000000000000000000000000 --- a/spaces/Fantasy-Studio/Paint-by-Example/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Paint by example -emoji: 🔥 -colorFrom: green -colorTo: pink -sdk: gradio -sdk_version: 3.44.1 -app_file: app.py -pinned: false -duplicated_from: akhaliq/paint-by-example ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference - diff --git a/spaces/Felix123456/bingo/src/components/tone-selector.tsx b/spaces/Felix123456/bingo/src/components/tone-selector.tsx deleted file mode 100644 index 5c6e464c91f564b895acd121f0a4a79ed9c5c356..0000000000000000000000000000000000000000 --- a/spaces/Felix123456/bingo/src/components/tone-selector.tsx +++ /dev/null @@ -1,43 +0,0 @@ -import React from 'react' -import { BingConversationStyle } from '@/lib/bots/bing/types' -import { cn } from '@/lib/utils' - -type ToneItem = { - type: BingConversationStyle, - name: string -} - -const ToneList: ToneItem[] = [ - { name: '有创造力', type: BingConversationStyle.Creative }, - { name: '更平衡', type: BingConversationStyle.Balanced }, - { name: '更精确', type: BingConversationStyle.Precise } -] - -interface ToneSelectorProps { - type: BingConversationStyle | '' - onChange?: (type: BingConversationStyle) => void -} - -export function ToneSelector({ type, onChange }: ToneSelectorProps) { - return ( -
-    <div>
-      <div>选择对话样式</div>
-      <ul>
-        {
-          ToneList.map(tone => (
-            <li
-              key={tone.type}
-              className={cn({ selected: type === tone.type })}
-              onClick={() => onChange?.(tone.type)}
-            >
-              {tone.name}
-            </li>
-          ))
-        }
-      </ul>
-    </div>
- ) -} diff --git a/spaces/FireFrame/werz/index.html b/spaces/FireFrame/werz/index.html deleted file mode 100644 index 8f7287b7299fae26313842dcedaa69368ce6fc7a..0000000000000000000000000000000000000000 --- a/spaces/FireFrame/werz/index.html +++ /dev/null @@ -1 +0,0 @@ -hello? \ No newline at end of file diff --git a/spaces/Freiburg-AI-Research/dermoscopic_image_generation/glide_text2im/unet.py b/spaces/Freiburg-AI-Research/dermoscopic_image_generation/glide_text2im/unet.py deleted file mode 100644 index b61437a44ef7510e0c62afaae070deabc24c42bb..0000000000000000000000000000000000000000 --- a/spaces/Freiburg-AI-Research/dermoscopic_image_generation/glide_text2im/unet.py +++ /dev/null @@ -1,635 +0,0 @@ -import math -from abc import abstractmethod - -import torch as th -import torch.nn as nn -import torch.nn.functional as F - -from .fp16_util import convert_module_to_f16, convert_module_to_f32 -from .nn import avg_pool_nd, conv_nd, linear, normalization, timestep_embedding, zero_module - - -class TimestepBlock(nn.Module): - """ - Any module where forward() takes timestep embeddings as a second argument. - """ - - @abstractmethod - def forward(self, x, emb): - """ - Apply the module to `x` given `emb` timestep embeddings. - """ - - -class TimestepEmbedSequential(nn.Sequential, TimestepBlock): - """ - A sequential module that passes timestep embeddings to the children that - support it as an extra input. - """ - - def forward(self, x, emb, encoder_out=None): - for layer in self: - if isinstance(layer, TimestepBlock): - x = layer(x, emb) - elif isinstance(layer, AttentionBlock): - x = layer(x, encoder_out) - else: - x = layer(x) - return x - - -class Upsample(nn.Module): - """ - An upsampling layer with an optional convolution. - - :param channels: channels in the inputs and outputs. - :param use_conv: a bool determining if a convolution is applied. - :param dims: determines if the signal is 1D, 2D, or 3D. If 3D, then - upsampling occurs in the inner-two dimensions. - """ - - def __init__(self, channels, use_conv, dims=2, out_channels=None): - super().__init__() - self.channels = channels - self.out_channels = out_channels or channels - self.use_conv = use_conv - self.dims = dims - if use_conv: - self.conv = conv_nd(dims, self.channels, self.out_channels, 3, padding=1) - - def forward(self, x): - assert x.shape[1] == self.channels - if self.dims == 3: - x = F.interpolate(x, (x.shape[2], x.shape[3] * 2, x.shape[4] * 2), mode="nearest") - else: - x = F.interpolate(x, scale_factor=2, mode="nearest") - if self.use_conv: - x = self.conv(x) - return x - - -class Downsample(nn.Module): - """ - A downsampling layer with an optional convolution. - - :param channels: channels in the inputs and outputs. - :param use_conv: a bool determining if a convolution is applied. - :param dims: determines if the signal is 1D, 2D, or 3D. If 3D, then - downsampling occurs in the inner-two dimensions. 
- """ - - def __init__(self, channels, use_conv, dims=2, out_channels=None): - super().__init__() - self.channels = channels - self.out_channels = out_channels or channels - self.use_conv = use_conv - self.dims = dims - stride = 2 if dims != 3 else (1, 2, 2) - if use_conv: - self.op = conv_nd(dims, self.channels, self.out_channels, 3, stride=stride, padding=1) - else: - assert self.channels == self.out_channels - self.op = avg_pool_nd(dims, kernel_size=stride, stride=stride) - - def forward(self, x): - assert x.shape[1] == self.channels - return self.op(x) - - -class ResBlock(TimestepBlock): - """ - A residual block that can optionally change the number of channels. - - :param channels: the number of input channels. - :param emb_channels: the number of timestep embedding channels. - :param dropout: the rate of dropout. - :param out_channels: if specified, the number of out channels. - :param use_conv: if True and out_channels is specified, use a spatial - convolution instead of a smaller 1x1 convolution to change the - channels in the skip connection. - :param dims: determines if the signal is 1D, 2D, or 3D. - :param use_checkpoint: if True, use gradient checkpointing on this module. - :param up: if True, use this block for upsampling. - :param down: if True, use this block for downsampling. - """ - - def __init__( - self, - channels, - emb_channels, - dropout, - out_channels=None, - use_conv=False, - use_scale_shift_norm=False, - dims=2, - use_checkpoint=False, - up=False, - down=False, - ): - super().__init__() - self.channels = channels - self.emb_channels = emb_channels - self.dropout = dropout - self.out_channels = out_channels or channels - self.use_conv = use_conv - self.use_checkpoint = use_checkpoint - self.use_scale_shift_norm = use_scale_shift_norm - - self.in_layers = nn.Sequential( - normalization(channels, swish=1.0), - nn.Identity(), - conv_nd(dims, channels, self.out_channels, 3, padding=1), - ) - - self.updown = up or down - - if up: - self.h_upd = Upsample(channels, False, dims) - self.x_upd = Upsample(channels, False, dims) - elif down: - self.h_upd = Downsample(channels, False, dims) - self.x_upd = Downsample(channels, False, dims) - else: - self.h_upd = self.x_upd = nn.Identity() - - self.emb_layers = nn.Sequential( - nn.SiLU(), - linear( - emb_channels, - 2 * self.out_channels if use_scale_shift_norm else self.out_channels, - ), - ) - self.out_layers = nn.Sequential( - normalization(self.out_channels, swish=0.0 if use_scale_shift_norm else 1.0), - nn.SiLU() if use_scale_shift_norm else nn.Identity(), - nn.Dropout(p=dropout), - zero_module(conv_nd(dims, self.out_channels, self.out_channels, 3, padding=1)), - ) - - if self.out_channels == channels: - self.skip_connection = nn.Identity() - elif use_conv: - self.skip_connection = conv_nd(dims, channels, self.out_channels, 3, padding=1) - else: - self.skip_connection = conv_nd(dims, channels, self.out_channels, 1) - - def forward(self, x, emb): - """ - Apply the block to a Tensor, conditioned on a timestep embedding. - - :param x: an [N x C x ...] Tensor of features. - :param emb: an [N x emb_channels] Tensor of timestep embeddings. - :return: an [N x C x ...] Tensor of outputs. 
- """ - if self.updown: - in_rest, in_conv = self.in_layers[:-1], self.in_layers[-1] - h = in_rest(x) - h = self.h_upd(h) - x = self.x_upd(x) - h = in_conv(h) - else: - h = self.in_layers(x) - emb_out = self.emb_layers(emb).type(h.dtype) - while len(emb_out.shape) < len(h.shape): - emb_out = emb_out[..., None] - if self.use_scale_shift_norm: - out_norm, out_rest = self.out_layers[0], self.out_layers[1:] - scale, shift = th.chunk(emb_out, 2, dim=1) - h = out_norm(h) * (1 + scale) + shift - h = out_rest(h) - else: - h = h + emb_out - h = self.out_layers(h) - return self.skip_connection(x) + h - - -class AttentionBlock(nn.Module): - """ - An attention block that allows spatial positions to attend to each other. - - Originally ported from here, but adapted to the N-d case. - https://github.com/hojonathanho/diffusion/blob/1e0dceb3b3495bbe19116a5e1b3596cd0706c543/diffusion_tf/models/unet.py#L66. - """ - - def __init__( - self, - channels, - num_heads=1, - num_head_channels=-1, - use_checkpoint=False, - encoder_channels=None, - ): - super().__init__() - self.channels = channels - if num_head_channels == -1: - self.num_heads = num_heads - else: - assert ( - channels % num_head_channels == 0 - ), f"q,k,v channels {channels} is not divisible by num_head_channels {num_head_channels}" - self.num_heads = channels // num_head_channels - self.use_checkpoint = use_checkpoint - self.norm = normalization(channels, swish=0.0) - self.qkv = conv_nd(1, channels, channels * 3, 1) - self.attention = QKVAttention(self.num_heads) - - if encoder_channels is not None: - self.encoder_kv = conv_nd(1, encoder_channels, channels * 2, 1) - self.proj_out = zero_module(conv_nd(1, channels, channels, 1)) - - def forward(self, x, encoder_out=None): - b, c, *spatial = x.shape - qkv = self.qkv(self.norm(x).view(b, c, -1)) - if encoder_out is not None: - encoder_out = self.encoder_kv(encoder_out) - h = self.attention(qkv, encoder_out) - else: - h = self.attention(qkv) - h = self.proj_out(h) - return x + h.reshape(b, c, *spatial) - - -class QKVAttention(nn.Module): - """ - A module which performs QKV attention. Matches legacy QKVAttention + input/ouput heads shaping - """ - - def __init__(self, n_heads): - super().__init__() - self.n_heads = n_heads - - def forward(self, qkv, encoder_kv=None): - """ - Apply QKV attention. - - :param qkv: an [N x (H * 3 * C) x T] tensor of Qs, Ks, and Vs. - :return: an [N x (H * C) x T] tensor after attention. - """ - bs, width, length = qkv.shape - assert width % (3 * self.n_heads) == 0 - ch = width // (3 * self.n_heads) - q, k, v = qkv.reshape(bs * self.n_heads, ch * 3, length).split(ch, dim=1) - if encoder_kv is not None: - assert encoder_kv.shape[1] == self.n_heads * ch * 2 - ek, ev = encoder_kv.reshape(bs * self.n_heads, ch * 2, -1).split(ch, dim=1) - k = th.cat([ek, k], dim=-1) - v = th.cat([ev, v], dim=-1) - scale = 1 / math.sqrt(math.sqrt(ch)) - weight = th.einsum( - "bct,bcs->bts", q * scale, k * scale - ) # More stable with f16 than dividing afterwards - weight = th.softmax(weight.float(), dim=-1).type(weight.dtype) - a = th.einsum("bts,bcs->bct", weight, v) - return a.reshape(bs, -1, length) - - -class UNetModel(nn.Module): - """ - The full UNet model with attention and timestep embedding. - - :param in_channels: channels in the input Tensor. - :param model_channels: base channel count for the model. - :param out_channels: channels in the output Tensor. - :param num_res_blocks: number of residual blocks per downsample. 
- :param attention_resolutions: a collection of downsample rates at which - attention will take place. May be a set, list, or tuple. - For example, if this contains 4, then at 4x downsampling, attention - will be used. - :param dropout: the dropout probability. - :param channel_mult: channel multiplier for each level of the UNet. - :param conv_resample: if True, use learned convolutions for upsampling and - downsampling. - :param dims: determines if the signal is 1D, 2D, or 3D. - :param num_classes: if specified (as an int), then this model will be - class-conditional with `num_classes` classes. - :param use_checkpoint: use gradient checkpointing to reduce memory usage. - :param num_heads: the number of attention heads in each attention layer. - :param num_heads_channels: if specified, ignore num_heads and instead use - a fixed channel width per attention head. - :param num_heads_upsample: works with num_heads to set a different number - of heads for upsampling. Deprecated. - :param use_scale_shift_norm: use a FiLM-like conditioning mechanism. - :param resblock_updown: use residual blocks for up/downsampling. - """ - - def __init__( - self, - in_channels, - model_channels, - out_channels, - num_res_blocks, - attention_resolutions, - dropout=0, - channel_mult=(1, 2, 4, 8), - conv_resample=True, - dims=2, - num_classes=None, - use_checkpoint=False, - use_fp16=False, - num_heads=1, - num_head_channels=-1, - num_heads_upsample=-1, - use_scale_shift_norm=False, - resblock_updown=False, - encoder_channels=None, - ): - super().__init__() - - if num_heads_upsample == -1: - num_heads_upsample = num_heads - - self.in_channels = in_channels - self.model_channels = model_channels - self.out_channels = out_channels - self.num_res_blocks = num_res_blocks - self.attention_resolutions = attention_resolutions - self.dropout = dropout - self.channel_mult = channel_mult - self.conv_resample = conv_resample - self.num_classes = num_classes - self.use_checkpoint = use_checkpoint - self.dtype = th.float16 if use_fp16 else th.float32 - self.num_heads = num_heads - self.num_head_channels = num_head_channels - self.num_heads_upsample = num_heads_upsample - - time_embed_dim = model_channels * 4 - self.time_embed = nn.Sequential( - linear(model_channels, time_embed_dim), - nn.SiLU(), - linear(time_embed_dim, time_embed_dim), - ) - - if self.num_classes is not None: - self.label_emb = nn.Embedding(num_classes, time_embed_dim) - - ch = input_ch = int(channel_mult[0] * model_channels) - self.input_blocks = nn.ModuleList( - [TimestepEmbedSequential(conv_nd(dims, in_channels, ch, 3, padding=1))] - ) - self._feature_size = ch - input_block_chans = [ch] - ds = 1 - for level, mult in enumerate(channel_mult): - for _ in range(num_res_blocks): - layers = [ - ResBlock( - ch, - time_embed_dim, - dropout, - out_channels=int(mult * model_channels), - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - ) - ] - ch = int(mult * model_channels) - if ds in attention_resolutions: - layers.append( - AttentionBlock( - ch, - use_checkpoint=use_checkpoint, - num_heads=num_heads, - num_head_channels=num_head_channels, - encoder_channels=encoder_channels, - ) - ) - self.input_blocks.append(TimestepEmbedSequential(*layers)) - self._feature_size += ch - input_block_chans.append(ch) - if level != len(channel_mult) - 1: - out_ch = ch - self.input_blocks.append( - TimestepEmbedSequential( - ResBlock( - ch, - time_embed_dim, - dropout, - out_channels=out_ch, - dims=dims, - use_checkpoint=use_checkpoint, - 
use_scale_shift_norm=use_scale_shift_norm, - down=True, - ) - if resblock_updown - else Downsample(ch, conv_resample, dims=dims, out_channels=out_ch) - ) - ) - ch = out_ch - input_block_chans.append(ch) - ds *= 2 - self._feature_size += ch - - self.middle_block = TimestepEmbedSequential( - ResBlock( - ch, - time_embed_dim, - dropout, - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - ), - AttentionBlock( - ch, - use_checkpoint=use_checkpoint, - num_heads=num_heads, - num_head_channels=num_head_channels, - encoder_channels=encoder_channels, - ), - ResBlock( - ch, - time_embed_dim, - dropout, - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - ), - ) - self._feature_size += ch - - self.output_blocks = nn.ModuleList([]) - for level, mult in list(enumerate(channel_mult))[::-1]: - for i in range(num_res_blocks + 1): - ich = input_block_chans.pop() - layers = [ - ResBlock( - ch + ich, - time_embed_dim, - dropout, - out_channels=int(model_channels * mult), - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - ) - ] - ch = int(model_channels * mult) - if ds in attention_resolutions: - layers.append( - AttentionBlock( - ch, - use_checkpoint=use_checkpoint, - num_heads=num_heads_upsample, - num_head_channels=num_head_channels, - encoder_channels=encoder_channels, - ) - ) - if level and i == num_res_blocks: - out_ch = ch - layers.append( - ResBlock( - ch, - time_embed_dim, - dropout, - out_channels=out_ch, - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - up=True, - ) - if resblock_updown - else Upsample(ch, conv_resample, dims=dims, out_channels=out_ch) - ) - ds //= 2 - self.output_blocks.append(TimestepEmbedSequential(*layers)) - self._feature_size += ch - - self.out = nn.Sequential( - normalization(ch, swish=1.0), - nn.Identity(), - zero_module(conv_nd(dims, input_ch, out_channels, 3, padding=1)), - ) - self.use_fp16 = use_fp16 - - def convert_to_fp16(self): - """ - Convert the torso of the model to float16. - """ - self.input_blocks.apply(convert_module_to_f16) - self.middle_block.apply(convert_module_to_f16) - self.output_blocks.apply(convert_module_to_f16) - - def convert_to_fp32(self): - """ - Convert the torso of the model to float32. - """ - self.input_blocks.apply(convert_module_to_f32) - self.middle_block.apply(convert_module_to_f32) - self.output_blocks.apply(convert_module_to_f32) - - def forward(self, x, timesteps, y=None): - """ - Apply the model to an input batch. - - :param x: an [N x C x ...] Tensor of inputs. - :param timesteps: a 1-D batch of timesteps. - :param y: an [N] Tensor of labels, if class-conditional. - :return: an [N x C x ...] Tensor of outputs. - """ - assert (y is not None) == ( - self.num_classes is not None - ), "must specify y if and only if the model is class-conditional" - - hs = [] - emb = self.time_embed(timestep_embedding(timesteps, self.model_channels)) - - if self.num_classes is not None: - assert y.shape == (x.shape[0],) - emb = emb + self.label_emb(y) - - h = x.type(self.dtype) - for module in self.input_blocks: - h = module(h, emb) - hs.append(h) - h = self.middle_block(h, emb) - for module in self.output_blocks: - h = th.cat([h, hs.pop()], dim=1) - h = module(h, emb) - h = h.type(x.dtype) - return self.out(h) - -class SuperResUNetModel(UNetModel): - """ - A UNetModel that performs super-resolution. - - Expects an extra kwarg `low_res` to condition on a low-resolution image. 
- """ - - def __init__(self, *args, **kwargs): - if "in_channels" in kwargs: - kwargs = dict(kwargs) - kwargs["in_channels"] = kwargs["in_channels"] * 2 - else: - # Curse you, Python. Or really, just curse positional arguments :|. - args = list(args) - args[1] = args[1] * 2 - super().__init__(*args, **kwargs) - - def forward(self, x, timesteps, low_res=None, **kwargs): - _, _, new_height, new_width = x.shape - upsampled = F.interpolate(low_res, (new_height, new_width), mode="bilinear") - x = th.cat([x, upsampled], dim=1) - return super().forward(x, timesteps, **kwargs) - - -class InpaintUNetModel(UNetModel): - """ - A UNetModel which can perform inpainting. - """ - - def __init__(self, *args, **kwargs): - if "in_channels" in kwargs: - kwargs = dict(kwargs) - kwargs["in_channels"] = kwargs["in_channels"] * 2 + 1 - else: - # Curse you, Python. Or really, just curse positional arguments :|. - args = list(args) - args[1] = args[1] * 2 + 1 - super().__init__(*args, **kwargs) - - def forward(self, x, timesteps, inpaint_image=None, inpaint_mask=None, **kwargs): - if inpaint_image is None: - inpaint_image = th.zeros_like(x) - if inpaint_mask is None: - inpaint_mask = th.zeros_like(x[:, :1]) - return super().forward( - th.cat([x, inpaint_image * inpaint_mask, inpaint_mask], dim=1), - timesteps, - **kwargs, - ) - - -class SuperResInpaintUNetModel(UNetModel): - """ - A UNetModel which can perform both upsampling and inpainting. - """ - - def __init__(self, *args, **kwargs): - if "in_channels" in kwargs: - kwargs = dict(kwargs) - kwargs["in_channels"] = kwargs["in_channels"] * 3 + 1 - else: - # Curse you, Python. Or really, just curse positional arguments :|. - args = list(args) - args[1] = args[1] * 3 + 1 - super().__init__(*args, **kwargs) - - def forward( - self, - x, - timesteps, - inpaint_image=None, - inpaint_mask=None, - low_res=None, - **kwargs, - ): - if inpaint_image is None: - inpaint_image = th.zeros_like(x) - if inpaint_mask is None: - inpaint_mask = th.zeros_like(x[:, :1]) - _, _, new_height, new_width = x.shape - upsampled = F.interpolate(low_res, (new_height, new_width), mode="bilinear") - return super().forward( - th.cat([x, inpaint_image * inpaint_mask, inpaint_mask, upsampled], dim=1), - timesteps, - **kwargs, - ) diff --git a/spaces/GaenKoki/voicevox/voicevox_engine/mora_list.py b/spaces/GaenKoki/voicevox/voicevox_engine/mora_list.py deleted file mode 100644 index 5a49f4a3a434ef4832355fcc66c5192b1a4b3059..0000000000000000000000000000000000000000 --- a/spaces/GaenKoki/voicevox/voicevox_engine/mora_list.py +++ /dev/null @@ -1,218 +0,0 @@ -""" -以下のモーラ対応表はOpenJTalkのソースコードから取得し、 -カタカナ表記とモーラが一対一対応するように改造した。 -ライセンス表記: ------------------------------------------------------------------ - The Japanese TTS System "Open JTalk" - developed by HTS Working Group - http://open-jtalk.sourceforge.net/ ------------------------------------------------------------------ - - Copyright (c) 2008-2014 Nagoya Institute of Technology - Department of Computer Science - -All rights reserved. - -Redistribution and use in source and binary forms, with or -without modification, are permitted provided that the following -conditions are met: - -- Redistributions of source code must retain the above copyright - notice, this list of conditions and the following disclaimer. -- Redistributions in binary form must reproduce the above - copyright notice, this list of conditions and the following - disclaimer in the documentation and/or other materials provided - with the distribution. 
-- Neither the name of the HTS working group nor the names of its - contributors may be used to endorse or promote products derived - from this software without specific prior written permission. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND -CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, -INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF -MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE -DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS -BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, -EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED -TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, -DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON -ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, -OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY -OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -POSSIBILITY OF SUCH DAMAGE. -""" -_mora_list_minimum = [ - ["ヴォ", "v", "o"], - ["ヴェ", "v", "e"], - ["ヴィ", "v", "i"], - ["ヴァ", "v", "a"], - ["ヴ", "v", "u"], - ["ン", "", "N"], - ["ワ", "w", "a"], - ["ロ", "r", "o"], - ["レ", "r", "e"], - ["ル", "r", "u"], - ["リョ", "ry", "o"], - ["リュ", "ry", "u"], - ["リャ", "ry", "a"], - ["リェ", "ry", "e"], - ["リ", "r", "i"], - ["ラ", "r", "a"], - ["ヨ", "y", "o"], - ["ユ", "y", "u"], - ["ヤ", "y", "a"], - ["モ", "m", "o"], - ["メ", "m", "e"], - ["ム", "m", "u"], - ["ミョ", "my", "o"], - ["ミュ", "my", "u"], - ["ミャ", "my", "a"], - ["ミェ", "my", "e"], - ["ミ", "m", "i"], - ["マ", "m", "a"], - ["ポ", "p", "o"], - ["ボ", "b", "o"], - ["ホ", "h", "o"], - ["ペ", "p", "e"], - ["ベ", "b", "e"], - ["ヘ", "h", "e"], - ["プ", "p", "u"], - ["ブ", "b", "u"], - ["フォ", "f", "o"], - ["フェ", "f", "e"], - ["フィ", "f", "i"], - ["ファ", "f", "a"], - ["フ", "f", "u"], - ["ピョ", "py", "o"], - ["ピュ", "py", "u"], - ["ピャ", "py", "a"], - ["ピェ", "py", "e"], - ["ピ", "p", "i"], - ["ビョ", "by", "o"], - ["ビュ", "by", "u"], - ["ビャ", "by", "a"], - ["ビェ", "by", "e"], - ["ビ", "b", "i"], - ["ヒョ", "hy", "o"], - ["ヒュ", "hy", "u"], - ["ヒャ", "hy", "a"], - ["ヒェ", "hy", "e"], - ["ヒ", "h", "i"], - ["パ", "p", "a"], - ["バ", "b", "a"], - ["ハ", "h", "a"], - ["ノ", "n", "o"], - ["ネ", "n", "e"], - ["ヌ", "n", "u"], - ["ニョ", "ny", "o"], - ["ニュ", "ny", "u"], - ["ニャ", "ny", "a"], - ["ニェ", "ny", "e"], - ["ニ", "n", "i"], - ["ナ", "n", "a"], - ["ドゥ", "d", "u"], - ["ド", "d", "o"], - ["トゥ", "t", "u"], - ["ト", "t", "o"], - ["デョ", "dy", "o"], - ["デュ", "dy", "u"], - ["デャ", "dy", "a"], - ["デェ", "dy", "e"], - ["ディ", "d", "i"], - ["デ", "d", "e"], - ["テョ", "ty", "o"], - ["テュ", "ty", "u"], - ["テャ", "ty", "a"], - ["ティ", "t", "i"], - ["テ", "t", "e"], - ["ツォ", "ts", "o"], - ["ツェ", "ts", "e"], - ["ツィ", "ts", "i"], - ["ツァ", "ts", "a"], - ["ツ", "ts", "u"], - ["ッ", "", "cl"], - ["チョ", "ch", "o"], - ["チュ", "ch", "u"], - ["チャ", "ch", "a"], - ["チェ", "ch", "e"], - ["チ", "ch", "i"], - ["ダ", "d", "a"], - ["タ", "t", "a"], - ["ゾ", "z", "o"], - ["ソ", "s", "o"], - ["ゼ", "z", "e"], - ["セ", "s", "e"], - ["ズィ", "z", "i"], - ["ズ", "z", "u"], - ["スィ", "s", "i"], - ["ス", "s", "u"], - ["ジョ", "j", "o"], - ["ジュ", "j", "u"], - ["ジャ", "j", "a"], - ["ジェ", "j", "e"], - ["ジ", "j", "i"], - ["ショ", "sh", "o"], - ["シュ", "sh", "u"], - ["シャ", "sh", "a"], - ["シェ", "sh", "e"], - ["シ", "sh", "i"], - ["ザ", "z", "a"], - ["サ", "s", "a"], - ["ゴ", "g", "o"], - ["コ", "k", "o"], - ["ゲ", "g", "e"], - ["ケ", "k", "e"], - ["グヮ", "gw", "a"], - ["グ", "g", "u"], - ["クヮ", "kw", "a"], - ["ク", "k", "u"], - ["ギョ", "gy", "o"], - ["ギュ", "gy", "u"], - ["ギャ", "gy", "a"], - ["ギェ", "gy", "e"], - ["ギ", 
"g", "i"], - ["キョ", "ky", "o"], - ["キュ", "ky", "u"], - ["キャ", "ky", "a"], - ["キェ", "ky", "e"], - ["キ", "k", "i"], - ["ガ", "g", "a"], - ["カ", "k", "a"], - ["オ", "", "o"], - ["エ", "", "e"], - ["ウォ", "w", "o"], - ["ウェ", "w", "e"], - ["ウィ", "w", "i"], - ["ウ", "", "u"], - ["イェ", "y", "e"], - ["イ", "", "i"], - ["ア", "", "a"], -] -_mora_list_additional = [ - ["ヴョ", "by", "o"], - ["ヴュ", "by", "u"], - ["ヴャ", "by", "a"], - ["ヲ", "", "o"], - ["ヱ", "", "e"], - ["ヰ", "", "i"], - ["ヮ", "w", "a"], - ["ョ", "y", "o"], - ["ュ", "y", "u"], - ["ヅ", "z", "u"], - ["ヂ", "j", "i"], - ["ヶ", "k", "e"], - ["ャ", "y", "a"], - ["ォ", "", "o"], - ["ェ", "", "e"], - ["ゥ", "", "u"], - ["ィ", "", "i"], - ["ァ", "", "a"], -] - -openjtalk_mora2text = { - consonant + vowel: text for [text, consonant, vowel] in _mora_list_minimum -} -openjtalk_text2mora = { - text: (consonant, vowel) - for [text, consonant, vowel] in _mora_list_minimum + _mora_list_additional -} diff --git a/spaces/Gen-Sim/Gen-Sim/cliport/utils/pybullet_utils.py b/spaces/Gen-Sim/Gen-Sim/cliport/utils/pybullet_utils.py deleted file mode 100644 index 1c22aa119da1bc3ebcd1c84a3ad6064024dac974..0000000000000000000000000000000000000000 --- a/spaces/Gen-Sim/Gen-Sim/cliport/utils/pybullet_utils.py +++ /dev/null @@ -1,23 +0,0 @@ -"""PyBullet utilities for loading assets.""" -import os -import six -import time -import pybullet as p - - -# BEGIN GOOGLE-EXTERNAL -def load_urdf(pybullet_client, file_path, *args, **kwargs): - """Loads the given URDF filepath.""" - # Handles most general file open case. - for _ in range(6): - try: - return pybullet_client.loadURDF(file_path, *args, **kwargs) - except pybullet_client.error as e: - print("PYBULLET load urdf error!") - print(e) - time.sleep(0.1) - print("missing urdf error. use dummy block.") - urdf = 'stacking/block.urdf' - return pybullet_client.loadURDF(urdf, *args, **kwargs) - -# END GOOGLE-EXTERNAL diff --git a/spaces/Gradio-Blocks/EmojiGAN/torch_utils/ops/upfirdn2d.cpp b/spaces/Gradio-Blocks/EmojiGAN/torch_utils/ops/upfirdn2d.cpp deleted file mode 100644 index 2d7177fc60040751d20e9a8da0301fa3ab64968a..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/EmojiGAN/torch_utils/ops/upfirdn2d.cpp +++ /dev/null @@ -1,103 +0,0 @@ -// Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -// -// NVIDIA CORPORATION and its licensors retain all intellectual property -// and proprietary rights in and to this software, related documentation -// and any modifications thereto. Any use, reproduction, disclosure or -// distribution of this software and related documentation without an express -// license agreement from NVIDIA CORPORATION is strictly prohibited. - -#include -#include -#include -#include "upfirdn2d.h" - -//------------------------------------------------------------------------ - -static torch::Tensor upfirdn2d(torch::Tensor x, torch::Tensor f, int upx, int upy, int downx, int downy, int padx0, int padx1, int pady0, int pady1, bool flip, float gain) -{ - // Validate arguments. 
- TORCH_CHECK(x.is_cuda(), "x must reside on CUDA device"); - TORCH_CHECK(f.device() == x.device(), "f must reside on the same device as x"); - TORCH_CHECK(f.dtype() == torch::kFloat, "f must be float32"); - TORCH_CHECK(x.numel() <= INT_MAX, "x is too large"); - TORCH_CHECK(f.numel() <= INT_MAX, "f is too large"); - TORCH_CHECK(x.dim() == 4, "x must be rank 4"); - TORCH_CHECK(f.dim() == 2, "f must be rank 2"); - TORCH_CHECK(f.size(0) >= 1 && f.size(1) >= 1, "f must be at least 1x1"); - TORCH_CHECK(upx >= 1 && upy >= 1, "upsampling factor must be at least 1"); - TORCH_CHECK(downx >= 1 && downy >= 1, "downsampling factor must be at least 1"); - - // Create output tensor. - const at::cuda::OptionalCUDAGuard device_guard(device_of(x)); - int outW = ((int)x.size(3) * upx + padx0 + padx1 - (int)f.size(1) + downx) / downx; - int outH = ((int)x.size(2) * upy + pady0 + pady1 - (int)f.size(0) + downy) / downy; - TORCH_CHECK(outW >= 1 && outH >= 1, "output must be at least 1x1"); - torch::Tensor y = torch::empty({x.size(0), x.size(1), outH, outW}, x.options(), x.suggest_memory_format()); - TORCH_CHECK(y.numel() <= INT_MAX, "output is too large"); - - // Initialize CUDA kernel parameters. - upfirdn2d_kernel_params p; - p.x = x.data_ptr(); - p.f = f.data_ptr(); - p.y = y.data_ptr(); - p.up = make_int2(upx, upy); - p.down = make_int2(downx, downy); - p.pad0 = make_int2(padx0, pady0); - p.flip = (flip) ? 1 : 0; - p.gain = gain; - p.inSize = make_int4((int)x.size(3), (int)x.size(2), (int)x.size(1), (int)x.size(0)); - p.inStride = make_int4((int)x.stride(3), (int)x.stride(2), (int)x.stride(1), (int)x.stride(0)); - p.filterSize = make_int2((int)f.size(1), (int)f.size(0)); - p.filterStride = make_int2((int)f.stride(1), (int)f.stride(0)); - p.outSize = make_int4((int)y.size(3), (int)y.size(2), (int)y.size(1), (int)y.size(0)); - p.outStride = make_int4((int)y.stride(3), (int)y.stride(2), (int)y.stride(1), (int)y.stride(0)); - p.sizeMajor = (p.inStride.z == 1) ? p.inSize.w : p.inSize.w * p.inSize.z; - p.sizeMinor = (p.inStride.z == 1) ? p.inSize.z : 1; - - // Choose CUDA kernel. - upfirdn2d_kernel_spec spec; - AT_DISPATCH_FLOATING_TYPES_AND_HALF(x.scalar_type(), "upfirdn2d_cuda", [&] - { - spec = choose_upfirdn2d_kernel(p); - }); - - // Set looping options. - p.loopMajor = (p.sizeMajor - 1) / 16384 + 1; - p.loopMinor = spec.loopMinor; - p.loopX = spec.loopX; - p.launchMinor = (p.sizeMinor - 1) / p.loopMinor + 1; - p.launchMajor = (p.sizeMajor - 1) / p.loopMajor + 1; - - // Compute grid size. - dim3 blockSize, gridSize; - if (spec.tileOutW < 0) // large - { - blockSize = dim3(4, 32, 1); - gridSize = dim3( - ((p.outSize.y - 1) / blockSize.x + 1) * p.launchMinor, - (p.outSize.x - 1) / (blockSize.y * p.loopX) + 1, - p.launchMajor); - } - else // small - { - blockSize = dim3(256, 1, 1); - gridSize = dim3( - ((p.outSize.y - 1) / spec.tileOutH + 1) * p.launchMinor, - (p.outSize.x - 1) / (spec.tileOutW * p.loopX) + 1, - p.launchMajor); - } - - // Launch CUDA kernel. 
- void* args[] = {&p}; - AT_CUDA_CHECK(cudaLaunchKernel(spec.kernel, gridSize, blockSize, args, 0, at::cuda::getCurrentCUDAStream())); - return y; -} - -//------------------------------------------------------------------------ - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) -{ - m.def("upfirdn2d", &upfirdn2d); -} - -//------------------------------------------------------------------------ diff --git a/spaces/Gradio-Blocks/StyleGAN-NADA/e4e/scripts/train.py b/spaces/Gradio-Blocks/StyleGAN-NADA/e4e/scripts/train.py deleted file mode 100644 index d885cfde49a0b21140e663e475918698d5e51ee3..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/StyleGAN-NADA/e4e/scripts/train.py +++ /dev/null @@ -1,88 +0,0 @@ -""" -This file runs the main training/val loop -""" -import os -import json -import math -import sys -import pprint -import torch -from argparse import Namespace - -sys.path.append(".") -sys.path.append("..") - -from options.train_options import TrainOptions -from training.coach import Coach - - -def main(): - opts = TrainOptions().parse() - previous_train_ckpt = None - if opts.resume_training_from_ckpt: - opts, previous_train_ckpt = load_train_checkpoint(opts) - else: - setup_progressive_steps(opts) - create_initial_experiment_dir(opts) - - coach = Coach(opts, previous_train_ckpt) - coach.train() - - -def load_train_checkpoint(opts): - train_ckpt_path = opts.resume_training_from_ckpt - previous_train_ckpt = torch.load(opts.resume_training_from_ckpt, map_location='cpu') - new_opts_dict = vars(opts) - opts = previous_train_ckpt['opts'] - opts['resume_training_from_ckpt'] = train_ckpt_path - update_new_configs(opts, new_opts_dict) - pprint.pprint(opts) - opts = Namespace(**opts) - if opts.sub_exp_dir is not None: - sub_exp_dir = opts.sub_exp_dir - opts.exp_dir = os.path.join(opts.exp_dir, sub_exp_dir) - create_initial_experiment_dir(opts) - return opts, previous_train_ckpt - - -def setup_progressive_steps(opts): - log_size = int(math.log(opts.stylegan_size, 2)) - num_style_layers = 2*log_size - 2 - num_deltas = num_style_layers - 1 - if opts.progressive_start is not None: # If progressive delta training - opts.progressive_steps = [0] - next_progressive_step = opts.progressive_start - for i in range(num_deltas): - opts.progressive_steps.append(next_progressive_step) - next_progressive_step += opts.progressive_step_every - - assert opts.progressive_steps is None or is_valid_progressive_steps(opts, num_style_layers), \ - "Invalid progressive training input" - - -def is_valid_progressive_steps(opts, num_style_layers): - return len(opts.progressive_steps) == num_style_layers and opts.progressive_steps[0] == 0 - - -def create_initial_experiment_dir(opts): - if os.path.exists(opts.exp_dir): - raise Exception('Oops... 
{} already exists'.format(opts.exp_dir)) - os.makedirs(opts.exp_dir) - - opts_dict = vars(opts) - pprint.pprint(opts_dict) - with open(os.path.join(opts.exp_dir, 'opt.json'), 'w') as f: - json.dump(opts_dict, f, indent=4, sort_keys=True) - - -def update_new_configs(ckpt_opts, new_opts): - for k, v in new_opts.items(): - if k not in ckpt_opts: - ckpt_opts[k] = v - if new_opts['update_param_list']: - for param in new_opts['update_param_list']: - ckpt_opts[param] = new_opts[param] - - -if __name__ == '__main__': - main() diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/centripetalnet/README.md b/spaces/Gradio-Blocks/uniformer_image_detection/configs/centripetalnet/README.md deleted file mode 100644 index 18631da0ac205c7a364dadc61903e1eb8acb2d6c..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/centripetalnet/README.md +++ /dev/null @@ -1,26 +0,0 @@ -# CentripetalNet - -## Introduction - -[ALGORITHM] - -```latex -@InProceedings{Dong_2020_CVPR, -author = {Dong, Zhiwei and Li, Guoxuan and Liao, Yue and Wang, Fei and Ren, Pengju and Qian, Chen}, -title = {CentripetalNet: Pursuing High-Quality Keypoint Pairs for Object Detection}, -booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, -month = {June}, -year = {2020} -} -``` - -## Results and models - -| Backbone | Batch Size | Step/Total Epochs | Mem (GB) | Inf time (fps) | box AP | Config | Download | -| :-------------: | :--------: |:----------------: | :------: | :------------: | :----: | :------: | :--------: | -| HourglassNet-104 | [16 x 6](./centripetalnet_hourglass104_mstest_16x6_210e_coco.py) | 190/210 | 16.7 | 3.7 | 44.8 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/centripetalnet/centripetalnet_hourglass104_mstest_16x6_210e_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/centripetalnet/centripetalnet_hourglass104_mstest_16x6_210e_coco/centripetalnet_hourglass104_mstest_16x6_210e_coco_20200915_204804-3ccc61e5.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/centripetalnet/centripetalnet_hourglass104_mstest_16x6_210e_coco/centripetalnet_hourglass104_mstest_16x6_210e_coco_20200915_204804.log.json) | - -Note: - -- TTA setting is single-scale and `flip=True`. -- The model we released is the best checkpoint rather than the latest checkpoint (box AP 44.8 vs 44.6 in our experiment). 
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/_base_/models/ocrnet_r50-d8.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/_base_/models/ocrnet_r50-d8.py deleted file mode 100644 index 615aa3ff703942b6c22b2d6e9642504dd3e41ebd..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/_base_/models/ocrnet_r50-d8.py +++ /dev/null @@ -1,47 +0,0 @@ -# model settings -norm_cfg = dict(type='SyncBN', requires_grad=True) -model = dict( - type='CascadeEncoderDecoder', - num_stages=2, - pretrained='open-mmlab://resnet50_v1c', - backbone=dict( - type='ResNetV1c', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - dilations=(1, 1, 2, 4), - strides=(1, 2, 1, 1), - norm_cfg=norm_cfg, - norm_eval=False, - style='pytorch', - contract_dilation=True), - decode_head=[ - dict( - type='FCNHead', - in_channels=1024, - in_index=2, - channels=256, - num_convs=1, - concat_input=False, - dropout_ratio=0.1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)), - dict( - type='OCRHead', - in_channels=2048, - in_index=3, - channels=512, - ocr_channels=256, - dropout_ratio=0.1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)) - ], - # model training and testing settings - train_cfg=dict(), - test_cfg=dict(mode='whole')) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/apcnet/apcnet_r101-d8_512x512_160k_ade20k.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/apcnet/apcnet_r101-d8_512x512_160k_ade20k.py deleted file mode 100644 index 1ce2279a0fbfd6fcc7cd20e3f552b1a39f47d943..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/apcnet/apcnet_r101-d8_512x512_160k_ade20k.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './apcnet_r50-d8_512x512_160k_ade20k.py' -model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101)) diff --git a/spaces/GroveStreet/GTA_SOVITS/diffusion/dpm_solver_pytorch.py b/spaces/GroveStreet/GTA_SOVITS/diffusion/dpm_solver_pytorch.py deleted file mode 100644 index dee5e280661b61e0a99038ce0bd240db51344ead..0000000000000000000000000000000000000000 --- a/spaces/GroveStreet/GTA_SOVITS/diffusion/dpm_solver_pytorch.py +++ /dev/null @@ -1,1201 +0,0 @@ -import math - -import torch - - -class NoiseScheduleVP: - def __init__( - self, - schedule='discrete', - betas=None, - alphas_cumprod=None, - continuous_beta_0=0.1, - continuous_beta_1=20., - ): - """Create a wrapper class for the forward SDE (VP type). - - *** - Update: We support discrete-time diffusion models by implementing a picewise linear interpolation for log_alpha_t. - We recommend to use schedule='discrete' for the discrete-time diffusion models, especially for high-resolution images. - *** - - The forward SDE ensures that the condition distribution q_{t|0}(x_t | x_0) = N ( alpha_t * x_0, sigma_t^2 * I ). - We further define lambda_t = log(alpha_t) - log(sigma_t), which is the half-logSNR (described in the DPM-Solver paper). - Therefore, we implement the functions for computing alpha_t, sigma_t and lambda_t. 
For t in [0, T], we have: - - log_alpha_t = self.marginal_log_mean_coeff(t) - sigma_t = self.marginal_std(t) - lambda_t = self.marginal_lambda(t) - - Moreover, as lambda(t) is an invertible function, we also support its inverse function: - - t = self.inverse_lambda(lambda_t) - - =============================================================== - - We support both discrete-time DPMs (trained on n = 0, 1, ..., N-1) and continuous-time DPMs (trained on t in [t_0, T]). - - 1. For discrete-time DPMs: - - For discrete-time DPMs trained on n = 0, 1, ..., N-1, we convert the discrete steps to continuous time steps by: - t_i = (i + 1) / N - e.g. for N = 1000, we have t_0 = 1e-3 and T = t_{N-1} = 1. - We solve the corresponding diffusion ODE from time T = 1 to time t_0 = 1e-3. - - Args: - betas: A `torch.Tensor`. The beta array for the discrete-time DPM. (See the original DDPM paper for details) - alphas_cumprod: A `torch.Tensor`. The cumprod alphas for the discrete-time DPM. (See the original DDPM paper for details) - - Note that we always have alphas_cumprod = cumprod(betas). Therefore, we only need to set one of `betas` and `alphas_cumprod`. - - **Important**: Please pay special attention for the args for `alphas_cumprod`: - The `alphas_cumprod` is the \hat{alpha_n} arrays in the notations of DDPM. Specifically, DDPMs assume that - q_{t_n | 0}(x_{t_n} | x_0) = N ( \sqrt{\hat{alpha_n}} * x_0, (1 - \hat{alpha_n}) * I ). - Therefore, the notation \hat{alpha_n} is different from the notation alpha_t in DPM-Solver. In fact, we have - alpha_{t_n} = \sqrt{\hat{alpha_n}}, - and - log(alpha_{t_n}) = 0.5 * log(\hat{alpha_n}). - - - 2. For continuous-time DPMs: - - We support two types of VPSDEs: linear (DDPM) and cosine (improved-DDPM). The hyperparameters for the noise - schedule are the default settings in DDPM and improved-DDPM: - - Args: - beta_min: A `float` number. The smallest beta for the linear schedule. - beta_max: A `float` number. The largest beta for the linear schedule. - cosine_s: A `float` number. The hyperparameter in the cosine schedule. - cosine_beta_max: A `float` number. The hyperparameter in the cosine schedule. - T: A `float` number. The ending time of the forward process. - - =============================================================== - - Args: - schedule: A `str`. The noise schedule of the forward SDE. 'discrete' for discrete-time DPMs, - 'linear' or 'cosine' for continuous-time DPMs. - Returns: - A wrapper object of the forward SDE (VP type). - - =============================================================== - - Example: - - # For discrete-time DPMs, given betas (the beta array for n = 0, 1, ..., N - 1): - >>> ns = NoiseScheduleVP('discrete', betas=betas) - - # For discrete-time DPMs, given alphas_cumprod (the \hat{alpha_n} array for n = 0, 1, ..., N - 1): - >>> ns = NoiseScheduleVP('discrete', alphas_cumprod=alphas_cumprod) - - # For continuous-time DPMs (VPSDE), linear schedule: - >>> ns = NoiseScheduleVP('linear', continuous_beta_0=0.1, continuous_beta_1=20.) - - """ - - if schedule not in ['discrete', 'linear', 'cosine']: - raise ValueError( - "Unsupported noise schedule {}. The schedule needs to be 'discrete' or 'linear' or 'cosine'".format( - schedule)) - - self.schedule = schedule - if schedule == 'discrete': - if betas is not None: - log_alphas = 0.5 * torch.log(1 - betas).cumsum(dim=0) - else: - assert alphas_cumprod is not None - log_alphas = 0.5 * torch.log(alphas_cumprod) - self.total_N = len(log_alphas) - self.T = 1. 
- self.t_array = torch.linspace(0., 1., self.total_N + 1)[1:].reshape((1, -1)) - self.log_alpha_array = log_alphas.reshape((1, -1,)) - else: - self.total_N = 1000 - self.beta_0 = continuous_beta_0 - self.beta_1 = continuous_beta_1 - self.cosine_s = 0.008 - self.cosine_beta_max = 999. - self.cosine_t_max = math.atan(self.cosine_beta_max * (1. + self.cosine_s) / math.pi) * 2. * ( - 1. + self.cosine_s) / math.pi - self.cosine_s - self.cosine_log_alpha_0 = math.log(math.cos(self.cosine_s / (1. + self.cosine_s) * math.pi / 2.)) - self.schedule = schedule - if schedule == 'cosine': - # For the cosine schedule, T = 1 will have numerical issues. So we manually set the ending time T. - # Note that T = 0.9946 may be not the optimal setting. However, we find it works well. - self.T = 0.9946 - else: - self.T = 1. - - def marginal_log_mean_coeff(self, t): - """ - Compute log(alpha_t) of a given continuous-time label t in [0, T]. - """ - if self.schedule == 'discrete': - return interpolate_fn(t.reshape((-1, 1)), self.t_array.to(t.device), - self.log_alpha_array.to(t.device)).reshape((-1)) - elif self.schedule == 'linear': - return -0.25 * t ** 2 * (self.beta_1 - self.beta_0) - 0.5 * t * self.beta_0 - elif self.schedule == 'cosine': - log_alpha_fn = lambda s: torch.log(torch.cos((s + self.cosine_s) / (1. + self.cosine_s) * math.pi / 2.)) - log_alpha_t = log_alpha_fn(t) - self.cosine_log_alpha_0 - return log_alpha_t - - def marginal_alpha(self, t): - """ - Compute alpha_t of a given continuous-time label t in [0, T]. - """ - return torch.exp(self.marginal_log_mean_coeff(t)) - - def marginal_std(self, t): - """ - Compute sigma_t of a given continuous-time label t in [0, T]. - """ - return torch.sqrt(1. - torch.exp(2. * self.marginal_log_mean_coeff(t))) - - def marginal_lambda(self, t): - """ - Compute lambda_t = log(alpha_t) - log(sigma_t) of a given continuous-time label t in [0, T]. - """ - log_mean_coeff = self.marginal_log_mean_coeff(t) - log_std = 0.5 * torch.log(1. - torch.exp(2. * log_mean_coeff)) - return log_mean_coeff - log_std - - def inverse_lambda(self, lamb): - """ - Compute the continuous-time label t in [0, T] of a given half-logSNR lambda_t. - """ - if self.schedule == 'linear': - tmp = 2. * (self.beta_1 - self.beta_0) * torch.logaddexp(-2. * lamb, torch.zeros((1,)).to(lamb)) - Delta = self.beta_0 ** 2 + tmp - return tmp / (torch.sqrt(Delta) + self.beta_0) / (self.beta_1 - self.beta_0) - elif self.schedule == 'discrete': - log_alpha = -0.5 * torch.logaddexp(torch.zeros((1,)).to(lamb.device), -2. * lamb) - t = interpolate_fn(log_alpha.reshape((-1, 1)), torch.flip(self.log_alpha_array.to(lamb.device), [1]), - torch.flip(self.t_array.to(lamb.device), [1])) - return t.reshape((-1,)) - else: - log_alpha = -0.5 * torch.logaddexp(-2. * lamb, torch.zeros((1,)).to(lamb)) - t_fn = lambda log_alpha_t: torch.arccos(torch.exp(log_alpha_t + self.cosine_log_alpha_0)) * 2. * ( - 1. + self.cosine_s) / math.pi - self.cosine_s - t = t_fn(log_alpha) - return t - - -def model_wrapper( - model, - noise_schedule, - model_type="noise", - model_kwargs={}, - guidance_type="uncond", - condition=None, - unconditional_condition=None, - guidance_scale=1., - classifier_fn=None, - classifier_kwargs={}, -): - """Create a wrapper function for the noise prediction model. - - DPM-Solver needs to solve the continuous-time diffusion ODEs. For DPMs trained on discrete-time labels, we need to - firstly wrap the model function to a noise prediction model that accepts the continuous time as the input. 
- - We support four types of the diffusion model by setting `model_type`: - - 1. "noise": noise prediction model. (Trained by predicting noise). - - 2. "x_start": data prediction model. (Trained by predicting the data x_0 at time 0). - - 3. "v": velocity prediction model. (Trained by predicting the velocity). - The "v" prediction is derivation detailed in Appendix D of [1], and is used in Imagen-Video [2]. - - [1] Salimans, Tim, and Jonathan Ho. "Progressive distillation for fast sampling of diffusion models." - arXiv preprint arXiv:2202.00512 (2022). - [2] Ho, Jonathan, et al. "Imagen Video: High Definition Video Generation with Diffusion Models." - arXiv preprint arXiv:2210.02303 (2022). - - 4. "score": marginal score function. (Trained by denoising score matching). - Note that the score function and the noise prediction model follows a simple relationship: - ``` - noise(x_t, t) = -sigma_t * score(x_t, t) - ``` - - We support three types of guided sampling by DPMs by setting `guidance_type`: - 1. "uncond": unconditional sampling by DPMs. - The input `model` has the following format: - `` - model(x, t_input, **model_kwargs) -> noise | x_start | v | score - `` - - 2. "classifier": classifier guidance sampling [3] by DPMs and another classifier. - The input `model` has the following format: - `` - model(x, t_input, **model_kwargs) -> noise | x_start | v | score - `` - - The input `classifier_fn` has the following format: - `` - classifier_fn(x, t_input, cond, **classifier_kwargs) -> logits(x, t_input, cond) - `` - - [3] P. Dhariwal and A. Q. Nichol, "Diffusion models beat GANs on image synthesis," - in Advances in Neural Information Processing Systems, vol. 34, 2021, pp. 8780-8794. - - 3. "classifier-free": classifier-free guidance sampling by conditional DPMs. - The input `model` has the following format: - `` - model(x, t_input, cond, **model_kwargs) -> noise | x_start | v | score - `` - And if cond == `unconditional_condition`, the model output is the unconditional DPM output. - - [4] Ho, Jonathan, and Tim Salimans. "Classifier-free diffusion guidance." - arXiv preprint arXiv:2207.12598 (2022). - - - The `t_input` is the time label of the model, which may be discrete-time labels (i.e. 0 to 999) - or continuous-time labels (i.e. epsilon to T). - - We wrap the model function to accept only `x` and `t_continuous` as inputs, and outputs the predicted noise: - `` - def model_fn(x, t_continuous) -> noise: - t_input = get_model_input_time(t_continuous) - return noise_pred(model, x, t_input, **model_kwargs) - `` - where `t_continuous` is the continuous time labels (i.e. epsilon to T). And we use `model_fn` for DPM-Solver. - - =============================================================== - - Args: - model: A diffusion model with the corresponding format described above. - noise_schedule: A noise schedule object, such as NoiseScheduleVP. - model_type: A `str`. The parameterization type of the diffusion model. - "noise" or "x_start" or "v" or "score". - model_kwargs: A `dict`. A dict for the other inputs of the model function. - guidance_type: A `str`. The type of the guidance for sampling. - "uncond" or "classifier" or "classifier-free". - condition: A pytorch tensor. The condition for the guided sampling. - Only used for "classifier" or "classifier-free" guidance type. - unconditional_condition: A pytorch tensor. The condition for the unconditional sampling. - Only used for "classifier-free" guidance type. - guidance_scale: A `float`. The scale for the guided sampling. 
- classifier_fn: A classifier function. Only used for the classifier guidance. - classifier_kwargs: A `dict`. A dict for the other inputs of the classifier function. - Returns: - A noise prediction model that accepts the noised data and the continuous time as the inputs. - """ - - def get_model_input_time(t_continuous): - """ - Convert the continuous-time `t_continuous` (in [epsilon, T]) to the model input time. - For discrete-time DPMs, we convert `t_continuous` in [1 / N, 1] to `t_input` in [0, 1000 * (N - 1) / N]. - For continuous-time DPMs, we just use `t_continuous`. - """ - if noise_schedule.schedule == 'discrete': - return (t_continuous - 1. / noise_schedule.total_N) * noise_schedule.total_N - else: - return t_continuous - - def noise_pred_fn(x, t_continuous, cond=None): - if t_continuous.reshape((-1,)).shape[0] == 1: - t_continuous = t_continuous.expand((x.shape[0])) - t_input = get_model_input_time(t_continuous) - if cond is None: - output = model(x, t_input, **model_kwargs) - else: - output = model(x, t_input, cond, **model_kwargs) - if model_type == "noise": - return output - elif model_type == "x_start": - alpha_t, sigma_t = noise_schedule.marginal_alpha(t_continuous), noise_schedule.marginal_std(t_continuous) - dims = x.dim() - return (x - expand_dims(alpha_t, dims) * output) / expand_dims(sigma_t, dims) - elif model_type == "v": - alpha_t, sigma_t = noise_schedule.marginal_alpha(t_continuous), noise_schedule.marginal_std(t_continuous) - dims = x.dim() - return expand_dims(alpha_t, dims) * output + expand_dims(sigma_t, dims) * x - elif model_type == "score": - sigma_t = noise_schedule.marginal_std(t_continuous) - dims = x.dim() - return -expand_dims(sigma_t, dims) * output - - def cond_grad_fn(x, t_input): - """ - Compute the gradient of the classifier, i.e. nabla_{x} log p_t(cond | x_t). - """ - with torch.enable_grad(): - x_in = x.detach().requires_grad_(True) - log_prob = classifier_fn(x_in, t_input, condition, **classifier_kwargs) - return torch.autograd.grad(log_prob.sum(), x_in)[0] - - def model_fn(x, t_continuous): - """ - The noise predicition model function that is used for DPM-Solver. - """ - if t_continuous.reshape((-1,)).shape[0] == 1: - t_continuous = t_continuous.expand((x.shape[0])) - if guidance_type == "uncond": - return noise_pred_fn(x, t_continuous) - elif guidance_type == "classifier": - assert classifier_fn is not None - t_input = get_model_input_time(t_continuous) - cond_grad = cond_grad_fn(x, t_input) - sigma_t = noise_schedule.marginal_std(t_continuous) - noise = noise_pred_fn(x, t_continuous) - return noise - guidance_scale * expand_dims(sigma_t, dims=cond_grad.dim()) * cond_grad - elif guidance_type == "classifier-free": - if guidance_scale == 1. or unconditional_condition is None: - return noise_pred_fn(x, t_continuous, cond=condition) - else: - x_in = torch.cat([x] * 2) - t_in = torch.cat([t_continuous] * 2) - c_in = torch.cat([unconditional_condition, condition]) - noise_uncond, noise = noise_pred_fn(x_in, t_in, cond=c_in).chunk(2) - return noise_uncond + guidance_scale * (noise - noise_uncond) - - assert model_type in ["noise", "x_start", "v"] - assert guidance_type in ["uncond", "classifier", "classifier-free"] - return model_fn - - -class DPM_Solver: - def __init__(self, model_fn, noise_schedule, predict_x0=False, thresholding=False, max_val=1.): - """Construct a DPM-Solver. - - We support both the noise prediction model ("predicting epsilon") and the data prediction model ("predicting x0"). 
- If `predict_x0` is False, we use the solver for the noise prediction model (DPM-Solver). - If `predict_x0` is True, we use the solver for the data prediction model (DPM-Solver++). - In such case, we further support the "dynamic thresholding" in [1] when `thresholding` is True. - The "dynamic thresholding" can greatly improve the sample quality for pixel-space DPMs with large guidance scales. - - Args: - model_fn: A noise prediction model function which accepts the continuous-time input (t in [epsilon, T]): - `` - def model_fn(x, t_continuous): - return noise - `` - noise_schedule: A noise schedule object, such as NoiseScheduleVP. - predict_x0: A `bool`. If true, use the data prediction model; else, use the noise prediction model. - thresholding: A `bool`. Valid when `predict_x0` is True. Whether to use the "dynamic thresholding" in [1]. - max_val: A `float`. Valid when both `predict_x0` and `thresholding` are True. The max value for thresholding. - - [1] Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S Sara Mahdavi, Rapha Gontijo Lopes, et al. Photorealistic text-to-image diffusion models with deep language understanding. arXiv preprint arXiv:2205.11487, 2022b. - """ - self.model = model_fn - self.noise_schedule = noise_schedule - self.predict_x0 = predict_x0 - self.thresholding = thresholding - self.max_val = max_val - - def noise_prediction_fn(self, x, t): - """ - Return the noise prediction model. - """ - return self.model(x, t) - - def data_prediction_fn(self, x, t): - """ - Return the data prediction model (with thresholding). - """ - noise = self.noise_prediction_fn(x, t) - dims = x.dim() - alpha_t, sigma_t = self.noise_schedule.marginal_alpha(t), self.noise_schedule.marginal_std(t) - x0 = (x - expand_dims(sigma_t, dims) * noise) / expand_dims(alpha_t, dims) - if self.thresholding: - p = 0.995 # A hyperparameter in the paper of "Imagen" [1]. - s = torch.quantile(torch.abs(x0).reshape((x0.shape[0], -1)), p, dim=1) - s = expand_dims(torch.maximum(s, self.max_val * torch.ones_like(s).to(s.device)), dims) - x0 = torch.clamp(x0, -s, s) / s - return x0 - - def model_fn(self, x, t): - """ - Convert the model to the noise prediction model or the data prediction model. - """ - if self.predict_x0: - return self.data_prediction_fn(x, t) - else: - return self.noise_prediction_fn(x, t) - - def get_time_steps(self, skip_type, t_T, t_0, N, device): - """Compute the intermediate time steps for sampling. - - Args: - skip_type: A `str`. The type for the spacing of the time steps. We support three types: - - 'logSNR': uniform logSNR for the time steps. - - 'time_uniform': uniform time for the time steps. (**Recommended for high-resolutional data**.) - - 'time_quadratic': quadratic time for the time steps. (Used in DDIM for low-resolutional data.) - t_T: A `float`. The starting time of the sampling (default is T). - t_0: A `float`. The ending time of the sampling (default is epsilon). - N: A `int`. The total number of the spacing of the time steps. - device: A torch device. - Returns: - A pytorch tensor of the time steps, with the shape (N + 1,). 
- """ - if skip_type == 'logSNR': - lambda_T = self.noise_schedule.marginal_lambda(torch.tensor(t_T).to(device)) - lambda_0 = self.noise_schedule.marginal_lambda(torch.tensor(t_0).to(device)) - logSNR_steps = torch.linspace(lambda_T.cpu().item(), lambda_0.cpu().item(), N + 1).to(device) - return self.noise_schedule.inverse_lambda(logSNR_steps) - elif skip_type == 'time_uniform': - return torch.linspace(t_T, t_0, N + 1).to(device) - elif skip_type == 'time_quadratic': - t_order = 2 - t = torch.linspace(t_T ** (1. / t_order), t_0 ** (1. / t_order), N + 1).pow(t_order).to(device) - return t - else: - raise ValueError( - "Unsupported skip_type {}, need to be 'logSNR' or 'time_uniform' or 'time_quadratic'".format(skip_type)) - - def get_orders_and_timesteps_for_singlestep_solver(self, steps, order, skip_type, t_T, t_0, device): - """ - Get the order of each step for sampling by the singlestep DPM-Solver. - - We combine both DPM-Solver-1,2,3 to use all the function evaluations, which is named as "DPM-Solver-fast". - Given a fixed number of function evaluations by `steps`, the sampling procedure by DPM-Solver-fast is: - - If order == 1: - We take `steps` of DPM-Solver-1 (i.e. DDIM). - - If order == 2: - - Denote K = (steps // 2). We take K or (K + 1) intermediate time steps for sampling. - - If steps % 2 == 0, we use K steps of DPM-Solver-2. - - If steps % 2 == 1, we use K steps of DPM-Solver-2 and 1 step of DPM-Solver-1. - - If order == 3: - - Denote K = (steps // 3 + 1). We take K intermediate time steps for sampling. - - If steps % 3 == 0, we use (K - 2) steps of DPM-Solver-3, and 1 step of DPM-Solver-2 and 1 step of DPM-Solver-1. - - If steps % 3 == 1, we use (K - 1) steps of DPM-Solver-3 and 1 step of DPM-Solver-1. - - If steps % 3 == 2, we use (K - 1) steps of DPM-Solver-3 and 1 step of DPM-Solver-2. - - ============================================ - Args: - order: A `int`. The max order for the solver (2 or 3). - steps: A `int`. The total number of function evaluations (NFE). - skip_type: A `str`. The type for the spacing of the time steps. We support three types: - - 'logSNR': uniform logSNR for the time steps. - - 'time_uniform': uniform time for the time steps. (**Recommended for high-resolutional data**.) - - 'time_quadratic': quadratic time for the time steps. (Used in DDIM for low-resolutional data.) - t_T: A `float`. The starting time of the sampling (default is T). - t_0: A `float`. The ending time of the sampling (default is epsilon). - device: A torch device. - Returns: - orders: A list of the solver order of each step. - """ - if order == 3: - K = steps // 3 + 1 - if steps % 3 == 0: - orders = [3, ] * (K - 2) + [2, 1] - elif steps % 3 == 1: - orders = [3, ] * (K - 1) + [1] - else: - orders = [3, ] * (K - 1) + [2] - elif order == 2: - if steps % 2 == 0: - K = steps // 2 - orders = [2, ] * K - else: - K = steps // 2 + 1 - orders = [2, ] * (K - 1) + [1] - elif order == 1: - K = 1 - orders = [1, ] * steps - else: - raise ValueError("'order' must be '1' or '2' or '3'.") - if skip_type == 'logSNR': - # To reproduce the results in DPM-Solver paper - timesteps_outer = self.get_time_steps(skip_type, t_T, t_0, K, device) - else: - timesteps_outer = self.get_time_steps(skip_type, t_T, t_0, steps, device)[ - torch.cumsum(torch.tensor([0, ] + orders), dim=0).to(device)] - return timesteps_outer, orders - - def denoise_fn(self, x, s): - """ - Denoise at the final step, which is equivalent to solve the ODE from lambda_s to infty by first-order discretization. 
- """ - return self.data_prediction_fn(x, s) - - def dpm_solver_first_update(self, x, s, t, model_s=None, return_intermediate=False): - """ - DPM-Solver-1 (equivalent to DDIM) from time `s` to time `t`. - - Args: - x: A pytorch tensor. The initial value at time `s`. - s: A pytorch tensor. The starting time, with the shape (x.shape[0],). - t: A pytorch tensor. The ending time, with the shape (x.shape[0],). - model_s: A pytorch tensor. The model function evaluated at time `s`. - If `model_s` is None, we evaluate the model by `x` and `s`; otherwise we directly use it. - return_intermediate: A `bool`. If true, also return the model value at time `s`. - Returns: - x_t: A pytorch tensor. The approximated solution at time `t`. - """ - ns = self.noise_schedule - dims = x.dim() - lambda_s, lambda_t = ns.marginal_lambda(s), ns.marginal_lambda(t) - h = lambda_t - lambda_s - log_alpha_s, log_alpha_t = ns.marginal_log_mean_coeff(s), ns.marginal_log_mean_coeff(t) - sigma_s, sigma_t = ns.marginal_std(s), ns.marginal_std(t) - alpha_t = torch.exp(log_alpha_t) - - if self.predict_x0: - phi_1 = torch.expm1(-h) - if model_s is None: - model_s = self.model_fn(x, s) - x_t = ( - expand_dims(sigma_t / sigma_s, dims) * x - - expand_dims(alpha_t * phi_1, dims) * model_s - ) - if return_intermediate: - return x_t, {'model_s': model_s} - else: - return x_t - else: - phi_1 = torch.expm1(h) - if model_s is None: - model_s = self.model_fn(x, s) - x_t = ( - expand_dims(torch.exp(log_alpha_t - log_alpha_s), dims) * x - - expand_dims(sigma_t * phi_1, dims) * model_s - ) - if return_intermediate: - return x_t, {'model_s': model_s} - else: - return x_t - - def singlestep_dpm_solver_second_update(self, x, s, t, r1=0.5, model_s=None, return_intermediate=False, - solver_type='dpm_solver'): - """ - Singlestep solver DPM-Solver-2 from time `s` to time `t`. - - Args: - x: A pytorch tensor. The initial value at time `s`. - s: A pytorch tensor. The starting time, with the shape (x.shape[0],). - t: A pytorch tensor. The ending time, with the shape (x.shape[0],). - r1: A `float`. The hyperparameter of the second-order solver. - model_s: A pytorch tensor. The model function evaluated at time `s`. - If `model_s` is None, we evaluate the model by `x` and `s`; otherwise we directly use it. - return_intermediate: A `bool`. If true, also return the model value at time `s` and `s1` (the intermediate time). - solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers. - The type slightly impacts the performance. We recommend to use 'dpm_solver' type. - Returns: - x_t: A pytorch tensor. The approximated solution at time `t`. 
- """ - if solver_type not in ['dpm_solver', 'taylor']: - raise ValueError("'solver_type' must be either 'dpm_solver' or 'taylor', got {}".format(solver_type)) - if r1 is None: - r1 = 0.5 - ns = self.noise_schedule - dims = x.dim() - lambda_s, lambda_t = ns.marginal_lambda(s), ns.marginal_lambda(t) - h = lambda_t - lambda_s - lambda_s1 = lambda_s + r1 * h - s1 = ns.inverse_lambda(lambda_s1) - log_alpha_s, log_alpha_s1, log_alpha_t = ns.marginal_log_mean_coeff(s), ns.marginal_log_mean_coeff( - s1), ns.marginal_log_mean_coeff(t) - sigma_s, sigma_s1, sigma_t = ns.marginal_std(s), ns.marginal_std(s1), ns.marginal_std(t) - alpha_s1, alpha_t = torch.exp(log_alpha_s1), torch.exp(log_alpha_t) - - if self.predict_x0: - phi_11 = torch.expm1(-r1 * h) - phi_1 = torch.expm1(-h) - - if model_s is None: - model_s = self.model_fn(x, s) - x_s1 = ( - expand_dims(sigma_s1 / sigma_s, dims) * x - - expand_dims(alpha_s1 * phi_11, dims) * model_s - ) - model_s1 = self.model_fn(x_s1, s1) - if solver_type == 'dpm_solver': - x_t = ( - expand_dims(sigma_t / sigma_s, dims) * x - - expand_dims(alpha_t * phi_1, dims) * model_s - - (0.5 / r1) * expand_dims(alpha_t * phi_1, dims) * (model_s1 - model_s) - ) - elif solver_type == 'taylor': - x_t = ( - expand_dims(sigma_t / sigma_s, dims) * x - - expand_dims(alpha_t * phi_1, dims) * model_s - + (1. / r1) * expand_dims(alpha_t * ((torch.exp(-h) - 1.) / h + 1.), dims) * ( - model_s1 - model_s) - ) - else: - phi_11 = torch.expm1(r1 * h) - phi_1 = torch.expm1(h) - - if model_s is None: - model_s = self.model_fn(x, s) - x_s1 = ( - expand_dims(torch.exp(log_alpha_s1 - log_alpha_s), dims) * x - - expand_dims(sigma_s1 * phi_11, dims) * model_s - ) - model_s1 = self.model_fn(x_s1, s1) - if solver_type == 'dpm_solver': - x_t = ( - expand_dims(torch.exp(log_alpha_t - log_alpha_s), dims) * x - - expand_dims(sigma_t * phi_1, dims) * model_s - - (0.5 / r1) * expand_dims(sigma_t * phi_1, dims) * (model_s1 - model_s) - ) - elif solver_type == 'taylor': - x_t = ( - expand_dims(torch.exp(log_alpha_t - log_alpha_s), dims) * x - - expand_dims(sigma_t * phi_1, dims) * model_s - - (1. / r1) * expand_dims(sigma_t * ((torch.exp(h) - 1.) / h - 1.), dims) * (model_s1 - model_s) - ) - if return_intermediate: - return x_t, {'model_s': model_s, 'model_s1': model_s1} - else: - return x_t - - def singlestep_dpm_solver_third_update(self, x, s, t, r1=1. / 3., r2=2. / 3., model_s=None, model_s1=None, - return_intermediate=False, solver_type='dpm_solver'): - """ - Singlestep solver DPM-Solver-3 from time `s` to time `t`. - - Args: - x: A pytorch tensor. The initial value at time `s`. - s: A pytorch tensor. The starting time, with the shape (x.shape[0],). - t: A pytorch tensor. The ending time, with the shape (x.shape[0],). - r1: A `float`. The hyperparameter of the third-order solver. - r2: A `float`. The hyperparameter of the third-order solver. - model_s: A pytorch tensor. The model function evaluated at time `s`. - If `model_s` is None, we evaluate the model by `x` and `s`; otherwise we directly use it. - model_s1: A pytorch tensor. The model function evaluated at time `s1` (the intermediate time given by `r1`). - If `model_s1` is None, we evaluate the model at `s1`; otherwise we directly use it. - return_intermediate: A `bool`. If true, also return the model value at time `s`, `s1` and `s2` (the intermediate times). - solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers. - The type slightly impacts the performance. We recommend to use 'dpm_solver' type. 
- Returns: - x_t: A pytorch tensor. The approximated solution at time `t`. - """ - if solver_type not in ['dpm_solver', 'taylor']: - raise ValueError("'solver_type' must be either 'dpm_solver' or 'taylor', got {}".format(solver_type)) - if r1 is None: - r1 = 1. / 3. - if r2 is None: - r2 = 2. / 3. - ns = self.noise_schedule - dims = x.dim() - lambda_s, lambda_t = ns.marginal_lambda(s), ns.marginal_lambda(t) - h = lambda_t - lambda_s - lambda_s1 = lambda_s + r1 * h - lambda_s2 = lambda_s + r2 * h - s1 = ns.inverse_lambda(lambda_s1) - s2 = ns.inverse_lambda(lambda_s2) - log_alpha_s, log_alpha_s1, log_alpha_s2, log_alpha_t = ns.marginal_log_mean_coeff( - s), ns.marginal_log_mean_coeff(s1), ns.marginal_log_mean_coeff(s2), ns.marginal_log_mean_coeff(t) - sigma_s, sigma_s1, sigma_s2, sigma_t = ns.marginal_std(s), ns.marginal_std(s1), ns.marginal_std( - s2), ns.marginal_std(t) - alpha_s1, alpha_s2, alpha_t = torch.exp(log_alpha_s1), torch.exp(log_alpha_s2), torch.exp(log_alpha_t) - - if self.predict_x0: - phi_11 = torch.expm1(-r1 * h) - phi_12 = torch.expm1(-r2 * h) - phi_1 = torch.expm1(-h) - phi_22 = torch.expm1(-r2 * h) / (r2 * h) + 1. - phi_2 = phi_1 / h + 1. - phi_3 = phi_2 / h - 0.5 - - if model_s is None: - model_s = self.model_fn(x, s) - if model_s1 is None: - x_s1 = ( - expand_dims(sigma_s1 / sigma_s, dims) * x - - expand_dims(alpha_s1 * phi_11, dims) * model_s - ) - model_s1 = self.model_fn(x_s1, s1) - x_s2 = ( - expand_dims(sigma_s2 / sigma_s, dims) * x - - expand_dims(alpha_s2 * phi_12, dims) * model_s - + r2 / r1 * expand_dims(alpha_s2 * phi_22, dims) * (model_s1 - model_s) - ) - model_s2 = self.model_fn(x_s2, s2) - if solver_type == 'dpm_solver': - x_t = ( - expand_dims(sigma_t / sigma_s, dims) * x - - expand_dims(alpha_t * phi_1, dims) * model_s - + (1. / r2) * expand_dims(alpha_t * phi_2, dims) * (model_s2 - model_s) - ) - elif solver_type == 'taylor': - D1_0 = (1. / r1) * (model_s1 - model_s) - D1_1 = (1. / r2) * (model_s2 - model_s) - D1 = (r2 * D1_0 - r1 * D1_1) / (r2 - r1) - D2 = 2. * (D1_1 - D1_0) / (r2 - r1) - x_t = ( - expand_dims(sigma_t / sigma_s, dims) * x - - expand_dims(alpha_t * phi_1, dims) * model_s - + expand_dims(alpha_t * phi_2, dims) * D1 - - expand_dims(alpha_t * phi_3, dims) * D2 - ) - else: - phi_11 = torch.expm1(r1 * h) - phi_12 = torch.expm1(r2 * h) - phi_1 = torch.expm1(h) - phi_22 = torch.expm1(r2 * h) / (r2 * h) - 1. - phi_2 = phi_1 / h - 1. - phi_3 = phi_2 / h - 0.5 - - if model_s is None: - model_s = self.model_fn(x, s) - if model_s1 is None: - x_s1 = ( - expand_dims(torch.exp(log_alpha_s1 - log_alpha_s), dims) * x - - expand_dims(sigma_s1 * phi_11, dims) * model_s - ) - model_s1 = self.model_fn(x_s1, s1) - x_s2 = ( - expand_dims(torch.exp(log_alpha_s2 - log_alpha_s), dims) * x - - expand_dims(sigma_s2 * phi_12, dims) * model_s - - r2 / r1 * expand_dims(sigma_s2 * phi_22, dims) * (model_s1 - model_s) - ) - model_s2 = self.model_fn(x_s2, s2) - if solver_type == 'dpm_solver': - x_t = ( - expand_dims(torch.exp(log_alpha_t - log_alpha_s), dims) * x - - expand_dims(sigma_t * phi_1, dims) * model_s - - (1. / r2) * expand_dims(sigma_t * phi_2, dims) * (model_s2 - model_s) - ) - elif solver_type == 'taylor': - D1_0 = (1. / r1) * (model_s1 - model_s) - D1_1 = (1. / r2) * (model_s2 - model_s) - D1 = (r2 * D1_0 - r1 * D1_1) / (r2 - r1) - D2 = 2. 
* (D1_1 - D1_0) / (r2 - r1) - x_t = ( - expand_dims(torch.exp(log_alpha_t - log_alpha_s), dims) * x - - expand_dims(sigma_t * phi_1, dims) * model_s - - expand_dims(sigma_t * phi_2, dims) * D1 - - expand_dims(sigma_t * phi_3, dims) * D2 - ) - - if return_intermediate: - return x_t, {'model_s': model_s, 'model_s1': model_s1, 'model_s2': model_s2} - else: - return x_t - - def multistep_dpm_solver_second_update(self, x, model_prev_list, t_prev_list, t, solver_type="dpm_solver"): - """ - Multistep solver DPM-Solver-2 from time `t_prev_list[-1]` to time `t`. - - Args: - x: A pytorch tensor. The initial value at time `s`. - model_prev_list: A list of pytorch tensor. The previous computed model values. - t_prev_list: A list of pytorch tensor. The previous times, each time has the shape (x.shape[0],) - t: A pytorch tensor. The ending time, with the shape (x.shape[0],). - solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers. - The type slightly impacts the performance. We recommend to use 'dpm_solver' type. - Returns: - x_t: A pytorch tensor. The approximated solution at time `t`. - """ - if solver_type not in ['dpm_solver', 'taylor']: - raise ValueError("'solver_type' must be either 'dpm_solver' or 'taylor', got {}".format(solver_type)) - ns = self.noise_schedule - dims = x.dim() - model_prev_1, model_prev_0 = model_prev_list - t_prev_1, t_prev_0 = t_prev_list - lambda_prev_1, lambda_prev_0, lambda_t = ns.marginal_lambda(t_prev_1), ns.marginal_lambda( - t_prev_0), ns.marginal_lambda(t) - log_alpha_prev_0, log_alpha_t = ns.marginal_log_mean_coeff(t_prev_0), ns.marginal_log_mean_coeff(t) - sigma_prev_0, sigma_t = ns.marginal_std(t_prev_0), ns.marginal_std(t) - alpha_t = torch.exp(log_alpha_t) - - h_0 = lambda_prev_0 - lambda_prev_1 - h = lambda_t - lambda_prev_0 - r0 = h_0 / h - D1_0 = expand_dims(1. / r0, dims) * (model_prev_0 - model_prev_1) - if self.predict_x0: - if solver_type == 'dpm_solver': - x_t = ( - expand_dims(sigma_t / sigma_prev_0, dims) * x - - expand_dims(alpha_t * (torch.exp(-h) - 1.), dims) * model_prev_0 - - 0.5 * expand_dims(alpha_t * (torch.exp(-h) - 1.), dims) * D1_0 - ) - elif solver_type == 'taylor': - x_t = ( - expand_dims(sigma_t / sigma_prev_0, dims) * x - - expand_dims(alpha_t * (torch.exp(-h) - 1.), dims) * model_prev_0 - + expand_dims(alpha_t * ((torch.exp(-h) - 1.) / h + 1.), dims) * D1_0 - ) - else: - if solver_type == 'dpm_solver': - x_t = ( - expand_dims(torch.exp(log_alpha_t - log_alpha_prev_0), dims) * x - - expand_dims(sigma_t * (torch.exp(h) - 1.), dims) * model_prev_0 - - 0.5 * expand_dims(sigma_t * (torch.exp(h) - 1.), dims) * D1_0 - ) - elif solver_type == 'taylor': - x_t = ( - expand_dims(torch.exp(log_alpha_t - log_alpha_prev_0), dims) * x - - expand_dims(sigma_t * (torch.exp(h) - 1.), dims) * model_prev_0 - - expand_dims(sigma_t * ((torch.exp(h) - 1.) / h - 1.), dims) * D1_0 - ) - return x_t - - def multistep_dpm_solver_third_update(self, x, model_prev_list, t_prev_list, t, solver_type='dpm_solver'): - """ - Multistep solver DPM-Solver-3 from time `t_prev_list[-1]` to time `t`. - - Args: - x: A pytorch tensor. The initial value at time `s`. - model_prev_list: A list of pytorch tensor. The previous computed model values. - t_prev_list: A list of pytorch tensor. The previous times, each time has the shape (x.shape[0],) - t: A pytorch tensor. The ending time, with the shape (x.shape[0],). - solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers. - The type slightly impacts the performance. 
We recommend to use 'dpm_solver' type. - Returns: - x_t: A pytorch tensor. The approximated solution at time `t`. - """ - ns = self.noise_schedule - dims = x.dim() - model_prev_2, model_prev_1, model_prev_0 = model_prev_list - t_prev_2, t_prev_1, t_prev_0 = t_prev_list - lambda_prev_2, lambda_prev_1, lambda_prev_0, lambda_t = ns.marginal_lambda(t_prev_2), ns.marginal_lambda( - t_prev_1), ns.marginal_lambda(t_prev_0), ns.marginal_lambda(t) - log_alpha_prev_0, log_alpha_t = ns.marginal_log_mean_coeff(t_prev_0), ns.marginal_log_mean_coeff(t) - sigma_prev_0, sigma_t = ns.marginal_std(t_prev_0), ns.marginal_std(t) - alpha_t = torch.exp(log_alpha_t) - - h_1 = lambda_prev_1 - lambda_prev_2 - h_0 = lambda_prev_0 - lambda_prev_1 - h = lambda_t - lambda_prev_0 - r0, r1 = h_0 / h, h_1 / h - D1_0 = expand_dims(1. / r0, dims) * (model_prev_0 - model_prev_1) - D1_1 = expand_dims(1. / r1, dims) * (model_prev_1 - model_prev_2) - D1 = D1_0 + expand_dims(r0 / (r0 + r1), dims) * (D1_0 - D1_1) - D2 = expand_dims(1. / (r0 + r1), dims) * (D1_0 - D1_1) - if self.predict_x0: - x_t = ( - expand_dims(sigma_t / sigma_prev_0, dims) * x - - expand_dims(alpha_t * (torch.exp(-h) - 1.), dims) * model_prev_0 - + expand_dims(alpha_t * ((torch.exp(-h) - 1.) / h + 1.), dims) * D1 - - expand_dims(alpha_t * ((torch.exp(-h) - 1. + h) / h ** 2 - 0.5), dims) * D2 - ) - else: - x_t = ( - expand_dims(torch.exp(log_alpha_t - log_alpha_prev_0), dims) * x - - expand_dims(sigma_t * (torch.exp(h) - 1.), dims) * model_prev_0 - - expand_dims(sigma_t * ((torch.exp(h) - 1.) / h - 1.), dims) * D1 - - expand_dims(sigma_t * ((torch.exp(h) - 1. - h) / h ** 2 - 0.5), dims) * D2 - ) - return x_t - - def singlestep_dpm_solver_update(self, x, s, t, order, return_intermediate=False, solver_type='dpm_solver', r1=None, - r2=None): - """ - Singlestep DPM-Solver with the order `order` from time `s` to time `t`. - - Args: - x: A pytorch tensor. The initial value at time `s`. - s: A pytorch tensor. The starting time, with the shape (x.shape[0],). - t: A pytorch tensor. The ending time, with the shape (x.shape[0],). - order: A `int`. The order of DPM-Solver. We only support order == 1 or 2 or 3. - return_intermediate: A `bool`. If true, also return the model value at time `s`, `s1` and `s2` (the intermediate times). - solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers. - The type slightly impacts the performance. We recommend to use 'dpm_solver' type. - r1: A `float`. The hyperparameter of the second-order or third-order solver. - r2: A `float`. The hyperparameter of the third-order solver. - Returns: - x_t: A pytorch tensor. The approximated solution at time `t`. - """ - if order == 1: - return self.dpm_solver_first_update(x, s, t, return_intermediate=return_intermediate) - elif order == 2: - return self.singlestep_dpm_solver_second_update(x, s, t, return_intermediate=return_intermediate, - solver_type=solver_type, r1=r1) - elif order == 3: - return self.singlestep_dpm_solver_third_update(x, s, t, return_intermediate=return_intermediate, - solver_type=solver_type, r1=r1, r2=r2) - else: - raise ValueError("Solver order must be 1 or 2 or 3, got {}".format(order)) - - def multistep_dpm_solver_update(self, x, model_prev_list, t_prev_list, t, order, solver_type='dpm_solver'): - """ - Multistep DPM-Solver with the order `order` from time `t_prev_list[-1]` to time `t`. - - Args: - x: A pytorch tensor. The initial value at time `s`. - model_prev_list: A list of pytorch tensor. The previous computed model values. 
- t_prev_list: A list of pytorch tensor. The previous times, each time has the shape (x.shape[0],) - t: A pytorch tensor. The ending time, with the shape (x.shape[0],). - order: A `int`. The order of DPM-Solver. We only support order == 1 or 2 or 3. - solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers. - The type slightly impacts the performance. We recommend to use 'dpm_solver' type. - Returns: - x_t: A pytorch tensor. The approximated solution at time `t`. - """ - if order == 1: - return self.dpm_solver_first_update(x, t_prev_list[-1], t, model_s=model_prev_list[-1]) - elif order == 2: - return self.multistep_dpm_solver_second_update(x, model_prev_list, t_prev_list, t, solver_type=solver_type) - elif order == 3: - return self.multistep_dpm_solver_third_update(x, model_prev_list, t_prev_list, t, solver_type=solver_type) - else: - raise ValueError("Solver order must be 1 or 2 or 3, got {}".format(order)) - - def dpm_solver_adaptive(self, x, order, t_T, t_0, h_init=0.05, atol=0.0078, rtol=0.05, theta=0.9, t_err=1e-5, - solver_type='dpm_solver'): - """ - The adaptive step size solver based on singlestep DPM-Solver. - - Args: - x: A pytorch tensor. The initial value at time `t_T`. - order: A `int`. The (higher) order of the solver. We only support order == 2 or 3. - t_T: A `float`. The starting time of the sampling (default is T). - t_0: A `float`. The ending time of the sampling (default is epsilon). - h_init: A `float`. The initial step size (for logSNR). - atol: A `float`. The absolute tolerance of the solver. For image data, the default setting is 0.0078, followed [1]. - rtol: A `float`. The relative tolerance of the solver. The default setting is 0.05. - theta: A `float`. The safety hyperparameter for adapting the step size. The default setting is 0.9, followed [1]. - t_err: A `float`. The tolerance for the time. We solve the diffusion ODE until the absolute error between the - current time and `t_0` is less than `t_err`. The default setting is 1e-5. - solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers. - The type slightly impacts the performance. We recommend to use 'dpm_solver' type. - Returns: - x_0: A pytorch tensor. The approximated solution at time `t_0`. - - [1] A. Jolicoeur-Martineau, K. Li, R. Piché-Taillefer, T. Kachman, and I. Mitliagkas, "Gotta go fast when generating data with score-based models," arXiv preprint arXiv:2105.14080, 2021. - """ - ns = self.noise_schedule - s = t_T * torch.ones((x.shape[0],)).to(x) - lambda_s = ns.marginal_lambda(s) - lambda_0 = ns.marginal_lambda(t_0 * torch.ones_like(s).to(x)) - h = h_init * torch.ones_like(s).to(x) - x_prev = x - nfe = 0 - if order == 2: - r1 = 0.5 - lower_update = lambda x, s, t: self.dpm_solver_first_update(x, s, t, return_intermediate=True) - higher_update = lambda x, s, t, **kwargs: self.singlestep_dpm_solver_second_update(x, s, t, r1=r1, - solver_type=solver_type, - **kwargs) - elif order == 3: - r1, r2 = 1. / 3., 2. / 3. 
- lower_update = lambda x, s, t: self.singlestep_dpm_solver_second_update(x, s, t, r1=r1, - return_intermediate=True, - solver_type=solver_type) - higher_update = lambda x, s, t, **kwargs: self.singlestep_dpm_solver_third_update(x, s, t, r1=r1, r2=r2, - solver_type=solver_type, - **kwargs) - else: - raise ValueError("For adaptive step size solver, order must be 2 or 3, got {}".format(order)) - while torch.abs((s - t_0)).mean() > t_err: - t = ns.inverse_lambda(lambda_s + h) - x_lower, lower_noise_kwargs = lower_update(x, s, t) - x_higher = higher_update(x, s, t, **lower_noise_kwargs) - delta = torch.max(torch.ones_like(x).to(x) * atol, rtol * torch.max(torch.abs(x_lower), torch.abs(x_prev))) - norm_fn = lambda v: torch.sqrt(torch.square(v.reshape((v.shape[0], -1))).mean(dim=-1, keepdim=True)) - E = norm_fn((x_higher - x_lower) / delta).max() - if torch.all(E <= 1.): - x = x_higher - s = t - x_prev = x_lower - lambda_s = ns.marginal_lambda(s) - h = torch.min(theta * h * torch.float_power(E, -1. / order).float(), lambda_0 - lambda_s) - nfe += order - print('adaptive solver nfe', nfe) - return x - - def sample(self, x, steps=20, t_start=None, t_end=None, order=3, skip_type='time_uniform', - method='singlestep', denoise=False, solver_type='dpm_solver', atol=0.0078, - rtol=0.05, - ): - """ - Compute the sample at time `t_end` by DPM-Solver, given the initial `x` at time `t_start`. - - ===================================================== - - We support the following algorithms for both noise prediction model and data prediction model: - - 'singlestep': - Singlestep DPM-Solver (i.e. "DPM-Solver-fast" in the paper), which combines different orders of singlestep DPM-Solver. - We combine all the singlestep solvers with order <= `order` to use up all the function evaluations (steps). - The total number of function evaluations (NFE) == `steps`. - Given a fixed NFE == `steps`, the sampling procedure is: - - If `order` == 1: - - Denote K = steps. We use K steps of DPM-Solver-1 (i.e. DDIM). - - If `order` == 2: - - Denote K = (steps // 2) + (steps % 2). We take K intermediate time steps for sampling. - - If steps % 2 == 0, we use K steps of singlestep DPM-Solver-2. - - If steps % 2 == 1, we use (K - 1) steps of singlestep DPM-Solver-2 and 1 step of DPM-Solver-1. - - If `order` == 3: - - Denote K = (steps // 3 + 1). We take K intermediate time steps for sampling. - - If steps % 3 == 0, we use (K - 2) steps of singlestep DPM-Solver-3, and 1 step of singlestep DPM-Solver-2 and 1 step of DPM-Solver-1. - - If steps % 3 == 1, we use (K - 1) steps of singlestep DPM-Solver-3 and 1 step of DPM-Solver-1. - - If steps % 3 == 2, we use (K - 1) steps of singlestep DPM-Solver-3 and 1 step of singlestep DPM-Solver-2. - - 'multistep': - Multistep DPM-Solver with the order of `order`. The total number of function evaluations (NFE) == `steps`. - We initialize the first `order` values by lower order multistep solvers. - Given a fixed NFE == `steps`, the sampling procedure is: - Denote K = steps. - - If `order` == 1: - - We use K steps of DPM-Solver-1 (i.e. DDIM). - - If `order` == 2: - - We firstly use 1 step of DPM-Solver-1, then use (K - 1) step of multistep DPM-Solver-2. - - If `order` == 3: - - We firstly use 1 step of DPM-Solver-1, then 1 step of multistep DPM-Solver-2, then (K - 2) step of multistep DPM-Solver-3. - - 'singlestep_fixed': - Fixed order singlestep DPM-Solver (i.e. DPM-Solver-1 or singlestep DPM-Solver-2 or singlestep DPM-Solver-3). 
- We use singlestep DPM-Solver-`order` for `order`=1 or 2 or 3, with total [`steps` // `order`] * `order` NFE. - - 'adaptive': - Adaptive step size DPM-Solver (i.e. "DPM-Solver-12" and "DPM-Solver-23" in the paper). - We ignore `steps` and use adaptive step size DPM-Solver with a higher order of `order`. - You can adjust the absolute tolerance `atol` and the relative tolerance `rtol` to balance the computatation costs - (NFE) and the sample quality. - - If `order` == 2, we use DPM-Solver-12 which combines DPM-Solver-1 and singlestep DPM-Solver-2. - - If `order` == 3, we use DPM-Solver-23 which combines singlestep DPM-Solver-2 and singlestep DPM-Solver-3. - - ===================================================== - - Some advices for choosing the algorithm: - - For **unconditional sampling** or **guided sampling with small guidance scale** by DPMs: - Use singlestep DPM-Solver ("DPM-Solver-fast" in the paper) with `order = 3`. - e.g. - >>> dpm_solver = DPM_Solver(model_fn, noise_schedule, predict_x0=False) - >>> x_sample = dpm_solver.sample(x, steps=steps, t_start=t_start, t_end=t_end, order=3, - skip_type='time_uniform', method='singlestep') - - For **guided sampling with large guidance scale** by DPMs: - Use multistep DPM-Solver with `predict_x0 = True` and `order = 2`. - e.g. - >>> dpm_solver = DPM_Solver(model_fn, noise_schedule, predict_x0=True) - >>> x_sample = dpm_solver.sample(x, steps=steps, t_start=t_start, t_end=t_end, order=2, - skip_type='time_uniform', method='multistep') - - We support three types of `skip_type`: - - 'logSNR': uniform logSNR for the time steps. **Recommended for low-resolutional images** - - 'time_uniform': uniform time for the time steps. **Recommended for high-resolutional images**. - - 'time_quadratic': quadratic time for the time steps. - - ===================================================== - Args: - x: A pytorch tensor. The initial value at time `t_start` - e.g. if `t_start` == T, then `x` is a sample from the standard normal distribution. - steps: A `int`. The total number of function evaluations (NFE). - t_start: A `float`. The starting time of the sampling. - If `T` is None, we use self.noise_schedule.T (default is 1.0). - t_end: A `float`. The ending time of the sampling. - If `t_end` is None, we use 1. / self.noise_schedule.total_N. - e.g. if total_N == 1000, we have `t_end` == 1e-3. - For discrete-time DPMs: - - We recommend `t_end` == 1. / self.noise_schedule.total_N. - For continuous-time DPMs: - - We recommend `t_end` == 1e-3 when `steps` <= 15; and `t_end` == 1e-4 when `steps` > 15. - order: A `int`. The order of DPM-Solver. - skip_type: A `str`. The type for the spacing of the time steps. 'time_uniform' or 'logSNR' or 'time_quadratic'. - method: A `str`. The method for sampling. 'singlestep' or 'multistep' or 'singlestep_fixed' or 'adaptive'. - denoise: A `bool`. Whether to denoise at the final step. Default is False. - If `denoise` is True, the total NFE is (`steps` + 1). - solver_type: A `str`. The taylor expansion type for the solver. `dpm_solver` or `taylor`. We recommend `dpm_solver`. - atol: A `float`. The absolute tolerance of the adaptive step size solver. Valid when `method` == 'adaptive'. - rtol: A `float`. The relative tolerance of the adaptive step size solver. Valid when `method` == 'adaptive'. - Returns: - x_end: A pytorch tensor. The approximated solution at time `t_end`. - - """ - t_0 = 1. 
/ self.noise_schedule.total_N if t_end is None else t_end - t_T = self.noise_schedule.T if t_start is None else t_start - device = x.device - if method == 'adaptive': - with torch.no_grad(): - x = self.dpm_solver_adaptive(x, order=order, t_T=t_T, t_0=t_0, atol=atol, rtol=rtol, - solver_type=solver_type) - elif method == 'multistep': - assert steps >= order - timesteps = self.get_time_steps(skip_type=skip_type, t_T=t_T, t_0=t_0, N=steps, device=device) - assert timesteps.shape[0] - 1 == steps - with torch.no_grad(): - vec_t = timesteps[0].expand((x.shape[0])) - model_prev_list = [self.model_fn(x, vec_t)] - t_prev_list = [vec_t] - # Init the first `order` values by lower order multistep DPM-Solver. - for init_order in range(1, order): - vec_t = timesteps[init_order].expand(x.shape[0]) - x = self.multistep_dpm_solver_update(x, model_prev_list, t_prev_list, vec_t, init_order, - solver_type=solver_type) - model_prev_list.append(self.model_fn(x, vec_t)) - t_prev_list.append(vec_t) - # Compute the remaining values by `order`-th order multistep DPM-Solver. - for step in range(order, steps + 1): - vec_t = timesteps[step].expand(x.shape[0]) - x = self.multistep_dpm_solver_update(x, model_prev_list, t_prev_list, vec_t, order, - solver_type=solver_type) - for i in range(order - 1): - t_prev_list[i] = t_prev_list[i + 1] - model_prev_list[i] = model_prev_list[i + 1] - t_prev_list[-1] = vec_t - # We do not need to evaluate the final model value. - if step < steps: - model_prev_list[-1] = self.model_fn(x, vec_t) - elif method in ['singlestep', 'singlestep_fixed']: - if method == 'singlestep': - timesteps_outer, orders = self.get_orders_and_timesteps_for_singlestep_solver(steps=steps, order=order, - skip_type=skip_type, - t_T=t_T, t_0=t_0, - device=device) - elif method == 'singlestep_fixed': - K = steps // order - orders = [order, ] * K - timesteps_outer = self.get_time_steps(skip_type=skip_type, t_T=t_T, t_0=t_0, N=K, device=device) - for i, order in enumerate(orders): - t_T_inner, t_0_inner = timesteps_outer[i], timesteps_outer[i + 1] - timesteps_inner = self.get_time_steps(skip_type=skip_type, t_T=t_T_inner.item(), t_0=t_0_inner.item(), - N=order, device=device) - lambda_inner = self.noise_schedule.marginal_lambda(timesteps_inner) - vec_s, vec_t = t_T_inner.repeat(x.shape[0]), t_0_inner.repeat(x.shape[0]) - h = lambda_inner[-1] - lambda_inner[0] - r1 = None if order <= 1 else (lambda_inner[1] - lambda_inner[0]) / h - r2 = None if order <= 2 else (lambda_inner[2] - lambda_inner[0]) / h - x = self.singlestep_dpm_solver_update(x, vec_s, vec_t, order, solver_type=solver_type, r1=r1, r2=r2) - if denoise: - x = self.denoise_fn(x, torch.ones((x.shape[0],)).to(device) * t_0) - return x - - -############################################################# -# other utility functions -############################################################# - -def interpolate_fn(x, xp, yp): - """ - A piecewise linear function y = f(x), using xp and yp as keypoints. - We implement f(x) in a differentiable way (i.e. applicable for autograd). - The function f(x) is well-defined for all x-axis. (For x beyond the bounds of xp, we use the outmost points of xp to define the linear function.) - - Args: - x: PyTorch tensor with shape [N, C], where N is the batch size, C is the number of channels (we use C = 1 for DPM-Solver). - xp: PyTorch tensor with shape [C, K], where K is the number of keypoints. - yp: PyTorch tensor with shape [C, K]. - Returns: - The function values f(x), with shape [N, C]. 
- """ - N, K = x.shape[0], xp.shape[1] - all_x = torch.cat([x.unsqueeze(2), xp.unsqueeze(0).repeat((N, 1, 1))], dim=2) - sorted_all_x, x_indices = torch.sort(all_x, dim=2) - x_idx = torch.argmin(x_indices, dim=2) - cand_start_idx = x_idx - 1 - start_idx = torch.where( - torch.eq(x_idx, 0), - torch.tensor(1, device=x.device), - torch.where( - torch.eq(x_idx, K), torch.tensor(K - 2, device=x.device), cand_start_idx, - ), - ) - end_idx = torch.where(torch.eq(start_idx, cand_start_idx), start_idx + 2, start_idx + 1) - start_x = torch.gather(sorted_all_x, dim=2, index=start_idx.unsqueeze(2)).squeeze(2) - end_x = torch.gather(sorted_all_x, dim=2, index=end_idx.unsqueeze(2)).squeeze(2) - start_idx2 = torch.where( - torch.eq(x_idx, 0), - torch.tensor(0, device=x.device), - torch.where( - torch.eq(x_idx, K), torch.tensor(K - 2, device=x.device), cand_start_idx, - ), - ) - y_positions_expanded = yp.unsqueeze(0).expand(N, -1, -1) - start_y = torch.gather(y_positions_expanded, dim=2, index=start_idx2.unsqueeze(2)).squeeze(2) - end_y = torch.gather(y_positions_expanded, dim=2, index=(start_idx2 + 1).unsqueeze(2)).squeeze(2) - cand = start_y + (x - start_x) * (end_y - start_y) / (end_x - start_x) - return cand - - -def expand_dims(v, dims): - """ - Expand the tensor `v` to the dim `dims`. - - Args: - `v`: a PyTorch tensor with shape [N]. - `dim`: a `int`. - Returns: - a PyTorch tensor with shape [N, 1, 1, ..., 1] and the total dimension is `dims`. - """ - return v[(...,) + (None,) * (dims - 1)] diff --git a/spaces/HaHaBill/LandShapes-Antarctica/config.py b/spaces/HaHaBill/LandShapes-Antarctica/config.py deleted file mode 100644 index 5af238a0a4382504bd2af894d30331e1be33079a..0000000000000000000000000000000000000000 --- a/spaces/HaHaBill/LandShapes-Antarctica/config.py +++ /dev/null @@ -1,72 +0,0 @@ -# Copyright 2020 Erik Härkönen. All rights reserved. -# This file is licensed to you under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. You may obtain a copy -# of the License at http://www.apache.org/licenses/LICENSE-2.0 - -# Unless required by applicable law or agreed to in writing, software distributed under -# the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR REPRESENTATIONS -# OF ANY KIND, either express or implied. See the License for the specific language -# governing permissions and limitations under the License. 
- -import sys -import argparse -import json -from copy import deepcopy - -class Config: - def __init__(self, **kwargs): - self.from_args([]) # set all defaults - self.default_args = deepcopy(self.__dict__) - self.from_dict(kwargs) # override - - def __str__(self): - custom = {} - default = {} - - # Find non-default arguments - for k, v in self.__dict__.items(): - if k == 'default_args': - continue - - in_default = k in self.default_args - same_value = self.default_args.get(k) == v - - if in_default and same_value: - default[k] = v - else: - custom[k] = v - - config = { - 'custom': custom, - 'default': default - } - - return json.dumps(config, indent=4) - - def __repr__(self): - return self.__str__() - - def from_dict(self, dictionary): - for k, v in dictionary.items(): - setattr(self, k, v) - return self - - def from_args(self, args=sys.argv[1:]): - parser = argparse.ArgumentParser(description='GAN component analysis config') - parser.add_argument('--model', dest='model', type=str, default='StyleGAN', help='The network to analyze') # StyleGAN, DCGAN, ProGAN, BigGAN-XYZ - parser.add_argument('--layer', dest='layer', type=str, default='g_mapping', help='The layer to analyze') - parser.add_argument('--class', dest='output_class', type=str, default=None, help='Output class to generate (BigGAN: Imagenet, ProGAN: LSUN)') - parser.add_argument('--est', dest='estimator', type=str, default='ipca', help='The algorithm to use [pca, fbpca, cupca, spca, ica]') - parser.add_argument('--sparsity', type=float, default=1.0, help='Sparsity parameter of SPCA') - parser.add_argument('--video', dest='make_video', action='store_true', help='Generate output videos (MP4s)') - parser.add_argument('--batch', dest='batch_mode', action='store_true', help="Don't open windows, instead save results to file") - parser.add_argument('-b', dest='batch_size', type=int, default=None, help='Minibatch size, leave empty for automatic detection') - parser.add_argument('-c', dest='components', type=int, default=80, help='Number of components to keep') - parser.add_argument('-n', type=int, default=300_000, help='Number of examples to use in decomposition') - parser.add_argument('--use_w', action='store_true', help='Use W latent space (StyleGAN(2))') - parser.add_argument('--sigma', type=float, default=2.0, help='Number of stdevs to walk in visualize.py') - parser.add_argument('--inputs', type=str, default=None, help='Path to directory with named components') - parser.add_argument('--seed', type=int, default=None, help='Seed used in decomposition') - args = parser.parse_args(args) - - return self.from_dict(args.__dict__) \ No newline at end of file diff --git a/spaces/HaloMaster/chinesesummary/fengshen/examples/classification/finetune_classification_zen1-base_tnews.sh b/spaces/HaloMaster/chinesesummary/fengshen/examples/classification/finetune_classification_zen1-base_tnews.sh deleted file mode 100644 index eaa50ddac4376c8e86000852da138d0d4779126d..0000000000000000000000000000000000000000 --- a/spaces/HaloMaster/chinesesummary/fengshen/examples/classification/finetune_classification_zen1-base_tnews.sh +++ /dev/null @@ -1,150 +0,0 @@ -#!/bin/bash -#SBATCH --job-name=afqmc-bart-base # create a short name for your job -#SBATCH --nodes=1 # node count -#SBATCH --ntasks=2 # total number of tasks across all nodes -#SBATCH --cpus-per-task=30 # cpu-cores per task (>1 if multi-threaded tasks) -#SBATCH --gres=gpu:2 # number of gpus per node -#SBATCH --mail-type=ALL # send email when job begins, ends or failed etc. 
-#SBATCH -o %x-%j.log # output and error file name (%x=job name, %j=job id) - -export CUDA_VISIBLE_DEVICES='5' -export TORCH_EXTENSIONS_DIR=/cognitive_comp/ganruyi/tmp/torch_extendsions - -MODEL_NAME=fengshen-zen1 - -TASK=tnews -TEXTA_NAME=sentence -LABEL_NAME=label -ID_NAME=id - - -BATCH_SIZE=8 -VAL_BATCH_SIZE=32 -ZERO_STAGE=1 -STRATEGY=deepspeed_stage_${ZERO_STAGE} - -ROOT_DIR=/cognitive_comp/ganruyi/experiments/classification_finetune/${MODEL_NAME}_${TASK} -if [ ! -d ${ROOT_DIR} ];then - mkdir -p ${ROOT_DIR} - echo ${ROOT_DIR} created!!!!!!!!!!!!!! -else - echo ${ROOT_DIR} exist!!!!!!!!!!!!!!! -fi - -DATA_DIR=/cognitive_comp/yangping/data/ChineseCLUE_DATA/${TASK}_public/ -PRETRAINED_MODEL_PATH=/cognitive_comp/ganruyi/hf_models/zen/ZEN_pretrain_base_v0.1.0 - -CHECKPOINT_PATH=${ROOT_DIR}/ckpt/ -OUTPUT_PATH=${ROOT_DIR}/predict.json - - -config_json="${ROOT_DIR}/ds_config.json" -# Deepspeed figures out GAS dynamically from dynamic GBS via set_train_batch_size() -# reduce_bucket_size: hidden_size*hidden_size -# stage3_prefetch_bucket_size: 0.9 * hidden_size * hidden_size -# stage3_param_persistence_threshold: 10 * hidden_size - -cat < $config_json -{ - "train_micro_batch_size_per_gpu": $BATCH_SIZE, - "steps_per_print": 100, - "gradient_clipping": 0.1, - "zero_optimization": { - "stage": ${ZERO_STAGE} - }, - "optimizer": { - "type": "Adam", - "params": { - "lr": 2e-5, - "eps": 1e-12, - "weight_decay": 1e-2 - } - }, - "scheduler": { - "type": "WarmupLR", - "params":{ - "warmup_min_lr": 2e-8, - "warmup_max_lr": 2e-5, - "warmup_num_steps": 400, - "warmup_type": "linear" - } - }, - "zero_allow_untested_optimizer": false, - "fp16": { - "enabled": true, - "loss_scale": 0, - "loss_scale_window": 1000, - "hysteresis": 2, - "min_loss_scale": 1 - }, - "activation_checkpointing": { - "partition_activations": false, - "contiguous_memory_optimization": false - }, - "wall_clock_breakdown": false -} -EOT - -export PL_DEEPSPEED_CONFIG_PATH=$config_json - - -DATA_ARGS="\ - --data_dir $DATA_DIR \ - --train_data train.json \ - --valid_data dev.json \ - --test_data test1.1.json \ - --train_batchsize $BATCH_SIZE \ - --valid_batchsize $VAL_BATCH_SIZE \ - --max_length 128 \ - --texta_name $TEXTA_NAME \ - --label_name $LABEL_NAME \ - --id_name $ID_NAME \ - " - -MODEL_ARGS="\ - --learning_rate 1e-5 \ - --weight_decay 1e-2 \ - --warmup 0.01 \ - --num_labels 15 \ - " - -MODEL_CHECKPOINT_ARGS="\ - --monitor val_acc \ - --save_top_k 3 \ - --mode max \ - --every_n_train_steps 200 \ - --save_weights_only True \ - --dirpath $CHECKPOINT_PATH \ - --filename model-{epoch:02d}-{val_acc:.4f} \ - " - - -TRAINER_ARGS="\ - --max_epochs 7 \ - --gpus 1 \ - --num_nodes 1 \ - --strategy $STRATEGY \ - --gradient_clip_val 1.0 \ - --check_val_every_n_epoch 1 \ - --val_check_interval 1.0 \ - --default_root_dir $ROOT_DIR \ - " - -options=" \ - --pretrained_model_path $PRETRAINED_MODEL_PATH \ - --output_save_path $OUTPUT_PATH \ - --model_type $MODEL_NAME \ - $DATA_ARGS \ - $MODEL_ARGS \ - $MODEL_CHECKPOINT_ARGS \ - $TRAINER_ARGS \ - " - -SINGULARITY_PATH=/cognitive_comp/ganruyi/pytorch21_06_py3_docker_image_v2.sif -SCRIPT_PATH=/cognitive_comp/ganruyi/Fengshenbang-LM/fengshen/examples/classification/finetune_classification.py - -# python3 $SCRIPT_PATH $options -source activate base -singularity exec --nv -B /cognitive_comp/:/cognitive_comp/ $SINGULARITY_PATH /home/ganruyi/anaconda3/bin/python $SCRIPT_PATH $options -# /home/ganruyi/anaconda3/bin/python $SCRIPT_PATH $options - diff --git 
a/spaces/HaloMaster/chinesesummary/fengshen/examples/hubert/pretrain_hubert_base.sh b/spaces/HaloMaster/chinesesummary/fengshen/examples/hubert/pretrain_hubert_base.sh deleted file mode 100644 index 11e5ddf38361d51910c35b02f10b7e285ab3f0fb..0000000000000000000000000000000000000000 --- a/spaces/HaloMaster/chinesesummary/fengshen/examples/hubert/pretrain_hubert_base.sh +++ /dev/null @@ -1,120 +0,0 @@ -#!/bin/bash -#SBATCH --job-name=pretrain_bart # create a short name for your job -#SBATCH --nodes=1 # node count -#SBATCH --ntasks-per-node=8 # number of tasks to run per node -#SBATCH --cpus-per-task=30 # cpu-cores per task (>1 if multi-threaded tasks) -#SBATCH --gres=gpu:8 # number of gpus per node -#SBATCH -o %x-%j.log # output and error log file names (%x for job id) -#SBATCH -x dgx050 - -MODEL_NAME=hubert-base-ls960 -config_json="./$MODEL_NAME.ds_config.json" -export MASTER_PORT=29503 -MICRO_BATCH_SIZE=8 -ZERO_STAGE=1 - -# Deepspeed figures out GAS dynamically from dynamic GBS via set_train_batch_size() -cat < $config_json -{ - "zero_optimization": { - "stage": ${ZERO_STAGE} - }, - "fp16": { - "enabled": true, - "loss_scale": 0, - "loss_scale_window": 1000, - "initial_scale_power": 16, - "hysteresis": 2, - "min_loss_scale": 1 - }, - "tensorboard": { - "enabled": true, - "output_path": "/data/training_model/fengshen-${MODEL_NAME}/ds-tb-logs", - "job_name": "${MODEL_NAME}" - }, - "#flops_profiler": { - "enabled": true, - "profile_step": 200, - "detailed": true, - "output_file": null - }, - "steps_per_print": 100, - "gradient_clipping": 1, - "train_micro_batch_size_per_gpu": $MICRO_BATCH_SIZE, - "zero_allow_untested_optimizer": false -} -EOT - -export PL_DEEPSPEED_CONFIG_PATH=$config_json -export TORCH_EXTENSIONS_DIR=/home/gaoxinyu/torch_extendsions - -DATA_DIR=/data/common_data/librispeech_tsv/datas -LABELS_DIR=/data/common_data/librispeech_tsv/labels - -DATA_ARGS="\ - --dataloader_workers 2 \ - --train_batchsize $MICRO_BATCH_SIZE \ - --val_batchsize 32 \ - --test_batchsize 8 \ - --val_datasets_field valid \ - --test_datasets_field valid \ - --sampler_type random \ - --data ${DATA_DIR} \ - --label_dir ${LABELS_DIR} \ - --labels km \ - --label_rate 100 \ - --max_sample_size 250000 \ - --min_sample_size 32000 \ - --pad_audio False \ - --random_crop True \ - --normalize False \ - " - -MODEL_ARGS="\ - --model_path /data/pretrained_model/$MODEL_NAME/ \ - --learning_rate 1e-4 \ - --weight_decay 1e-2 \ - --warmup_ratio 0.01 \ - --pred_masked_weight 1.0 \ - --loss_weights 10 \ - " - -MODEL_CHECKPOINT_ARGS="\ - --monitor train_loss \ - --save_top_k 0 \ - --mode min \ - --every_n_train_steps 10000 \ - --dirpath /data/training_model/ckpt/fengshen-$MODEL_NAME \ - --filename model-{step:02d}-{train_loss:.4f} \ - --every_n_epochs 0 \ - --save_last \ - --not_save_on_train_epoch_end \ - " - -# deepspeed_stage_${ZERO_STAGE} \ -TRAINER_ARGS="\ - --gradient_clip_val 1.0 \ - --max_epochs 10 \ - --gpus 2 \ - --num_nodes 1 \ - --strategy deepspeed_stage_${ZERO_STAGE} \ - --log_every_n_steps 100 \ - --val_check_interval 500 \ - --limit_val_batches 10 \ - --accumulate_grad_batches 1 \ - --precision 16 \ - --ckpt_path /data/training_model/ckpt/fengshen-${MODEL_NAME}/last.ckpt \ - --default_root_dir /data/training_model/fengshen-$MODEL_NAME \ - " - - -export options=" \ - $DATA_ARGS \ - $MODEL_ARGS \ - $MODEL_CHECKPOINT_ARGS \ - $TRAINER_ARGS \ - " - -export SCRIPT_PATH=pretrain_hubert.py - -eval python3 -m debugpy --listen localhost:53005 --wait-for-client $SCRIPT_PATH $options diff --git 
a/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/src/glow_tts/hifi/__init__.py b/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/src/glow_tts/hifi/__init__.py deleted file mode 100644 index 0323b35a0fc2ef21ac417857d9336cc7c8a3b717..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/src/glow_tts/hifi/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -from .env import AttrDict -from .models import Generator - -if __name__ == "__main__": - pass diff --git a/spaces/Harveenchadha/en_to_indic_translation/indic_nlp_library/indicnlp/script/indic_scripts.py b/spaces/Harveenchadha/en_to_indic_translation/indic_nlp_library/indicnlp/script/indic_scripts.py deleted file mode 100644 index 66c797cc583b6dadc1903194919a8faea509be0d..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/en_to_indic_translation/indic_nlp_library/indicnlp/script/indic_scripts.py +++ /dev/null @@ -1,301 +0,0 @@ -# -# Copyright (c) 2013-present, Anoop Kunchukuttan -# All rights reserved. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -# - -import pandas as pd -import numpy as np -import os - -from indicnlp import common -from indicnlp.common import IndicNlpException -from indicnlp import langinfo as li - -### -# Phonetic Information about script characters -### - -""" Phonetic data about all languages except Tamil """ -ALL_PHONETIC_DATA=None - -""" Phonetic data for Tamil """ -TAMIL_PHONETIC_DATA=None - -""" Phonetic vector for all languages except Tamil """ -ALL_PHONETIC_VECTORS=None - -""" Phonetic vector for Tamil """ -TAMIL_PHONETIC_VECTORS=None - -""" Length of phonetic vector """ -PHONETIC_VECTOR_LENGTH=38 - -""" Start offset for the phonetic feature vector in the phonetic data vector """ -PHONETIC_VECTOR_START_OFFSET=6 - -## PHONETIC PROPERTIES in order in which they occur in the vector -## This list must be in sync with the keys in the PV_PROP_RANGES dictionary -PV_PROP=['basic_type', - 'vowel_length', - 'vowel_strength', - 'vowel_status', - 'consonant_type', - 'articulation_place', - 'aspiration', - 'voicing', - 'nasalization', - 'vowel_horizontal', - 'vowel_vertical', - 'vowel_roundness', - ] - -### -# Bit vector ranges for various properties -### - -PV_PROP_RANGES={ - 'basic_type': [0,6], - 'vowel_length': [6,8], - 'vowel_strength': [8,11], - 'vowel_status': [11,13], - 'consonant_type': [13,18], - 'articulation_place': [18,23], - 'aspiration': [23,25], - 'voicing': [25,27], - 'nasalization': [27,29], - 'vowel_horizontal': [29,32], - 'vowel_vertical': [32,36], - 'vowel_roundness': [36,38], - } - - -#### -# Indexes into the Phonetic Vector -#### -PVIDX_BT_VOWEL=0 -PVIDX_BT_CONSONANT=1 -PVIDX_BT_NUKTA=2 -PVIDX_BT_HALANT=3 -PVIDX_BT_ANUSVAAR=4 -PVIDX_BT_MISC=5 -PVIDX_BT_S=PVIDX_BT_VOWEL -PVIDX_BT_E=PVIDX_BT_MISC+1 - -PVIDX_VSTAT_DEP=12 - -##### -# Unicode information about characters -##### - -SCRIPT_OFFSET_START=0 -SCRIPT_OFFSET_RANGE=0x80 - -def init(): - """ - To be called by library loader, do not call it in your program - """ - - global ALL_PHONETIC_DATA, ALL_PHONETIC_VECTORS, TAMIL_PHONETIC_DATA, TAMIL_PHONETIC_VECTORS, PHONETIC_VECTOR_LENGTH, PHONETIC_VECTOR_START_OFFSET - - ALL_PHONETIC_DATA=pd.read_csv(os.path.join(common.get_resources_path(),'script','all_script_phonetic_data.csv'),encoding='utf-8') - TAMIL_PHONETIC_DATA=pd.read_csv(os.path.join(common.get_resources_path(),'script','tamil_script_phonetic_data.csv'),encoding='utf-8') - - ALL_PHONETIC_VECTORS= 
ALL_PHONETIC_DATA.iloc[:,PHONETIC_VECTOR_START_OFFSET:].values - TAMIL_PHONETIC_VECTORS=TAMIL_PHONETIC_DATA.iloc[:,PHONETIC_VECTOR_START_OFFSET:].values - - PHONETIC_VECTOR_LENGTH=ALL_PHONETIC_VECTORS.shape[1] - -def is_supported_language(lang): - return lang in list(li.SCRIPT_RANGES.keys()) - -def get_offset(c,lang): - if not is_supported_language(lang): - raise IndicNlpException('Language {} not supported'.format(lang)) - return ord(c)-li.SCRIPT_RANGES[lang][0] - -def offset_to_char(off,lang): - """ - Applicable to Brahmi derived Indic scripts - """ - if not is_supported_language(lang): - raise IndicNlpException('Language {} not supported'.format(lang)) - return chr(off+li.SCRIPT_RANGES[lang][0]) - -def is_indiclang_char(c,lang): - """ - Applicable to Brahmi derived Indic scripts - Note that DANDA and DOUBLE_DANDA have the same Unicode codepoint for all Indic scripts - """ - if not is_supported_language(lang): - raise IndicNlpException('Language {} not supported'.format(lang)) - o=get_offset(c,lang) - return (o>=SCRIPT_OFFSET_START and o<SCRIPT_OFFSET_RANGE) - -def in_coordinated_range_offset(c_offset): - return (c_offset>=li.COORDINATED_RANGE_START_INCLUSIVE and c_offset<=li.COORDINATED_RANGE_END_INCLUSIVE) - -def in_coordinated_range(c,lang): - if not is_supported_language(lang): - raise IndicNlpException('Language {} not supported'.format(lang)) - return in_coordinated_range_offset(get_offset(c,lang)) - -def get_phonetic_info(lang): - if not is_supported_language(lang): - raise IndicNlpException('Language {} not supported'.format(lang)) - phonetic_data= ALL_PHONETIC_DATA if lang!=li.LC_TA else TAMIL_PHONETIC_DATA - phonetic_vectors= ALL_PHONETIC_VECTORS if lang!=li.LC_TA else TAMIL_PHONETIC_VECTORS - - return (phonetic_data, phonetic_vectors) - -def invalid_vector(): - ## TODO: check if np datatype is correct? - return np.array([0]*PHONETIC_VECTOR_LENGTH) - -def get_phonetic_feature_vector(c,lang): - - offset=get_offset(c,lang) - - if not in_coordinated_range_offset(offset): - return invalid_vector() - - phonetic_data, phonetic_vectors= get_phonetic_info(lang) - - if phonetic_data.iloc[offset]['Valid Vector Representation']==0: - return invalid_vector() - - return phonetic_vectors[offset] - -def get_phonetic_feature_vector_offset(offset,lang): - - if not in_coordinated_range_offset(offset): - return invalid_vector() - - phonetic_data, phonetic_vectors= get_phonetic_info(lang) - - if phonetic_data.iloc[offset]['Valid Vector Representation']==0: - return invalid_vector() - - return phonetic_vectors[offset] - -### Unary operations on vectors -def is_valid(v): - return np.sum(v)>0 - -def is_vowel(v): - return v[PVIDX_BT_VOWEL]==1 - -def is_consonant(v): - return v[PVIDX_BT_CONSONANT]==1 - -def is_halant(v): - return v[PVIDX_BT_HALANT]==1 - -def is_nukta(v): - return v[PVIDX_BT_NUKTA]==1 - -def is_anusvaar(v): - return v[PVIDX_BT_ANUSVAAR]==1 - -def is_misc(v): - return v[PVIDX_BT_MISC]==1 - -def is_dependent_vowel(v): - return is_vowel(v) and v[PVIDX_VSTAT_DEP]==1 - -def is_plosive(v): - return is_consonant(v) and get_property_vector(v,'consonant_type')[0]==1 - -### Binary operations on phonetic vectors - -def or_vectors(v1,v2): - return np.array([ 1 if (b1+b2)>=1 else 0 for b1,b2 in zip(v1,v2) ]) - -def xor_vectors(v1,v2): - return np.array([ 1 if b1!=b2 else 0 for b1,b2 in zip(v1,v2) ]) - -### Getting properties from phonetic vectors - -def get_property_vector(v,prop_name): - return v[PV_PROP_RANGES[prop_name][0]:PV_PROP_RANGES[prop_name][1]] - -def get_property_value(v,prop_name): - factor_bits=get_property_vector(v,prop_name).tolist() - - v=0 - c=1 - for b in
factor_bits[::-1]: - v+=(c*b) - c=c*2.0 - - return int(v) - -def lcsr_indic(srcw,tgtw,slang,tlang): - """ - compute the Longest Common Subsequence Ratio (LCSR) between two strings at the character level. - This works for Indic scripts by mapping both languages to a common script - - srcw: source language string - tgtw: target language string - slang: source language - tlang: target language - """ - score_mat=np.zeros((len(srcw)+1,len(tgtw)+1)) - - for si,sc in enumerate(srcw,1): - for ti,tc in enumerate(tgtw,1): - so=get_offset(sc,slang) - to=get_offset(tc,tlang) - - if in_coordinated_range_offset(so) and in_coordinated_range_offset(to) and so==to: - score_mat[si,ti]=score_mat[si-1,ti-1]+1.0 - elif not (in_coordinated_range_offset(so) or in_coordinated_range_offset(to)) and sc==tc: - score_mat[si,ti]=score_mat[si-1,ti-1]+1.0 - else: - score_mat[si,ti]= max( - score_mat[si,ti-1], - score_mat[si-1,ti]) - - return (score_mat[-1,-1]/float(max(len(srcw),len(tgtw))),float(len(srcw)),float(len(tgtw))) - -def lcsr_any(srcw,tgtw): - """ - LCSR computation if both languages have the same script - """ - score_mat=np.zeros((len(srcw)+1,len(tgtw)+1)) - - for si,sc in enumerate(srcw,1): - for ti,tc in enumerate(tgtw,1): - - if sc==tc: - score_mat[si,ti]=score_mat[si-1,ti-1]+1.0 - else: - score_mat[si,ti]= max( - score_mat[si,ti-1], - score_mat[si-1,ti]) - - return (score_mat[-1,-1]/float(max(len(srcw),len(tgtw))),float(len(srcw)),float(len(tgtw))) - -def lcsr(srcw,tgtw,slang,tlang): - """ - compute the Longest Common Subsequence Ratio (LCSR) between two strings at the character level. - - srcw: source language string - tgtw: target language string - slang: source language - tlang: target language - """ - - if slang==tlang or not is_supported_language(slang) or not is_supported_language(tlang): - return lcsr_any(srcw,tgtw) - else: - return lcsr_indic(srcw,tgtw,slang,tlang) - - - diff --git a/spaces/HopeMan/Claude/Dockerfile b/spaces/HopeMan/Claude/Dockerfile deleted file mode 100644 index 4cb0ce42128d9a2ad33a395883f5e5455a38c707..0000000000000000000000000000000000000000 --- a/spaces/HopeMan/Claude/Dockerfile +++ /dev/null @@ -1,11 +0,0 @@ -FROM node:18-bullseye-slim -RUN apt-get update && \ - apt-get install -y git -RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app -WORKDIR /app -RUN npm install -COPY Dockerfile greeting.md* .env* ./ -RUN npm run build -EXPOSE 7860 -ENV NODE_ENV=production -CMD [ "npm", "start" ] \ No newline at end of file diff --git a/spaces/ICML2022/OFA/fairseq/examples/speech_recognition/criterions/cross_entropy_acc.py b/spaces/ICML2022/OFA/fairseq/examples/speech_recognition/criterions/cross_entropy_acc.py deleted file mode 100644 index 7c4d8ba3802a2da9467c42b0aa18653c7bbb2ec9..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/speech_recognition/criterions/cross_entropy_acc.py +++ /dev/null @@ -1,130 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree.
- -from __future__ import absolute_import, division, print_function, unicode_literals - -import logging -import math - -import torch -import torch.nn.functional as F -from fairseq import utils -from fairseq.criterions import FairseqCriterion, register_criterion - - -@register_criterion("cross_entropy_acc") -class CrossEntropyWithAccCriterion(FairseqCriterion): - def __init__(self, task, sentence_avg): - super().__init__(task) - self.sentence_avg = sentence_avg - - def compute_loss(self, model, net_output, target, reduction, log_probs): - # N, T -> N * T - target = target.view(-1) - lprobs = model.get_normalized_probs(net_output, log_probs=log_probs) - if not hasattr(lprobs, "batch_first"): - logging.warning( - "ERROR: we need to know whether " - "batch first for the net output; " - "you need to set batch_first attribute for the return value of " - "model.get_normalized_probs. Now, we assume this is true, but " - "in the future, we will raise exception instead. " - ) - batch_first = getattr(lprobs, "batch_first", True) - if not batch_first: - lprobs = lprobs.transpose(0, 1) - - # N, T, D -> N * T, D - lprobs = lprobs.view(-1, lprobs.size(-1)) - loss = F.nll_loss( - lprobs, target, ignore_index=self.padding_idx, reduction=reduction - ) - return lprobs, loss - - def get_logging_output(self, sample, target, lprobs, loss): - target = target.view(-1) - mask = target != self.padding_idx - correct = torch.sum( - lprobs.argmax(1).masked_select(mask) == target.masked_select(mask) - ) - total = torch.sum(mask) - sample_size = ( - sample["target"].size(0) if self.sentence_avg else sample["ntokens"] - ) - - logging_output = { - "loss": utils.item(loss.data), # * sample['ntokens'], - "ntokens": sample["ntokens"], - "nsentences": sample["target"].size(0), - "sample_size": sample_size, - "correct": utils.item(correct.data), - "total": utils.item(total.data), - "nframes": torch.sum(sample["net_input"]["src_lengths"]).item(), - } - - return sample_size, logging_output - - def forward(self, model, sample, reduction="sum", log_probs=True): - """Computes the cross entropy with accuracy metric for the given sample. - - This is similar to CrossEntropyCriterion in fairseq, but also - computes accuracy metrics as part of logging - - Args: - logprobs (Torch.tensor) of shape N, T, D i.e. - batchsize, timesteps, dimensions - targets (Torch.tensor) of shape N, T i.e batchsize, timesteps - - Returns: - tuple: With three elements: - 1) the loss - 2) the sample size, which is used as the denominator for the gradient - 3) logging outputs to display while training - - TODO: - * Currently this Criterion will only work with LSTMEncoderModels or - FairseqModels which have decoder, or Models which return TorchTensor - as net_output. - We need to make a change to support all FairseqEncoder models. 
- """ - net_output = model(**sample["net_input"]) - target = model.get_targets(sample, net_output) - lprobs, loss = self.compute_loss( - model, net_output, target, reduction, log_probs - ) - sample_size, logging_output = self.get_logging_output( - sample, target, lprobs, loss - ) - return loss, sample_size, logging_output - - @staticmethod - def aggregate_logging_outputs(logging_outputs): - """Aggregate logging outputs from data parallel training.""" - correct_sum = sum(log.get("correct", 0) for log in logging_outputs) - total_sum = sum(log.get("total", 0) for log in logging_outputs) - loss_sum = sum(log.get("loss", 0) for log in logging_outputs) - ntokens = sum(log.get("ntokens", 0) for log in logging_outputs) - nsentences = sum(log.get("nsentences", 0) for log in logging_outputs) - sample_size = sum(log.get("sample_size", 0) for log in logging_outputs) - nframes = sum(log.get("nframes", 0) for log in logging_outputs) - agg_output = { - "loss": loss_sum / sample_size / math.log(2) if sample_size > 0 else 0.0, - # if args.sentence_avg, then sample_size is nsentences, then loss - # is per-sentence loss; else sample_size is ntokens, the loss - # becomes per-output token loss - "ntokens": ntokens, - "nsentences": nsentences, - "nframes": nframes, - "sample_size": sample_size, - "acc": correct_sum * 100.0 / total_sum if total_sum > 0 else 0.0, - "correct": correct_sum, - "total": total_sum, - # total is the number of validate tokens - } - if sample_size != ntokens: - agg_output["nll_loss"] = loss_sum / ntokens / math.log(2) - # loss: per output token loss - # nll_loss: per sentence loss - return agg_output diff --git a/spaces/ICML2022/OFA/fairseq/examples/speech_text_joint_to_text/__init__.py b/spaces/ICML2022/OFA/fairseq/examples/speech_text_joint_to_text/__init__.py deleted file mode 100644 index 239d2e69f9a235095dee1ea7b3a94164a77273f5..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/speech_text_joint_to_text/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from . import tasks, criterions, models # noqa diff --git a/spaces/ICML2022/OFA/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/unsup_select_decode.sh b/spaces/ICML2022/OFA/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/unsup_select_decode.sh deleted file mode 100644 index b34c5b6e0688914a53515162f817a93617b609e5..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/unsup_select_decode.sh +++ /dev/null @@ -1,37 +0,0 @@ -#!/bin/bash - -split="dev_other" -ref_txt="" # ground truth transcript path -psd_txt="" # pseudo transcript path -get_best_wer=true -dec_name="decode" -graph_name="graph" -kenlm_path=/checkpoint/abaevski/data/speech/libri/librispeech_lm_novox.phnc_o6.bin - -. ./cmd.sh -. ./path.sh -. parse_options.sh - -exp_root=$1 -unsup_args="" -if [ $# -ge 2 ]; then - unsup_args=$2 -fi - -set -eu - -if [ ! -z $ref_txt ] && $get_best_wer; then - echo "==== WER w.r.t. 
real transcript (select based on unsupervised metric)" - for x in $exp_root/*/${dec_name}_${split}*; do - lang=$(dirname $x)/$graph_name - - ( - for tra in $x/scoring/*.tra; do - cat $tra | utils/int2sym.pl -f 2- $lang/words.txt | sed 's:::g' | sed 's:::g' > $tra.txt - python local/unsup_select.py $psd_txt $tra.txt --kenlm_path $kenlm_path --gt_tra $ref_txt $unsup_args - done 2>/dev/null | grep "score=" | sed 's/=/ /g' | sed 's/;//g' | sort -k3n | head -n1 - ) & - done -fi -wait - diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/__init__.py b/spaces/ICML2022/OFA/fairseq/fairseq/__init__.py deleted file mode 100644 index dc9fd1886d55756b5bdfeccf1ad329bd419a706e..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/fairseq/__init__.py +++ /dev/null @@ -1,44 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -"""isort:skip_file""" - -import os -import sys - -try: - from .version import __version__ # noqa -except ImportError: - version_txt = os.path.join(os.path.dirname(__file__), "version.txt") - with open(version_txt) as f: - __version__ = f.read().strip() - -__all__ = ["pdb"] - -# backwards compatibility to support `from fairseq.X import Y` -from fairseq.distributed import utils as distributed_utils -from fairseq.logging import meters, metrics, progress_bar # noqa - -sys.modules["fairseq.distributed_utils"] = distributed_utils -sys.modules["fairseq.meters"] = meters -sys.modules["fairseq.metrics"] = metrics -sys.modules["fairseq.progress_bar"] = progress_bar - -# initialize hydra -from fairseq.dataclass.initialize import hydra_init -hydra_init() - -import fairseq.criterions # noqa -import fairseq.distributed # noqa -import fairseq.models # noqa -import fairseq.modules # noqa -import fairseq.optim # noqa -import fairseq.optim.lr_scheduler # noqa -import fairseq.pdb # noqa -import fairseq.scoring # noqa -import fairseq.tasks # noqa -import fairseq.token_generation_constraints # noqa - -import fairseq.benchmark # noqa -import fairseq.model_parallel # noqa diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/data/subsample_dataset.py b/spaces/ICML2022/OFA/fairseq/fairseq/data/subsample_dataset.py deleted file mode 100644 index 48feaf883f87dc95f8637c24d3c96f3f9fd8bd1d..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/fairseq/data/subsample_dataset.py +++ /dev/null @@ -1,72 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging - -import numpy as np - -from . import BaseWrapperDataset - - -logger = logging.getLogger(__name__) - - -class SubsampleDataset(BaseWrapperDataset): - """Subsamples a given dataset by a specified ratio. Subsampling is done on the number of examples - - Args: - dataset (~torch.utils.data.Dataset): dataset to subsample - size_ratio(float): the ratio to subsample to. 
must be between 0 and 1 (exclusive) - """ - - def __init__(self, dataset, size_ratio, shuffle=False): - super().__init__(dataset) - assert size_ratio < 1 - self.actual_size = np.ceil(len(dataset) * size_ratio).astype(int) - self.indices = np.random.choice( - list(range(len(self.dataset))), self.actual_size, replace=False - ) - self.shuffle = shuffle - logger.info( - "subsampled dataset from {} to {} (ratio={})".format( - len(self.dataset), self.actual_size, size_ratio - ) - ) - - def __getitem__(self, index): - return self.dataset[self.indices[index]] - - def __len__(self): - return self.actual_size - - def collater(self, samples): - return self.dataset.collater(samples) - - @property - def sizes(self): - return self.dataset.sizes[self.indices] - - @property - def name(self): - return self.dataset.name - - def num_tokens(self, index): - return self.dataset.num_tokens(self.indices[index]) - - def size(self, index): - return self.dataset.size(self.indices[index]) - - def ordered_indices(self): - """Return an ordered list of indices. Batches will be constructed based - on this order.""" - if self.shuffle: - order = [np.random.permutation(len(self))] - else: - order = [np.arange(len(self))] - order.append(self.sizes) - return np.lexsort(order) - - def prefetch(self, indices): - self.dataset.prefetch(self.indices[indices]) diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/dataclass/utils.py b/spaces/ICML2022/OFA/fairseq/fairseq/dataclass/utils.py deleted file mode 100644 index 1320ec473756c78ec949f72f9260420c19caff0f..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/fairseq/dataclass/utils.py +++ /dev/null @@ -1,493 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import ast -import inspect -import logging -import os -import re -from argparse import ArgumentError, ArgumentParser, Namespace -from dataclasses import _MISSING_TYPE, MISSING, is_dataclass -from enum import Enum -from typing import Any, Dict, List, Optional, Tuple, Type - -from fairseq.dataclass import FairseqDataclass -from fairseq.dataclass.configs import FairseqConfig -from hydra.core.global_hydra import GlobalHydra -from hydra.experimental import compose, initialize -from omegaconf import DictConfig, OmegaConf, open_dict, _utils - -logger = logging.getLogger(__name__) - - -def eval_str_list(x, x_type=float): - if x is None: - return None - if isinstance(x, str): - if len(x) == 0: - return [] - x = ast.literal_eval(x) - try: - return list(map(x_type, x)) - except TypeError: - return [x_type(x)] - - -def interpret_dc_type(field_type): - if isinstance(field_type, str): - raise RuntimeError("field should be a type") - - if field_type == Any: - return str - - typestring = str(field_type) - if re.match( - r"(typing.|^)Union\[(.*), NoneType\]$", typestring - ) or typestring.startswith("typing.Optional"): - return field_type.__args__[0] - return field_type - - -def gen_parser_from_dataclass( - parser: ArgumentParser, - dataclass_instance: FairseqDataclass, - delete_default: bool = False, - with_prefix: Optional[str] = None, -) -> None: - """ - convert a dataclass instance to tailing parser arguments. - - If `with_prefix` is provided, prefix all the keys in the resulting parser with it. It means that we are - building a flat namespace from a structured dataclass (see transformer_config.py for example). 
- """ - - def argparse_name(name: str): - if name == "data" and (with_prefix is None or with_prefix == ''): - # normally data is positional args, so we don't add the -- nor the prefix - return name - if name == "_name": - # private member, skip - return None - full_name = "--" + name.replace("_", "-") - if with_prefix is not None and with_prefix != '': - # if a prefix is specified, construct the prefixed arg name - full_name = with_prefix + "-" + full_name[2:] # strip -- when composing - return full_name - - def get_kwargs_from_dc( - dataclass_instance: FairseqDataclass, k: str - ) -> Dict[str, Any]: - """k: dataclass attributes""" - - kwargs = {} - - field_type = dataclass_instance._get_type(k) - inter_type = interpret_dc_type(field_type) - - field_default = dataclass_instance._get_default(k) - - if isinstance(inter_type, type) and issubclass(inter_type, Enum): - field_choices = [t.value for t in list(inter_type)] - else: - field_choices = None - - field_help = dataclass_instance._get_help(k) - field_const = dataclass_instance._get_argparse_const(k) - - if isinstance(field_default, str) and field_default.startswith("${"): - kwargs["default"] = field_default - else: - if field_default is MISSING: - kwargs["required"] = True - if field_choices is not None: - kwargs["choices"] = field_choices - if ( - isinstance(inter_type, type) - and (issubclass(inter_type, List) or issubclass(inter_type, Tuple)) - ) or ("List" in str(inter_type) or "Tuple" in str(inter_type)): - if "int" in str(inter_type): - kwargs["type"] = lambda x: eval_str_list(x, int) - elif "float" in str(inter_type): - kwargs["type"] = lambda x: eval_str_list(x, float) - elif "str" in str(inter_type): - kwargs["type"] = lambda x: eval_str_list(x, str) - else: - raise NotImplementedError( - "parsing of type " + str(inter_type) + " is not implemented" - ) - if field_default is not MISSING: - kwargs["default"] = ( - ",".join(map(str, field_default)) - if field_default is not None - else None - ) - elif ( - isinstance(inter_type, type) and issubclass(inter_type, Enum) - ) or "Enum" in str(inter_type): - kwargs["type"] = str - if field_default is not MISSING: - if isinstance(field_default, Enum): - kwargs["default"] = field_default.value - else: - kwargs["default"] = field_default - elif inter_type is bool: - kwargs["action"] = ( - "store_false" if field_default is True else "store_true" - ) - kwargs["default"] = field_default - else: - kwargs["type"] = inter_type - if field_default is not MISSING: - kwargs["default"] = field_default - - # build the help with the hierarchical prefix - if with_prefix is not None and with_prefix != '' and field_help is not None: - field_help = with_prefix[2:] + ': ' + field_help - - kwargs["help"] = field_help - if field_const is not None: - kwargs["const"] = field_const - kwargs["nargs"] = "?" - - return kwargs - - for k in dataclass_instance._get_all_attributes(): - field_name = argparse_name(dataclass_instance._get_name(k)) - field_type = dataclass_instance._get_type(k) - if field_name is None: - continue - elif inspect.isclass(field_type) and issubclass(field_type, FairseqDataclass): - # for fields that are of type FairseqDataclass, we can recursively - # add their fields to the namespace (so we add the args from model, task, etc. to the root namespace) - prefix = None - if with_prefix is not None: - # if a prefix is specified, then we don't want to copy the subfields directly to the root namespace - # but we prefix them with the name of the current field. 
- prefix = field_name - gen_parser_from_dataclass(parser, field_type(), delete_default, prefix) - continue - - kwargs = get_kwargs_from_dc(dataclass_instance, k) - - field_args = [field_name] - alias = dataclass_instance._get_argparse_alias(k) - if alias is not None: - field_args.append(alias) - - if "default" in kwargs: - if isinstance(kwargs["default"], str) and kwargs["default"].startswith( - "${" - ): - if kwargs["help"] is None: - # this is a field with a name that will be added elsewhere - continue - else: - del kwargs["default"] - if delete_default and "default" in kwargs: - del kwargs["default"] - try: - parser.add_argument(*field_args, **kwargs) - except ArgumentError: - pass - - -def _set_legacy_defaults(args, cls): - """Helper to set default arguments based on *add_args*.""" - if not hasattr(cls, "add_args"): - return - - import argparse - - parser = argparse.ArgumentParser( - argument_default=argparse.SUPPRESS, allow_abbrev=False - ) - cls.add_args(parser) - # copied from argparse.py: - defaults = argparse.Namespace() - for action in parser._actions: - if action.dest is not argparse.SUPPRESS: - if not hasattr(defaults, action.dest): - if action.default is not argparse.SUPPRESS: - setattr(defaults, action.dest, action.default) - for key, default_value in vars(defaults).items(): - if not hasattr(args, key): - setattr(args, key, default_value) - - -def _override_attr( - sub_node: str, data_class: Type[FairseqDataclass], args: Namespace -) -> List[str]: - overrides = [] - - if not inspect.isclass(data_class) or not issubclass(data_class, FairseqDataclass): - return overrides - - def get_default(f): - if not isinstance(f.default_factory, _MISSING_TYPE): - return f.default_factory() - return f.default - - for k, v in data_class.__dataclass_fields__.items(): - if k.startswith("_"): - # private member, skip - continue - - val = get_default(v) if not hasattr(args, k) else getattr(args, k) - - field_type = interpret_dc_type(v.type) - if ( - isinstance(val, str) - and not val.startswith("${") # not interpolation - and field_type != str - and ( - not inspect.isclass(field_type) or not issubclass(field_type, Enum) - ) # not choices enum - ): - # upgrade old models that stored complex parameters as string - val = ast.literal_eval(val) - - if isinstance(val, tuple): - val = list(val) - - v_type = getattr(v.type, "__origin__", None) - if ( - (v_type is List or v_type is list or v_type is Optional) - # skip interpolation - and not (isinstance(val, str) and val.startswith("${")) - ): - # if type is int but val is float, then we will crash later - try to convert here - if hasattr(v.type, "__args__"): - t_args = v.type.__args__ - if len(t_args) == 1 and (t_args[0] is float or t_args[0] is int): - val = list(map(t_args[0], val)) - elif val is not None and ( - field_type is int or field_type is bool or field_type is float - ): - try: - val = field_type(val) - except: - pass # ignore errors here, they are often from interpolation args - - if val is None: - overrides.append("{}.{}=null".format(sub_node, k)) - elif val == "": - overrides.append("{}.{}=''".format(sub_node, k)) - elif isinstance(val, str): - val = val.replace("'", r"\'") - overrides.append("{}.{}='{}'".format(sub_node, k, val)) - elif isinstance(val, FairseqDataclass): - overrides += _override_attr(f"{sub_node}.{k}", type(val), args) - elif isinstance(val, Namespace): - sub_overrides, _ = override_module_args(val) - for so in sub_overrides: - overrides.append(f"{sub_node}.{k}.{so}") - else: - overrides.append("{}.{}={}".format(sub_node, 
k, val)) - - return overrides - - -def migrate_registry( - name, value, registry, args, overrides, deletes, use_name_as_val=False -): - if value in registry: - overrides.append("{}={}".format(name, value)) - overrides.append("{}._name={}".format(name, value)) - overrides.extend(_override_attr(name, registry[value], args)) - elif use_name_as_val and value is not None: - overrides.append("{}={}".format(name, value)) - else: - deletes.append(name) - - -def override_module_args(args: Namespace) -> Tuple[List[str], List[str]]: - """use the field in args to overrides those in cfg""" - overrides = [] - deletes = [] - - for k in FairseqConfig.__dataclass_fields__.keys(): - overrides.extend( - _override_attr(k, FairseqConfig.__dataclass_fields__[k].type, args) - ) - - if args is not None: - if hasattr(args, "task"): - from fairseq.tasks import TASK_DATACLASS_REGISTRY - - migrate_registry( - "task", args.task, TASK_DATACLASS_REGISTRY, args, overrides, deletes - ) - else: - deletes.append("task") - - # these options will be set to "None" if they have not yet been migrated - # so we can populate them with the entire flat args - CORE_REGISTRIES = {"criterion", "optimizer", "lr_scheduler"} - - from fairseq.registry import REGISTRIES - - for k, v in REGISTRIES.items(): - if hasattr(args, k): - migrate_registry( - k, - getattr(args, k), - v["dataclass_registry"], - args, - overrides, - deletes, - use_name_as_val=k not in CORE_REGISTRIES, - ) - else: - deletes.append(k) - - no_dc = True - if hasattr(args, "arch"): - from fairseq.models import ARCH_MODEL_REGISTRY, ARCH_MODEL_NAME_REGISTRY - - if args.arch in ARCH_MODEL_REGISTRY: - m_cls = ARCH_MODEL_REGISTRY[args.arch] - dc = getattr(m_cls, "__dataclass", None) - if dc is not None: - m_name = ARCH_MODEL_NAME_REGISTRY[args.arch] - overrides.append("model={}".format(m_name)) - overrides.append("model._name={}".format(args.arch)) - # override model params with those exist in args - overrides.extend(_override_attr("model", dc, args)) - no_dc = False - if no_dc: - deletes.append("model") - - return overrides, deletes - - -class omegaconf_no_object_check: - def __init__(self): - self.old_is_primitive = _utils.is_primitive_type - - def __enter__(self): - _utils.is_primitive_type = lambda _: True - - def __exit__(self, type, value, traceback): - _utils.is_primitive_type = self.old_is_primitive - - -def convert_namespace_to_omegaconf(args: Namespace) -> DictConfig: - """Convert a flat argparse.Namespace to a structured DictConfig.""" - - # Here we are using field values provided in args to override counterparts inside config object - overrides, deletes = override_module_args(args) - - # configs will be in fairseq/config after installation - config_path = os.path.join("..", "config") - - GlobalHydra.instance().clear() - - with initialize(config_path=config_path): - try: - composed_cfg = compose("config", overrides=overrides, strict=False) - except: - logger.error("Error when composing. Overrides: " + str(overrides)) - raise - - for k in deletes: - composed_cfg[k] = None - - cfg = OmegaConf.create( - OmegaConf.to_container(composed_cfg, resolve=True, enum_to_str=True) - ) - - # hack to be able to set Namespace in dict config. 
this should be removed when we update to newer - # omegaconf version that supports object flags, or when we migrate all existing models - from omegaconf import _utils - - with omegaconf_no_object_check(): - if cfg.task is None and getattr(args, "task", None): - cfg.task = Namespace(**vars(args)) - from fairseq.tasks import TASK_REGISTRY - - _set_legacy_defaults(cfg.task, TASK_REGISTRY[args.task]) - cfg.task._name = args.task - if cfg.model is None and getattr(args, "arch", None): - cfg.model = Namespace(**vars(args)) - from fairseq.models import ARCH_MODEL_REGISTRY - - _set_legacy_defaults(cfg.model, ARCH_MODEL_REGISTRY[args.arch]) - cfg.model._name = args.arch - if cfg.optimizer is None and getattr(args, "optimizer", None): - cfg.optimizer = Namespace(**vars(args)) - from fairseq.optim import OPTIMIZER_REGISTRY - - _set_legacy_defaults(cfg.optimizer, OPTIMIZER_REGISTRY[args.optimizer]) - cfg.optimizer._name = args.optimizer - if cfg.lr_scheduler is None and getattr(args, "lr_scheduler", None): - cfg.lr_scheduler = Namespace(**vars(args)) - from fairseq.optim.lr_scheduler import LR_SCHEDULER_REGISTRY - - _set_legacy_defaults( - cfg.lr_scheduler, LR_SCHEDULER_REGISTRY[args.lr_scheduler] - ) - cfg.lr_scheduler._name = args.lr_scheduler - if cfg.criterion is None and getattr(args, "criterion", None): - cfg.criterion = Namespace(**vars(args)) - from fairseq.criterions import CRITERION_REGISTRY - - _set_legacy_defaults(cfg.criterion, CRITERION_REGISTRY[args.criterion]) - cfg.criterion._name = args.criterion - - OmegaConf.set_struct(cfg, True) - return cfg - - -def overwrite_args_by_name(cfg: DictConfig, overrides: Dict[str, any]): - # this will be deprecated when we get rid of argparse and model_overrides logic - - from fairseq.registry import REGISTRIES - - with open_dict(cfg): - for k in cfg.keys(): - # "k in cfg" will return false if its a "mandatory value (e.g. 
???)" - if k in cfg and isinstance(cfg[k], DictConfig): - if k in overrides and isinstance(overrides[k], dict): - for ok, ov in overrides[k].items(): - if isinstance(ov, dict) and cfg[k][ok] is not None: - overwrite_args_by_name(cfg[k][ok], ov) - else: - cfg[k][ok] = ov - else: - overwrite_args_by_name(cfg[k], overrides) - elif k in cfg and isinstance(cfg[k], Namespace): - for override_key, val in overrides.items(): - setattr(cfg[k], override_key, val) - elif k in overrides: - if ( - k in REGISTRIES - and overrides[k] in REGISTRIES[k]["dataclass_registry"] - ): - cfg[k] = DictConfig( - REGISTRIES[k]["dataclass_registry"][overrides[k]] - ) - overwrite_args_by_name(cfg[k], overrides) - cfg[k]._name = overrides[k] - else: - cfg[k] = overrides[k] - - -def merge_with_parent(dc: FairseqDataclass, cfg: DictConfig, remove_missing=True): - if remove_missing: - - if is_dataclass(dc): - target_keys = set(dc.__dataclass_fields__.keys()) - else: - target_keys = set(dc.keys()) - - with open_dict(cfg): - for k in list(cfg.keys()): - if k not in target_keys: - del cfg[k] - - merged_cfg = OmegaConf.merge(dc, cfg) - merged_cfg.__dict__["_parent"] = cfg.__dict__["_parent"] - OmegaConf.set_struct(merged_cfg, True) - return merged_cfg diff --git a/spaces/IPN/demo-sdamian/app.py b/spaces/IPN/demo-sdamian/app.py deleted file mode 100644 index b65012f2c654df04f347f38045568d0b5fe1a9f2..0000000000000000000000000000000000000000 --- a/spaces/IPN/demo-sdamian/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("huggingface/lirondos/anglicisms-spanish-mbert").launch(); diff --git a/spaces/Iceclear/StableSR/StableSR/basicsr/archs/basicvsr_arch.py b/spaces/Iceclear/StableSR/StableSR/basicsr/archs/basicvsr_arch.py deleted file mode 100644 index ed7b824eae108a9bcca57f1c14dd0d8afafc4f58..0000000000000000000000000000000000000000 --- a/spaces/Iceclear/StableSR/StableSR/basicsr/archs/basicvsr_arch.py +++ /dev/null @@ -1,336 +0,0 @@ -import torch -from torch import nn as nn -from torch.nn import functional as F - -from basicsr.utils.registry import ARCH_REGISTRY -from .arch_util import ResidualBlockNoBN, flow_warp, make_layer -from .edvr_arch import PCDAlignment, TSAFusion -from .spynet_arch import SpyNet - - -@ARCH_REGISTRY.register() -class BasicVSR(nn.Module): - """A recurrent network for video SR. Now only x4 is supported. - - Args: - num_feat (int): Number of channels. Default: 64. - num_block (int): Number of residual blocks for each branch. Default: 15 - spynet_path (str): Path to the pretrained weights of SPyNet. Default: None. 
- """ - - def __init__(self, num_feat=64, num_block=15, spynet_path=None): - super().__init__() - self.num_feat = num_feat - - # alignment - self.spynet = SpyNet(spynet_path) - - # propagation - self.backward_trunk = ConvResidualBlocks(num_feat + 3, num_feat, num_block) - self.forward_trunk = ConvResidualBlocks(num_feat + 3, num_feat, num_block) - - # reconstruction - self.fusion = nn.Conv2d(num_feat * 2, num_feat, 1, 1, 0, bias=True) - self.upconv1 = nn.Conv2d(num_feat, num_feat * 4, 3, 1, 1, bias=True) - self.upconv2 = nn.Conv2d(num_feat, 64 * 4, 3, 1, 1, bias=True) - self.conv_hr = nn.Conv2d(64, 64, 3, 1, 1) - self.conv_last = nn.Conv2d(64, 3, 3, 1, 1) - - self.pixel_shuffle = nn.PixelShuffle(2) - - # activation functions - self.lrelu = nn.LeakyReLU(negative_slope=0.1, inplace=True) - - def get_flow(self, x): - b, n, c, h, w = x.size() - - x_1 = x[:, :-1, :, :, :].reshape(-1, c, h, w) - x_2 = x[:, 1:, :, :, :].reshape(-1, c, h, w) - - flows_backward = self.spynet(x_1, x_2).view(b, n - 1, 2, h, w) - flows_forward = self.spynet(x_2, x_1).view(b, n - 1, 2, h, w) - - return flows_forward, flows_backward - - def forward(self, x): - """Forward function of BasicVSR. - - Args: - x: Input frames with shape (b, n, c, h, w). n is the temporal dimension / number of frames. - """ - flows_forward, flows_backward = self.get_flow(x) - b, n, _, h, w = x.size() - - # backward branch - out_l = [] - feat_prop = x.new_zeros(b, self.num_feat, h, w) - for i in range(n - 1, -1, -1): - x_i = x[:, i, :, :, :] - if i < n - 1: - flow = flows_backward[:, i, :, :, :] - feat_prop = flow_warp(feat_prop, flow.permute(0, 2, 3, 1)) - feat_prop = torch.cat([x_i, feat_prop], dim=1) - feat_prop = self.backward_trunk(feat_prop) - out_l.insert(0, feat_prop) - - # forward branch - feat_prop = torch.zeros_like(feat_prop) - for i in range(0, n): - x_i = x[:, i, :, :, :] - if i > 0: - flow = flows_forward[:, i - 1, :, :, :] - feat_prop = flow_warp(feat_prop, flow.permute(0, 2, 3, 1)) - - feat_prop = torch.cat([x_i, feat_prop], dim=1) - feat_prop = self.forward_trunk(feat_prop) - - # upsample - out = torch.cat([out_l[i], feat_prop], dim=1) - out = self.lrelu(self.fusion(out)) - out = self.lrelu(self.pixel_shuffle(self.upconv1(out))) - out = self.lrelu(self.pixel_shuffle(self.upconv2(out))) - out = self.lrelu(self.conv_hr(out)) - out = self.conv_last(out) - base = F.interpolate(x_i, scale_factor=4, mode='bilinear', align_corners=False) - out += base - out_l[i] = out - - return torch.stack(out_l, dim=1) - - -class ConvResidualBlocks(nn.Module): - """Conv and residual block used in BasicVSR. - - Args: - num_in_ch (int): Number of input channels. Default: 3. - num_out_ch (int): Number of output channels. Default: 64. - num_block (int): Number of residual blocks. Default: 15. - """ - - def __init__(self, num_in_ch=3, num_out_ch=64, num_block=15): - super().__init__() - self.main = nn.Sequential( - nn.Conv2d(num_in_ch, num_out_ch, 3, 1, 1, bias=True), nn.LeakyReLU(negative_slope=0.1, inplace=True), - make_layer(ResidualBlockNoBN, num_block, num_feat=num_out_ch)) - - def forward(self, fea): - return self.main(fea) - - -@ARCH_REGISTRY.register() -class IconVSR(nn.Module): - """IconVSR, proposed also in the BasicVSR paper. - - Args: - num_feat (int): Number of channels. Default: 64. - num_block (int): Number of residual blocks for each branch. Default: 15. - keyframe_stride (int): Keyframe stride. Default: 5. - temporal_padding (int): Temporal padding. Default: 2. - spynet_path (str): Path to the pretrained weights of SPyNet. 
Default: None. - edvr_path (str): Path to the pretrained EDVR model. Default: None. - """ - - def __init__(self, - num_feat=64, - num_block=15, - keyframe_stride=5, - temporal_padding=2, - spynet_path=None, - edvr_path=None): - super().__init__() - - self.num_feat = num_feat - self.temporal_padding = temporal_padding - self.keyframe_stride = keyframe_stride - - # keyframe_branch - self.edvr = EDVRFeatureExtractor(temporal_padding * 2 + 1, num_feat, edvr_path) - # alignment - self.spynet = SpyNet(spynet_path) - - # propagation - self.backward_fusion = nn.Conv2d(2 * num_feat, num_feat, 3, 1, 1, bias=True) - self.backward_trunk = ConvResidualBlocks(num_feat + 3, num_feat, num_block) - - self.forward_fusion = nn.Conv2d(2 * num_feat, num_feat, 3, 1, 1, bias=True) - self.forward_trunk = ConvResidualBlocks(2 * num_feat + 3, num_feat, num_block) - - # reconstruction - self.upconv1 = nn.Conv2d(num_feat, num_feat * 4, 3, 1, 1, bias=True) - self.upconv2 = nn.Conv2d(num_feat, 64 * 4, 3, 1, 1, bias=True) - self.conv_hr = nn.Conv2d(64, 64, 3, 1, 1) - self.conv_last = nn.Conv2d(64, 3, 3, 1, 1) - - self.pixel_shuffle = nn.PixelShuffle(2) - - # activation functions - self.lrelu = nn.LeakyReLU(negative_slope=0.1, inplace=True) - - def pad_spatial(self, x): - """Apply padding spatially. - - Since the PCD module in EDVR requires that the resolution is a multiple - of 4, we apply padding to the input LR images if their resolution is - not divisible by 4. - - Args: - x (Tensor): Input LR sequence with shape (n, t, c, h, w). - Returns: - Tensor: Padded LR sequence with shape (n, t, c, h_pad, w_pad). - """ - n, t, c, h, w = x.size() - - pad_h = (4 - h % 4) % 4 - pad_w = (4 - w % 4) % 4 - - # padding - x = x.view(-1, c, h, w) - x = F.pad(x, [0, pad_w, 0, pad_h], mode='reflect') - - return x.view(n, t, c, h + pad_h, w + pad_w) - - def get_flow(self, x): - b, n, c, h, w = x.size() - - x_1 = x[:, :-1, :, :, :].reshape(-1, c, h, w) - x_2 = x[:, 1:, :, :, :].reshape(-1, c, h, w) - - flows_backward = self.spynet(x_1, x_2).view(b, n - 1, 2, h, w) - flows_forward = self.spynet(x_2, x_1).view(b, n - 1, 2, h, w) - - return flows_forward, flows_backward - - def get_keyframe_feature(self, x, keyframe_idx): - if self.temporal_padding == 2: - x = [x[:, [4, 3]], x, x[:, [-4, -5]]] - elif self.temporal_padding == 3: - x = [x[:, [6, 5, 4]], x, x[:, [-5, -6, -7]]] - x = torch.cat(x, dim=1) - - num_frames = 2 * self.temporal_padding + 1 - feats_keyframe = {} - for i in keyframe_idx: - feats_keyframe[i] = self.edvr(x[:, i:i + num_frames].contiguous()) - return feats_keyframe - - def forward(self, x): - b, n, _, h_input, w_input = x.size() - - x = self.pad_spatial(x) - h, w = x.shape[3:] - - keyframe_idx = list(range(0, n, self.keyframe_stride)) - if keyframe_idx[-1] != n - 1: - keyframe_idx.append(n - 1) # last frame is a keyframe - - # compute flow and keyframe features - flows_forward, flows_backward = self.get_flow(x) - feats_keyframe = self.get_keyframe_feature(x, keyframe_idx) - - # backward branch - out_l = [] - feat_prop = x.new_zeros(b, self.num_feat, h, w) - for i in range(n - 1, -1, -1): - x_i = x[:, i, :, :, :] - if i < n - 1: - flow = flows_backward[:, i, :, :, :] - feat_prop = flow_warp(feat_prop, flow.permute(0, 2, 3, 1)) - if i in keyframe_idx: - feat_prop = torch.cat([feat_prop, feats_keyframe[i]], dim=1) - feat_prop = self.backward_fusion(feat_prop) - feat_prop = torch.cat([x_i, feat_prop], dim=1) - feat_prop = self.backward_trunk(feat_prop) - out_l.insert(0, feat_prop) - - # forward branch - feat_prop = 
torch.zeros_like(feat_prop) - for i in range(0, n): - x_i = x[:, i, :, :, :] - if i > 0: - flow = flows_forward[:, i - 1, :, :, :] - feat_prop = flow_warp(feat_prop, flow.permute(0, 2, 3, 1)) - if i in keyframe_idx: - feat_prop = torch.cat([feat_prop, feats_keyframe[i]], dim=1) - feat_prop = self.forward_fusion(feat_prop) - - feat_prop = torch.cat([x_i, out_l[i], feat_prop], dim=1) - feat_prop = self.forward_trunk(feat_prop) - - # upsample - out = self.lrelu(self.pixel_shuffle(self.upconv1(feat_prop))) - out = self.lrelu(self.pixel_shuffle(self.upconv2(out))) - out = self.lrelu(self.conv_hr(out)) - out = self.conv_last(out) - base = F.interpolate(x_i, scale_factor=4, mode='bilinear', align_corners=False) - out += base - out_l[i] = out - - return torch.stack(out_l, dim=1)[..., :4 * h_input, :4 * w_input] - - -class EDVRFeatureExtractor(nn.Module): - """EDVR feature extractor used in IconVSR. - - Args: - num_input_frame (int): Number of input frames. - num_feat (int): Number of feature channels - load_path (str): Path to the pretrained weights of EDVR. Default: None. - """ - - def __init__(self, num_input_frame, num_feat, load_path): - - super(EDVRFeatureExtractor, self).__init__() - - self.center_frame_idx = num_input_frame // 2 - - # extract pyramid features - self.conv_first = nn.Conv2d(3, num_feat, 3, 1, 1) - self.feature_extraction = make_layer(ResidualBlockNoBN, 5, num_feat=num_feat) - self.conv_l2_1 = nn.Conv2d(num_feat, num_feat, 3, 2, 1) - self.conv_l2_2 = nn.Conv2d(num_feat, num_feat, 3, 1, 1) - self.conv_l3_1 = nn.Conv2d(num_feat, num_feat, 3, 2, 1) - self.conv_l3_2 = nn.Conv2d(num_feat, num_feat, 3, 1, 1) - - # pcd and tsa module - self.pcd_align = PCDAlignment(num_feat=num_feat, deformable_groups=8) - self.fusion = TSAFusion(num_feat=num_feat, num_frame=num_input_frame, center_frame_idx=self.center_frame_idx) - - # activation function - self.lrelu = nn.LeakyReLU(negative_slope=0.1, inplace=True) - - if load_path: - self.load_state_dict(torch.load(load_path, map_location=lambda storage, loc: storage)['params']) - - def forward(self, x): - b, n, c, h, w = x.size() - - # extract features for each frame - # L1 - feat_l1 = self.lrelu(self.conv_first(x.view(-1, c, h, w))) - feat_l1 = self.feature_extraction(feat_l1) - # L2 - feat_l2 = self.lrelu(self.conv_l2_1(feat_l1)) - feat_l2 = self.lrelu(self.conv_l2_2(feat_l2)) - # L3 - feat_l3 = self.lrelu(self.conv_l3_1(feat_l2)) - feat_l3 = self.lrelu(self.conv_l3_2(feat_l3)) - - feat_l1 = feat_l1.view(b, n, -1, h, w) - feat_l2 = feat_l2.view(b, n, -1, h // 2, w // 2) - feat_l3 = feat_l3.view(b, n, -1, h // 4, w // 4) - - # PCD alignment - ref_feat_l = [ # reference feature list - feat_l1[:, self.center_frame_idx, :, :, :].clone(), feat_l2[:, self.center_frame_idx, :, :, :].clone(), - feat_l3[:, self.center_frame_idx, :, :, :].clone() - ] - aligned_feat = [] - for i in range(n): - nbr_feat_l = [ # neighboring feature list - feat_l1[:, i, :, :, :].clone(), feat_l2[:, i, :, :, :].clone(), feat_l3[:, i, :, :, :].clone() - ] - aligned_feat.append(self.pcd_align(nbr_feat_l, ref_feat_l)) - aligned_feat = torch.stack(aligned_feat, dim=1) # (b, t, c, h, w) - - # TSA fusion - return self.fusion(aligned_feat) diff --git a/spaces/Ikaros521/so-vits-svc-4.0-ikaros/modules/losses.py b/spaces/Ikaros521/so-vits-svc-4.0-ikaros/modules/losses.py deleted file mode 100644 index cd21799eccde350c3aac0bdd661baf96ed220147..0000000000000000000000000000000000000000 --- a/spaces/Ikaros521/so-vits-svc-4.0-ikaros/modules/losses.py +++ /dev/null @@ -1,61 +0,0 @@ -import 
torch -from torch.nn import functional as F - -import modules.commons as commons - - -def feature_loss(fmap_r, fmap_g): - loss = 0 - for dr, dg in zip(fmap_r, fmap_g): - for rl, gl in zip(dr, dg): - rl = rl.float().detach() - gl = gl.float() - loss += torch.mean(torch.abs(rl - gl)) - - return loss * 2 - - -def discriminator_loss(disc_real_outputs, disc_generated_outputs): - loss = 0 - r_losses = [] - g_losses = [] - for dr, dg in zip(disc_real_outputs, disc_generated_outputs): - dr = dr.float() - dg = dg.float() - r_loss = torch.mean((1-dr)**2) - g_loss = torch.mean(dg**2) - loss += (r_loss + g_loss) - r_losses.append(r_loss.item()) - g_losses.append(g_loss.item()) - - return loss, r_losses, g_losses - - -def generator_loss(disc_outputs): - loss = 0 - gen_losses = [] - for dg in disc_outputs: - dg = dg.float() - l = torch.mean((1-dg)**2) - gen_losses.append(l) - loss += l - - return loss, gen_losses - - -def kl_loss(z_p, logs_q, m_p, logs_p, z_mask): - """ - z_p, logs_q: [b, h, t_t] - m_p, logs_p: [b, h, t_t] - """ - z_p = z_p.float() - logs_q = logs_q.float() - m_p = m_p.float() - logs_p = logs_p.float() - z_mask = z_mask.float() - #print(logs_p) - kl = logs_p - logs_q - 0.5 - kl += 0.5 * ((z_p - m_p)**2) * torch.exp(-2. * logs_p) - kl = torch.sum(kl * z_mask) - l = kl / torch.sum(z_mask) - return l diff --git a/spaces/Illumotion/Koboldcpp/examples/json-schema-to-grammar.py b/spaces/Illumotion/Koboldcpp/examples/json-schema-to-grammar.py deleted file mode 100644 index 2a4cb65bcfc7ef0782004ddccd317026d1e4f569..0000000000000000000000000000000000000000 --- a/spaces/Illumotion/Koboldcpp/examples/json-schema-to-grammar.py +++ /dev/null @@ -1,133 +0,0 @@ -#!/usr/bin/env python3 -import argparse -import json -import re -import sys - -# whitespace is constrained to a single space char to prevent model "running away" in -# whitespace. Also maybe improves generation quality? -SPACE_RULE = '" "?' - -PRIMITIVE_RULES = { - 'boolean': '("true" | "false") space', - 'number': '("-"? ([0-9] | [1-9] [0-9]*)) ("." [0-9]+)? ([eE] [-+]? [0-9]+)? space', - 'integer': '("-"? 
([0-9] | [1-9] [0-9]*)) space', - 'string': r''' "\"" ( - [^"\\] | - "\\" (["\\/bfnrt] | "u" [0-9a-fA-F] [0-9a-fA-F] [0-9a-fA-F] [0-9a-fA-F]) - )* "\"" space ''', - 'null': '"null" space', -} - -INVALID_RULE_CHARS_RE = re.compile(r'[^a-zA-Z0-9-]+') -GRAMMAR_LITERAL_ESCAPE_RE = re.compile(r'[\r\n"]') -GRAMMAR_LITERAL_ESCAPES = {'\r': '\\r', '\n': '\\n', '"': '\\"'} - - -class SchemaConverter: - def __init__(self, prop_order): - self._prop_order = prop_order - self._rules = {'space': SPACE_RULE} - - def _format_literal(self, literal): - escaped = GRAMMAR_LITERAL_ESCAPE_RE.sub( - lambda m: GRAMMAR_LITERAL_ESCAPES.get(m.group(0)), json.dumps(literal) - ) - return f'"{escaped}"' - - def _add_rule(self, name, rule): - esc_name = INVALID_RULE_CHARS_RE.sub('-', name) - if esc_name not in self._rules or self._rules[esc_name] == rule: - key = esc_name - else: - i = 0 - while f'{esc_name}{i}' in self._rules: - i += 1 - key = f'{esc_name}{i}' - self._rules[key] = rule - return key - - def visit(self, schema, name): - schema_type = schema.get('type') - rule_name = name or 'root' - - if 'oneOf' in schema or 'anyOf' in schema: - rule = ' | '.join(( - self.visit(alt_schema, f'{name}{"-" if name else ""}{i}') - for i, alt_schema in enumerate(schema.get('oneOf') or schema['anyOf']) - )) - return self._add_rule(rule_name, rule) - - elif 'const' in schema: - return self._add_rule(rule_name, self._format_literal(schema['const'])) - - elif 'enum' in schema: - rule = ' | '.join((self._format_literal(v) for v in schema['enum'])) - return self._add_rule(rule_name, rule) - - elif schema_type == 'object' and 'properties' in schema: - # TODO: `required` keyword - prop_order = self._prop_order - prop_pairs = sorted( - schema['properties'].items(), - # sort by position in prop_order (if specified) then by key - key=lambda kv: (prop_order.get(kv[0], len(prop_order)), kv[0]), - ) - - rule = '"{" space' - for i, (prop_name, prop_schema) in enumerate(prop_pairs): - prop_rule_name = self.visit(prop_schema, f'{name}{"-" if name else ""}{prop_name}') - if i > 0: - rule += ' "," space' - rule += fr' {self._format_literal(prop_name)} space ":" space {prop_rule_name}' - rule += ' "}" space' - - return self._add_rule(rule_name, rule) - - elif schema_type == 'array' and 'items' in schema: - # TODO `prefixItems` keyword - item_rule_name = self.visit(schema['items'], f'{name}{"-" if name else ""}item') - rule = f'"[" space ({item_rule_name} ("," space {item_rule_name})*)? "]" space' - return self._add_rule(rule_name, rule) - - else: - assert schema_type in PRIMITIVE_RULES, f'Unrecognized schema: {schema}' - return self._add_rule( - 'root' if rule_name == 'root' else schema_type, - PRIMITIVE_RULES[schema_type] - ) - - def format_grammar(self): - return '\n'.join((f'{name} ::= {rule}' for name, rule in self._rules.items())) - - -def main(args_in = None): - parser = argparse.ArgumentParser( - description=''' - Generates a grammar (suitable for use in ./main) that produces JSON conforming to a - given JSON schema. Only a subset of JSON schema features are supported; more may be - added in the future. 
- ''', - ) - parser.add_argument( - '--prop-order', - default=[], - type=lambda s: s.split(','), - help=''' - comma-separated property names defining the order of precedence for object properties; - properties not specified here are given lower precedence than those that are, and are - sorted alphabetically - ''' - ) - parser.add_argument('schema', help='file containing JSON schema ("-" for stdin)') - args = parser.parse_args(args_in) - - schema = json.load(sys.stdin if args.schema == '-' else open(args.schema)) - prop_order = {name: idx for idx, name in enumerate(args.prop_order)} - converter = SchemaConverter(prop_order) - converter.visit(schema, '') - print(converter.format_grammar()) - - -if __name__ == '__main__': - main() diff --git a/spaces/IntelligenzaArtificiale/ChatGLM-6B-Int4-API-OpenAI-Compatible/main.py b/spaces/IntelligenzaArtificiale/ChatGLM-6B-Int4-API-OpenAI-Compatible/main.py deleted file mode 100644 index 292156ded07986db1b24cd2fb1b6a81a53b8fbb8..0000000000000000000000000000000000000000 --- a/spaces/IntelligenzaArtificiale/ChatGLM-6B-Int4-API-OpenAI-Compatible/main.py +++ /dev/null @@ -1,114 +0,0 @@ -import json -from typing import List - -import torch -from fastapi import FastAPI, Request, status, HTTPException -from pydantic import BaseModel -from torch.cuda import get_device_properties -from transformers import AutoModel, AutoTokenizer -from sse_starlette.sse import EventSourceResponse -from fastapi.middleware.cors import CORSMiddleware -import uvicorn - -import os - -os.environ['TRANSFORMERS_CACHE'] = ".cache" - -bits = 4 -kernel_path = "models/models--silver--chatglm-6b-int4-slim/quantization_kernels.so" -model_path = "./models/models--silver--chatglm-6b-int4-slim/snapshots/02e096b3805c579caf5741a6d8eddd5ba7a74e0d" -cache_dir = './models' -model_name = 'chatglm-6b-int4' -min_memory = 5.5 -tokenizer = None -model = None - -app = FastAPI() - -app.add_middleware( - CORSMiddleware, - allow_origins=["*"], - allow_credentials=True, - allow_methods=["*"], - allow_headers=["*"], -) - - -@app.on_event('startup') -def init(): - global tokenizer, model - tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True, cache_dir=cache_dir) - model = AutoModel.from_pretrained(model_path, trust_remote_code=True, cache_dir=cache_dir) - - if torch.cuda.is_available() and get_device_properties(0).total_memory / 1024 ** 3 > min_memory: - model = model.half().quantize(bits=bits).cuda() - print("Using GPU") - else: - model = model.float().quantize(bits=bits) - if torch.cuda.is_available(): - print("Total Memory: ", get_device_properties(0).total_memory / 1024 ** 3) - else: - print("No GPU available") - print("Using CPU") - model = model.eval() - if os.environ.get("ngrok_token") is not None: - ngrok_connect() - - -class Message(BaseModel): - role: str - content: str - - -class Body(BaseModel): - messages: List[Message] - model: str - stream: bool - max_tokens: int - - -@app.get("/") -def read_root(): - return {"Hello": "World!"} - - -@app.post("/chat/completions") -async def completions(body: Body, request: Request): - if not body.stream or body.model != model_name: - raise HTTPException(status.HTTP_400_BAD_REQUEST, "Not Implemented") - - question = body.messages[-1] - if question.role == 'user': - question = question.content - else: - raise HTTPException(status.HTTP_400_BAD_REQUEST, "No Question Found") - - user_question = '' - history = [] - for message in body.messages: - if message.role == 'user': - user_question = message.content - elif message.role == 'system' or 
message.role == 'assistant': - assistant_answer = message.content - history.append((user_question, assistant_answer)) - - async def event_generator(): - for response in model.stream_chat(tokenizer, question, history, max_length=max(2048, body.max_tokens)): - if await request.is_disconnected(): - return - yield json.dumps({"response": response[0]}) - yield "[DONE]" - - return EventSourceResponse(event_generator()) - - -def ngrok_connect(): - from pyngrok import ngrok, conf - conf.set_default(conf.PyngrokConfig(ngrok_path="./ngrok")) - #ngrok.set_auth_token(os.environ["ngrok_token"]) - http_tunnel = ngrok.connect(8000) - print(http_tunnel.public_url) - - -if __name__ == "__main__": - uvicorn.run("main:app", reload=True, app_dir=".") diff --git a/spaces/IsaacK/streamlit-test/old/grid.py b/spaces/IsaacK/streamlit-test/old/grid.py deleted file mode 100644 index ff098f786769285497197ffe47fc557a3495ff04..0000000000000000000000000000000000000000 --- a/spaces/IsaacK/streamlit-test/old/grid.py +++ /dev/null @@ -1,19 +0,0 @@ -import streamlit as st -import pandas as pd -import numpy as np - -from st_aggrid import AgGrid, GridOptionsBuilder, JsCode - -def app(): - df_template = pd.DataFrame( - '', - index=range(10), - columns=list('abcde') - ) - - with st.form('example form') as f: - st.header('Example Form') - response = AgGrid(df_template, editable=True, fit_columns_on_grid_load=True) - st.form_submit_button() - - st.write(response['data']) \ No newline at end of file diff --git a/spaces/Jackflack09/diffuse-custom/diffusers/utils/dummy_torch_and_transformers_and_onnx_objects.py b/spaces/Jackflack09/diffuse-custom/diffusers/utils/dummy_torch_and_transformers_and_onnx_objects.py deleted file mode 100644 index ae9412a9568202bb8ae39ea2a07cc26208cf7aa8..0000000000000000000000000000000000000000 --- a/spaces/Jackflack09/diffuse-custom/diffusers/utils/dummy_torch_and_transformers_and_onnx_objects.py +++ /dev/null @@ -1,79 +0,0 @@ -# This file is autogenerated by the command `make fix-copies`, do not edit. 
-# flake8: noqa - -from ..utils import DummyObject, requires_backends - - -class OnnxStableDiffusionImg2ImgPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers", "onnx"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers", "onnx"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers", "onnx"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers", "onnx"]) - - -class OnnxStableDiffusionInpaintPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers", "onnx"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers", "onnx"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers", "onnx"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers", "onnx"]) - - -class OnnxStableDiffusionInpaintPipelineLegacy(metaclass=DummyObject): - _backends = ["torch", "transformers", "onnx"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers", "onnx"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers", "onnx"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers", "onnx"]) - - -class OnnxStableDiffusionPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers", "onnx"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers", "onnx"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers", "onnx"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers", "onnx"]) - - -class StableDiffusionOnnxPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers", "onnx"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers", "onnx"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers", "onnx"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers", "onnx"]) diff --git a/spaces/Jaehan/ChatBot-1/README.md b/spaces/Jaehan/ChatBot-1/README.md deleted file mode 100644 index 496aa7c30cbd934b1233c08ac991f2f2faf5a3f6..0000000000000000000000000000000000000000 --- a/spaces/Jaehan/ChatBot-1/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: ChatBot 1 -emoji: 🐢 -colorFrom: indigo -colorTo: blue -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/JeffJing/ZookChatBot/steamship/utils/__init__.py b/spaces/JeffJing/ZookChatBot/steamship/utils/__init__.py deleted file mode 100644 index fff9cf696eeb5d7af026657a2999375ea863fd17..0000000000000000000000000000000000000000 --- a/spaces/JeffJing/ZookChatBot/steamship/utils/__init__.py +++ /dev/null @@ -1 +0,0 @@ -"""Collection of utility functions.""" diff --git a/spaces/JingyeChen22/TextDiffuser/text-to-image-with-template.sh b/spaces/JingyeChen22/TextDiffuser/text-to-image-with-template.sh deleted file mode 100644 index 23a28fb9e3447ea5bbbb733af78b649728dae818..0000000000000000000000000000000000000000 --- 
a/spaces/JingyeChen22/TextDiffuser/text-to-image-with-template.sh +++ /dev/null @@ -1,7 +0,0 @@ -CUDA_VISIBLE_DEVICES=0 python inference.py \ - --mode="text-to-image-with-template" \ - --resume_from_checkpoint="textdiffuser-ckpt/diffusion_backbone" \ - --prompt="a poster of monkey music festival" \ - --template_image="assets/examples/text-to-image-with-template/case2.jpg" \ - --output_dir="./output" \ - --vis_num=4 \ No newline at end of file diff --git a/spaces/KPCGD/bingo/Dockerfile b/spaces/KPCGD/bingo/Dockerfile deleted file mode 100644 index 3aa2b29b5fc4fa8b8238955acd7f1fde13ce5e1a..0000000000000000000000000000000000000000 --- a/spaces/KPCGD/bingo/Dockerfile +++ /dev/null @@ -1,36 +0,0 @@ -FROM node:18 - - -ARG DEBIAN_FRONTEND=noninteractive - -ENV BING_HEADER "" - -# Set home to the user's home directory -ENV HOME=/home/user \ - PATH=/home/user/.local/bin:$PATH - -# Set up a new user named "user" with user ID 1000 -RUN useradd -o -u 1000 user && mkdir -p $HOME/app && chown -R user $HOME - -# Switch to the "user" user -USER user - -# Set the working directory to the user's home directory -WORKDIR $HOME/app - -# Install app dependencies -# A wildcard is used to ensure both package.json AND package-lock.json are copied -# where available (npm@5+) -COPY --chown=user package*.json $HOME/app/ - -RUN npm install - -# Copy the current directory contents into the container at $HOME/app setting the owner to the user -COPY --chown=user . $HOME/app/ - -RUN npm run build - -ENV PORT 7860 -EXPOSE 7860 - -CMD npm start diff --git a/spaces/Kangarroar/ApplioRVC-Inference/lib/uvr5_pack/lib_v5/nets_123821KB.py b/spaces/Kangarroar/ApplioRVC-Inference/lib/uvr5_pack/lib_v5/nets_123821KB.py deleted file mode 100644 index becbfae85683a13bbb19d3ea6c840da24e61e01e..0000000000000000000000000000000000000000 --- a/spaces/Kangarroar/ApplioRVC-Inference/lib/uvr5_pack/lib_v5/nets_123821KB.py +++ /dev/null @@ -1,122 +0,0 @@ -import torch -from torch import nn -import torch.nn.functional as F - -from . 
import layers_123821KB as layers - - -class BaseASPPNet(nn.Module): - def __init__(self, nin, ch, dilations=(4, 8, 16)): - super(BaseASPPNet, self).__init__() - self.enc1 = layers.Encoder(nin, ch, 3, 2, 1) - self.enc2 = layers.Encoder(ch, ch * 2, 3, 2, 1) - self.enc3 = layers.Encoder(ch * 2, ch * 4, 3, 2, 1) - self.enc4 = layers.Encoder(ch * 4, ch * 8, 3, 2, 1) - - self.aspp = layers.ASPPModule(ch * 8, ch * 16, dilations) - - self.dec4 = layers.Decoder(ch * (8 + 16), ch * 8, 3, 1, 1) - self.dec3 = layers.Decoder(ch * (4 + 8), ch * 4, 3, 1, 1) - self.dec2 = layers.Decoder(ch * (2 + 4), ch * 2, 3, 1, 1) - self.dec1 = layers.Decoder(ch * (1 + 2), ch, 3, 1, 1) - - def __call__(self, x): - h, e1 = self.enc1(x) - h, e2 = self.enc2(h) - h, e3 = self.enc3(h) - h, e4 = self.enc4(h) - - h = self.aspp(h) - - h = self.dec4(h, e4) - h = self.dec3(h, e3) - h = self.dec2(h, e2) - h = self.dec1(h, e1) - - return h - - -class CascadedASPPNet(nn.Module): - def __init__(self, n_fft): - super(CascadedASPPNet, self).__init__() - self.stg1_low_band_net = BaseASPPNet(2, 32) - self.stg1_high_band_net = BaseASPPNet(2, 32) - - self.stg2_bridge = layers.Conv2DBNActiv(34, 16, 1, 1, 0) - self.stg2_full_band_net = BaseASPPNet(16, 32) - - self.stg3_bridge = layers.Conv2DBNActiv(66, 32, 1, 1, 0) - self.stg3_full_band_net = BaseASPPNet(32, 64) - - self.out = nn.Conv2d(64, 2, 1, bias=False) - self.aux1_out = nn.Conv2d(32, 2, 1, bias=False) - self.aux2_out = nn.Conv2d(32, 2, 1, bias=False) - - self.max_bin = n_fft // 2 - self.output_bin = n_fft // 2 + 1 - - self.offset = 128 - - def forward(self, x, aggressiveness=None): - mix = x.detach() - x = x.clone() - - x = x[:, :, : self.max_bin] - - bandw = x.size()[2] // 2 - aux1 = torch.cat( - [ - self.stg1_low_band_net(x[:, :, :bandw]), - self.stg1_high_band_net(x[:, :, bandw:]), - ], - dim=2, - ) - - h = torch.cat([x, aux1], dim=1) - aux2 = self.stg2_full_band_net(self.stg2_bridge(h)) - - h = torch.cat([x, aux1, aux2], dim=1) - h = self.stg3_full_band_net(self.stg3_bridge(h)) - - mask = torch.sigmoid(self.out(h)) - mask = F.pad( - input=mask, - pad=(0, 0, 0, self.output_bin - mask.size()[2]), - mode="replicate", - ) - - if self.training: - aux1 = torch.sigmoid(self.aux1_out(aux1)) - aux1 = F.pad( - input=aux1, - pad=(0, 0, 0, self.output_bin - aux1.size()[2]), - mode="replicate", - ) - aux2 = torch.sigmoid(self.aux2_out(aux2)) - aux2 = F.pad( - input=aux2, - pad=(0, 0, 0, self.output_bin - aux2.size()[2]), - mode="replicate", - ) - return mask * mix, aux1 * mix, aux2 * mix - else: - if aggressiveness: - mask[:, :, : aggressiveness["split_bin"]] = torch.pow( - mask[:, :, : aggressiveness["split_bin"]], - 1 + aggressiveness["value"] / 3, - ) - mask[:, :, aggressiveness["split_bin"] :] = torch.pow( - mask[:, :, aggressiveness["split_bin"] :], - 1 + aggressiveness["value"], - ) - - return mask * mix - - def predict(self, x_mag, aggressiveness=None): - h = self.forward(x_mag, aggressiveness) - - if self.offset > 0: - h = h[:, :, :, self.offset : -self.offset] - assert h.size()[3] > 0 - - return h diff --git a/spaces/Kangarroar/ApplioRVC-Inference/run.sh b/spaces/Kangarroar/ApplioRVC-Inference/run.sh deleted file mode 100644 index 704c9fff20b42b8659f7b4c797cd2928af9dec7a..0000000000000000000000000000000000000000 --- a/spaces/Kangarroar/ApplioRVC-Inference/run.sh +++ /dev/null @@ -1,61 +0,0 @@ -#!/bin/bash - -if [[ "$(uname)" == "Darwin" ]]; then - # macOS specific env: - export PYTORCH_ENABLE_MPS_FALLBACK=1 - export PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 -elif [[ "$(uname)" != "Linux" 
]]; then - echo "Unsupported operating system." - exit 1 -fi - -if [ -d ".venv" ]; then - echo "Activate venv..." - source .venv/bin/activate -else - echo "Create venv..." - requirements_file="requirements.txt" - - # Check if Python 3.8 is installed - if ! command -v python3 &> /dev/null; then - echo "Python 3 not found. Attempting to install 3.8..." - if [[ "$(uname)" == "Darwin" ]] && command -v brew &> /dev/null; then - brew install python@3.8 - elif [[ "$(uname)" == "Linux" ]] && command -v apt-get &> /dev/null; then - sudo apt-get update - sudo apt-get install python3.8 - else - echo "Please install Python 3.8 manually." - exit 1 - fi - fi - - python3 -m venv .venv - source .venv/bin/activate - - # Check if required packages are installed and install them if not - if [ -f "${requirements_file}" ]; then - installed_packages=$(python3 -m pip freeze) - while IFS= read -r package; do - [[ "${package}" =~ ^#.* ]] && continue - package_name=$(echo "${package}" | sed 's/[<>=!].*//') - if ! echo "${installed_packages}" | grep -q "${package_name}"; then - echo "${package_name} not found. Attempting to install..." - python3 -m pip install --upgrade "${package}" - fi - done < "${requirements_file}" - else - echo "${requirements_file} not found. Please ensure the requirements file with required packages exists." - exit 1 - fi -fi - -# Download models -./tools/dlmodels.sh - -if [[ $? -ne 0 ]]; then - exit 1 -fi - -# Run the main script -python3 infer-web.py --pycmd python3 diff --git a/spaces/Kedreamix/YoloGesture/nets/__init__.py b/spaces/Kedreamix/YoloGesture/nets/__init__.py deleted file mode 100644 index 4287ca8617970fa8fc025b75cb319c7032706910..0000000000000000000000000000000000000000 --- a/spaces/Kedreamix/YoloGesture/nets/__init__.py +++ /dev/null @@ -1 +0,0 @@ -# \ No newline at end of file diff --git a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/ppg_extractor/encoder/conformer_encoder.py b/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/ppg_extractor/encoder/conformer_encoder.py deleted file mode 100644 index d31e97a28fffec3599558109971b771c64ee2a80..0000000000000000000000000000000000000000 --- a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/ppg_extractor/encoder/conformer_encoder.py +++ /dev/null @@ -1,262 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding: utf-8 -*- - -# Copyright 2019 Shigeki Karita -# Apache 2.0 (http://www.apache.org/licenses/LICENSE-2.0) - -"""Encoder definition.""" - -import logging -import torch -from typing import Callable -from typing import Collection -from typing import Dict -from typing import List -from typing import Optional -from typing import Tuple - -from .convolution import ConvolutionModule -from .encoder_layer import EncoderLayer -from ..nets_utils import get_activation, make_pad_mask -from .vgg import VGG2L -from .attention import MultiHeadedAttention, RelPositionMultiHeadedAttention -from .embedding import PositionalEncoding, ScaledPositionalEncoding, RelPositionalEncoding -from .layer_norm import LayerNorm -from .multi_layer_conv import Conv1dLinear, MultiLayeredConv1d -from .positionwise_feed_forward import PositionwiseFeedForward -from .repeat import repeat -from .subsampling import Conv2dNoSubsampling, Conv2dSubsampling - - -class ConformerEncoder(torch.nn.Module): - """Conformer encoder module. 
- - :param int idim: input dim - :param int attention_dim: dimention of attention - :param int attention_heads: the number of heads of multi head attention - :param int linear_units: the number of units of position-wise feed forward - :param int num_blocks: the number of decoder blocks - :param float dropout_rate: dropout rate - :param float attention_dropout_rate: dropout rate in attention - :param float positional_dropout_rate: dropout rate after adding positional encoding - :param str or torch.nn.Module input_layer: input layer type - :param bool normalize_before: whether to use layer_norm before the first block - :param bool concat_after: whether to concat attention layer's input and output - if True, additional linear will be applied. - i.e. x -> x + linear(concat(x, att(x))) - if False, no additional linear will be applied. i.e. x -> x + att(x) - :param str positionwise_layer_type: linear of conv1d - :param int positionwise_conv_kernel_size: kernel size of positionwise conv1d layer - :param str encoder_pos_enc_layer_type: encoder positional encoding layer type - :param str encoder_attn_layer_type: encoder attention layer type - :param str activation_type: encoder activation function type - :param bool macaron_style: whether to use macaron style for positionwise layer - :param bool use_cnn_module: whether to use convolution module - :param int cnn_module_kernel: kernerl size of convolution module - :param int padding_idx: padding_idx for input_layer=embed - """ - - def __init__( - self, - input_size, - attention_dim=256, - attention_heads=4, - linear_units=2048, - num_blocks=6, - dropout_rate=0.1, - positional_dropout_rate=0.1, - attention_dropout_rate=0.0, - input_layer="conv2d", - normalize_before=True, - concat_after=False, - positionwise_layer_type="linear", - positionwise_conv_kernel_size=1, - macaron_style=False, - pos_enc_layer_type="abs_pos", - selfattention_layer_type="selfattn", - activation_type="swish", - use_cnn_module=False, - cnn_module_kernel=31, - padding_idx=-1, - no_subsample=False, - subsample_by_2=False, - ): - """Construct an Encoder object.""" - super().__init__() - - self._output_size = attention_dim - idim = input_size - - activation = get_activation(activation_type) - if pos_enc_layer_type == "abs_pos": - pos_enc_class = PositionalEncoding - elif pos_enc_layer_type == "scaled_abs_pos": - pos_enc_class = ScaledPositionalEncoding - elif pos_enc_layer_type == "rel_pos": - assert selfattention_layer_type == "rel_selfattn" - pos_enc_class = RelPositionalEncoding - else: - raise ValueError("unknown pos_enc_layer: " + pos_enc_layer_type) - - if input_layer == "linear": - self.embed = torch.nn.Sequential( - torch.nn.Linear(idim, attention_dim), - torch.nn.LayerNorm(attention_dim), - torch.nn.Dropout(dropout_rate), - pos_enc_class(attention_dim, positional_dropout_rate), - ) - elif input_layer == "conv2d": - logging.info("Encoder input layer type: conv2d") - if no_subsample: - self.embed = Conv2dNoSubsampling( - idim, - attention_dim, - dropout_rate, - pos_enc_class(attention_dim, positional_dropout_rate), - ) - else: - self.embed = Conv2dSubsampling( - idim, - attention_dim, - dropout_rate, - pos_enc_class(attention_dim, positional_dropout_rate), - subsample_by_2, # NOTE(Sx): added by songxiang - ) - elif input_layer == "vgg2l": - self.embed = VGG2L(idim, attention_dim) - elif input_layer == "embed": - self.embed = torch.nn.Sequential( - torch.nn.Embedding(idim, attention_dim, padding_idx=padding_idx), - pos_enc_class(attention_dim, positional_dropout_rate), - ) - 
elif isinstance(input_layer, torch.nn.Module): - self.embed = torch.nn.Sequential( - input_layer, - pos_enc_class(attention_dim, positional_dropout_rate), - ) - elif input_layer is None: - self.embed = torch.nn.Sequential( - pos_enc_class(attention_dim, positional_dropout_rate) - ) - else: - raise ValueError("unknown input_layer: " + input_layer) - self.normalize_before = normalize_before - if positionwise_layer_type == "linear": - positionwise_layer = PositionwiseFeedForward - positionwise_layer_args = ( - attention_dim, - linear_units, - dropout_rate, - activation, - ) - elif positionwise_layer_type == "conv1d": - positionwise_layer = MultiLayeredConv1d - positionwise_layer_args = ( - attention_dim, - linear_units, - positionwise_conv_kernel_size, - dropout_rate, - ) - elif positionwise_layer_type == "conv1d-linear": - positionwise_layer = Conv1dLinear - positionwise_layer_args = ( - attention_dim, - linear_units, - positionwise_conv_kernel_size, - dropout_rate, - ) - else: - raise NotImplementedError("Support only linear or conv1d.") - - if selfattention_layer_type == "selfattn": - logging.info("encoder self-attention layer type = self-attention") - encoder_selfattn_layer = MultiHeadedAttention - encoder_selfattn_layer_args = ( - attention_heads, - attention_dim, - attention_dropout_rate, - ) - elif selfattention_layer_type == "rel_selfattn": - assert pos_enc_layer_type == "rel_pos" - encoder_selfattn_layer = RelPositionMultiHeadedAttention - encoder_selfattn_layer_args = ( - attention_heads, - attention_dim, - attention_dropout_rate, - ) - else: - raise ValueError("unknown encoder_attn_layer: " + selfattention_layer_type) - - convolution_layer = ConvolutionModule - convolution_layer_args = (attention_dim, cnn_module_kernel, activation) - - self.encoders = repeat( - num_blocks, - lambda lnum: EncoderLayer( - attention_dim, - encoder_selfattn_layer(*encoder_selfattn_layer_args), - positionwise_layer(*positionwise_layer_args), - positionwise_layer(*positionwise_layer_args) if macaron_style else None, - convolution_layer(*convolution_layer_args) if use_cnn_module else None, - dropout_rate, - normalize_before, - concat_after, - ), - ) - if self.normalize_before: - self.after_norm = LayerNorm(attention_dim) - - def output_size(self) -> int: - return self._output_size - - def forward( - self, - xs_pad: torch.Tensor, - ilens: torch.Tensor, - prev_states: torch.Tensor = None, - ) -> Tuple[torch.Tensor, torch.Tensor, Optional[torch.Tensor]]: - """ - Args: - xs_pad: input tensor (B, L, D) - ilens: input lengths (B) - prev_states: Not to be used now. - Returns: - Position embedded tensor and mask - """ - masks = (~make_pad_mask(ilens)[:, None, :]).to(xs_pad.device) - - if isinstance(self.embed, (Conv2dSubsampling, Conv2dNoSubsampling, VGG2L)): - # print(xs_pad.shape) - xs_pad, masks = self.embed(xs_pad, masks) - # print(xs_pad[0].size()) - else: - xs_pad = self.embed(xs_pad) - xs_pad, masks = self.encoders(xs_pad, masks) - if isinstance(xs_pad, tuple): - xs_pad = xs_pad[0] - - if self.normalize_before: - xs_pad = self.after_norm(xs_pad) - olens = masks.squeeze(1).sum(1) - return xs_pad, olens, None - - # def forward(self, xs, masks): - # """Encode input sequence. 
- - # :param torch.Tensor xs: input tensor - # :param torch.Tensor masks: input mask - # :return: position embedded tensor and mask - # :rtype Tuple[torch.Tensor, torch.Tensor]: - # """ - # if isinstance(self.embed, (Conv2dSubsampling, VGG2L)): - # xs, masks = self.embed(xs, masks) - # else: - # xs = self.embed(xs) - - # xs, masks = self.encoders(xs, masks) - # if isinstance(xs, tuple): - # xs = xs[0] - - # if self.normalize_before: - # xs = self.after_norm(xs) - # return xs, masks diff --git a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/vocoder/hifigan/env.py b/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/vocoder/hifigan/env.py deleted file mode 100644 index 8f0d306d518d0d86a40d7ee992fbad6f04fe875f..0000000000000000000000000000000000000000 --- a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/vocoder/hifigan/env.py +++ /dev/null @@ -1,8 +0,0 @@ -import os -import shutil - -def build_env(config, config_name, path): - t_path = os.path.join(path, config_name) - if config != t_path: - os.makedirs(path, exist_ok=True) - shutil.copyfile(config, os.path.join(path, config_name)) diff --git a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/web/templates/index.html b/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/web/templates/index.html deleted file mode 100644 index c61492e03fc88214d5ce7f822e3804b5371daa27..0000000000000000000000000000000000000000 --- a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/web/templates/index.html +++ /dev/null @@ -1,523 +0,0 @@ - - - - - - - - - MockingBird Web Server - - - - - - - - - - - - - -
- [web/templates/index.html body: HTML markup lost in extraction; the recoverable page text of the deleted template is kept below]
- 拟声鸟工具箱 (MockingBird toolbox)
- 1. 请输入中文 (1. Enter Chinese text)
- 2. 请直接录音,点击停止结束 (2. Record directly, click stop to finish)
- 或上传音频 (or upload an audio file)
- 3. 选择Synthesizer模型 (3. Select the Synthesizer model)
- 4. 选择Vocoder模型 (4. Select the Vocoder model)
- - - - - - - - - - - - - - - \ No newline at end of file diff --git a/spaces/Kimata/Sanskrit-TTS/indic_nlp_library/indicnlp/normalize/__init__.py b/spaces/Kimata/Sanskrit-TTS/indic_nlp_library/indicnlp/normalize/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Kimata/Sanskrit-TTS/modules.py b/spaces/Kimata/Sanskrit-TTS/modules.py deleted file mode 100644 index f5af1fd9a20dc03707889f360a39bb4b784a6df3..0000000000000000000000000000000000000000 --- a/spaces/Kimata/Sanskrit-TTS/modules.py +++ /dev/null @@ -1,387 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d -from torch.nn.utils import weight_norm, remove_weight_norm - -import commons -from commons import init_weights, get_padding -from transforms import piecewise_rational_quadratic_transform - - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." - - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential( - nn.ReLU(), - nn.Dropout(p_dropout)) - for _ in range(n_layers-1): - self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size ** i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size, - groups=channels, dilation=dilation, padding=padding - )) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - 
y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0): - super(WN, self).__init__() - assert(kernel_size % 2 == 1) - self.hidden_channels =hidden_channels - self.kernel_size = kernel_size, - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight') - - for i in range(n_layers): - dilation = dilation_rate ** i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size, - dilation=dilation, padding=padding) - in_layer = torch.nn.utils.weight_norm(in_layer, name='weight') - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight') - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply( - x_in, - g_l, - n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:,:self.hidden_channels,:] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:,self.hidden_channels:,:] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, 
kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels,1)) - self.logs = nn.Parameter(torch.zeros(channels,1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1,2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels]*2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * 
x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1,2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - -class ConvFlow(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.) - self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] - - unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_derivatives = h[..., 2 * self.num_bins:] - - x1, logabsdet = piecewise_rational_quadratic_transform(x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails='linear', - tail_bound=self.tail_bound - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1,2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/Kwabbs/SENTIMENT_APP/app.py b/spaces/Kwabbs/SENTIMENT_APP/app.py deleted file mode 100644 index b30cdd7ee3b13f8009e318e7dd870f0cbb235c50..0000000000000000000000000000000000000000 --- a/spaces/Kwabbs/SENTIMENT_APP/app.py +++ /dev/null @@ -1,63 +0,0 @@ -import streamlit as st -from transformers import AutoTokenizer, AutoModelForSequenceClassification -import torch -import numpy as np -from scipy.special import softmax - - -# Add description and title -st.write(""" -# Sentiment Analysis App -""") - - -# Add image -image = st.image("images.png", width=200) - - -# Get user input -text = st.text_input("Type here:") -button = st.button('Analyze') - -# Define the CSS style for the app -st.markdown( -""" - -""", -unsafe_allow_html=True -) - - -def preprocess(text): - new_text = [] - for t in text.split(" "): - t = '@user' if t.startswith('@') and len(t) > 1 else t - t = 'http' if t.startswith('http') else t - new_text.append(t) - return " ".join(new_text) - -@st.cache_resource() -def get_model(): - # Load the model and tokenizer - tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") - model = AutoModelForSequenceClassification.from_pretrained("MrDdz/bert-base-uncased") - return tokenizer,model -tokenizer, model = get_model() - -if button: - text_sample = tokenizer(text, padding = 'max_length',return_tensors = 'pt') - # print(text_sample) - output = model(**text_sample) - scores_ = output[0][0].detach().numpy() - scores_ = softmax(scores_) - - labels = ['Negative','Neutral','Positive'] - scores = {l:float(s) for (l,s) in zip(labels,scores_)} - st.write("Prediction :",scores) \ No newline at end of file diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/layers/conv_upsample.py b/spaces/KyanChen/RSPrompter/mmdet/models/layers/conv_upsample.py deleted file mode 
100644 index 32505875a2162330ed7d00455f088d08d94f679e..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/models/layers/conv_upsample.py +++ /dev/null @@ -1,67 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn.functional as F -from mmcv.cnn import ConvModule -from mmengine.model import BaseModule, ModuleList - - -class ConvUpsample(BaseModule): - """ConvUpsample performs 2x upsampling after Conv. - - There are several `ConvModule` layers. In the first few layers, upsampling - will be applied after each layer of convolution. The number of upsampling - must be no more than the number of ConvModule layers. - - Args: - in_channels (int): Number of channels in the input feature map. - inner_channels (int): Number of channels produced by the convolution. - num_layers (int): Number of convolution layers. - num_upsample (int | optional): Number of upsampling layer. Must be no - more than num_layers. Upsampling will be applied after the first - ``num_upsample`` layers of convolution. Default: ``num_layers``. - conv_cfg (dict): Config dict for convolution layer. Default: None, - which means using conv2d. - norm_cfg (dict): Config dict for normalization layer. Default: None. - init_cfg (dict): Config dict for initialization. Default: None. - kwargs (key word augments): Other augments used in ConvModule. - """ - - def __init__(self, - in_channels, - inner_channels, - num_layers=1, - num_upsample=None, - conv_cfg=None, - norm_cfg=None, - init_cfg=None, - **kwargs): - super(ConvUpsample, self).__init__(init_cfg) - if num_upsample is None: - num_upsample = num_layers - assert num_upsample <= num_layers, \ - f'num_upsample({num_upsample})must be no more than ' \ - f'num_layers({num_layers})' - self.num_layers = num_layers - self.num_upsample = num_upsample - self.conv = ModuleList() - for i in range(num_layers): - self.conv.append( - ConvModule( - in_channels, - inner_channels, - 3, - padding=1, - stride=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - **kwargs)) - in_channels = inner_channels - - def forward(self, x): - num_upsample = self.num_upsample - for i in range(self.num_layers): - x = self.conv[i](x) - if num_upsample > 0: - num_upsample -= 1 - x = F.interpolate( - x, scale_factor=2, mode='bilinear', align_corners=False) - return x diff --git a/spaces/Lianjd/stock_dashboard/backtrader/filters/__init__.py b/spaces/Lianjd/stock_dashboard/backtrader/filters/__init__.py deleted file mode 100644 index 502f729fe1b6b8e6c79624f370c5b7cfe7831b70..0000000000000000000000000000000000000000 --- a/spaces/Lianjd/stock_dashboard/backtrader/filters/__init__.py +++ /dev/null @@ -1,34 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8; py-indent-offset:4 -*- -############################################################################### -# -# Copyright (C) 2015-2020 Daniel Rodriguez -# -# This program is free software: you can redistribute it and/or modify -# it under the terms of the GNU General Public License as published by -# the Free Software Foundation, either version 3 of the License, or -# (at your option) any later version. -# -# This program is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the -# GNU General Public License for more details. -# -# You should have received a copy of the GNU General Public License -# along with this program. If not, see . 
-#
-###############################################################################
-from __future__ import (absolute_import, division, print_function,
-                        unicode_literals)
-
-
-from .. import Filter
-
-from .datafilter import *
-from .datafiller import *
-from .session import *
-from .calendardays import *
-from .daysteps import *
-from .bsplitter import *
-from .heikinashi import *
-from .renko import *
diff --git a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/interact/fbrs/utils/misc.py b/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/interact/fbrs/utils/misc.py
deleted file mode 100644
index 65ce96dc5667494446110fda75e29243338e2b88..0000000000000000000000000000000000000000
--- a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/interact/fbrs/utils/misc.py
+++ /dev/null
@@ -1,62 +0,0 @@
-from functools import partial
-
-import torch
-import numpy as np
-
-
-def get_dims_with_exclusion(dim, exclude=None):
-    dims = list(range(dim))
-    if exclude is not None:
-        dims.remove(exclude)
-
-    return dims
-
-
-def get_unique_labels(mask):
-    return np.nonzero(np.bincount(mask.flatten() + 1))[0] - 1
-
-
-def get_bbox_from_mask(mask):
-    rows = np.any(mask, axis=1)
-    cols = np.any(mask, axis=0)
-    rmin, rmax = np.where(rows)[0][[0, -1]]
-    cmin, cmax = np.where(cols)[0][[0, -1]]
-
-    return rmin, rmax, cmin, cmax
-
-
-def expand_bbox(bbox, expand_ratio, min_crop_size=None):
-    rmin, rmax, cmin, cmax = bbox
-    rcenter = 0.5 * (rmin + rmax)
-    ccenter = 0.5 * (cmin + cmax)
-    height = expand_ratio * (rmax - rmin + 1)
-    width = expand_ratio * (cmax - cmin + 1)
-    if min_crop_size is not None:
-        height = max(height, min_crop_size)
-        width = max(width, min_crop_size)
-
-    rmin = int(round(rcenter - 0.5 * height))
-    rmax = int(round(rcenter + 0.5 * height))
-    cmin = int(round(ccenter - 0.5 * width))
-    cmax = int(round(ccenter + 0.5 * width))
-
-    return rmin, rmax, cmin, cmax
-
-
-def clamp_bbox(bbox, rmin, rmax, cmin, cmax):
-    return (max(rmin, bbox[0]), min(rmax, bbox[1]),
-            max(cmin, bbox[2]), min(cmax, bbox[3]))
-
-
-def get_bbox_iou(b1, b2):
-    h_iou = get_segments_iou(b1[:2], b2[:2])
-    w_iou = get_segments_iou(b1[2:4], b2[2:4])
-    return h_iou * w_iou
-
-
-def get_segments_iou(s1, s2):
-    a, b = s1
-    c, d = s2
-    intersection = max(0, min(b, d) - max(a, c) + 1)
-    union = max(1e-6, max(b, d) - min(a, c) + 1)
-    return intersection / union
diff --git a/spaces/MashiroSA/sovits-emu-voice-transform/flask_api.py b/spaces/MashiroSA/sovits-emu-voice-transform/flask_api.py
deleted file mode 100644
index b3f1e06847b2711a8e5841a4c95375445470d2ee..0000000000000000000000000000000000000000
--- a/spaces/MashiroSA/sovits-emu-voice-transform/flask_api.py
+++ /dev/null
@@ -1,60 +0,0 @@
-import io
-import logging
-
-import soundfile
-import torch
-import torchaudio
-from flask import Flask, request, send_file
-from flask_cors import CORS
-
-from inference.infer_tool import Svc, RealTimeVC
-
-app = Flask(__name__)
-
-CORS(app)
-
-logging.getLogger('numba').setLevel(logging.WARNING)
-
-
-@app.route("/voiceChangeModel", methods=["POST"])
-def voice_change_model():
-    request_form = request.form
-    wave_file = request.files.get("sample", None)
-    # Pitch shift information
-    f_pitch_change = float(request_form.get("fPitchChange", 0))
-    # Sample rate required by the DAW
-    daw_sample = int(float(request_form.get("sampleRate", 0)))
-    speaker_id = int(float(request_form.get("sSpeakId", 0)))
-    # Get the wav file over HTTP and convert it
-    input_wav_path = io.BytesIO(wave_file.read())
-
-    # Model inference
-    if raw_infer:
-        # out_audio, out_sr = svc_model.infer(speaker_id, f_pitch_change, input_wav_path)
-        out_audio, out_sr = svc_model.infer(speaker_id, f_pitch_change, input_wav_path, cluster_infer_ratio=0,
-                                            auto_predict_f0=False, noice_scale=0.4, f0_filter=False)
-        tar_audio = torchaudio.functional.resample(out_audio, svc_model.target_sample, daw_sample)
-    else:
-        out_audio = svc.process(svc_model, speaker_id, f_pitch_change, input_wav_path, cluster_infer_ratio=0,
-                                auto_predict_f0=False, noice_scale=0.4, f0_filter=False)
-        tar_audio = torchaudio.functional.resample(torch.from_numpy(out_audio), svc_model.target_sample, daw_sample)
-    # Return the audio
-    out_wav_path = io.BytesIO()
-    soundfile.write(out_wav_path, tar_audio.cpu().numpy(), daw_sample, format="wav")
-    out_wav_path.seek(0)
-    return send_file(out_wav_path, download_name="temp.wav", as_attachment=True)
-
-
-if __name__ == '__main__':
-    # If True, synthesize by direct slicing; if False, use cross-fading
-    # Setting the VST plugin slice time to 0.3-0.5s can reduce latency; direct slicing may pop at the slice joints, while cross-fading slightly overlaps the audio
-    # Pick whichever method is acceptable, or raise the VST max slice time to 1s; it is set to True here, which means higher latency but somewhat more stable audio quality
-    raw_infer = True
-    # Each model corresponds to exactly one config
-    model_name = "logs/32k/G_174000-Copy1.pth"
-    config_name = "configs/config.json"
-    cluster_model_path = "logs/44k/kmeans_10000.pt"
-    svc_model = Svc(model_name, config_name, cluster_model_path=cluster_model_path)
-    svc = RealTimeVC()
-    # These values match the VST plugin; changing them is not recommended
-    app.run(port=6842, host="0.0.0.0", debug=False, threaded=False)
diff --git a/spaces/Mattdoc99/CollisonChat2/README.md b/spaces/Mattdoc99/CollisonChat2/README.md
deleted file mode 100644
index 21d4c9d8830be34858c4538f3c3a97761f88fce6..0000000000000000000000000000000000000000
--- a/spaces/Mattdoc99/CollisonChat2/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: CollisonChat2
-emoji: 👀
-colorFrom: indigo
-colorTo: pink
-sdk: streamlit
-sdk_version: 1.17.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/cnn/alexnet.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/cnn/alexnet.py
deleted file mode 100644
index 89e36b8c7851f895d9ae7f07149f0e707456aab0..0000000000000000000000000000000000000000
--- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/cnn/alexnet.py
+++ /dev/null
@@ -1,61 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import logging
-
-import torch.nn as nn
-
-
-class AlexNet(nn.Module):
-    """AlexNet backbone.
-
-    Args:
-        num_classes (int): number of classes for classification.
- """ - - def __init__(self, num_classes=-1): - super(AlexNet, self).__init__() - self.num_classes = num_classes - self.features = nn.Sequential( - nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2), - nn.ReLU(inplace=True), - nn.MaxPool2d(kernel_size=3, stride=2), - nn.Conv2d(64, 192, kernel_size=5, padding=2), - nn.ReLU(inplace=True), - nn.MaxPool2d(kernel_size=3, stride=2), - nn.Conv2d(192, 384, kernel_size=3, padding=1), - nn.ReLU(inplace=True), - nn.Conv2d(384, 256, kernel_size=3, padding=1), - nn.ReLU(inplace=True), - nn.Conv2d(256, 256, kernel_size=3, padding=1), - nn.ReLU(inplace=True), - nn.MaxPool2d(kernel_size=3, stride=2), - ) - if self.num_classes > 0: - self.classifier = nn.Sequential( - nn.Dropout(), - nn.Linear(256 * 6 * 6, 4096), - nn.ReLU(inplace=True), - nn.Dropout(), - nn.Linear(4096, 4096), - nn.ReLU(inplace=True), - nn.Linear(4096, num_classes), - ) - - def init_weights(self, pretrained=None): - if isinstance(pretrained, str): - logger = logging.getLogger() - from ..runner import load_checkpoint - load_checkpoint(self, pretrained, strict=False, logger=logger) - elif pretrained is None: - # use default initializer - pass - else: - raise TypeError('pretrained must be a str or None') - - def forward(self, x): - - x = self.features(x) - if self.num_classes > 0: - x = x.view(x.size(0), 256 * 6 * 6) - x = self.classifier(x) - - return x diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/utils/timer.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/utils/timer.py deleted file mode 100644 index e3db7d497d8b374e18b5297e0a1d6eb186fd8cba..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/utils/timer.py +++ /dev/null @@ -1,118 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from time import time - - -class TimerError(Exception): - - def __init__(self, message): - self.message = message - super(TimerError, self).__init__(message) - - -class Timer: - """A flexible Timer class. - - :Example: - - >>> import time - >>> import annotator.uniformer.mmcv as mmcv - >>> with mmcv.Timer(): - >>> # simulate a code block that will run for 1s - >>> time.sleep(1) - 1.000 - >>> with mmcv.Timer(print_tmpl='it takes {:.1f} seconds'): - >>> # simulate a code block that will run for 1s - >>> time.sleep(1) - it takes 1.0 seconds - >>> timer = mmcv.Timer() - >>> time.sleep(0.5) - >>> print(timer.since_start()) - 0.500 - >>> time.sleep(0.5) - >>> print(timer.since_last_check()) - 0.500 - >>> print(timer.since_start()) - 1.000 - """ - - def __init__(self, start=True, print_tmpl=None): - self._is_running = False - self.print_tmpl = print_tmpl if print_tmpl else '{:.3f}' - if start: - self.start() - - @property - def is_running(self): - """bool: indicate whether the timer is running""" - return self._is_running - - def __enter__(self): - self.start() - return self - - def __exit__(self, type, value, traceback): - print(self.print_tmpl.format(self.since_last_check())) - self._is_running = False - - def start(self): - """Start the timer.""" - if not self._is_running: - self._t_start = time() - self._is_running = True - self._t_last = time() - - def since_start(self): - """Total time since the timer is started. - - Returns (float): Time in seconds. - """ - if not self._is_running: - raise TimerError('timer is not running') - self._t_last = time() - return self._t_last - self._t_start - - def since_last_check(self): - """Time since the last checking. 
-
-        Either :func:`since_start` or :func:`since_last_check` is a checking
-        operation.
-
-        Returns (float): Time in seconds.
-        """
-        if not self._is_running:
-            raise TimerError('timer is not running')
-        dur = time() - self._t_last
-        self._t_last = time()
-        return dur
-
-
-_g_timers = {}  # global timers
-
-
-def check_time(timer_id):
-    """Add check points in a single line.
-
-    This method is suitable for running a task on a list of items. A timer will
-    be registered when the method is called for the first time.
-
-    :Example:
-
-    >>> import time
-    >>> import annotator.uniformer.mmcv as mmcv
-    >>> for i in range(1, 6):
-    >>>     # simulate a code block
-    >>>     time.sleep(i)
-    >>>     mmcv.check_time('task1')
-    2.000
-    3.000
-    4.000
-    5.000
-
-    Args:
-        timer_id (str): Timer identifier.
-    """
-    if timer_id not in _g_timers:
-        _g_timers[timer_id] = Timer()
-        return 0
-    else:
-        return _g_timers[timer_id].since_last_check()
diff --git a/spaces/Mostafa92/detecting_plant_leaf_diseases/app.py b/spaces/Mostafa92/detecting_plant_leaf_diseases/app.py
deleted file mode 100644
index 9414e9589e3f47867796ef89ab55a0ea0a23d4cd..0000000000000000000000000000000000000000
--- a/spaces/Mostafa92/detecting_plant_leaf_diseases/app.py
+++ /dev/null
@@ -1,159 +0,0 @@
-import os
-import shutil
-import random as r
-import gradio as gr
-import torch
-import torchvision.transforms as tt
-import torch.nn as nn
-import torch.nn.functional as F
-
-
-"""Part(1) We add the model function and class and load the model in evaluation mode."""
-# The model function and class
-def conv_block(in_channels, out_channels, pool=False):
-    layers = [nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1),
-              nn.BatchNorm2d(out_channels),
-              nn.ReLU(inplace=True)]
-    if pool: layers.append(nn.MaxPool2d(4))
-    return nn.Sequential(*layers)
-
-class PlantDiseasesResNet(nn.Module):
-    def __init__(self, in_channels, num_classes):
-        super().__init__()
-
-        self.conv1 = conv_block(in_channels, 64)
-        self.conv2 = conv_block(64, 128, pool=True)# output = 128 x 64 x 64
-        self.res1 = nn.Sequential(conv_block(128, 128), conv_block(128, 128))
-
-        self.conv3 = conv_block(128, 256, pool=True)# output = 256 x 16 x 16
-        self.conv4 = conv_block(256, 512, pool=True)# output = 256 x 4 x 4
-        self.res2 = nn.Sequential(conv_block(512, 512), conv_block(512, 512))
-
-        self.classifier = nn.Sequential(nn.MaxPool2d(4),# output = 512 x 1 x 1
-                                        nn.Flatten(),
-                                        nn.Dropout(0.2),
-                                        nn.Linear(512, num_classes))
-
-    def forward(self, xb):
-        out = self.conv1(xb)
-        out = self.conv2(out)
-        out = self.res1(out) + out
-
-        out = self.conv3(out)
-        out = self.conv4(out)
-        out = self.res2(out) + out
-
-        out = self.classifier(out)
-        return out
-
-model = PlantDiseasesResNet(3, 39)
-model.load_state_dict(torch.load('models/plant_leaf_diseases_detection_ResNet-v-1-1(99.3%acc).pth', map_location=torch.device('cpu')))
-model.eval()
-
-
-
-"""Part(2) To add more testing examples there are a few things we need to do,
-see the comments above every line to understand what it does."""
-
-# here is the directory that will hold our examples and below it the directory of our test dataset.
-examples_dir = './examples'
-test_data_dir = './test_data'
-# we get a sorted list of our classes
-data_classes = sorted(os.listdir(test_data_dir))
-
-# this function will return a list with all the images in our test dataset in it to be used as examples in this demo web app.
-def get_exampels():
-    examples = []
-    # first we clean the examples directory before we start
-    shutil.rmtree(examples_dir)
-    os.mkdir(examples_dir)
-    for dir in data_classes:
-        count = 0
-        for img in os.listdir(f'{test_data_dir}/{dir}'):
-            # now that we are sure that we have an empty examples directory we can copy the images to it
-            shutil.copy2(f'{test_data_dir}/{dir}/{img}', f'{examples_dir}/{img}')
-            # and we can change the names of the images
-            os.rename(f'{examples_dir}/{img}', f'{examples_dir}/{dir}_{count}.jpg')
-            # and append it to our examples list
-            examples.append(f'{examples_dir}/{dir}_{count}.jpg')
-            count += 1
-    return examples
-examples = get_exampels()
-r.shuffle(examples)
-
-
-
-
-"""Part(3) How many plants do we have and how many diseases? And what are they? A: This function will get us what we need"""
-#
-def get_plants_diseases(classes):
-    plants = []
-    diseases = []
-    # These two strings will be added to the description
-    diseases_str = ''
-    plants_str = ''
-    for plant in classes:
-        # get only the plant name by splitting at ___ and choosing the first part
-        plant_name = plant.split('___')[0]
-        # to add the plant name once without adding the Background_without_leaves category
-        if plant_name not in plants and plant_name != 'Background_without_leaves':
-            plants.append(plant_name)
-            plants_str += f'{plant_name.replace(",_bell", "")}, '
-        # to add the diseases without adding the healthy or the Background_without_leaves category.
-        if 'healthy' not in plant and plant != 'Background_without_leaves':
-            diseases.append(plant)
-            diseases_str += f'{plant.replace("___", " ")}, '
-    return plants, plants_str, diseases, diseases_str
-
-
-plants, plants_str, diseases, diseases_str = get_plants_diseases(data_classes)
-
-
-
-
-
-"""Part(4) Creating the inference function """
-# This function will take an image as input and output a dict with 5 keys and values
-# the keys are the predicted categories and the values are the probabilities of these categories
-# the first key, value are the predicted class and prob of the input image
-def inference(input_img):
-    tfms = tt.Compose([tt.Resize((256, 256)),
-                       tt.ToTensor()])
-    # applying transformation to the image
-    tensor = tfms(input_img)
-    batch = tensor.unsqueeze(0)
-    # if a GPU is available use it to maximize the speed
-    if torch.cuda.is_available():
-        batch = batch.to('cuda')
-        model.to('cuda')
-    # pass the image tensor to the model without calculating the gradients
-    with torch.no_grad():
-        output = model(batch)
-    # get the probs using softmax
-    probs = F.softmax(output[0], dim=0)
-    # get the top 5 probs and categories using torch.topk
-    top_5_prob, top_5_classes = torch.topk(probs, 5)
-    result = {}
-    # add the result to the dictionary as keys and values
-    for i in range(top_5_prob.size(0)):
-        result[data_classes[top_5_classes[i]]] = top_5_prob[i].item()
-    return result
-
-
-
-
-
-"""Part(5) Putting it all together"""
-# what input the model should take
-inputs = gr.inputs.Image(type='pil')
-# what output to give
-outputs = gr.outputs.Label(type='confidences', num_top_classes=5)
-title = 'Plant Leaf Diseases Detection With ResNet9 Architecture (99.3% Validation Accuracy)'
-description = f'This Model can classify {len(plants)} types of plant leaves, which are {plants_str}.\n\nAnd can also classify {len(diseases)} diseases associated with those plants like {diseases_str} with 99.1% accuracy on the test dataset of 5000 examples.'
- -# Reading the article -with open('article.md') as f: - article = f.read() - -# Running the program with all of these parts together -gr.Interface(inference, inputs, outputs, title=title, description=description, article=article, examples=examples, analytics_enabled=False).launch() diff --git a/spaces/Mrleo/MyChatGPT/Dockerfile b/spaces/Mrleo/MyChatGPT/Dockerfile deleted file mode 100644 index 8cbd335b09b1d1975bfd83a053b5fcaf398147ea..0000000000000000000000000000000000000000 --- a/spaces/Mrleo/MyChatGPT/Dockerfile +++ /dev/null @@ -1,14 +0,0 @@ -FROM python:3.9 as builder -RUN apt-get update && apt-get install -y build-essential -COPY requirements.txt . -RUN pip install --user -r requirements.txt - -FROM python:3.9 -MAINTAINER iskoldt -COPY --from=builder /root/.local /root/.local -ENV PATH=/root/.local/bin:$PATH -COPY . /app -WORKDIR /app -ENV my_api_key empty -ENV dockerrun yes -CMD ["python3", "-u", "ChuanhuChatbot.py", "2>&1", "|", "tee", "/var/log/application.log"] diff --git a/spaces/NATSpeech/PortaSpeech/modules/tts/fs2_orig.py b/spaces/NATSpeech/PortaSpeech/modules/tts/fs2_orig.py deleted file mode 100644 index 89ef47a9a017d8b54b108f58d3445d148f8ea44e..0000000000000000000000000000000000000000 --- a/spaces/NATSpeech/PortaSpeech/modules/tts/fs2_orig.py +++ /dev/null @@ -1,102 +0,0 @@ -import torch -from torch import nn -from modules.commons.layers import Embedding -from modules.commons.nar_tts_modules import EnergyPredictor, PitchPredictor -from modules.tts.commons.align_ops import expand_states -from modules.tts.fs import FastSpeech -from utils.audio.cwt import cwt2f0, get_lf0_cwt -from utils.audio.pitch.utils import denorm_f0, f0_to_coarse, norm_f0 -import numpy as np - - -class FastSpeech2Orig(FastSpeech): - def __init__(self, dict_size, hparams, out_dims=None): - super().__init__(dict_size, hparams, out_dims) - predictor_hidden = hparams['predictor_hidden'] if hparams['predictor_hidden'] > 0 else self.hidden_size - if hparams['use_energy_embed']: - self.energy_embed = Embedding(300, self.hidden_size, 0) - self.energy_predictor = EnergyPredictor( - self.hidden_size, n_chans=predictor_hidden, - n_layers=5, dropout_rate=0.1, odim=2, - kernel_size=hparams['predictor_kernel']) - if hparams['pitch_type'] == 'cwt' and hparams['use_pitch_embed']: - self.pitch_predictor = PitchPredictor( - self.hidden_size, n_chans=predictor_hidden, - n_layers=5, dropout_rate=0.1, odim=11, - kernel_size=hparams['predictor_kernel']) - self.cwt_stats_layers = nn.Sequential( - nn.Linear(self.hidden_size, self.hidden_size), nn.ReLU(), - nn.Linear(self.hidden_size, self.hidden_size), nn.ReLU(), nn.Linear(self.hidden_size, 2)) - - def forward(self, txt_tokens, mel2ph=None, spk_embed=None, spk_id=None, - f0=None, uv=None, energy=None, infer=False, **kwargs): - ret = {} - encoder_out = self.encoder(txt_tokens) # [B, T, C] - src_nonpadding = (txt_tokens > 0).float()[:, :, None] - style_embed = self.forward_style_embed(spk_embed, spk_id) - - # add dur - dur_inp = (encoder_out + style_embed) * src_nonpadding - mel2ph = self.forward_dur(dur_inp, mel2ph, txt_tokens, ret) - tgt_nonpadding = (mel2ph > 0).float()[:, :, None] - decoder_inp = decoder_inp_ = expand_states(encoder_out, mel2ph) - - # add pitch and energy embed - if self.hparams['use_pitch_embed']: - pitch_inp = (decoder_inp_ + style_embed) * tgt_nonpadding - decoder_inp = decoder_inp + self.forward_pitch(pitch_inp, f0, uv, mel2ph, ret, encoder_out) - - # add pitch and energy embed - if self.hparams['use_energy_embed']: - energy_inp = (decoder_inp_ + 
style_embed) * tgt_nonpadding - decoder_inp = decoder_inp + self.forward_energy(energy_inp, energy, ret) - - # decoder input - ret['decoder_inp'] = decoder_inp = (decoder_inp + style_embed) * tgt_nonpadding - if self.hparams['dec_inp_add_noise']: - B, T, _ = decoder_inp.shape - z = kwargs.get('adv_z', torch.randn([B, T, self.z_channels])).to(decoder_inp.device) - ret['adv_z'] = z - decoder_inp = torch.cat([decoder_inp, z], -1) - decoder_inp = self.dec_inp_noise_proj(decoder_inp) * tgt_nonpadding - ret['mel_out'] = self.forward_decoder(decoder_inp, tgt_nonpadding, ret, infer=infer, **kwargs) - return ret - - def forward_pitch(self, decoder_inp, f0, uv, mel2ph, ret, encoder_out=None): - if self.hparams['pitch_type'] == 'cwt': - decoder_inp = decoder_inp.detach() + self.hparams['predictor_grad'] * (decoder_inp - decoder_inp.detach()) - pitch_padding = mel2ph == 0 - ret['cwt'] = cwt_out = self.pitch_predictor(decoder_inp) - stats_out = self.cwt_stats_layers(encoder_out[:, 0, :]) # [B, 2] - mean = ret['f0_mean'] = stats_out[:, 0] - std = ret['f0_std'] = stats_out[:, 1] - cwt_spec = cwt_out[:, :, :10] - if f0 is None: - std = std * self.hparams['cwt_std_scale'] - f0 = self.cwt2f0_norm(cwt_spec, mean, std, mel2ph) - if self.hparams['use_uv']: - assert cwt_out.shape[-1] == 11 - uv = cwt_out[:, :, -1] > 0 - ret['f0_denorm'] = f0_denorm = denorm_f0(f0, uv if self.hparams['use_uv'] else None, - pitch_padding=pitch_padding) - pitch = f0_to_coarse(f0_denorm) # start from 0 - pitch_embed = self.pitch_embed(pitch) - return pitch_embed - else: - return super(FastSpeech2Orig, self).forward_pitch(decoder_inp, f0, uv, mel2ph, ret, encoder_out) - - def forward_energy(self, decoder_inp, energy, ret): - decoder_inp = decoder_inp.detach() + self.hparams['predictor_grad'] * (decoder_inp - decoder_inp.detach()) - ret['energy_pred'] = energy_pred = self.energy_predictor(decoder_inp)[:, :, 0] - energy_embed_inp = energy_pred if energy is None else energy - energy_embed_inp = torch.clamp(energy_embed_inp * 256 // 4, min=0, max=255).long() - energy_embed = self.energy_embed(energy_embed_inp) - return energy_embed - - def cwt2f0_norm(self, cwt_spec, mean, std, mel2ph): - _, cwt_scales = get_lf0_cwt(np.ones(10)) - f0 = cwt2f0(cwt_spec, mean, std, cwt_scales) - f0 = torch.cat( - [f0] + [f0[:, -1:]] * (mel2ph.shape[1] - f0.shape[1]), 1) - f0_norm = norm_f0(f0, None) - return f0_norm diff --git a/spaces/NATSpeech/PortaSpeech/utils/audio/__init__.py b/spaces/NATSpeech/PortaSpeech/utils/audio/__init__.py deleted file mode 100644 index e8cc4466b27eeda4026e945a5388dca04817e8a1..0000000000000000000000000000000000000000 --- a/spaces/NATSpeech/PortaSpeech/utils/audio/__init__.py +++ /dev/null @@ -1,82 +0,0 @@ -import librosa -import numpy as np -import pyloudnorm as pyln - -from utils.audio.vad import trim_long_silences - - -def librosa_pad_lr(x, fsize, fshift, pad_sides=1): - '''compute right padding (final frame) or both sides padding (first and final frames) - ''' - assert pad_sides in (1, 2) - # return int(fsize // 2) - pad = (x.shape[0] // fshift + 1) * fshift - x.shape[0] - if pad_sides == 1: - return 0, pad - else: - return pad // 2, pad // 2 + pad % 2 - - -def amp_to_db(x): - return 20 * np.log10(np.maximum(1e-5, x)) - - -def db_to_amp(x): - return 10.0 ** (x * 0.05) - - -def normalize(S, min_level_db): - return (S - min_level_db) / -min_level_db - - -def denormalize(D, min_level_db): - return (D * -min_level_db) + min_level_db - - -def librosa_wav2spec(wav_path, - fft_size=1024, - hop_size=256, - win_length=1024, - 
window="hann", - num_mels=80, - fmin=80, - fmax=-1, - eps=1e-6, - sample_rate=22050, - loud_norm=False, - trim_long_sil=False): - if isinstance(wav_path, str): - if trim_long_sil: - wav, _, _ = trim_long_silences(wav_path, sample_rate) - else: - wav, _ = librosa.core.load(wav_path, sr=sample_rate) - else: - wav = wav_path - - if loud_norm: - meter = pyln.Meter(sample_rate) # create BS.1770 meter - loudness = meter.integrated_loudness(wav) - wav = pyln.normalize.loudness(wav, loudness, -22.0) - if np.abs(wav).max() > 1: - wav = wav / np.abs(wav).max() - - # get amplitude spectrogram - x_stft = librosa.stft(wav, n_fft=fft_size, hop_length=hop_size, - win_length=win_length, window=window, pad_mode="constant") - linear_spc = np.abs(x_stft) # (n_bins, T) - - # get mel basis - fmin = 0 if fmin == -1 else fmin - fmax = sample_rate / 2 if fmax == -1 else fmax - mel_basis = librosa.filters.mel(sample_rate, fft_size, num_mels, fmin, fmax) - - # calculate mel spec - mel = mel_basis @ linear_spc - mel = np.log10(np.maximum(eps, mel)) # (n_mel_bins, T) - l_pad, r_pad = librosa_pad_lr(wav, fft_size, hop_size, 1) - wav = np.pad(wav, (l_pad, r_pad), mode='constant', constant_values=0.0) - wav = wav[:mel.shape[1] * hop_size] - - # log linear spec - linear_spc = np.log10(np.maximum(eps, linear_spc)) - return {'wav': wav, 'mel': mel.T, 'linear': linear_spc.T, 'mel_basis': mel_basis} diff --git a/spaces/NCTCMumbai/NCTC/models/official/nlp/modeling/layers/attention_test.py b/spaces/NCTCMumbai/NCTC/models/official/nlp/modeling/layers/attention_test.py deleted file mode 100644 index ceb96f5084d795cdbafa7cdb352fb4692034f803..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/official/nlp/modeling/layers/attention_test.py +++ /dev/null @@ -1,255 +0,0 @@ -# Copyright 2019 The TensorFlow Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# ============================================================================== -"""Tests for the attention layer.""" - -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -from absl.testing import parameterized -import numpy as np -import tensorflow as tf - -from tensorflow.python.keras import keras_parameterized # pylint: disable=g-direct-tensorflow-import -from official.nlp.modeling.layers import attention - - -# This decorator runs the test in V1, V2-Eager, and V2-Functional mode. It -# guarantees forward compatibility of this code for the V2 switchover. 
-@keras_parameterized.run_all_keras_modes -class MultiHeadAttentionTest(keras_parameterized.TestCase): - - @parameterized.named_parameters( - ("key_value_same_proj", None, None, [40, 80]), - ("key_value_different_proj", 32, 60, [40, 60]), - ) - def test_non_masked_attention(self, value_size, output_shape, output_dims): - """Test that the attention layer can be created without a mask tensor.""" - test_layer = attention.MultiHeadAttention( - num_heads=12, - key_size=64, - value_size=value_size, - output_shape=output_shape) - # Create a 3-dimensional input (the first dimension is implicit). - query = tf.keras.Input(shape=(40, 80)) - value = tf.keras.Input(shape=(20, 80)) - output = test_layer([query, value]) - self.assertEqual(output.shape.as_list(), [None] + output_dims) - - def test_non_masked_self_attention(self): - """Test with one input (self-attenntion) and no mask tensor.""" - test_layer = attention.MultiHeadAttention(num_heads=12, key_size=64) - # Create a 3-dimensional input (the first dimension is implicit). - query = tf.keras.Input(shape=(40, 80)) - output = test_layer([query, query]) - self.assertEqual(output.shape.as_list(), [None, 40, 80]) - - def test_attention_scores(self): - """Test attention outputs with coefficients.""" - test_layer = attention.MultiHeadAttention( - num_heads=12, key_size=64, return_attention_scores=True) - # Create a 3-dimensional input (the first dimension is implicit). - query = tf.keras.Input(shape=(40, 80)) - output, coef = test_layer([query, query]) - self.assertEqual(output.shape.as_list(), [None, 40, 80]) - self.assertEqual(coef.shape.as_list(), [None, 12, 40, 40]) - - @parameterized.named_parameters(("with_bias", True), ("no_bias", False)) - def test_masked_attention(self, use_bias): - """Test with a mask tensor.""" - test_layer = attention.MultiHeadAttention( - num_heads=2, key_size=2, use_bias=use_bias) - # Create a 3-dimensional input (the first dimension is implicit). - batch_size = 3 - query = tf.keras.Input(shape=(4, 8)) - value = tf.keras.Input(shape=(2, 8)) - mask_tensor = tf.keras.Input(shape=(4, 2)) - output = test_layer([query, value], mask_tensor) - - # Create a model containing the test layer. - model = tf.keras.Model([query, value, mask_tensor], output) - - # Generate data for the input (non-mask) tensors. - from_data = 10 * np.random.random_sample((batch_size, 4, 8)) - to_data = 10 * np.random.random_sample((batch_size, 2, 8)) - - # Invoke the data with a random set of mask data. This should mask at least - # one element. - mask_data = np.random.randint(2, size=(batch_size, 4, 2)) - masked_output_data = model.predict([from_data, to_data, mask_data]) - - # Invoke the same data, but with a null mask (where no elements are masked). - null_mask_data = np.ones((batch_size, 4, 2)) - unmasked_output_data = model.predict([from_data, to_data, null_mask_data]) - - # Because one data is masked and one is not, the outputs should not be the - # same. - self.assertNotAllClose(masked_output_data, unmasked_output_data) - - # Tests the layer with three inputs: Q, K, V. - key = tf.keras.Input(shape=(2, 8)) - output = test_layer([query, value, key], mask_tensor) - model = tf.keras.Model([query, value, key, mask_tensor], output) - - masked_output_data = model.predict([from_data, to_data, to_data, mask_data]) - unmasked_output_data = model.predict( - [from_data, to_data, to_data, null_mask_data]) - # Because one data is masked and one is not, the outputs should not be the - # same. 
- self.assertNotAllClose(masked_output_data, unmasked_output_data) - - if use_bias: - self.assertLen(test_layer._query_dense.trainable_variables, 2) - self.assertLen(test_layer._output_dense.trainable_variables, 2) - else: - self.assertLen(test_layer._query_dense.trainable_variables, 1) - self.assertLen(test_layer._output_dense.trainable_variables, 1) - - def test_initializer(self): - """Test with a specified initializer.""" - test_layer = attention.MultiHeadAttention( - num_heads=12, - key_size=64, - kernel_initializer=tf.keras.initializers.TruncatedNormal(stddev=0.02)) - # Create a 3-dimensional input (the first dimension is implicit). - query = tf.keras.Input(shape=(40, 80)) - output = test_layer([query, query]) - self.assertEqual(output.shape.as_list(), [None, 40, 80]) - - @parameterized.named_parameters( - ("4d_inputs_one_free_batch", [3, 4], [3, 2], [4, 2], (2,)), - ("4D_inputs_2D_attention", [3, 4], [3, 2], [3, 4, 3, 2], (1, 2)), - ("5D_inputs_2D_attention", [5, 3, 4], [5, 3, 2], [3, 4, 3, 2], (2, 3))) - def test_high_dim_attention(self, q_dims, v_dims, mask_dims, attention_axes): - """Test with a mask tensor.""" - test_layer = attention.MultiHeadAttention( - num_heads=2, key_size=2, attention_axes=attention_axes) - batch_size, hidden_size = 3, 8 - # Generate data for the input (non-mask) tensors. - query_shape = [batch_size] + q_dims + [hidden_size] - value_shape = [batch_size] + v_dims + [hidden_size] - mask_shape = [batch_size] + mask_dims - query = 10 * np.random.random_sample(query_shape) - value = 10 * np.random.random_sample(value_shape) - - # Invoke the data with a random set of mask data. This should mask at least - # one element. - mask_data = np.random.randint(2, size=mask_shape).astype("bool") - output = test_layer([query, value], mask_data) - - # Invoke the same data, but with a null mask (where no elements are masked). - null_mask_data = np.ones(mask_shape) - unmasked_output = test_layer([query, value], null_mask_data) - # Because one data is masked and one is not, the outputs should not be the - # same. - self.assertNotAllClose(output, unmasked_output) - - -class SubclassAttention(attention.MultiHeadAttention): - - def _build_attention(self, qkv_rank): - pass - - def _compute_attention(self, - query_tensor, - key_tensor, - value_tensor, - attention_mask=None): - return value_tensor, None - - -@keras_parameterized.run_all_keras_modes -class AttentionSubclassTest(keras_parameterized.TestCase): - - def test_initializer(self): - """Test with a specified initializer.""" - test_layer = SubclassAttention( - num_heads=12, - key_size=64) - # Create a 3-dimensional input (the first dimension is implicit). - query = tf.keras.Input(shape=(40, 80)) - output = test_layer([query, query]) - self.assertEqual(output.shape.as_list(), [None, 40, 80]) - - -def _create_cache(batch_size, init_decode_length, num_heads, head_size): - return { - "key": - tf.zeros([batch_size, init_decode_length, num_heads, head_size], - dtype=tf.float32), - "value": - tf.zeros([batch_size, init_decode_length, num_heads, head_size], - dtype=tf.float32) - } - - -@keras_parameterized.run_all_keras_modes -class CachedAttentionTest(keras_parameterized.TestCase): - - def test_masked_attention(self): - """Test with a mask tensor.""" - num_heads, head_size = 2, 2 - # Create a 3-dimensional input (the first dimension is implicit). - from_seq_length = 4 - batch_size = 3 - # GPU/CPU case. - init_decode_length = 0 - # Directly tests the keras layer. 
- cache = _create_cache(batch_size, init_decode_length, num_heads, head_size) - layer = attention.CachedAttention(num_heads=num_heads, key_size=head_size) - - # Generate data for the input (non-mask) tensors. - from_data = tf.zeros((batch_size, from_seq_length, 8), dtype=np.float32) - # Invoke the data with a random set of mask data. This should mask at least - # one element. - mask_data = np.random.randint( - 2, size=(batch_size, from_seq_length, from_seq_length)) - masked_output_data, cache = layer([from_data, from_data], mask_data, cache) - self.assertEqual(masked_output_data.shape, (3, 4, 8)) - self.assertEqual(cache["value"].shape, (3, 4, 2, 2)) - - # Tests inputs without cache. - masked_output_data, cache = layer([from_data, from_data, mask_data]) - self.assertEqual(masked_output_data.shape, (3, 4, 8)) - self.assertIsNone(cache) - - def test_padded_decode(self): - """Test with a mask tensor.""" - num_heads, head_size = 2, 2 - from_seq_length = 4 - # TPU decoding should pre-allocate the entire sequence. - batch_size = 3 - init_decode_length = from_seq_length - - # Directly tests the keras layer. - cache = _create_cache(batch_size, init_decode_length, num_heads, head_size) - layer = attention.CachedAttention(num_heads=num_heads, key_size=head_size) - - # Generate data for the input (non-mask) tensors. - from_data = tf.zeros((batch_size, from_seq_length, 8), dtype=np.float32) - decode_loop_step = 2 - mask_data = np.random.randint( - 2, size=(batch_size, from_seq_length, from_seq_length), dtype=np.int32) - # Testing the invocation directly as Keras cannot consume inputs correctly. - masked_output_data, cache = layer([from_data, from_data], - mask_data, - cache, - decode_loop_step=decode_loop_step) - self.assertEqual(masked_output_data.shape, (3, 4, 8)) - self.assertEqual(cache["value"].shape, (3, 4, 2, 2)) - - -if __name__ == "__main__": - tf.test.main() diff --git a/spaces/NCTCMumbai/NCTC/models/official/vision/detection/modeling/losses.py b/spaces/NCTCMumbai/NCTC/models/official/vision/detection/modeling/losses.py deleted file mode 100644 index 4b993061b3c51c9ae6456d84a79f7fea5d74c77e..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/official/vision/detection/modeling/losses.py +++ /dev/null @@ -1,542 +0,0 @@ -# Copyright 2018 The TensorFlow Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# ============================================================================== -"""Losses used for detection models.""" - -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -from absl import logging -import tensorflow as tf - - -def focal_loss(logits, targets, alpha, gamma, normalizer): - """Compute the focal loss between `logits` and the golden `target` values. - - Focal loss = -(1-pt)^gamma * log(pt) - where pt is the probability of being classified to the true class. - - Args: - logits: A float32 tensor of size - [batch, height_in, width_in, num_predictions]. 
- targets: A float32 tensor of size - [batch, height_in, width_in, num_predictions]. - alpha: A float32 scalar multiplying alpha to the loss from positive examples - and (1-alpha) to the loss from negative examples. - gamma: A float32 scalar modulating loss from hard and easy examples. - normalizer: A float32 scalar normalizes the total loss from all examples. - - Returns: - loss: A float32 Tensor of size [batch, height_in, width_in, num_predictions] - representing normalized loss on the prediction map. - """ - with tf.name_scope('focal_loss'): - positive_label_mask = tf.math.equal(targets, 1.0) - cross_entropy = ( - tf.nn.sigmoid_cross_entropy_with_logits(labels=targets, logits=logits)) - # Below are comments/derivations for computing modulator. - # For brevity, let x = logits, z = targets, r = gamma, and p_t = sigmod(x) - # for positive samples and 1 - sigmoid(x) for negative examples. - # - # The modulator, defined as (1 - P_t)^r, is a critical part in focal loss - # computation. For r > 0, it puts more weights on hard examples, and less - # weights on easier ones. However if it is directly computed as (1 - P_t)^r, - # its back-propagation is not stable when r < 1. The implementation here - # resolves the issue. - # - # For positive samples (labels being 1), - # (1 - p_t)^r - # = (1 - sigmoid(x))^r - # = (1 - (1 / (1 + exp(-x))))^r - # = (exp(-x) / (1 + exp(-x)))^r - # = exp(log((exp(-x) / (1 + exp(-x)))^r)) - # = exp(r * log(exp(-x)) - r * log(1 + exp(-x))) - # = exp(- r * x - r * log(1 + exp(-x))) - # - # For negative samples (labels being 0), - # (1 - p_t)^r - # = (sigmoid(x))^r - # = (1 / (1 + exp(-x)))^r - # = exp(log((1 / (1 + exp(-x)))^r)) - # = exp(-r * log(1 + exp(-x))) - # - # Therefore one unified form for positive (z = 1) and negative (z = 0) - # samples is: - # (1 - p_t)^r = exp(-r * z * x - r * log(1 + exp(-x))). - neg_logits = -1.0 * logits - modulator = tf.math.exp(gamma * targets * neg_logits - - gamma * tf.math.log1p(tf.math.exp(neg_logits))) - loss = modulator * cross_entropy - weighted_loss = tf.where(positive_label_mask, alpha * loss, - (1.0 - alpha) * loss) - weighted_loss /= normalizer - return weighted_loss - - -class RpnScoreLoss(object): - """Region Proposal Network score loss function.""" - - def __init__(self, params): - self._rpn_batch_size_per_im = params.rpn_batch_size_per_im - self._binary_crossentropy = tf.keras.losses.BinaryCrossentropy( - reduction=tf.keras.losses.Reduction.SUM, from_logits=True) - - def __call__(self, score_outputs, labels): - """Computes total RPN detection loss. - - Computes total RPN detection loss including box and score from all levels. - - Args: - score_outputs: an OrderDict with keys representing levels and values - representing scores in [batch_size, height, width, num_anchors]. - labels: the dictionary that returned from dataloader that includes - groundturth targets. - - Returns: - rpn_score_loss: a scalar tensor representing total score loss. - """ - with tf.name_scope('rpn_loss'): - levels = sorted(score_outputs.keys()) - - score_losses = [] - for level in levels: - score_losses.append( - self._rpn_score_loss( - score_outputs[level], - labels[level], - normalizer=tf.cast( - tf.shape(score_outputs[level])[0] * - self._rpn_batch_size_per_im, dtype=tf.float32))) - - # Sums per level losses to total loss. 
- return tf.math.add_n(score_losses) - - def _rpn_score_loss(self, score_outputs, score_targets, normalizer=1.0): - """Computes score loss.""" - # score_targets has three values: - # (1) score_targets[i]=1, the anchor is a positive sample. - # (2) score_targets[i]=0, negative. - # (3) score_targets[i]=-1, the anchor is don't care (ignore). - with tf.name_scope('rpn_score_loss'): - mask = tf.math.logical_or(tf.math.equal(score_targets, 1), - tf.math.equal(score_targets, 0)) - - score_targets = tf.math.maximum(score_targets, - tf.zeros_like(score_targets)) - - score_targets = tf.expand_dims(score_targets, axis=-1) - score_outputs = tf.expand_dims(score_outputs, axis=-1) - score_loss = self._binary_crossentropy( - score_targets, score_outputs, sample_weight=mask) - - score_loss /= normalizer - return score_loss - - -class RpnBoxLoss(object): - """Region Proposal Network box regression loss function.""" - - def __init__(self, params): - logging.info('RpnBoxLoss huber_loss_delta %s', params.huber_loss_delta) - # The delta is typically around the mean value of regression target. - # for instances, the regression targets of 512x512 input with 6 anchors on - # P2-P6 pyramid is about [0.1, 0.1, 0.2, 0.2]. - self._huber_loss = tf.keras.losses.Huber( - delta=params.huber_loss_delta, reduction=tf.keras.losses.Reduction.SUM) - - def __call__(self, box_outputs, labels): - """Computes total RPN detection loss. - - Computes total RPN detection loss including box and score from all levels. - - Args: - box_outputs: an OrderDict with keys representing levels and values - representing box regression targets in - [batch_size, height, width, num_anchors * 4]. - labels: the dictionary that returned from dataloader that includes - groundturth targets. - - Returns: - rpn_box_loss: a scalar tensor representing total box regression loss. - """ - with tf.name_scope('rpn_loss'): - levels = sorted(box_outputs.keys()) - - box_losses = [] - for level in levels: - box_losses.append(self._rpn_box_loss(box_outputs[level], labels[level])) - - # Sum per level losses to total loss. - return tf.add_n(box_losses) - - def _rpn_box_loss(self, box_outputs, box_targets, normalizer=1.0): - """Computes box regression loss.""" - with tf.name_scope('rpn_box_loss'): - mask = tf.cast(tf.not_equal(box_targets, 0.0), dtype=tf.float32) - box_targets = tf.expand_dims(box_targets, axis=-1) - box_outputs = tf.expand_dims(box_outputs, axis=-1) - box_loss = self._huber_loss(box_targets, box_outputs, sample_weight=mask) - # The loss is normalized by the sum of non-zero weights and additional - # normalizer provided by the function caller. Using + 0.01 here to avoid - # division by zero. - box_loss /= normalizer * (tf.reduce_sum(mask) + 0.01) - return box_loss - - -class FastrcnnClassLoss(object): - """Fast R-CNN classification loss function.""" - - def __init__(self): - self._categorical_crossentropy = tf.keras.losses.CategoricalCrossentropy( - reduction=tf.keras.losses.Reduction.SUM, from_logits=True) - - def __call__(self, class_outputs, class_targets): - """Computes the class loss (Fast-RCNN branch) of Mask-RCNN. - - This function implements the classification loss of the Fast-RCNN. - - The classification loss is softmax on all RoIs. - Reference: https://github.com/facebookresearch/Detectron/blob/master/detectron/modeling/fast_rcnn_heads.py # pylint: disable=line-too-long - - Args: - class_outputs: a float tensor representing the class prediction for each box - with a shape of [batch_size, num_boxes, num_classes]. 
- class_targets: a float tensor representing the class label for each box - with a shape of [batch_size, num_boxes]. - - Returns: - a scalar tensor representing total class loss. - """ - with tf.name_scope('fast_rcnn_loss'): - batch_size, num_boxes, num_classes = class_outputs.get_shape().as_list() - class_targets = tf.cast(class_targets, dtype=tf.int32) - class_targets_one_hot = tf.one_hot(class_targets, num_classes) - return self._fast_rcnn_class_loss(class_outputs, class_targets_one_hot, - normalizer=batch_size * num_boxes / 2.0) - - def _fast_rcnn_class_loss(self, class_outputs, class_targets_one_hot, - normalizer): - """Computes classification loss.""" - with tf.name_scope('fast_rcnn_class_loss'): - class_loss = self._categorical_crossentropy(class_targets_one_hot, - class_outputs) - - class_loss /= normalizer - return class_loss - - -class FastrcnnBoxLoss(object): - """Fast R-CNN box regression loss function.""" - - def __init__(self, params): - logging.info('FastrcnnBoxLoss huber_loss_delta %s', params.huber_loss_delta) - # The delta is typically around the mean value of regression target. - # for instances, the regression targets of 512x512 input with 6 anchors on - # P2-P6 pyramid is about [0.1, 0.1, 0.2, 0.2]. - self._huber_loss = tf.keras.losses.Huber( - delta=params.huber_loss_delta, reduction=tf.keras.losses.Reduction.SUM) - - def __call__(self, box_outputs, class_targets, box_targets): - """Computes the box loss (Fast-RCNN branch) of Mask-RCNN. - - This function implements the box regression loss of the Fast-RCNN. As the - `box_outputs` produces `num_classes` boxes for each RoI, the reference model - expands `box_targets` to match the shape of `box_outputs` and selects only - the target that the RoI has a maximum overlap. (Reference: https://github.com/facebookresearch/Detectron/blob/master/detectron/roi_data/fast_rcnn.py) # pylint: disable=line-too-long - Instead, this function selects the `box_outputs` by the `class_targets` so - that it doesn't expand `box_targets`. - - The box loss is smooth L1-loss on only positive samples of RoIs. - Reference: https://github.com/facebookresearch/Detectron/blob/master/detectron/modeling/fast_rcnn_heads.py # pylint: disable=line-too-long - - Args: - box_outputs: a float tensor representing the box prediction for each box - with a shape of [batch_size, num_boxes, num_classes * 4]. - class_targets: a float tensor representing the class label for each box - with a shape of [batch_size, num_boxes]. - box_targets: a float tensor representing the box label for each box - with a shape of [batch_size, num_boxes, 4]. - - Returns: - box_loss: a scalar tensor representing total box regression loss. - """ - with tf.name_scope('fast_rcnn_loss'): - class_targets = tf.cast(class_targets, dtype=tf.int32) - - # Selects the box from `box_outputs` based on `class_targets`, with which - # the box has the maximum overlap. 
- (batch_size, num_rois, - num_class_specific_boxes) = box_outputs.get_shape().as_list() - num_classes = num_class_specific_boxes // 4 - box_outputs = tf.reshape(box_outputs, - [batch_size, num_rois, num_classes, 4]) - - box_indices = tf.reshape( - class_targets + tf.tile( - tf.expand_dims( - tf.range(batch_size) * num_rois * num_classes, 1), - [1, num_rois]) + tf.tile( - tf.expand_dims(tf.range(num_rois) * num_classes, 0), - [batch_size, 1]), [-1]) - - box_outputs = tf.matmul( - tf.one_hot( - box_indices, - batch_size * num_rois * num_classes, - dtype=box_outputs.dtype), tf.reshape(box_outputs, [-1, 4])) - box_outputs = tf.reshape(box_outputs, [batch_size, -1, 4]) - - return self._fast_rcnn_box_loss(box_outputs, box_targets, class_targets) - - def _fast_rcnn_box_loss(self, box_outputs, box_targets, class_targets, - normalizer=1.0): - """Computes box regression loss.""" - with tf.name_scope('fast_rcnn_box_loss'): - mask = tf.tile(tf.expand_dims(tf.greater(class_targets, 0), axis=2), - [1, 1, 4]) - mask = tf.cast(mask, dtype=tf.float32) - box_targets = tf.expand_dims(box_targets, axis=-1) - box_outputs = tf.expand_dims(box_outputs, axis=-1) - box_loss = self._huber_loss(box_targets, box_outputs, sample_weight=mask) - # The loss is normalized by the number of ones in mask, - # additianal normalizer provided by the user and using 0.01 here to avoid - # division by 0. - box_loss /= normalizer * (tf.reduce_sum(mask) + 0.01) - return box_loss - - -class MaskrcnnLoss(object): - """Mask R-CNN instance segmentation mask loss function.""" - - def __init__(self): - self._binary_crossentropy = tf.keras.losses.BinaryCrossentropy( - reduction=tf.keras.losses.Reduction.SUM, from_logits=True) - - def __call__(self, mask_outputs, mask_targets, select_class_targets): - """Computes the mask loss of Mask-RCNN. - - This function implements the mask loss of Mask-RCNN. As the `mask_outputs` - produces `num_classes` masks for each RoI, the reference model expands - `mask_targets` to match the shape of `mask_outputs` and selects only the - target that the RoI has a maximum overlap. (Reference: https://github.com/facebookresearch/Detectron/blob/master/detectron/roi_data/mask_rcnn.py) # pylint: disable=line-too-long - Instead, this implementation selects the `mask_outputs` by the `class_targets` - so that it doesn't expand `mask_targets`. Note that the selection logic is - done in the post-processing of mask_rcnn_fn in mask_rcnn_architecture.py. - - Args: - mask_outputs: a float tensor representing the prediction for each mask, - with a shape of - [batch_size, num_masks, mask_height, mask_width]. - mask_targets: a float tensor representing the binary mask of ground truth - labels for each mask with a shape of - [batch_size, num_masks, mask_height, mask_width]. - select_class_targets: a tensor with a shape of [batch_size, num_masks], - representing the foreground mask targets. - - Returns: - mask_loss: a float tensor representing total mask loss. 
- """ - with tf.name_scope('mask_rcnn_loss'): - (batch_size, num_masks, mask_height, - mask_width) = mask_outputs.get_shape().as_list() - - weights = tf.tile( - tf.reshape(tf.greater(select_class_targets, 0), - [batch_size, num_masks, 1, 1]), - [1, 1, mask_height, mask_width]) - weights = tf.cast(weights, dtype=tf.float32) - - mask_targets = tf.expand_dims(mask_targets, axis=-1) - mask_outputs = tf.expand_dims(mask_outputs, axis=-1) - mask_loss = self._binary_crossentropy(mask_targets, mask_outputs, - sample_weight=weights) - - # The loss is normalized by the number of 1's in weights and - # + 0.01 is used to avoid division by zero. - return mask_loss / (tf.reduce_sum(weights) + 0.01) - - -class RetinanetClassLoss(object): - """RetinaNet class loss.""" - - def __init__(self, params, num_classes): - self._num_classes = num_classes - self._focal_loss_alpha = params.focal_loss_alpha - self._focal_loss_gamma = params.focal_loss_gamma - - def __call__(self, cls_outputs, labels, num_positives): - """Computes total detection loss. - - Computes total detection loss including box and class loss from all levels. - - Args: - cls_outputs: an OrderDict with keys representing levels and values - representing logits in [batch_size, height, width, - num_anchors * num_classes]. - labels: the dictionary that returned from dataloader that includes - class groundturth targets. - num_positives: number of positive examples in the minibatch. - - Returns: - an integar tensor representing total class loss. - """ - # Sums all positives in a batch for normalization and avoids zero - # num_positives_sum, which would lead to inf loss during training - num_positives_sum = tf.reduce_sum(input_tensor=num_positives) + 1.0 - - cls_losses = [] - for level in cls_outputs.keys(): - cls_losses.append(self.class_loss( - cls_outputs[level], labels[level], num_positives_sum)) - # Sums per level losses to total loss. - return tf.add_n(cls_losses) - - def class_loss(self, cls_outputs, cls_targets, num_positives, - ignore_label=-2): - """Computes RetinaNet classification loss.""" - # Onehot encoding for classification labels. - cls_targets_one_hot = tf.one_hot(cls_targets, self._num_classes) - bs, height, width, _, _ = cls_targets_one_hot.get_shape().as_list() - cls_targets_one_hot = tf.reshape(cls_targets_one_hot, - [bs, height, width, -1]) - loss = focal_loss(tf.cast(cls_outputs, dtype=tf.float32), - tf.cast(cls_targets_one_hot, dtype=tf.float32), - self._focal_loss_alpha, - self._focal_loss_gamma, - num_positives) - - ignore_loss = tf.where( - tf.equal(cls_targets, ignore_label), - tf.zeros_like(cls_targets, dtype=tf.float32), - tf.ones_like(cls_targets, dtype=tf.float32), - ) - ignore_loss = tf.expand_dims(ignore_loss, -1) - ignore_loss = tf.tile(ignore_loss, [1, 1, 1, 1, self._num_classes]) - ignore_loss = tf.reshape(ignore_loss, tf.shape(input=loss)) - return tf.reduce_sum(input_tensor=ignore_loss * loss) - - -class RetinanetBoxLoss(object): - """RetinaNet box loss.""" - - def __init__(self, params): - self._huber_loss = tf.keras.losses.Huber( - delta=params.huber_loss_delta, reduction=tf.keras.losses.Reduction.SUM) - - def __call__(self, box_outputs, labels, num_positives): - """Computes box detection loss. - - Computes total detection loss including box and class loss from all levels. - - Args: - box_outputs: an OrderDict with keys representing levels and values - representing box regression targets in [batch_size, height, width, - num_anchors * 4]. 
- labels: the dictionary that returned from dataloader that includes - box groundturth targets. - num_positives: number of positive examples in the minibatch. - - Returns: - an integar tensor representing total box regression loss. - """ - # Sums all positives in a batch for normalization and avoids zero - # num_positives_sum, which would lead to inf loss during training - num_positives_sum = tf.reduce_sum(input_tensor=num_positives) + 1.0 - - box_losses = [] - for level in box_outputs.keys(): - # Onehot encoding for classification labels. - box_targets_l = labels[level] - box_losses.append( - self.box_loss(box_outputs[level], box_targets_l, num_positives_sum)) - # Sums per level losses to total loss. - return tf.add_n(box_losses) - - def box_loss(self, box_outputs, box_targets, num_positives): - """Computes RetinaNet box regression loss.""" - # The delta is typically around the mean value of regression target. - # for instances, the regression targets of 512x512 input with 6 anchors on - # P3-P7 pyramid is about [0.1, 0.1, 0.2, 0.2]. - normalizer = num_positives * 4.0 - mask = tf.cast(tf.not_equal(box_targets, 0.0), dtype=tf.float32) - box_targets = tf.expand_dims(box_targets, axis=-1) - box_outputs = tf.expand_dims(box_outputs, axis=-1) - box_loss = self._huber_loss(box_targets, box_outputs, sample_weight=mask) - box_loss /= normalizer - return box_loss - - -class ShapemaskMseLoss(object): - """ShapeMask mask Mean Squared Error loss function wrapper.""" - - def __call__(self, probs, labels, valid_mask): - """Compute instance segmentation loss. - - Args: - probs: A Tensor of shape [batch_size * num_points, height, width, - num_classes]. The logits are not necessarily between 0 and 1. - labels: A float32/float16 Tensor of shape [batch_size, num_instances, - mask_size, mask_size], where mask_size = - mask_crop_size * gt_upsample_scale for fine mask, or mask_crop_size - for coarse masks and shape priors. - valid_mask: a binary mask indicating valid training masks. - - Returns: - loss: an float tensor representing total mask classification loss. - """ - with tf.name_scope('shapemask_prior_loss'): - batch_size, num_instances = valid_mask.get_shape().as_list()[:2] - diff = (tf.cast(labels, dtype=tf.float32) - - tf.cast(probs, dtype=tf.float32)) - diff *= tf.cast( - tf.reshape(valid_mask, [batch_size, num_instances, 1, 1]), - tf.float32) - # Adding 0.001 in the denominator to avoid division by zero. - loss = tf.nn.l2_loss(diff) / (tf.reduce_sum(labels) + 0.001) - return loss - - -class ShapemaskLoss(object): - """ShapeMask mask loss function wrapper.""" - - def __init__(self): - self._binary_crossentropy = tf.keras.losses.BinaryCrossentropy( - reduction=tf.keras.losses.Reduction.SUM, from_logits=True) - - def __call__(self, logits, labels, valid_mask): - """ShapeMask mask cross entropy loss function wrapper. - - Args: - logits: A Tensor of shape [batch_size * num_instances, height, width, - num_classes]. The logits are not necessarily between 0 and 1. - labels: A float16/float32 Tensor of shape [batch_size, num_instances, - mask_size, mask_size], where mask_size = - mask_crop_size * gt_upsample_scale for fine mask, or mask_crop_size - for coarse masks and shape priors. - valid_mask: a binary mask of shape [batch_size, num_instances] - indicating valid training masks. - Returns: - loss: an float tensor representing total mask classification loss. 
- """ - with tf.name_scope('shapemask_loss'): - batch_size, num_instances = valid_mask.get_shape().as_list()[:2] - labels = tf.cast(labels, tf.float32) - logits = tf.cast(logits, tf.float32) - loss = self._binary_crossentropy(labels, logits) - loss *= tf.cast(tf.reshape( - valid_mask, [batch_size, num_instances, 1, 1]), loss.dtype) - # Adding 0.001 in the denominator to avoid division by zero. - loss = tf.reduce_sum(loss) / (tf.reduce_sum(labels) + 0.001) - return loss diff --git a/spaces/NCTCMumbai/NCTC/models/research/brain_coder/single_task/code_tasks.py b/spaces/NCTCMumbai/NCTC/models/research/brain_coder/single_task/code_tasks.py deleted file mode 100644 index 27cc7ecd1c76f2d765692ce0a94acd1df04ff681..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/research/brain_coder/single_task/code_tasks.py +++ /dev/null @@ -1,1381 +0,0 @@ -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -"""Tasks for RL.""" - -import abc -import copy -import itertools -import random - -from absl import logging -import numpy as np -from six.moves import xrange - -from common import bf # brain coder -from common import reward as r # brain coder -from single_task import misc # brain coder -from single_task import test_tasks # brain coder - - -MAX_EXECUTION_STEPS = 5000 - - -def make_task(task_name, override_kwargs=None, max_code_length=100, - require_correct_syntax=False, - do_code_simplification=False, - correct_bonus=2.0, code_length_bonus=1.0): - """Make tasks with setting from paper.""" - logging.info('Making paper-config task.') - n = 16 # Number of test cases. - task_mapping = { - 'print-hello': ( - PrintTask, dict(base=27, fixed_string=[8, 5, 12, 12, 15])), - 'print': (PrintIntTask, dict(base=256, fixed_string=[1, 2, 3, 4, 5])), - 'echo': (EchoTask, dict(base=27, min_length=1, max_length=6)), - 'remove-char': ( - RemoveCharTask, dict(base=256, n=n, min_len=1, max_len=6)), - 'reverse': ( - ReverseTask, dict(base=256, n=n, min_len=1, max_len=6)), - 'reverse-tune': ( - ReverseTaskV2, dict(base=256, reward_type='static-bylen')), - 'remove-char-tune': (RemoveCharTaskV2, dict(base=27)), - 'prefix': (CommonPrefixTask, dict(base=27)), - 'find': (FindSubStrTask, dict(base=27)), - 'sort3': (SortFixedTaskV2, dict(base=27, n=150, length=3)), - 'count-char': (CountCharTaskV2, dict(n=n, max_len=6)), - 'bool-logic': (BooleanLogicTask, dict()), - 'add': (AddTask, dict(n=9)), - 'echo-twice': (EchoTwiceTask, dict(n=n)), - 'echo-thrice': (EchoThriceTask, dict(n=n)), - 'copy-reverse': (CopyReverseTask, dict(n=n)), - 'zero-cascade': (EchoZeroCascadeTask, dict(n=n)), - 'cascade': (EchoCascadeTask, dict(n=n)), - 'shift-left': (ShiftLeftTask, dict(n=n)), - 'shift-right': (ShiftRightTask, dict(n=n)), - 'riffle': (RiffleTask, dict(n=n)), - 'unriffle': (UnriffleTask, dict(n=n)), - 'middle-char': (MiddleCharTask, dict(n=n)), - 'remove-last': (RemoveLastTask, dict(n=n)), - 'remove-last-two': (RemoveLastTwoTask, dict(n=n)), - 'echo-alternating': (EchoAlternatingTask, dict(n=n)), - 'echo-half': (EchoHalfTask, dict(n=n)), - 'length': (LengthTask, dict(n=n)), - 'echo-second-seq': (EchoSecondSequenceTask, dict(n=n)), - 'echo-nth-seq': (EchoNthSequenceTask, dict(n=n)), - 'substring': (SubstringTask, dict(n=n)), - 'divide-2': (Divide2Task, dict(n=n)), - 'dedup': (DedupTask, dict(n=n)), - 'remove-target-char': (RemoveTargetCharTask, dict(n=n)), - 'list-index': (ListIndexTask, dict(n=n)), - 'fib': (FibonacciTask, dict()), - 'count-down': 
(BottlesOfBeerTask, dict()), - 'split': (SplitTask, dict()), - 'trim-left': (TrimLeftTask, dict()), - 'circle-route': ( - JudgeRouteCircleTask, dict(n=100, max_len=32)), - 'multiply': (MultiplyTask, dict(n=100)), - 'divmod': (DivModTask, dict(n=100)), - } - - if task_name not in task_mapping: - # Test tasks. - if task_name == 'test-hill-climb': - return test_tasks.BasicTaskManager(test_tasks.HillClimbingTask()) - raise ValueError('Unknown task type "%s"' % task_name) - task_cls, kwargs = task_mapping[task_name] - - if override_kwargs: - if not isinstance(override_kwargs, dict): - raise ValueError( - 'override_kwargs must be a dict, got: %s', override_kwargs) - kwargs.update(override_kwargs) - - task = task_cls(**kwargs) - - reward_fn = r.absolute_distance_reward - # reward_fn = r.absolute_mod_distance_reward - # reward_fn = r.absolute_log_distance_reward - logging.info('Using reward function: %s', reward_fn.__name__) - - # We want reward with and without code simplification to be scaled the same - # way. Without code simplification, give the maximum code length bonus - # every time. - min_code_length = 0.0 if do_code_simplification else max_code_length - - return MultiIOTaskManager( - task=task, correct_bonus=correct_bonus, - code_length_bonus=code_length_bonus, - max_code_length=max_code_length, min_code_length=min_code_length, - reward_fn=reward_fn, require_correct_syntax=require_correct_syntax) - - -def concat(lists): - if not lists: - return [] - l = lists[0] - for k in lists[1:]: - l += k - return l - - -def concat_join(lists, sep): - if not lists: - return [] - l = lists[0] - for k in lists[1:]: - l += [sep] + k - return l - - -def clipped_linear(x, x0, y0, slope, y_range): - min_y, max_y = y_range - return min(max(slope * (x - x0) + y0, min_y), max_y) - - -class MultiIOTaskManager(object): - """Supports tasks which test the code with multiple I/O examples.""" - - def __init__(self, task, max_code_length=32, min_code_length=0, - max_execution_steps=MAX_EXECUTION_STEPS, correct_bonus=1.0, - code_length_bonus=1.0, failure_reward=-2.0, reward_fn=None, - require_correct_syntax=False): - assert isinstance(task, BaseTask) - self.task = task - self.max_code_length = max_code_length - self.min_code_length = min_code_length - self.max_execution_steps = max_execution_steps - self.require_correct_syntax = require_correct_syntax - self.correct_bonus = correct_bonus - self.code_length_bonus = code_length_bonus - self.failure_reward = failure_reward - self.time_penalty = ( - 1.0 / (max_code_length - min_code_length) - if max_code_length > min_code_length else 0.0) - if reward_fn is None: - self.reward_fn = r.absolute_distance_reward - else: - self.reward_fn = reward_fn - self.input_type = ( - task.input_type if hasattr(task, 'input_type') else misc.IOType.integer) - self.output_type = ( - task.output_type if hasattr(task, 'output_type') - else misc.IOType.integer) - self._compute_best_reward() - - def _compute_best_reward(self): - io_seqs = self.task.make_io_set() - reward = 0.0 - for _, output_seq in io_seqs: - reward += self.reward_fn(output_seq, output_seq, self.task.base) - reward += self.correct_bonus - reward += self.code_length_bonus # Bonus for shortest code. - self.best_reward = reward - self.good_reward = 0.75 * reward - logging.info('Known best reward: %.4f', self.best_reward) - - def _score_batch(self, code_strings): - return [self._score_code(code) for code in code_strings] - - def _score_code(self, code): - """Run test cases on code and compute reward. 
- - Args: - code: A single BF code string. - - Returns: - misc.RewardInfo namedtuple instance containing reward and code execution - information, including inputs, expected outputs, code outputs, input - and output types, and reason for the reward obtained. - """ - # Get list of 2-tuples, each containing an input sequence and an output - # sequence. - io_seqs = self.task.make_io_set() - terminal_reward = 0.0 - results = [] - reason = 'correct' - for input_seq, output_seq in io_seqs: - eval_result = bf.evaluate( - code, input_buffer=input_seq, timeout=0.1, - max_steps=self.max_execution_steps, - base=self.task.base, - require_correct_syntax=self.require_correct_syntax) - result, success = eval_result.output, eval_result.success - if not success: - # Code execution timed out. - terminal_reward = self.failure_reward - results = [] - reason = eval_result.failure_reason - break - else: - terminal_reward += self.reward_fn(result, output_seq, self.task.base) - if result == output_seq: - terminal_reward += self.correct_bonus # Bonus for correct answer. - - # Only add additional reward for shorter code. Subtracting reward - # interferes with the main objective. Only optimize for length once - # any solution is found. - if self.min_code_length == self.max_code_length: - terminal_reward += self.code_length_bonus - else: - terminal_reward += self.code_length_bonus * clipped_linear( - x=len(code), x0=self.min_code_length, y0=1.0, - slope=-self.time_penalty, y_range=(0.0, 1.0)) - - # reason remains 'correct' if it is already - elif reason == 'correct': - reason = 'wrong' - results.append(result) - - # Return list of rewards, one for each char in the code. All are 0 except - # for the terminal reward. - terminal_reward /= self.best_reward - return misc.RewardInfo( - episode_rewards=[0.0] * (len(code) - 1) + [terminal_reward], - input_case=misc.IOTuple(i for i, o in io_seqs), - correct_output=misc.IOTuple(o for i, o in io_seqs), - code_output=misc.IOTuple(results), - input_type=self.input_type, - output_type=self.output_type, - reason=reason) - - def rl_batch(self, batch_size): - """Produces list of reward functions. One for each program in the batch.""" - return [self._score_code] * batch_size - - -def conditional_overwrite(current_value, new_value, allowed_overwrite_values): - if current_value in allowed_overwrite_values: - return new_value - return current_value - - -class BaseTask(object): - """A coding task. - - All coding tasks should inherit this class. - """ - __metaclass__ = abc.ABCMeta - - def __init__(self, base=256): - self.base = base # All tasks must set the integer base that the expect. - - @abc.abstractmethod - def make_io_set(self): - """Generate a set of test cases for the task. - - Returns: - List of tuples, where each tuple is (input_case, output_case). - input_case and output_case are lists of integers. - """ - pass - - -# ============================================================================== -# ICLR tasks. -# ============================================================================== - - -class PrintTask(BaseTask): - """Print string coding task. - - Code needs to output a fixed string (given as a hyperparameter to the - task constructor). Program input is ignored. 
- """ - - def __init__(self, base, fixed_string=None): - super(type(self), self).__init__() - self.base = base # base includes EOS - self.eos = 0 - if fixed_string: - self.fixed_string = fixed_string - else: - self.fixed_string = [1, 2, 3, 0] # ABC - self.min_length = self.max_length = len(self.fixed_string) - - def make_io_set(self): - return [(list(), list(self.fixed_string))] - - -class RemoveCharTaskV2(BaseTask): - """Remove character coding task (version 2). - - Code needs to pipe input to output, but with all the 'A' (value 1) chars - removed. 'A' appears exactly once in each input. - - Test cases are hard-coded. - """ - - def __init__(self, base): - super(type(self), self).__init__() - self.base = base - self.eos = 0 - self.remove_char = 1 - assert base >= 27 - - def make_io_set(self): - rm = self.remove_char - return [ - ([rm, 0], [0]), - ([20, rm, 0], [20, 0]), - ([rm, 13, 0], [13, 0]), - ([6, rm, 17, 0], [6, 17, 0]), - ([rm, 11, 24, 0], [11, 24, 0]), - ([2, 16, 21, rm, 0], [2, 16, 21, 0]), - ([18, rm, 12, 26, 7, 0], [18, 12, 26, 7, 0]), - ([9, 10, 22, rm, 4, 0], [9, 10, 22, 4, 0])] - - -class RemoveCharTask(BaseTask): - """Remove character coding task. - - Code needs to pipe input to output, but with all the 'A' (value 1) chars - removed. 'A' appears at least once in each input. - - Test cases are dynamically generated, allowing for the number of test cases - to be a hyperparameter. - """ - - def __init__(self, base, n, min_len, max_len): - super(type(self), self).__init__() - self.base = base - self.eos = 0 - self.remove_char = 1 - assert base >= 27 - self._io_pairs = self._make_io_examples(n, min_len, max_len) - - def _make_io_examples(self, n, min_len, max_len): - """Generate test cases for the task.""" - rand = random.Random(6849275409234) # Test cases are fixed, but varied. - io_examples = [] - for _ in xrange(n): - length = rand.randrange(min_len, max_len + 1) - rm_char_pos = rand.randrange(0, length) - input_seq = [rand.randrange(1, self.base) for _ in xrange(length)] - input_seq[rm_char_pos] = self.remove_char - output_seq = list(input_seq) - del output_seq[rm_char_pos] - output_seq.append(0) - io_examples.append((input_seq, output_seq)) - return io_examples - - def make_io_set(self): - return copy.deepcopy(self._io_pairs) - - -class ReverseTaskV2(BaseTask): - """Reverse string coding task (version 2). - - Code needs to pipe input to output, but in reverse order. - - Stochastic test case = new test case randomly generated for every run of - `make_io_set`, i.e. different test cases every time code is scored. - - Task supports different types of test cases: - rand-one: Code is scored on one stochastic test case. - rand-many: Code is scored on 5 stochastic test cases. - static-bylen: Code is scored on 5 static test cases. There is one test - case for string lengths 1 through 5. - rand-bylen: Code is scored on 5 stochastic test cases, where there is one - test case for string lengths 1 through 5. - """ - - def __init__(self, base, reward_type): - super(type(self), self).__init__() - self.base = base # base includes EOS - assert base >= 27 - self.eos = 0 - self.io_pair_fn = { - # One random example at a time. - 'rand-one': lambda: self._io_rand(1), - # K randomy examples at a time (any lengths). - 'rand-many': lambda: self._io_rand(5), - # Static examples, one for each length. - 'static-bylen': self._io_static_by_len, - # Random examples, one for each length. 
- 'rand-bylen': self._io_rand_by_len}[reward_type] - - def _make_io_examples(self, sequences): - outputs = [list(i) for i in sequences] - for o in outputs: - o.reverse() - o.append(0) - inputs = [i + [0] for i in sequences] - return zip(inputs, outputs) - - def _io_rand(self, k): - inputs = [(np.random.choice(26, random.randrange(1, 6)) + 1).tolist() - for _ in xrange(k)] - return self._make_io_examples(inputs) - - def _io_rand_by_len(self, k=5): - inputs = [(np.random.choice(26, length) + 1).tolist() - for length in xrange(1, k + 1)] - return self._make_io_examples(inputs) - - def _io_static_by_len(self): - return [ - ([7, 0], [7, 0]), - ([6, 2, 0], [2, 6, 0]), - ([5, 1, 10, 0], [10, 1, 5, 0]), - ([8, 6, 5, 15, 0], [15, 5, 6, 8, 0]), - ([10, 12, 5, 2, 7, 0], [7, 2, 5, 12, 10, 0])] - - def make_io_set(self): - return self.io_pair_fn() - - -class ReverseTask(BaseTask): - """Reverse string coding task. - - Code needs to pipe input to output, but in reverse order. - - Test cases are dynamically generated, allowing for the number of test cases - to be a hyperparameter. - """ - - def __init__(self, base, n, min_len, max_len): - super(type(self), self).__init__() - self.base = base # base includes EOS - assert base >= 27 - self.eos = 0 - self._io_pairs = self._make_io_examples(n, min_len, max_len) - - def _make_io_examples(self, n, min_len, max_len): - """Generate test cases for the task.""" - rand = random.Random(6849275409234) # Test cases are fixed, but varied. - io_examples = [] - for _ in xrange(n): - length = rand.randrange(min_len, max_len + 1) - input_seq = [rand.randrange(1, self.base) for _ in xrange(length)] - output_seq = list(input_seq) - output_seq.reverse() - output_seq.append(0) - io_examples.append((input_seq, output_seq)) - return io_examples - - def make_io_set(self): - return copy.deepcopy(self._io_pairs) - - -class CommonPrefixTask(BaseTask): - """Common prefix coding task. - - Code needs to output the common prefix between two input lists. Input lists - are variable length, where each list ends with a 0. A common prefix is a - sequence which both lists start with. - """ - - def __init__(self, base): - super(type(self), self).__init__() - assert base >= 27 - self.base = base - self.eos = 0 - - def make_io_set(self): - return [ - ([12, 24, 18, 0, 12, 5, 0], [12, 0]), - ([1, 2, 3, 0, 1, 2, 17, 14, 0], [1, 2, 0]), - ([15, 2, 1, 9, 2, 0, 15, 2, 1, 25, 8, 14, 0], [15, 2, 1, 0]), - ([14, 9, 7, 8, 6, 16, 0, 14, 9, 7, 8, 8, 6, 8, 26, 0], - [14, 9, 7, 8, 0]), - ([12, 4, 16, 22, 1, 17, 0, 12, 4, 16, 22, 1, 8, 10, 0], - [12, 4, 16, 22, 1, 0])] - - -class CountCharTask(BaseTask): - - def __init__(self): - super(type(self), self).__init__() - self.base = 27 - self.eos = 0 - self.char = 1 - self.input_type = misc.IOType.string - self.output_type = misc.IOType.integer - - def make_io_set(self): - return [ - ([10, 0], [0]), - ([1, 0], [1]), - ([1, 1, 0], [2]), - ([11, 1, 0], [1]), - ([1, 24, 0], [1]), - ([13, 6, 0], [0]), - ([9, 2, 7, 0], [0]), - ([1, 24, 11, 0], [1]), - ([19, 1, 1, 0], [2]), - ([1, 6, 1, 0], [2]), - ([22, 16, 17, 9, 0], [0]), - ([1, 1, 1, 19, 0], [3]), - ([1, 1, 1, 1, 0], [4]), - ([9, 4, 19, 11, 5, 0], [0]), - ([24, 11, 26, 1, 15, 0], [1]), - ([1, 1, 20, 1, 1, 0], [4]), - ([1, 1, 1, 1, 1, 0], [5])] - - -class CountCharTaskV2(BaseTask): - """Count char coding task (version 2). - - Code must output the number of occurances of character 'A' (value 1) in an - input string. - - Test cases are dynamically generated, allowing for the number of test cases - to be a hyperparameter. 
- """ - - def __init__(self, n, max_len): - super(type(self), self).__init__() - self.base = 27 - self.eos = 0 - self.char = 1 - self.other_chars = [c for c in xrange(self.base) - if c not in (self.eos, self.char)] - self.input_type = misc.IOType.string - self.output_type = misc.IOType.integer - self._io_pairs = self._make_io_examples(n, max_len) - - def _make_io_examples(self, n, max_len): - """Generate test cases for the task.""" - rand = random.Random(6849275409234) # Test cases are fixed, but varied. - io_examples = [] - io_examples.append(([10, 0], [0])) - io_examples.append(([1, 0], [1])) - io_examples.append(([1, 1, 0], [2])) - io_examples.append(([9, 4, 19, 11, 5, 0], [0])) - io_examples.append(([24, 11, 26, 1, 15, 0], [1])) - for _ in xrange(n - 5): - length = rand.randrange(2, max_len + 1) - num_chars = rand.randrange(0, max_len + 1) - input_seq = [self.char] * num_chars + [0] * (length - num_chars) - rand.shuffle(input_seq) - for i in xrange(len(input_seq)): - if not input_seq[i]: - input_seq[i] = self.other_chars[rand.randrange(len(self.other_chars))] - output_seq = [num_chars] - io_examples.append((input_seq, output_seq)) - return io_examples - - def make_io_set(self): - return copy.deepcopy(self._io_pairs) - - -class AddTask(BaseTask): - """Addition coding task. - - Code needs to read in two integers and output their sum mod the BF base, - followed by a terminating 0. - """ - - def __init__(self, n=16): - super(type(self), self).__init__() - self.base = 256 - self.input_type = misc.IOType.integer - self.output_type = misc.IOType.integer - self._io_pairs = self._make_io_examples(n) - - def _make_io_examples(self, n): - """Generate test cases for the task.""" - rand = random.Random(6849275409234) # Test cases are fixed, but varied. - io_examples = [ - ([4, 0], [4, 0]), - ([0, 5], [5, 0]), - ([1, 2], [3, 0]), - ([67, 21], [88, 0]), - ([55, 56], [111, 0]), - ([128, 33], [161, 0]), - ([221, 251], [216, 0]), - ([130, 127], [1, 0]), - ([255, 1], [0, 0])] - extra_examples = max(n - len(io_examples), 0) - for _ in xrange(extra_examples): - a = rand.randrange(256) - b = rand.randrange(256) - input_seq = [a, b] - output_seq = [(a + b) % 256, 0] - io_examples.append((input_seq, output_seq)) - return io_examples - - def make_io_set(self): - return copy.deepcopy(self._io_pairs) - - -class BooleanLogicTask(BaseTask): - """Boolean logic (truth table) coding task. - - Code needs to memorize a boolean truth table. Specifically, it must encode a - mapping from triple of bools to a single bool. - """ - - def __init__(self): - super(type(self), self).__init__() - self.base = 2 - self.input_type = misc.IOType.boolean - self.output_type = misc.IOType.boolean - # X(~Z) + (~Y)(~Z) + (~X)YZ - self._truth_fn = ( - lambda x, y, z: # pylint: disable=g-long-lambda - (x and not z) or (not y and not z) or (not x and y and z)) - self._test_cases = [ - ([x, y, z], [int(self._truth_fn(x, y, z))]) - for x, y, z in itertools.product(range(2), range(2), range(2))] - - def make_io_set(self): - return copy.deepcopy(self._test_cases) - - -# ------------------------------------------------------------------------------ -# The following tasks are generated from known BF solutions. This guarantees -# that each task can be solved within the maximum code length, and maximum -# execution steps. 
-# ------------------------------------------------------------------------------ - - -def default_input_fn_factory(min_length=1, max_length=6, base=256): - def _input_gen(rand): - l = rand.randrange(min_length, max_length + 1) - return [rand.randrange(base) for _ in xrange(l)] - return _input_gen - - -class KnownCodeBaseTask(BaseTask): - """These tasks generate their test cases from a known BF solution. - - This ensures that each task has a solution which is under the max character - length, and that it solves the test cases under the max number of execution - steps. - """ - - def __init__(self, code_solution, make_input_fn, n=100, base=256, - max_steps=5000, seed=6849275409234): - super(KnownCodeBaseTask, self).__init__() - # Make sure known solution is less than the code length used in experiments. - assert len(code_solution) < 100 - self.code_solution = code_solution - self.make_input_fn = make_input_fn - self.n = n - self.base = base - self.max_steps = max_steps - self.seed = seed - self._test_cases = list(self._test_case_generator(code_solution)) - - def _test_case_generator(self, code_solution): - rand = random.Random(self.seed) - for _ in xrange(self.n): - input_case = self.make_input_fn(rand) - result = bf.evaluate( - code_solution, input_buffer=input_case, max_steps=self.max_steps, - base=self.base, require_correct_syntax=False) - if not result.success: - raise RuntimeError( - 'Program must succeed. Failed on input: %s' % input_case) - yield input_case, result.output - - def make_io_set(self): - return copy.deepcopy(self._test_cases) - - -class EchoTwiceTask(KnownCodeBaseTask): - """Echo twice.""" - - def __init__(self, **kwargs): - super(type(self), self).__init__( - '>,.[>,.]<[<]>[.>].', - default_input_fn_factory(), - **kwargs) - - -class EchoThriceTask(KnownCodeBaseTask): - """Echo three times.""" - - def __init__(self, **kwargs): - super(type(self), self).__init__( - '>,.[>,.]<[<]>[.>].<[<]>[.>].', - default_input_fn_factory(), - **kwargs) - - -class CopyReverseTask(KnownCodeBaseTask): - """Echo forwards, backwards, and then forwards again.""" - - def __init__(self, **kwargs): - super(type(self), self).__init__( - '>,.[>,.]<[.<].>[.>].', - default_input_fn_factory(), - **kwargs) - - -class EchoZeroCascadeTask(KnownCodeBaseTask): - """Print k-th char with k zeros inbetween (1-indexed).""" - - def __init__(self, **kwargs): - super(type(self), self).__init__( - ',[.>[->+>.<<]>+[-<+>]<<,]', - default_input_fn_factory(), - **kwargs) - - -class EchoCascadeTask(KnownCodeBaseTask): - """Print k-th char k times (1-indexed).""" - - def __init__(self, **kwargs): - super(type(self), self).__init__( - ',>>+<<[>>[-<+>]<[->+<<.>]>+<<,].', - default_input_fn_factory(base=20), - **kwargs) - - -class ShiftLeftTask(KnownCodeBaseTask): - """Circulate shift input left.""" - - def __init__(self, **kwargs): - super(type(self), self).__init__( - ',>,[.,]<.,.', - default_input_fn_factory(), - **kwargs) - - -class ShiftRightTask(KnownCodeBaseTask): - """Circular shift input right.""" - - def __init__(self, **kwargs): - super(type(self), self).__init__( - '>,[>,]<.[-]<[<]>[.>].', - default_input_fn_factory(), - **kwargs) - - -class RiffleTask(KnownCodeBaseTask): - """Shuffle like a deck of cards. - - For input of length N, output values in the following index order: - N-1, 0, N-2, 1, N-3, 2, ... 
- """ - - def __init__(self, **kwargs): - super(type(self), self).__init__( - '>,[>,]<[.[-]<[<]>.[-]>[>]<]', - default_input_fn_factory(base=20, max_length=8), - **kwargs) - - -class UnriffleTask(KnownCodeBaseTask): - """Inverse of riffle.""" - - def __init__(self, **kwargs): - super(type(self), self).__init__( - '>,[>,[.[-]],]<[.<].', - default_input_fn_factory(base=20, max_length=8), - **kwargs) - - -class MiddleCharTask(KnownCodeBaseTask): - """Print middle char if length is odd, or 0 if even.""" - - def __init__(self, **kwargs): - super(type(self), self).__init__( - '>,[>,]<<[[>]<[,<[<]>,>[>]][>]<<]>.', - default_input_fn_factory(max_length=10), - **kwargs) - - -class RemoveLastTask(KnownCodeBaseTask): - """Remove last character.""" - - def __init__(self, **kwargs): - super(type(self), self).__init__( - ',>,[[<.[-]>[-<+>]],].', - default_input_fn_factory(base=20), - **kwargs) - - -class RemoveLastTwoTask(KnownCodeBaseTask): - """Remove last two characters.""" - - def __init__(self, **kwargs): - super(type(self), self).__init__( - ',>,>,[[<<.[-]>[-<+>]>[-<+>]],].', - default_input_fn_factory(base=10), - **kwargs) - - -class EchoAlternatingTask(KnownCodeBaseTask): - # Print even numbered chars first (0-indexed), then odd numbered chars - - def __init__(self, **kwargs): - super(type(self), self).__init__( - '>,[.,>,]<<[<]>[.>].', - default_input_fn_factory(base=20, max_length=8), - **kwargs) - - -class EchoHalfTask(KnownCodeBaseTask): - """Echo only first half of the input (round down when odd lengthed).""" - - def __init__(self, **kwargs): - super(type(self), self).__init__( - '>>+>,[[<]>+[>],]<[<]>-[-[-<<+>]<[>]>]<<[->+<]>[[>]>.,<+[<]>-].', - default_input_fn_factory(base=20, max_length=9), - **kwargs) - - -class LengthTask(KnownCodeBaseTask): - """Print length of the input sequence.""" - - def __init__(self, **kwargs): - super(type(self), self).__init__( - '>+>,[[<]>+[>],]<[<]>-.', - default_input_fn_factory(max_length=14), - **kwargs) - - -class EchoSecondSequenceTask(KnownCodeBaseTask): - """Echo second sequence. Sequences are separated by 0.""" - - def __init__(self, **kwargs): - def echo_second_gen(rand): - l = rand.randrange(1, 6) - x = [rand.randrange(256) for _ in xrange(l)] - l = rand.randrange(1, 6) - y = [rand.randrange(256) for _ in xrange(l)] - return x + [0] + y + [0] - super(type(self), self).__init__( - ',[,],[.,].', - echo_second_gen, - **kwargs) - - -class EchoNthSequenceTask(KnownCodeBaseTask): - """Echo n-th sequence (1-indexed). Sequences are separated by 0.""" - - def __init__(self, **kwargs): - def echo_nth_gen(rand): - k = rand.randrange(1, 7) - n = rand.randrange(1, k + 1) - x = [] - for _ in xrange(k): - l = rand.randrange(0, 4) - x += [rand.randrange(256) for _ in xrange(l)] + [0] - return [n] + x - super(type(self), self).__init__( - ',-[->,[,]<],[.,].', - echo_nth_gen, - **kwargs) - - -class SubstringTask(KnownCodeBaseTask): - """Echo substring. - - First two inputs are i and l, where i is the starting index (0-indexed) - and l is the length of the substring. 
- """ - - def __init__(self, **kwargs): - def substring_gen(rand): - l = rand.randrange(2, 16) - i, j = sorted([rand.randrange(l), rand.randrange(l)]) - n = j - i - x = [rand.randrange(256) for _ in xrange(l)] + [0] - return [i, n] + x - super(type(self), self).__init__( - '>,<,>[->,<]>,<<[->>.,<<]', - substring_gen, - **kwargs) - - -class Divide2Task(KnownCodeBaseTask): - """Divide by 2 (integer floor division).""" - - def __init__(self, **kwargs): - def int_input_gen(rand): - return [rand.randrange(256)] - super(type(self), self).__init__( - ',[-[->>+<]>[<]<]>>.', - int_input_gen, - **kwargs) - - -class DedupTask(KnownCodeBaseTask): - """Deduplicate adjacent duplicate chars.""" - - def __init__(self, **kwargs): - def dedup_input_gen(rand): - np_random = np.random.RandomState(rand.randrange(2147483647)) - num_unique = rand.randrange(1, 5) - unique = np_random.choice(6, num_unique, replace=False) + 1 - return [v for v in unique for _ in xrange(rand.randrange(1, 5))] + [0] - super(type(self), self).__init__( - '>>,.[[-<+<+>>],[-<->]<[[-<->]<.>]<[->>+<<]>>]', - dedup_input_gen, - **kwargs) - - -# ============================================================================== -# Extra tasks. -# ============================================================================== - - -class PrintIntTask(BaseTask): - """Print integer coding task. - - Code needs to output a fixed single value (given as a hyperparameter to the - task constructor). Program input is ignored. - """ - - def __init__(self, base, fixed_string): - super(type(self), self).__init__() - self.base = base - self.eos = 0 - self.fixed_string = fixed_string - self.input_type = misc.IOType.integer - self.output_type = misc.IOType.integer - - def make_io_set(self): - return [(list(), list(self.fixed_string))] - - -class EchoTask(BaseTask): - """Echo string coding task. - - Code needs to pipe input to putput (without any modifications). - """ - - def __init__(self, base, min_length=1, max_length=5): - super(type(self), self).__init__() - self.base = base # base includes EOS - self.eos = 0 - self.min_length = min_length - self.max_length = max_length - self._io_pairs = self._make_io_examples(25) - - def _make_io_examples(self, n): - # Test cases are fixed, but varied. - np_random = np.random.RandomState(1234567890) - io_pairs = [] - for _ in xrange(n): - length = np_random.randint(self.min_length, self.max_length + 1) - input_seq = np_random.randint(1, self.base, length).tolist() + [self.eos] - output_seq = list(input_seq) - io_pairs.append((input_seq, output_seq)) - return io_pairs - - def make_io_set(self): - return copy.deepcopy(self._io_pairs) - - -class JudgeRouteCircleTask(BaseTask): - """Judge route circle coding task. - - Code needs to determine if the given route makes a closed loop. - Encoding: U = 1, R = 2, D = 3, L = 4. - - Based on - https://leetcode.com/problems/judge-route-circle/description/ - """ - base = 256 - input_type = misc.IOType.integer - output_type = misc.IOType.integer - - def __init__(self, n, max_len=12): - super(type(self), self).__init__() - self.eos = 0 - self._io_pairs = self._make_io_examples(n, max_len) - self.input_type = misc.IOType.integer - self.output_type = misc.IOType.integer - - def _solve(self, input_seq): - assert input_seq[-1] == 0 - pos = [0, 0] # (x, y) - for move in input_seq[:-1]: - assert 0 < move <= 4 - if move & 1 == 0: # Left or Right. - pos[0] += 3 - move # Add or subtract 1. - else: - pos[1] += 2 - move # Add or subtract 1. 
- return [int(not pos[0] and not pos[1])] - - def _make_io_examples(self, n, max_len): - """Generate test cases for the task.""" - rand = random.Random(6849275409234) # Test cases are fixed, but varied. - io_examples = [] - io_examples.append(([0], [1])) - io_examples.append(([4, 2, 0], [1])) - io_examples.append(([2, 4, 0], [1])) - io_examples.append(([3, 1, 0], [1])) - io_examples.append(([1, 3, 0], [1])) - io_examples.append(([1, 0], [0])) - io_examples.append(([2, 0], [0])) - io_examples.append(([3, 0], [0])) - io_examples.append(([4, 0], [0])) - for _ in xrange(n): - is_true = rand.randrange(2) - length = rand.randrange(1, max_len + 1) - if is_true: - # Make a true case. - length = (length >> 1) << 1 # Make even. - partition = (rand.randrange(length + 1) >> 1) << 1 - a = partition >> 1 - b = (length - partition) >> 1 - counts = {1: a, 2: b, 3: a, 4: b} - else: - # Make a false case. - partitions = ( - [0] - + sorted([rand.randrange(length + 1) for _ in range(3)]) - + [length]) - counts = {n: partitions[n] - partitions[n - 1] for n in range(1, 5)} - if counts[1] == counts[3] and counts[2] == counts[4]: - # By chance we sampled a true case. Make it false by exchanging - # one count between even and odd pairs. - base = 1 + 2 * rand.randrange(2) - a, b = (base, base + 1) if rand.randrange(2) else (base + 1, base) - if counts[a] == length or counts[b] == 0: - # If counts are at their extreme values, then swap who gets - # incremented and decremented. - a, b = b, a - counts[a] += 1 - counts[b] -= 1 - assert counts[a] <= length and counts[b] >= 0 - assert sum(counts.values()) == length - input_seq = [n for n in xrange(1, 5) for _ in xrange(counts[n])] - rand.shuffle(input_seq) - input_seq += [0] - output_seq = self._solve(input_seq) - assert output_seq[0] == is_true - io_examples.append((input_seq, output_seq)) - return io_examples - - def make_io_set(self): - return copy.deepcopy(self._io_pairs) - - -class MultiplyTask(BaseTask): - """Multiply coding task. - - Code needs to multiple two ints. - - Solution: - http://robl.co/brief-look-at-brainfuck/ - ,>,><<[->[->+>+<<]>>[-<<+>>]<<<]>>. - """ - base = 512 - input_type = misc.IOType.integer - output_type = misc.IOType.integer - - def __init__(self, n): - super(type(self), self).__init__() - self.eos = 0 - self._io_pairs = self._make_io_examples(n) - self.input_type = misc.IOType.integer - self.output_type = misc.IOType.integer - - def _factors(self, n): - return set(i for i in range(1, int(n**0.5) + 1) if n % i == 0) - - def _make_io_examples(self, n): - """Generate test cases for the task.""" - rand = random.Random(6849275409234) # Test cases are fixed, but varied. - io_examples = [] - for _ in xrange(n): - n = rand.randrange(self.base) - if n == 0: - a, b = 0, rand.randrange(self.base) - else: - f = list(self._factors(n)) - a = f[rand.randrange(len(f))] - b = n // a - if rand.randrange(2): - a, b = b, a - io_examples.append(([a, b], [n])) - return io_examples - - def make_io_set(self): - return copy.deepcopy(self._io_pairs) - - -class DivModTask(BaseTask): - """Divmod coding task. - - Code needs to take the quotient and remainder of two ints. 
- - Solution: - http://robl.co/brief-look-at-brainfuck/ - ,>,><<[>[->+>+<<]>[-<<-[>]>>>[<[-<->]<[>]>>[[-]>>+<]>-<]<<]>>>+<<[-<<+>>]<<<]> - >>>>[-<<<<<+>>>>>]<<<<<.>.> - """ - base = 512 - input_type = misc.IOType.integer - output_type = misc.IOType.integer - - def __init__(self, n): - super(type(self), self).__init__() - self.eos = 0 - self._io_pairs = self._make_io_examples(n) - self.input_type = misc.IOType.integer - self.output_type = misc.IOType.integer - - def _make_io_examples(self, n): - rand = random.Random(6849275409234) # Test cases are fixed, but varied. - io_examples = [] - for _ in xrange(n): - n = rand.randrange(0, self.base) - k = rand.randrange(1, self.base) # Divisor cannot be 0. - io_examples.append(([n, k], list(divmod(n, k)))) - return io_examples - - def make_io_set(self): - return copy.deepcopy(self._io_pairs) - - -class FibonacciTask(BaseTask): - - def __init__(self): - super(type(self), self).__init__() - self.base = 256 - self.input_type = misc.IOType.integer - self.output_type = misc.IOType.integer - - def make_io_set(self): - return [ - ([0], [0, 1]), - ([1], [1, 1]), - ([2], [1, 2]), - ([3], [2, 3]), - ([4], [3, 5]), - ([5], [5, 8]), - ([6], [8, 13]), - ([7], [13, 21]), - ([8], [21, 34]), - ([9], [34, 55]), - ([10], [55, 89]), - ([11], [89, 144]), - ([12], [144, 233]), - ([13], [233, 121])] - - -class FindSubStrTask(BaseTask): - """Find sub-string coding task. - - Code needs to output a bool: True if the input string contains a hard-coded - substring, 'AB' (values [1, 2]). - """ - - def __init__(self, base): - super(type(self), self).__init__() - assert base >= 27 - self.base = base - self.eos = 0 - self.find_str = [1, 2] - self.input_type = misc.IOType.string - self.output_type = misc.IOType.boolean - - def make_io_set(self): - return [ - ([1, 1, 23, 0], [0]), - ([21, 3, 2, 0], [0]), - ([2, 1, 19, 0], [0]), - ([2, 24, 15, 3, 0], [0]), - ([24, 6, 10, 16, 4, 0], [0]), - ([1, 2, 12, 0], [1]), - ([7, 1, 2, 0], [1]), - ([1, 2, 11, 3, 0], [1]), - ([1, 1, 2, 18, 0], [1]), - ([7, 25, 1, 2, 0], [1]), - ([3, 1, 2, 11, 8, 0], [1]), - ([15, 16, 20, 1, 2, 0], [1])] - - -class SortFixedTask(BaseTask): - """Sort list coding task. - - Code needs to output a sorted input list. The task consists of lists of the - same length L, where L is provided to this task's constructor as a - hyperparameter. - """ - - def __init__(self, base, length=3): - super(type(self), self).__init__() - assert base >= 27 - self.base = base - self.eos = 0 - self.length = length - assert length == 3 # More lengths will be supported. - - def make_io_set(self): - if self.length == 3: - return [ - ([1, 20, 6], [1, 6, 20]), - ([13, 6, 7], [6, 7, 13]), - ([24, 2, 23], [2, 23, 24]), - ([16, 12, 3], [3, 12, 16]), - ([11, 24, 4], [4, 11, 24]), - ([10, 1, 19], [1, 10, 19])] - - -class SortFixedTaskV2(BaseTask): - """Sort list coding task (version 2). - - Code needs to output a sorted input list. The task consists of lists of the - same length L, where L is provided to this task's constructor as a - hyperparameter. - - Test cases are dynamically generated, allowing for the number of test cases - to be a hyperparameter. - """ - - def __init__(self, base, n, length=3): - super(type(self), self).__init__() - assert base >= 27 - self.base = base - self.eos = 0 - self._io_pairs = self._make_io_examples(n, length) - self.input_type = misc.IOType.integer - self.output_type = misc.IOType.integer - - def _make_io_examples(self, n, length): - rand = random.Random(6849275409234) # Test cases are fixed, but varied. 
- io_examples = [] - for _ in xrange(n): - input_seq = [rand.randrange(1, self.base) for _ in xrange(length)] - output_seq = sorted(input_seq) - io_examples.append((input_seq, output_seq)) - return io_examples - - def make_io_set(self): - return copy.deepcopy(self._io_pairs) - - -class RemoveTargetCharTask(KnownCodeBaseTask): - """Remove target character from string, where first input is the target. - - Target can appear multiple times. - """ - - def __init__(self, **kwargs): - def randrange_hole(rand, a, hole, b): - x = rand.randrange(a, b - 1) - if x >= hole: - return x + 1 - return x - def remove_target_char_gen(rand): - char = rand.randrange(1, 6) - l = rand.randrange(1, 8) - input_seq = [randrange_hole(rand, 1, char, 256) for _ in xrange(l)] - idx = range(l) - rand.shuffle(idx) - num_targets = rand.randrange(0, l) - for pos in idx[:num_targets]: - input_seq[pos] = char - return [char] + input_seq + [0] - super(type(self), self).__init__( - ',>>>,[<<<[->+>+<<]>>[->->+<<]>[>[-<+>]<.[-]]>[-]<<<[-<+>]>>,].', - remove_target_char_gen, - **kwargs) - - -class ListIndexTask(KnownCodeBaseTask): - """Echo i-th value in the given list.""" - - def __init__(self, **kwargs): - def array_index_gen(rand): - l = rand.randrange(1, 16) - i = rand.randrange(l) - return [i] + [rand.randrange(256) for _ in xrange(l)] + [0] - super(type(self), self).__init__( - ',[->,<]>,.', - array_index_gen, - **kwargs) - - -# ============================================================================== -# Tasks based on primaryobjects paper. -# ============================================================================== - - -def string2tokens(string): - return [ord(c) for c in string] - - -def stringlist2tokens(strings): - return [string2tokens(string) for string in strings] - - -def string2tokens_b27(string): - return [ord(c.lower()) - ord('a') + 1 for c in string] - - -def stringlist2tokens_b27(strings): - return [string2tokens_b27(string) for string in strings] - - -class BottlesOfBeerTask(BaseTask): - """Bottles of beer coding task. - - This is a counting task. Code needs to read in an int N and then output - every int from N to 0, each separated by a 0. - """ - base = 256 - input_type = misc.IOType.integer - output_type = misc.IOType.integer - - def make_io_set(self): - return [ - ([1], [1, 0]), - ([2], [2, 0, 1, 0]), - ([3], [3, 0, 2, 0, 1, 0]), - ([4], [4, 0, 3, 0, 2, 0, 1, 0]), - ([5], [5, 0, 4, 0, 3, 0, 2, 0, 1, 0]), - ([6], [6, 0, 5, 0, 4, 0, 3, 0, 2, 0, 1, 0])] - - -class SplitTask(BaseTask): - """Split coding task. - - Code needs to pipe input strings to output, but insert a 0 after every 3 - characters. This is in essence splitting the string into intervals of length - 3. - """ - base = 28 - input_type = misc.IOType.string - output_type = misc.IOType.integer - - def _splicer(self, lst, insert, interval=3): - for i, item in enumerate(lst): - yield item - if (i + 1) % interval == 0 and i < len(lst) - 1: - yield insert - - def __init__(self): - super(type(self), self).__init__() - inputs = stringlist2tokens_b27( - ['hello', 'orange', 'spaghetti', 'wins', 'one']) - targets = [list(self._splicer(i, 27)) for i in inputs] - self._test_cases = list(zip(inputs, targets)) - - def make_io_set(self): - return copy.deepcopy(self._test_cases) - - -class TrimLeftTask(BaseTask): - """Trim left coding task. - - Code needs to pipe input strings to output, but remove everything before the - first quotation char ("). 
- """ - base = 256 - input_type = misc.IOType.integer - output_type = misc.IOType.integer - - def __init__(self): - super(type(self), self).__init__() - inputs = stringlist2tokens( - ['a "inside" over', 'xy "test" rights', 'ca6 "foresting" service', - 'abc"def"yz.', 'A"B"']) - targets = stringlist2tokens( - ['"inside" over', '"test" rights', '"foresting" service', '"def"yz.', - '"B"']) - self._test_cases = list(zip(inputs, targets)) - - def make_io_set(self): - return copy.deepcopy(self._test_cases) diff --git a/spaces/NimaKL/FireWatch5k/README.md b/spaces/NimaKL/FireWatch5k/README.md deleted file mode 100644 index 760ccd4413f6407daa1d4b10f8484353539bff03..0000000000000000000000000000000000000000 --- a/spaces/NimaKL/FireWatch5k/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: FireWatch -emoji: 🔥 -colorFrom: green -colorTo: green -sdk: gradio -sdk_version: 3.24.1 -app_file: app.py -pinned: true ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/NonnaRose/Image-Caption/app.py b/spaces/NonnaRose/Image-Caption/app.py deleted file mode 100644 index 7be02987b945e1cbec1d122414ccf274cfc63e86..0000000000000000000000000000000000000000 --- a/spaces/NonnaRose/Image-Caption/app.py +++ /dev/null @@ -1,41 +0,0 @@ -import torch -import re -import gradio as gr -from transformers import AutoTokenizer, ViTFeatureExtractor, VisionEncoderDecoderModel - -device='cpu' -encoder_checkpoint = "nlpconnect/vit-gpt2-image-captioning" -decoder_checkpoint = "nlpconnect/vit-gpt2-image-captioning" -model_checkpoint = "nlpconnect/vit-gpt2-image-captioning" -feature_extractor = ViTFeatureExtractor.from_pretrained(encoder_checkpoint) -tokenizer = AutoTokenizer.from_pretrained(decoder_checkpoint) -model = VisionEncoderDecoderModel.from_pretrained(model_checkpoint).to(device) - - -def predict(image,max_length=64, num_beams=4): - image = image.convert('RGB') - image = feature_extractor(image, return_tensors="pt").pixel_values.to(device) - clean_text = lambda x: x.replace('<|endoftext|>','').split('\n')[0] - caption_ids = model.generate(image, max_length = max_length)[0] - caption_text = clean_text(tokenizer.decode(caption_ids)) - return caption_text - - - -input = gr.inputs.Image(label="Upload any Image", type = 'pil', optional=True) -output = gr.outputs.Textbox(type="auto",label="Captions") -examples = [f"example{i}.jpg" for i in range(1,7)] - -title = "Image Captioning " -description = "Made by : shreyasdixit.tech" -interface = gr.Interface( - - fn=predict, - description=description, - inputs = input, - theme="grass", - outputs=output, - examples = examples, - title=title, - ) -interface.launch(debug=True) \ No newline at end of file diff --git a/spaces/Norod78/Apocalyptify/app.py b/spaces/Norod78/Apocalyptify/app.py deleted file mode 100644 index 3922ef971add5d1cc835b75819a2afef3866a609..0000000000000000000000000000000000000000 --- a/spaces/Norod78/Apocalyptify/app.py +++ /dev/null @@ -1,94 +0,0 @@ -import os -os.system("pip install --upgrade pip") -os.system("pip install dlib") -import sys -import face_detection -import PIL -from PIL import Image, ImageOps -import numpy as np - -import torch -torch.set_grad_enabled(False) -net = torch.jit.load('apocalyptify_p2s2p_torchscript_cpu.pt') -net.eval() - - -def tensor2im(var): - var = var.cpu().detach().transpose(0, 2).transpose(0, 1).numpy() - var = ((var + 1) / 2) - var[var < 0] = 0 - var[var > 1] = 1 - var = var * 255 - return Image.fromarray(var.astype('uint8')) - -def 
image_as_array(image_in): - im_array = np.array(image_in, np.float32) - im_array = (im_array/255)*2 - 1 - im_array = np.transpose(im_array, (2, 0, 1)) - im_array = np.expand_dims(im_array, 0) - return im_array - -def find_aligned_face(image_in, size=256): - aligned_image, n_faces, quad = face_detection.align(image_in, face_index=0, output_size=size) - return aligned_image, n_faces, quad - -def align_first_face(image_in, size=256): - aligned_image, n_faces, quad = find_aligned_face(image_in,size=size) - if n_faces == 0: - try: - image_in = ImageOps.exif_transpose(image_in) - except: - print("exif problem, not rotating") - image_in = image_in.resize((size, size)) - im_array = image_as_array(image_in) - else: - im_array = image_as_array(aligned_image) - - return im_array - -def img_concat_h(im1, im2): - dst = Image.new('RGB', (im1.width + im2.width, im1.height)) - dst.paste(im1, (0, 0)) - dst.paste(im2, (im1.width, 0)) - return dst - -import gradio as gr - -def face2drag( - img: Image.Image, - size: int -) -> Image.Image: - - aligned_img = align_first_face(img) - if aligned_img is None: - output=None - else: - input = torch.Tensor(aligned_img) - output = net(input) - output = tensor2im(output[0]) - output = img_concat_h(tensor2im(torch.Tensor(aligned_img)[0]), output) - - return output - -import os -import collections -from typing import Union, List -import numpy as np -from PIL import Image -import PIL.Image -import PIL.ImageFile -import numpy as np -import scipy.ndimage -import requests - -def inference(img): - out = face2drag(img, 256) - return out - - -title = "Apocalyptify" -description = "How will your face look after the Apocalypse? Will you become a Zombie? A Ghoul? A person who fights them? Upload an image with a face, or click one of the examples below. If a face could not be detected, A face will still be created based on the features of the input." -article = "

Github Repo
samples: Sample00001 Sample00002 Sample00003 Sample00004 Sample00005
The Apocalypse model was fine tuned on a pre-trained Pixel2Style2Pixel model by Doron Adler
" - -examples=[['Example00001.jpg'],['Example00002.jpg'],['Example00003.jpg'],['Example00004.jpg'],['Example00005.jpg'], ['Example00006.jpg']] -gr.Interface(inference, gr.inputs.Image(type="pil",shape=(1024,1024)), gr.outputs.Image(type="pil"),title=title,description=description,article=article,examples=examples,enable_queue=True,allow_flagging=False).launch() diff --git a/spaces/Nultx/VITS-TTS/transforms.py b/spaces/Nultx/VITS-TTS/transforms.py deleted file mode 100644 index 4793d67ca5a5630e0ffe0f9fb29445c949e64dae..0000000000000000000000000000000000000000 --- a/spaces/Nultx/VITS-TTS/transforms.py +++ /dev/null @@ -1,193 +0,0 @@ -import torch -from torch.nn import functional as F - -import numpy as np - - -DEFAULT_MIN_BIN_WIDTH = 1e-3 -DEFAULT_MIN_BIN_HEIGHT = 1e-3 -DEFAULT_MIN_DERIVATIVE = 1e-3 - - -def piecewise_rational_quadratic_transform(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails=None, - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - - if tails is None: - spline_fn = rational_quadratic_spline - spline_kwargs = {} - else: - spline_fn = unconstrained_rational_quadratic_spline - spline_kwargs = { - 'tails': tails, - 'tail_bound': tail_bound - } - - outputs, logabsdet = spline_fn( - inputs=inputs, - unnormalized_widths=unnormalized_widths, - unnormalized_heights=unnormalized_heights, - unnormalized_derivatives=unnormalized_derivatives, - inverse=inverse, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - **spline_kwargs - ) - return outputs, logabsdet - - -def searchsorted(bin_locations, inputs, eps=1e-6): - bin_locations[..., -1] += eps - return torch.sum( - inputs[..., None] >= bin_locations, - dim=-1 - ) - 1 - - -def unconstrained_rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails='linear', - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound) - outside_interval_mask = ~inside_interval_mask - - outputs = torch.zeros_like(inputs) - logabsdet = torch.zeros_like(inputs) - - if tails == 'linear': - unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1)) - constant = np.log(np.exp(1 - min_derivative) - 1) - unnormalized_derivatives[..., 0] = constant - unnormalized_derivatives[..., -1] = constant - - outputs[outside_interval_mask] = inputs[outside_interval_mask] - logabsdet[outside_interval_mask] = 0 - else: - raise RuntimeError('{} tails are not implemented.'.format(tails)) - - outputs[inside_interval_mask], logabsdet[inside_interval_mask] = rational_quadratic_spline( - inputs=inputs[inside_interval_mask], - unnormalized_widths=unnormalized_widths[inside_interval_mask, :], - unnormalized_heights=unnormalized_heights[inside_interval_mask, :], - unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :], - inverse=inverse, - left=-tail_bound, right=tail_bound, bottom=-tail_bound, top=tail_bound, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative - ) - - return outputs, logabsdet - -def rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - left=0., right=1., bottom=0., top=1., - 
min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - if torch.min(inputs) < left or torch.max(inputs) > right: - raise ValueError('Input to a transform is not within its domain') - - num_bins = unnormalized_widths.shape[-1] - - if min_bin_width * num_bins > 1.0: - raise ValueError('Minimal bin width too large for the number of bins') - if min_bin_height * num_bins > 1.0: - raise ValueError('Minimal bin height too large for the number of bins') - - widths = F.softmax(unnormalized_widths, dim=-1) - widths = min_bin_width + (1 - min_bin_width * num_bins) * widths - cumwidths = torch.cumsum(widths, dim=-1) - cumwidths = F.pad(cumwidths, pad=(1, 0), mode='constant', value=0.0) - cumwidths = (right - left) * cumwidths + left - cumwidths[..., 0] = left - cumwidths[..., -1] = right - widths = cumwidths[..., 1:] - cumwidths[..., :-1] - - derivatives = min_derivative + F.softplus(unnormalized_derivatives) - - heights = F.softmax(unnormalized_heights, dim=-1) - heights = min_bin_height + (1 - min_bin_height * num_bins) * heights - cumheights = torch.cumsum(heights, dim=-1) - cumheights = F.pad(cumheights, pad=(1, 0), mode='constant', value=0.0) - cumheights = (top - bottom) * cumheights + bottom - cumheights[..., 0] = bottom - cumheights[..., -1] = top - heights = cumheights[..., 1:] - cumheights[..., :-1] - - if inverse: - bin_idx = searchsorted(cumheights, inputs)[..., None] - else: - bin_idx = searchsorted(cumwidths, inputs)[..., None] - - input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0] - input_bin_widths = widths.gather(-1, bin_idx)[..., 0] - - input_cumheights = cumheights.gather(-1, bin_idx)[..., 0] - delta = heights / widths - input_delta = delta.gather(-1, bin_idx)[..., 0] - - input_derivatives = derivatives.gather(-1, bin_idx)[..., 0] - input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0] - - input_heights = heights.gather(-1, bin_idx)[..., 0] - - if inverse: - a = (((inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta) - + input_heights * (input_delta - input_derivatives))) - b = (input_heights * input_derivatives - - (inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta)) - c = - input_delta * (inputs - input_cumheights) - - discriminant = b.pow(2) - 4 * a * c - assert (discriminant >= 0).all() - - root = (2 * c) / (-b - torch.sqrt(discriminant)) - outputs = root * input_bin_widths + input_cumwidths - - theta_one_minus_theta = root * (1 - root) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * root.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - root).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, -logabsdet - else: - theta = (inputs - input_cumwidths) / input_bin_widths - theta_one_minus_theta = theta * (1 - theta) - - numerator = input_heights * (input_delta * theta.pow(2) - + input_derivatives * theta_one_minus_theta) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - outputs = input_cumheights + numerator / denominator - - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * theta.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - 
theta).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, logabsdet diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/.github/ISSUE_TEMPLATE/how-to-question.md b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/.github/ISSUE_TEMPLATE/how-to-question.md deleted file mode 100644 index 04f3f15d3ed391e26ca87f726ae88f30d1d414ab..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/.github/ISSUE_TEMPLATE/how-to-question.md +++ /dev/null @@ -1,33 +0,0 @@ ---- -name: ❓ Questions/Help -about: If you have questions, please first search existing issues and docs -labels: 'question, needs triage' ---- - -## ❓ Questions and Help - -### Before asking: -1. search the issues. -2. search the docs. - - - -#### What is your question? - -#### Code - - - -#### What have you tried? - -#### What's your environment? - - - fairseq Version (e.g., 1.0 or main): - - PyTorch Version (e.g., 1.0) - - OS (e.g., Linux): - - How you installed fairseq (`pip`, source): - - Build command you used (if compiling from source): - - Python version: - - CUDA/cuDNN version: - - GPU models and configuration: - - Any other relevant information: diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/docs/conf.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/docs/conf.py deleted file mode 100644 index 87b0db98c77d0c240c030a0b48354c86b84358d1..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/docs/conf.py +++ /dev/null @@ -1,134 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding: utf-8 -*- -# -# fairseq documentation build configuration file, created by -# sphinx-quickstart on Fri Aug 17 21:45:30 2018. -# -# This file is execfile()d with the current directory set to its -# containing dir. -# -# Note that not all possible configuration values are present in this -# autogenerated file. -# -# All configuration values have a default; values that are commented out -# serve to show the default. - -# If extensions (or modules to document with autodoc) are in another directory, -# add these directories to sys.path here. If the directory is relative to the -# documentation root, use os.path.abspath to make it absolute, like shown here. - -import os -import sys -from fairseq import __version__ - - -# source code directory, relative to this file, for sphinx-autobuild -sys.path.insert(0, os.path.abspath("..")) - -source_suffix = [".rst"] - -# -- General configuration ------------------------------------------------ - -# If your documentation needs a minimal Sphinx version, state it here. -# -# needs_sphinx = '1.0' - -# Add any Sphinx extension module names here, as strings. They can be -# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom -# ones. -extensions = [ - "sphinx.ext.autodoc", - "sphinx.ext.intersphinx", - "sphinx.ext.viewcode", - "sphinx.ext.napoleon", - "sphinxarg.ext", -] - -# Add any paths that contain templates here, relative to this directory. -templates_path = ["_templates"] - -# The master toctree document. -master_doc = "index" - -# General information about the project. -project = "fairseq" -copyright = "Facebook AI Research (FAIR)" -author = "Facebook AI Research (FAIR)" - -github_doc_root = "https://github.com/pytorch/fairseq/tree/main/docs/" - -# The version info for the project you're documenting, acts as replacement for -# |version| and |release|, also used in various other places throughout the -# built documents. -# -# The short X.Y version. 
-version = __version__ -# The full version, including alpha/beta/rc tags. -release = __version__ - -# The language for content autogenerated by Sphinx. Refer to documentation -# for a list of supported languages. -# -# This is also used if you do content translation via gettext catalogs. -# Usually you set "language" from the command line for these cases. -language = None - -# List of patterns, relative to source directory, that match files and -# directories to ignore when looking for source files. -# This patterns also effect to html_static_path and html_extra_path -exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"] - -# The name of the Pygments (syntax highlighting) style to use. -pygments_style = "sphinx" -highlight_language = "python" - -# If true, `todo` and `todoList` produce output, else they produce nothing. -todo_include_todos = False - - -# -- Options for HTML output ---------------------------------------------- - -# The theme to use for HTML and HTML Help pages. See the documentation for -# a list of builtin themes. -# -html_theme = "sphinx_rtd_theme" - -# Theme options are theme-specific and customize the look and feel of a theme -# further. For a list of options available for each theme, see the -# documentation. -# -# html_theme_options = {} - -# Add any paths that contain custom static files (such as style sheets) here, -# relative to this directory. They are copied after the builtin static files, -# so a file named "default.css" will overwrite the builtin "default.css". -html_static_path = ["_static"] - -html_context = { - "css_files": [ - "_static/theme_overrides.css", # override wide tables in RTD theme - ], -} - -# Custom sidebar templates, must be a dictionary that maps document names -# to template names. -# -# This is required for the alabaster theme -# refs: http://alabaster.readthedocs.io/en/latest/installation.html#sidebars -# html_sidebars = { -# '**': [ -# 'about.html', -# 'navigation.html', -# 'relations.html', # needs 'show_related': True theme option to display -# 'searchbox.html', -# 'donate.html', -# ] -# } - - -# Example configuration for intersphinx: refer to the Python standard library. -intersphinx_mapping = { - "numpy": ("http://docs.scipy.org/doc/numpy/", None), - "python": ("https://docs.python.org/", None), - "torch": ("https://pytorch.org/docs/master/", None), -} diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_synthesis/preprocessing/denoiser/pretrained.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_synthesis/preprocessing/denoiser/pretrained.py deleted file mode 100644 index 2fa846075b6872cdcc0baebca0b9acbb9ffcd287..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_synthesis/preprocessing/denoiser/pretrained.py +++ /dev/null @@ -1,81 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
-# author: adefossez - -import logging - -import torch.hub - -from .demucs import Demucs -from .utils import deserialize_model - -logger = logging.getLogger(__name__) -ROOT = "https://dl.fbaipublicfiles.com/adiyoss/denoiser/" -DNS_48_URL = ROOT + "dns48-11decc9d8e3f0998.th" -DNS_64_URL = ROOT + "dns64-a7761ff99a7d5bb6.th" -MASTER_64_URL = ROOT + "master64-8a5dfb4bb92753dd.th" - - -def _demucs(pretrained, url, **kwargs): - model = Demucs(**kwargs) - if pretrained: - state_dict = torch.hub.load_state_dict_from_url(url, map_location='cpu') - model.load_state_dict(state_dict) - return model - - -def dns48(pretrained=True): - return _demucs(pretrained, DNS_48_URL, hidden=48) - - -def dns64(pretrained=True): - return _demucs(pretrained, DNS_64_URL, hidden=64) - - -def master64(pretrained=True): - return _demucs(pretrained, MASTER_64_URL, hidden=64) - - -def add_model_flags(parser): - group = parser.add_mutually_exclusive_group(required=False) - group.add_argument( - "-m", "--model_path", help="Path to local trained model." - ) - group.add_argument( - "--dns48", action="store_true", - help="Use pre-trained real time H=48 model trained on DNS." - ) - group.add_argument( - "--dns64", action="store_true", - help="Use pre-trained real time H=64 model trained on DNS." - ) - group.add_argument( - "--master64", action="store_true", - help="Use pre-trained real time H=64 model trained on DNS and Valentini." - ) - - -def get_model(args): - """ - Load local model package or torchhub pre-trained model. - """ - if args.model_path: - logger.info("Loading model from %s", args.model_path) - pkg = torch.load(args.model_path) - model = deserialize_model(pkg) - elif args.dns64: - logger.info("Loading pre-trained real time H=64 model trained on DNS.") - model = dns64() - elif args.master64: - logger.info( - "Loading pre-trained real time H=64 model trained on DNS and Valentini." - ) - model = master64() - else: - logger.info("Loading pre-trained real time H=48 model trained on DNS.") - model = dns48() - logger.debug(model) - return model diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/modules/lightconv_layer/lightconv_cuda.cpp b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/modules/lightconv_layer/lightconv_cuda.cpp deleted file mode 100644 index ece47a8d908b93cec102743070c9057986d39d3f..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/modules/lightconv_layer/lightconv_cuda.cpp +++ /dev/null @@ -1,51 +0,0 @@ -/** - * Copyright (c) Facebook, Inc. and its affiliates. - * - * This source code is licensed under the MIT license found in the - * LICENSE file in the root directory of this source tree. 
- */ - -#include -#include - -std::vector -lightconv_cuda_forward(at::Tensor input, at::Tensor filters, int padding_l); - -std::vector lightconv_cuda_backward( - at::Tensor gradOutput, - int padding_l, - at::Tensor input, - at::Tensor filters); - -#define CHECK_CUDA(x) \ - AT_ASSERTM(x.type().is_cuda(), #x " must be a CUDA tensor") -#define CHECK_CONTIGUOUS(x) \ - AT_ASSERTM(x.is_contiguous(), #x " must be contiguous") -#define CHECK_INPUT(x) \ - CHECK_CUDA(x); \ - CHECK_CONTIGUOUS(x) - -std::vector -lightconv_forward(at::Tensor input, at::Tensor filters, int padding_l) { - CHECK_INPUT(input); - CHECK_INPUT(filters); - - return lightconv_cuda_forward(input, filters, padding_l); -} - -std::vector lightconv_backward( - at::Tensor gradOutput, - int padding_l, - at::Tensor input, - at::Tensor filters) { - CHECK_INPUT(gradOutput); - CHECK_INPUT(input); - CHECK_INPUT(filters); - - return lightconv_cuda_backward(gradOutput, padding_l, input, filters); -} - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { - m.def("forward", &lightconv_forward, "lighconv forward (CUDA)"); - m.def("backward", &lightconv_backward, "lighconv backward (CUDA)"); -} diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/modules/scalar_bias.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/modules/scalar_bias.py deleted file mode 100644 index c96247c75914fabb8a2b7ff731bb82b588f72690..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/modules/scalar_bias.py +++ /dev/null @@ -1,31 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -# - -import torch - - -class ScalarBias(torch.autograd.Function): - """ - Adds a vector of scalars, used in self-attention mechanism to allow - the model to optionally attend to this vector instead of the past - """ - - @staticmethod - def forward(ctx, input, dim, bias_init): - size = list(input.size()) - size[dim] += 1 - output = input.new(*size).fill_(bias_init) - output.narrow(dim, 1, size[dim] - 1).copy_(input) - ctx.dim = dim - return output - - @staticmethod - def backward(ctx, grad): - return grad.narrow(ctx.dim, 1, grad.size(ctx.dim) - 1), None, None - - -def scalar_bias(input, dim, bias_init=0): - return ScalarBias.apply(input, dim, bias_init) diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/language_model/prepare-wikitext-103.sh b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/language_model/prepare-wikitext-103.sh deleted file mode 100644 index 751302156f0a6829af9c2ee5e0e2ca62c2cd4187..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/language_model/prepare-wikitext-103.sh +++ /dev/null @@ -1,33 +0,0 @@ -#!/bin/bash -# Adapted from https://github.com/facebookresearch/MIXER/blob/master/prepareData.sh - -URLS=( - "https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-103-v1.zip" -) -FILES=( - "wikitext-103-v1.zip" -) - -for ((i=0;i<${#URLS[@]};++i)); do - file=${FILES[i]} - if [ -f $file ]; then - echo "$file already exists, skipping download" - else - url=${URLS[i]} - wget "$url" - if [ -f $file ]; then - echo "$url successfully downloaded." - else - echo "$url not successfully downloaded." - exit -1 - fi - if [ ${file: -4} == ".tgz" ]; then - tar zxvf $file - elif [ ${file: -4} == ".tar" ]; then - tar xvf $file - elif [ ${file: -4} == ".zip" ]; then - unzip $file - fi - fi -done -cd .. 
diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/roberta/wsc/__init__.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/roberta/wsc/__init__.py deleted file mode 100644 index 78afa4728eeed96142900118f6452730023466c9..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/roberta/wsc/__init__.py +++ /dev/null @@ -1,7 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from . import wsc_criterion # noqa -from . import wsc_task # noqa diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/scripts/__init__.py b/spaces/OFA-Sys/OFA-vqa/fairseq/scripts/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/OFA-Sys/OFA-vqa/utils/trie.py b/spaces/OFA-Sys/OFA-vqa/utils/trie.py deleted file mode 100644 index 76d331d87fd99096e8228f34f297379221941045..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/utils/trie.py +++ /dev/null @@ -1,25 +0,0 @@ -from collections import defaultdict - - -class TreeNode(): - def __init__(self): - self.child = defaultdict(TreeNode) - -class Trie: - - def __init__(self, eos): - self.root = TreeNode() - self.eos = eos - - def insert(self, word): - cur = self.root - for c in word: - cur = cur.child[c] - - def get_next_layer(self, word): - cur = self.root - for c in word: - cur = cur.child.get(c) - if cur is None: - return [self.eos] - return list(cur.child.keys()) \ No newline at end of file diff --git a/spaces/OFA-Sys/small-stable-diffusion-v0/app.py b/spaces/OFA-Sys/small-stable-diffusion-v0/app.py deleted file mode 100644 index df96c1f42a1e3bc2a41bc5f0f47a467982c1af1d..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/small-stable-diffusion-v0/app.py +++ /dev/null @@ -1,137 +0,0 @@ -from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline, DPMSolverMultistepScheduler -import gradio as gr -import torch -from PIL import Image - -model_id = 'OFA-Sys/small-stable-diffusion-v0' -prefix = '' - -scheduler = DPMSolverMultistepScheduler.from_pretrained(model_id, subfolder="scheduler") - -pipe = StableDiffusionPipeline.from_pretrained( - model_id, - torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32, - scheduler=scheduler) - -pipe_i2i = StableDiffusionImg2ImgPipeline.from_pretrained( - model_id, - torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32, - scheduler=scheduler) - -if torch.cuda.is_available(): - pipe = pipe.to("cuda") - pipe_i2i = pipe_i2i.to("cuda") - -def error_str(error, title="Error"): - return f"""#### {title} - {error}""" if error else "" - -def inference(prompt, guidance, steps, width=512, height=512, seed=0, img=None, strength=0.5, neg_prompt="", auto_prefix=False): - - generator = torch.Generator('cuda').manual_seed(seed) if seed != 0 else None - prompt = f"{prefix} {prompt}" if auto_prefix else prompt - - try: - if img is not None: - return img_to_img(prompt, neg_prompt, img, strength, guidance, steps, width, height, generator), None - else: - return txt_to_img(prompt, neg_prompt, guidance, steps, width, height, generator), None - except Exception as e: - return None, error_str(e) - -def txt_to_img(prompt, neg_prompt, guidance, steps, width, height, generator): - - result = pipe( - prompt, - negative_prompt = neg_prompt, - num_inference_steps = int(steps), - guidance_scale = guidance, - 
width = width, - height = height, - generator = generator) - - return result.images[0] - -def img_to_img(prompt, neg_prompt, img, strength, guidance, steps, width, height, generator): - - ratio = min(height / img.height, width / img.width) - img = img.resize((int(img.width * ratio), int(img.height * ratio)), Image.LANCZOS) - result = pipe_i2i( - prompt, - negative_prompt = neg_prompt, - init_image = img, - num_inference_steps = int(steps), - strength = strength, - guidance_scale = guidance, - width = width, - height = height, - generator = generator) - - return result.images[0] - -css = """.main-div div{display:inline-flex;align-items:center;gap:.8rem;font-size:1.75rem}.main-div div h1{font-weight:900;margin-bottom:7px}.main-div p{margin-bottom:10px;font-size:94%}a{text-decoration:underline}.tabs{margin-top:0;margin-bottom:0}#gallery{min-height:20rem} -""" -with gr.Blocks(css=css) as demo: - gr.HTML( - f""" -
-              Small Stable Diffusion V0
-              Demo for Small Stable Diffusion V0 Stable Diffusion model.
-              {"Add the following tokens to your prompts for the model to work properly: prefix" if prefix else ""}
-              Running on {"GPU 🔥" if torch.cuda.is_available() else f"CPU 🥶. For faster inference it is recommended to upgrade to GPU in Settings"} after duplicating the space
-              Duplicate Space
- """ - ) - with gr.Row(): - - with gr.Column(scale=55): - with gr.Group(): - with gr.Row(): - prompt = gr.Textbox(label="Prompt", show_label=False, max_lines=2,placeholder=f"{prefix} [your prompt]").style(container=False) - generate = gr.Button(value="Generate").style(rounded=(False, True, True, False)) - - image_out = gr.Image(height=512) - error_output = gr.Markdown() - - with gr.Column(scale=45): - with gr.Tab("Options"): - with gr.Group(): - neg_prompt = gr.Textbox(label="Negative prompt", placeholder="What to exclude from the image") - auto_prefix = gr.Checkbox(label="Prefix styling tokens automatically ()", value=prefix, visible=prefix) - - with gr.Row(): - guidance = gr.Slider(label="Guidance scale", value=7.5, maximum=15) - steps = gr.Slider(label="Steps", value=25, minimum=2, maximum=75, step=1) - - with gr.Row(): - width = gr.Slider(label="Width", value=512, minimum=64, maximum=1024, step=8) - height = gr.Slider(label="Height", value=512, minimum=64, maximum=1024, step=8) - - seed = gr.Slider(0, 2147483647, label='Seed (0 = random)', value=0, step=1) - - with gr.Tab("Image to image"): - with gr.Group(): - image = gr.Image(label="Image", height=256, tool="editor", type="pil") - strength = gr.Slider(label="Transformation strength", minimum=0, maximum=1, step=0.01, value=0.5) - - auto_prefix.change(lambda x: gr.update(placeholder=f"{prefix} [your prompt]" if x else "[Your prompt]"), inputs=auto_prefix, outputs=prompt, queue=False) - - inputs = [prompt, guidance, steps, width, height, seed, image, strength, neg_prompt, auto_prefix] - outputs = [image_out, error_output] - prompt.submit(inference, inputs=inputs, outputs=outputs) - generate.click(inference, inputs=inputs, outputs=outputs) - - gr.HTML(""" -
-              This space was created using SD Space Creator.
- """) - -demo.queue(concurrency_count=1) -demo.launch() diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/modeling/sampling.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/modeling/sampling.py deleted file mode 100644 index a2d0f6648b349c5ea39fd29785b77c961a58fa22..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/modeling/sampling.py +++ /dev/null @@ -1,54 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import torch - -from detectron2.layers import nonzero_tuple - -__all__ = ["subsample_labels"] - - -def subsample_labels( - labels: torch.Tensor, num_samples: int, positive_fraction: float, bg_label: int -): - """ - Return `num_samples` (or fewer, if not enough found) - random samples from `labels` which is a mixture of positives & negatives. - It will try to return as many positives as possible without - exceeding `positive_fraction * num_samples`, and then try to - fill the remaining slots with negatives. - - Args: - labels (Tensor): (N, ) label vector with values: - * -1: ignore - * bg_label: background ("negative") class - * otherwise: one or more foreground ("positive") classes - num_samples (int): The total number of labels with value >= 0 to return. - Values that are not sampled will be filled with -1 (ignore). - positive_fraction (float): The number of subsampled labels with values > 0 - is `min(num_positives, int(positive_fraction * num_samples))`. The number - of negatives sampled is `min(num_negatives, num_samples - num_positives_sampled)`. - In order words, if there are not enough positives, the sample is filled with - negatives. If there are also not enough negatives, then as many elements are - sampled as is possible. - bg_label (int): label index of background ("negative") class. - - Returns: - pos_idx, neg_idx (Tensor): - 1D vector of indices. The total length of both is `num_samples` or fewer. 
- """ - positive = nonzero_tuple((labels != -1) & (labels != bg_label))[0] - negative = nonzero_tuple(labels == bg_label)[0] - - num_pos = int(num_samples * positive_fraction) - # protect against not enough positive examples - num_pos = min(positive.numel(), num_pos) - num_neg = num_samples - num_pos - # protect against not enough negative examples - num_neg = min(negative.numel(), num_neg) - - # randomly select positive and negative examples - perm1 = torch.randperm(positive.numel(), device=positive.device)[:num_pos] - perm2 = torch.randperm(negative.numel(), device=negative.device)[:num_neg] - - pos_idx = positive[perm1] - neg_idx = negative[perm2] - return pos_idx, neg_idx diff --git a/spaces/OpenGVLab/InternGPT/third-party/lama/bin/saicinpainting/training/modules/multiscale.py b/spaces/OpenGVLab/InternGPT/third-party/lama/bin/saicinpainting/training/modules/multiscale.py deleted file mode 100644 index 65f0a54925593e9da8106bfc6d65a4098ce001d7..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/third-party/lama/bin/saicinpainting/training/modules/multiscale.py +++ /dev/null @@ -1,244 +0,0 @@ -from typing import List, Tuple, Union, Optional - -import torch -import torch.nn as nn -import torch.nn.functional as F - -from saicinpainting.training.modules.base import get_conv_block_ctor, get_activation -from saicinpainting.training.modules.pix2pixhd import ResnetBlock - - -class ResNetHead(nn.Module): - def __init__(self, input_nc, ngf=64, n_downsampling=3, n_blocks=9, norm_layer=nn.BatchNorm2d, - padding_type='reflect', conv_kind='default', activation=nn.ReLU(True)): - assert (n_blocks >= 0) - super(ResNetHead, self).__init__() - - conv_layer = get_conv_block_ctor(conv_kind) - - model = [nn.ReflectionPad2d(3), - conv_layer(input_nc, ngf, kernel_size=7, padding=0), - norm_layer(ngf), - activation] - - ### downsample - for i in range(n_downsampling): - mult = 2 ** i - model += [conv_layer(ngf * mult, ngf * mult * 2, kernel_size=3, stride=2, padding=1), - norm_layer(ngf * mult * 2), - activation] - - mult = 2 ** n_downsampling - - ### resnet blocks - for i in range(n_blocks): - model += [ResnetBlock(ngf * mult, padding_type=padding_type, activation=activation, norm_layer=norm_layer, - conv_kind=conv_kind)] - - self.model = nn.Sequential(*model) - - def forward(self, input): - return self.model(input) - - -class ResNetTail(nn.Module): - def __init__(self, output_nc, ngf=64, n_downsampling=3, n_blocks=9, norm_layer=nn.BatchNorm2d, - padding_type='reflect', conv_kind='default', activation=nn.ReLU(True), - up_norm_layer=nn.BatchNorm2d, up_activation=nn.ReLU(True), add_out_act=False, out_extra_layers_n=0, - add_in_proj=None): - assert (n_blocks >= 0) - super(ResNetTail, self).__init__() - - mult = 2 ** n_downsampling - - model = [] - - if add_in_proj is not None: - model.append(nn.Conv2d(add_in_proj, ngf * mult, kernel_size=1)) - - ### resnet blocks - for i in range(n_blocks): - model += [ResnetBlock(ngf * mult, padding_type=padding_type, activation=activation, norm_layer=norm_layer, - conv_kind=conv_kind)] - - ### upsample - for i in range(n_downsampling): - mult = 2 ** (n_downsampling - i) - model += [nn.ConvTranspose2d(ngf * mult, int(ngf * mult / 2), kernel_size=3, stride=2, padding=1, - output_padding=1), - up_norm_layer(int(ngf * mult / 2)), - up_activation] - self.model = nn.Sequential(*model) - - out_layers = [] - for _ in range(out_extra_layers_n): - out_layers += [nn.Conv2d(ngf, ngf, kernel_size=1, padding=0), - up_norm_layer(ngf), - up_activation] - out_layers += 
[nn.ReflectionPad2d(3), - nn.Conv2d(ngf, output_nc, kernel_size=7, padding=0)] - - if add_out_act: - out_layers.append(get_activation('tanh' if add_out_act is True else add_out_act)) - - self.out_proj = nn.Sequential(*out_layers) - - def forward(self, input, return_last_act=False): - features = self.model(input) - out = self.out_proj(features) - if return_last_act: - return out, features - else: - return out - - -class MultiscaleResNet(nn.Module): - def __init__(self, input_nc, output_nc, ngf=64, n_downsampling=2, n_blocks_head=2, n_blocks_tail=6, n_scales=3, - norm_layer=nn.BatchNorm2d, padding_type='reflect', conv_kind='default', activation=nn.ReLU(True), - up_norm_layer=nn.BatchNorm2d, up_activation=nn.ReLU(True), add_out_act=False, out_extra_layers_n=0, - out_cumulative=False, return_only_hr=False): - super().__init__() - - self.heads = nn.ModuleList([ResNetHead(input_nc, ngf=ngf, n_downsampling=n_downsampling, - n_blocks=n_blocks_head, norm_layer=norm_layer, padding_type=padding_type, - conv_kind=conv_kind, activation=activation) - for i in range(n_scales)]) - tail_in_feats = ngf * (2 ** n_downsampling) + ngf - self.tails = nn.ModuleList([ResNetTail(output_nc, - ngf=ngf, n_downsampling=n_downsampling, - n_blocks=n_blocks_tail, norm_layer=norm_layer, padding_type=padding_type, - conv_kind=conv_kind, activation=activation, up_norm_layer=up_norm_layer, - up_activation=up_activation, add_out_act=add_out_act, - out_extra_layers_n=out_extra_layers_n, - add_in_proj=None if (i == n_scales - 1) else tail_in_feats) - for i in range(n_scales)]) - - self.out_cumulative = out_cumulative - self.return_only_hr = return_only_hr - - @property - def num_scales(self): - return len(self.heads) - - def forward(self, ms_inputs: List[torch.Tensor], smallest_scales_num: Optional[int] = None) \ - -> Union[torch.Tensor, List[torch.Tensor]]: - """ - :param ms_inputs: List of inputs of different resolutions from HR to LR - :param smallest_scales_num: int or None, number of smallest scales to take at input - :return: Depending on return_only_hr: - True: Only the most HR output - False: List of outputs of different resolutions from HR to LR - """ - if smallest_scales_num is None: - assert len(self.heads) == len(ms_inputs), (len(self.heads), len(ms_inputs), smallest_scales_num) - smallest_scales_num = len(self.heads) - else: - assert smallest_scales_num == len(ms_inputs) <= len(self.heads), (len(self.heads), len(ms_inputs), smallest_scales_num) - - cur_heads = self.heads[-smallest_scales_num:] - ms_features = [cur_head(cur_inp) for cur_head, cur_inp in zip(cur_heads, ms_inputs)] - - all_outputs = [] - prev_tail_features = None - for i in range(len(ms_features)): - scale_i = -i - 1 - - cur_tail_input = ms_features[-i - 1] - if prev_tail_features is not None: - if prev_tail_features.shape != cur_tail_input.shape: - prev_tail_features = F.interpolate(prev_tail_features, size=cur_tail_input.shape[2:], - mode='bilinear', align_corners=False) - cur_tail_input = torch.cat((cur_tail_input, prev_tail_features), dim=1) - - cur_out, cur_tail_feats = self.tails[scale_i](cur_tail_input, return_last_act=True) - - prev_tail_features = cur_tail_feats - all_outputs.append(cur_out) - - if self.out_cumulative: - all_outputs_cum = [all_outputs[0]] - for i in range(1, len(ms_features)): - cur_out = all_outputs[i] - cur_out_cum = cur_out + F.interpolate(all_outputs_cum[-1], size=cur_out.shape[2:], - mode='bilinear', align_corners=False) - all_outputs_cum.append(cur_out_cum) - all_outputs = all_outputs_cum - - if self.return_only_hr: - 
return all_outputs[-1] - else: - return all_outputs[::-1] - - -class MultiscaleDiscriminatorSimple(nn.Module): - def __init__(self, ms_impl): - super().__init__() - self.ms_impl = nn.ModuleList(ms_impl) - - @property - def num_scales(self): - return len(self.ms_impl) - - def forward(self, ms_inputs: List[torch.Tensor], smallest_scales_num: Optional[int] = None) \ - -> List[Tuple[torch.Tensor, List[torch.Tensor]]]: - """ - :param ms_inputs: List of inputs of different resolutions from HR to LR - :param smallest_scales_num: int or None, number of smallest scales to take at input - :return: List of pairs (prediction, features) for different resolutions from HR to LR - """ - if smallest_scales_num is None: - assert len(self.ms_impl) == len(ms_inputs), (len(self.ms_impl), len(ms_inputs), smallest_scales_num) - smallest_scales_num = len(self.heads) - else: - assert smallest_scales_num == len(ms_inputs) <= len(self.ms_impl), \ - (len(self.ms_impl), len(ms_inputs), smallest_scales_num) - - return [cur_discr(cur_input) for cur_discr, cur_input in zip(self.ms_impl[-smallest_scales_num:], ms_inputs)] - - -class SingleToMultiScaleInputMixin: - def forward(self, x: torch.Tensor) -> List: - orig_height, orig_width = x.shape[2:] - factors = [2 ** i for i in range(self.num_scales)] - ms_inputs = [F.interpolate(x, size=(orig_height // f, orig_width // f), mode='bilinear', align_corners=False) - for f in factors] - return super().forward(ms_inputs) - - -class GeneratorMultiToSingleOutputMixin: - def forward(self, x): - return super().forward(x)[0] - - -class DiscriminatorMultiToSingleOutputMixin: - def forward(self, x): - out_feat_tuples = super().forward(x) - return out_feat_tuples[0][0], [f for _, flist in out_feat_tuples for f in flist] - - -class DiscriminatorMultiToSingleOutputStackedMixin: - def __init__(self, *args, return_feats_only_levels=None, **kwargs): - super().__init__(*args, **kwargs) - self.return_feats_only_levels = return_feats_only_levels - - def forward(self, x): - out_feat_tuples = super().forward(x) - outs = [out for out, _ in out_feat_tuples] - scaled_outs = [outs[0]] + [F.interpolate(cur_out, size=outs[0].shape[-2:], - mode='bilinear', align_corners=False) - for cur_out in outs[1:]] - out = torch.cat(scaled_outs, dim=1) - if self.return_feats_only_levels is not None: - feat_lists = [out_feat_tuples[i][1] for i in self.return_feats_only_levels] - else: - feat_lists = [flist for _, flist in out_feat_tuples] - feats = [f for flist in feat_lists for f in flist] - return out, feats - - -class MultiscaleDiscrSingleInput(SingleToMultiScaleInputMixin, DiscriminatorMultiToSingleOutputStackedMixin, MultiscaleDiscriminatorSimple): - pass - - -class MultiscaleResNetSingle(GeneratorMultiToSingleOutputMixin, SingleToMultiScaleInputMixin, MultiscaleResNet): - pass diff --git a/spaces/OpenMotionLab/MotionGPT/mGPT/metrics/__init__.py b/spaces/OpenMotionLab/MotionGPT/mGPT/metrics/__init__.py deleted file mode 100644 index 33c20815c943401fb346bb0ab5a512dd9713658c..0000000000000000000000000000000000000000 --- a/spaces/OpenMotionLab/MotionGPT/mGPT/metrics/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .base import BaseMetrics diff --git a/spaces/PAIR/Text2Video-Zero/annotator/midas/midas/vit.py b/spaces/PAIR/Text2Video-Zero/annotator/midas/midas/vit.py deleted file mode 100644 index ea46b1be88b261b0dec04f3da0256f5f66f88a74..0000000000000000000000000000000000000000 --- a/spaces/PAIR/Text2Video-Zero/annotator/midas/midas/vit.py +++ /dev/null @@ -1,491 +0,0 @@ -import torch -import torch.nn as nn -import 
timm -import types -import math -import torch.nn.functional as F - - -class Slice(nn.Module): - def __init__(self, start_index=1): - super(Slice, self).__init__() - self.start_index = start_index - - def forward(self, x): - return x[:, self.start_index :] - - -class AddReadout(nn.Module): - def __init__(self, start_index=1): - super(AddReadout, self).__init__() - self.start_index = start_index - - def forward(self, x): - if self.start_index == 2: - readout = (x[:, 0] + x[:, 1]) / 2 - else: - readout = x[:, 0] - return x[:, self.start_index :] + readout.unsqueeze(1) - - -class ProjectReadout(nn.Module): - def __init__(self, in_features, start_index=1): - super(ProjectReadout, self).__init__() - self.start_index = start_index - - self.project = nn.Sequential(nn.Linear(2 * in_features, in_features), nn.GELU()) - - def forward(self, x): - readout = x[:, 0].unsqueeze(1).expand_as(x[:, self.start_index :]) - features = torch.cat((x[:, self.start_index :], readout), -1) - - return self.project(features) - - -class Transpose(nn.Module): - def __init__(self, dim0, dim1): - super(Transpose, self).__init__() - self.dim0 = dim0 - self.dim1 = dim1 - - def forward(self, x): - x = x.transpose(self.dim0, self.dim1) - return x - - -def forward_vit(pretrained, x): - b, c, h, w = x.shape - - glob = pretrained.model.forward_flex(x) - - layer_1 = pretrained.activations["1"] - layer_2 = pretrained.activations["2"] - layer_3 = pretrained.activations["3"] - layer_4 = pretrained.activations["4"] - - layer_1 = pretrained.act_postprocess1[0:2](layer_1) - layer_2 = pretrained.act_postprocess2[0:2](layer_2) - layer_3 = pretrained.act_postprocess3[0:2](layer_3) - layer_4 = pretrained.act_postprocess4[0:2](layer_4) - - unflatten = nn.Sequential( - nn.Unflatten( - 2, - torch.Size( - [ - h // pretrained.model.patch_size[1], - w // pretrained.model.patch_size[0], - ] - ), - ) - ) - - if layer_1.ndim == 3: - layer_1 = unflatten(layer_1) - if layer_2.ndim == 3: - layer_2 = unflatten(layer_2) - if layer_3.ndim == 3: - layer_3 = unflatten(layer_3) - if layer_4.ndim == 3: - layer_4 = unflatten(layer_4) - - layer_1 = pretrained.act_postprocess1[3 : len(pretrained.act_postprocess1)](layer_1) - layer_2 = pretrained.act_postprocess2[3 : len(pretrained.act_postprocess2)](layer_2) - layer_3 = pretrained.act_postprocess3[3 : len(pretrained.act_postprocess3)](layer_3) - layer_4 = pretrained.act_postprocess4[3 : len(pretrained.act_postprocess4)](layer_4) - - return layer_1, layer_2, layer_3, layer_4 - - -def _resize_pos_embed(self, posemb, gs_h, gs_w): - posemb_tok, posemb_grid = ( - posemb[:, : self.start_index], - posemb[0, self.start_index :], - ) - - gs_old = int(math.sqrt(len(posemb_grid))) - - posemb_grid = posemb_grid.reshape(1, gs_old, gs_old, -1).permute(0, 3, 1, 2) - posemb_grid = F.interpolate(posemb_grid, size=(gs_h, gs_w), mode="bilinear") - posemb_grid = posemb_grid.permute(0, 2, 3, 1).reshape(1, gs_h * gs_w, -1) - - posemb = torch.cat([posemb_tok, posemb_grid], dim=1) - - return posemb - - -def forward_flex(self, x): - b, c, h, w = x.shape - - pos_embed = self._resize_pos_embed( - self.pos_embed, h // self.patch_size[1], w // self.patch_size[0] - ) - - B = x.shape[0] - - if hasattr(self.patch_embed, "backbone"): - x = self.patch_embed.backbone(x) - if isinstance(x, (list, tuple)): - x = x[-1] # last feature if backbone outputs list/tuple of features - - x = self.patch_embed.proj(x).flatten(2).transpose(1, 2) - - if getattr(self, "dist_token", None) is not None: - cls_tokens = self.cls_token.expand( - B, -1, -1 - ) # stole 
cls_tokens impl from Phil Wang, thanks - dist_token = self.dist_token.expand(B, -1, -1) - x = torch.cat((cls_tokens, dist_token, x), dim=1) - else: - cls_tokens = self.cls_token.expand( - B, -1, -1 - ) # stole cls_tokens impl from Phil Wang, thanks - x = torch.cat((cls_tokens, x), dim=1) - - x = x + pos_embed - x = self.pos_drop(x) - - for blk in self.blocks: - x = blk(x) - - x = self.norm(x) - - return x - - -activations = {} - - -def get_activation(name): - def hook(model, input, output): - activations[name] = output - - return hook - - -def get_readout_oper(vit_features, features, use_readout, start_index=1): - if use_readout == "ignore": - readout_oper = [Slice(start_index)] * len(features) - elif use_readout == "add": - readout_oper = [AddReadout(start_index)] * len(features) - elif use_readout == "project": - readout_oper = [ - ProjectReadout(vit_features, start_index) for out_feat in features - ] - else: - assert ( - False - ), "wrong operation for readout token, use_readout can be 'ignore', 'add', or 'project'" - - return readout_oper - - -def _make_vit_b16_backbone( - model, - features=[96, 192, 384, 768], - size=[384, 384], - hooks=[2, 5, 8, 11], - vit_features=768, - use_readout="ignore", - start_index=1, -): - pretrained = nn.Module() - - pretrained.model = model - pretrained.model.blocks[hooks[0]].register_forward_hook(get_activation("1")) - pretrained.model.blocks[hooks[1]].register_forward_hook(get_activation("2")) - pretrained.model.blocks[hooks[2]].register_forward_hook(get_activation("3")) - pretrained.model.blocks[hooks[3]].register_forward_hook(get_activation("4")) - - pretrained.activations = activations - - readout_oper = get_readout_oper(vit_features, features, use_readout, start_index) - - # 32, 48, 136, 384 - pretrained.act_postprocess1 = nn.Sequential( - readout_oper[0], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[0], - kernel_size=1, - stride=1, - padding=0, - ), - nn.ConvTranspose2d( - in_channels=features[0], - out_channels=features[0], - kernel_size=4, - stride=4, - padding=0, - bias=True, - dilation=1, - groups=1, - ), - ) - - pretrained.act_postprocess2 = nn.Sequential( - readout_oper[1], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[1], - kernel_size=1, - stride=1, - padding=0, - ), - nn.ConvTranspose2d( - in_channels=features[1], - out_channels=features[1], - kernel_size=2, - stride=2, - padding=0, - bias=True, - dilation=1, - groups=1, - ), - ) - - pretrained.act_postprocess3 = nn.Sequential( - readout_oper[2], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[2], - kernel_size=1, - stride=1, - padding=0, - ), - ) - - pretrained.act_postprocess4 = nn.Sequential( - readout_oper[3], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[3], - kernel_size=1, - stride=1, - padding=0, - ), - nn.Conv2d( - in_channels=features[3], - out_channels=features[3], - kernel_size=3, - stride=2, - padding=1, - ), - ) - - pretrained.model.start_index = start_index - pretrained.model.patch_size = [16, 16] - - # We inject this function into the VisionTransformer instances so that - # we can use it with interpolated position embeddings without modifying the library source. 
- pretrained.model.forward_flex = types.MethodType(forward_flex, pretrained.model) - pretrained.model._resize_pos_embed = types.MethodType( - _resize_pos_embed, pretrained.model - ) - - return pretrained - - -def _make_pretrained_vitl16_384(pretrained, use_readout="ignore", hooks=None): - model = timm.create_model("vit_large_patch16_384", pretrained=pretrained) - - hooks = [5, 11, 17, 23] if hooks == None else hooks - return _make_vit_b16_backbone( - model, - features=[256, 512, 1024, 1024], - hooks=hooks, - vit_features=1024, - use_readout=use_readout, - ) - - -def _make_pretrained_vitb16_384(pretrained, use_readout="ignore", hooks=None): - model = timm.create_model("vit_base_patch16_384", pretrained=pretrained) - - hooks = [2, 5, 8, 11] if hooks == None else hooks - return _make_vit_b16_backbone( - model, features=[96, 192, 384, 768], hooks=hooks, use_readout=use_readout - ) - - -def _make_pretrained_deitb16_384(pretrained, use_readout="ignore", hooks=None): - model = timm.create_model("vit_deit_base_patch16_384", pretrained=pretrained) - - hooks = [2, 5, 8, 11] if hooks == None else hooks - return _make_vit_b16_backbone( - model, features=[96, 192, 384, 768], hooks=hooks, use_readout=use_readout - ) - - -def _make_pretrained_deitb16_distil_384(pretrained, use_readout="ignore", hooks=None): - model = timm.create_model( - "vit_deit_base_distilled_patch16_384", pretrained=pretrained - ) - - hooks = [2, 5, 8, 11] if hooks == None else hooks - return _make_vit_b16_backbone( - model, - features=[96, 192, 384, 768], - hooks=hooks, - use_readout=use_readout, - start_index=2, - ) - - -def _make_vit_b_rn50_backbone( - model, - features=[256, 512, 768, 768], - size=[384, 384], - hooks=[0, 1, 8, 11], - vit_features=768, - use_vit_only=False, - use_readout="ignore", - start_index=1, -): - pretrained = nn.Module() - - pretrained.model = model - - if use_vit_only == True: - pretrained.model.blocks[hooks[0]].register_forward_hook(get_activation("1")) - pretrained.model.blocks[hooks[1]].register_forward_hook(get_activation("2")) - else: - pretrained.model.patch_embed.backbone.stages[0].register_forward_hook( - get_activation("1") - ) - pretrained.model.patch_embed.backbone.stages[1].register_forward_hook( - get_activation("2") - ) - - pretrained.model.blocks[hooks[2]].register_forward_hook(get_activation("3")) - pretrained.model.blocks[hooks[3]].register_forward_hook(get_activation("4")) - - pretrained.activations = activations - - readout_oper = get_readout_oper(vit_features, features, use_readout, start_index) - - if use_vit_only == True: - pretrained.act_postprocess1 = nn.Sequential( - readout_oper[0], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[0], - kernel_size=1, - stride=1, - padding=0, - ), - nn.ConvTranspose2d( - in_channels=features[0], - out_channels=features[0], - kernel_size=4, - stride=4, - padding=0, - bias=True, - dilation=1, - groups=1, - ), - ) - - pretrained.act_postprocess2 = nn.Sequential( - readout_oper[1], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[1], - kernel_size=1, - stride=1, - padding=0, - ), - nn.ConvTranspose2d( - in_channels=features[1], - out_channels=features[1], - kernel_size=2, - stride=2, - padding=0, - bias=True, - dilation=1, - groups=1, - ), - ) - else: - pretrained.act_postprocess1 = nn.Sequential( - nn.Identity(), nn.Identity(), nn.Identity() - ) - 
pretrained.act_postprocess2 = nn.Sequential( - nn.Identity(), nn.Identity(), nn.Identity() - ) - - pretrained.act_postprocess3 = nn.Sequential( - readout_oper[2], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[2], - kernel_size=1, - stride=1, - padding=0, - ), - ) - - pretrained.act_postprocess4 = nn.Sequential( - readout_oper[3], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[3], - kernel_size=1, - stride=1, - padding=0, - ), - nn.Conv2d( - in_channels=features[3], - out_channels=features[3], - kernel_size=3, - stride=2, - padding=1, - ), - ) - - pretrained.model.start_index = start_index - pretrained.model.patch_size = [16, 16] - - # We inject this function into the VisionTransformer instances so that - # we can use it with interpolated position embeddings without modifying the library source. - pretrained.model.forward_flex = types.MethodType(forward_flex, pretrained.model) - - # We inject this function into the VisionTransformer instances so that - # we can use it with interpolated position embeddings without modifying the library source. - pretrained.model._resize_pos_embed = types.MethodType( - _resize_pos_embed, pretrained.model - ) - - return pretrained - - -def _make_pretrained_vitb_rn50_384( - pretrained, use_readout="ignore", hooks=None, use_vit_only=False -): - model = timm.create_model("vit_base_resnet50_384", pretrained=pretrained) - - hooks = [0, 1, 8, 11] if hooks == None else hooks - return _make_vit_b_rn50_backbone( - model, - features=[256, 512, 768, 768], - size=[384, 384], - hooks=hooks, - use_vit_only=use_vit_only, - use_readout=use_readout, - ) diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/configs/_base_/models/psanet_r50-d8.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/configs/_base_/models/psanet_r50-d8.py deleted file mode 100644 index 689513fa9d2a40f14bf0ae4ae61f38f0dcc1b3da..0000000000000000000000000000000000000000 --- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/configs/_base_/models/psanet_r50-d8.py +++ /dev/null @@ -1,49 +0,0 @@ -# model settings -norm_cfg = dict(type='SyncBN', requires_grad=True) -model = dict( - type='EncoderDecoder', - pretrained='open-mmlab://resnet50_v1c', - backbone=dict( - type='ResNetV1c', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - dilations=(1, 1, 2, 4), - strides=(1, 2, 1, 1), - norm_cfg=norm_cfg, - norm_eval=False, - style='pytorch', - contract_dilation=True), - decode_head=dict( - type='PSAHead', - in_channels=2048, - in_index=3, - channels=512, - mask_size=(97, 97), - psa_type='bi-direction', - compact=False, - shrink_factor=2, - normalization_factor=1.0, - psa_softmax=True, - dropout_ratio=0.1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)), - auxiliary_head=dict( - type='FCNHead', - in_channels=1024, - in_index=2, - channels=256, - num_convs=1, - concat_input=False, - dropout_ratio=0.1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)), - # model training and testing settings - train_cfg=dict(), - test_cfg=dict(mode='whole')) diff --git a/spaces/PKUWilliamYang/StyleGANEX/datasets/ffhq_degradation_dataset.py b/spaces/PKUWilliamYang/StyleGANEX/datasets/ffhq_degradation_dataset.py deleted file mode 
100644 index b43ff6b1d82c1c491900f119a62f259ac4294b61..0000000000000000000000000000000000000000 --- a/spaces/PKUWilliamYang/StyleGANEX/datasets/ffhq_degradation_dataset.py +++ /dev/null @@ -1,235 +0,0 @@ -import cv2 -import math -import numpy as np -import os.path as osp -import torch -import torch.utils.data as data -from basicsr.data import degradations as degradations -from basicsr.data.data_util import paths_from_folder -from basicsr.data.transforms import augment -from basicsr.utils import FileClient, get_root_logger, imfrombytes, img2tensor -from basicsr.utils.registry import DATASET_REGISTRY -from torchvision.transforms.functional import (adjust_brightness, adjust_contrast, adjust_hue, adjust_saturation, - normalize) - - -@DATASET_REGISTRY.register() -class FFHQDegradationDataset(data.Dataset): - """FFHQ dataset for GFPGAN. - It reads high resolution images, and then generate low-quality (LQ) images on-the-fly. - Args: - opt (dict): Config for train datasets. It contains the following keys: - dataroot_gt (str): Data root path for gt. - io_backend (dict): IO backend type and other kwarg. - mean (list | tuple): Image mean. - std (list | tuple): Image std. - use_hflip (bool): Whether to horizontally flip. - Please see more options in the codes. - """ - - def __init__(self, opt): - super(FFHQDegradationDataset, self).__init__() - self.opt = opt - # file client (io backend) - self.file_client = None - self.io_backend_opt = opt['io_backend'] - - self.gt_folder = opt['dataroot_gt'] - self.mean = opt['mean'] - self.std = opt['std'] - self.out_size = opt['out_size'] - - self.crop_components = opt.get('crop_components', False) # facial components - self.eye_enlarge_ratio = opt.get('eye_enlarge_ratio', 1) # whether enlarge eye regions - - if self.crop_components: - # load component list from a pre-process pth files - self.components_list = torch.load(opt.get('component_path')) - - # file client (lmdb io backend) - if self.io_backend_opt['type'] == 'lmdb': - self.io_backend_opt['db_paths'] = self.gt_folder - if not self.gt_folder.endswith('.lmdb'): - raise ValueError(f"'dataroot_gt' should end with '.lmdb', but received {self.gt_folder}") - with open(osp.join(self.gt_folder, 'meta_info.txt')) as fin: - self.paths = [line.split('.')[0] for line in fin] - else: - # disk backend: scan file list from a folder - self.paths = paths_from_folder(self.gt_folder) - - # degradation configurations - self.blur_kernel_size = opt['blur_kernel_size'] - self.kernel_list = opt['kernel_list'] - self.kernel_prob = opt['kernel_prob'] - self.blur_sigma = opt['blur_sigma'] - self.downsample_range = opt['downsample_range'] - self.noise_range = opt['noise_range'] - self.jpeg_range = opt['jpeg_range'] - - # color jitter - self.color_jitter_prob = opt.get('color_jitter_prob') - self.color_jitter_pt_prob = opt.get('color_jitter_pt_prob') - self.color_jitter_shift = opt.get('color_jitter_shift', 20) - # to gray - self.gray_prob = opt.get('gray_prob') - - logger = get_root_logger() - logger.info(f'Blur: blur_kernel_size {self.blur_kernel_size}, sigma: [{", ".join(map(str, self.blur_sigma))}]') - logger.info(f'Downsample: downsample_range [{", ".join(map(str, self.downsample_range))}]') - logger.info(f'Noise: [{", ".join(map(str, self.noise_range))}]') - logger.info(f'JPEG compression: [{", ".join(map(str, self.jpeg_range))}]') - - if self.color_jitter_prob is not None: - logger.info(f'Use random color jitter. 
Prob: {self.color_jitter_prob}, shift: {self.color_jitter_shift}') - if self.gray_prob is not None: - logger.info(f'Use random gray. Prob: {self.gray_prob}') - self.color_jitter_shift /= 255. - - @staticmethod - def color_jitter(img, shift): - """jitter color: randomly jitter the RGB values, in numpy formats""" - jitter_val = np.random.uniform(-shift, shift, 3).astype(np.float32) - img = img + jitter_val - img = np.clip(img, 0, 1) - return img - - @staticmethod - def color_jitter_pt(img, brightness, contrast, saturation, hue): - """jitter color: randomly jitter the brightness, contrast, saturation, and hue, in torch Tensor formats""" - fn_idx = torch.randperm(4) - for fn_id in fn_idx: - if fn_id == 0 and brightness is not None: - brightness_factor = torch.tensor(1.0).uniform_(brightness[0], brightness[1]).item() - img = adjust_brightness(img, brightness_factor) - - if fn_id == 1 and contrast is not None: - contrast_factor = torch.tensor(1.0).uniform_(contrast[0], contrast[1]).item() - img = adjust_contrast(img, contrast_factor) - - if fn_id == 2 and saturation is not None: - saturation_factor = torch.tensor(1.0).uniform_(saturation[0], saturation[1]).item() - img = adjust_saturation(img, saturation_factor) - - if fn_id == 3 and hue is not None: - hue_factor = torch.tensor(1.0).uniform_(hue[0], hue[1]).item() - img = adjust_hue(img, hue_factor) - return img - - def get_component_coordinates(self, index, status): - """Get facial component (left_eye, right_eye, mouth) coordinates from a pre-loaded pth file""" - components_bbox = self.components_list[f'{index:08d}'] - if status[0]: # hflip - # exchange right and left eye - tmp = components_bbox['left_eye'] - components_bbox['left_eye'] = components_bbox['right_eye'] - components_bbox['right_eye'] = tmp - # modify the width coordinate - components_bbox['left_eye'][0] = self.out_size - components_bbox['left_eye'][0] - components_bbox['right_eye'][0] = self.out_size - components_bbox['right_eye'][0] - components_bbox['mouth'][0] = self.out_size - components_bbox['mouth'][0] - - # get coordinates - locations = [] - for part in ['left_eye', 'right_eye', 'mouth']: - mean = components_bbox[part][0:2] - mean[0] = mean[0] * 2 + 128 ######## - mean[1] = mean[1] * 2 + 128 ######## - half_len = components_bbox[part][2] * 2 ######## - if 'eye' in part: - half_len *= self.eye_enlarge_ratio - loc = np.hstack((mean - half_len + 1, mean + half_len)) - loc = torch.from_numpy(loc).float() - locations.append(loc) - return locations - - def __getitem__(self, index): - if self.file_client is None: - self.file_client = FileClient(self.io_backend_opt.pop('type'), **self.io_backend_opt) - - # load gt image - # Shape: (h, w, c); channel order: BGR; image range: [0, 1], float32. 
- gt_path = self.paths[index] - img_bytes = self.file_client.get(gt_path) - img_gt = imfrombytes(img_bytes, float32=True) - - # random horizontal flip - img_gt, status = augment(img_gt, hflip=self.opt['use_hflip'], rotation=False, return_status=True) - h, w, _ = img_gt.shape - - # get facial component coordinates - if self.crop_components: - locations = self.get_component_coordinates(index, status) - loc_left_eye, loc_right_eye, loc_mouth = locations - - # ------------------------ generate lq image ------------------------ # - # blur - kernel = degradations.random_mixed_kernels( - self.kernel_list, - self.kernel_prob, - self.blur_kernel_size, - self.blur_sigma, - self.blur_sigma, [-math.pi, math.pi], - noise_range=None) - img_lq = cv2.filter2D(img_gt, -1, kernel) - # downsample - scale = np.random.uniform(self.downsample_range[0], self.downsample_range[1]) - img_lq = cv2.resize(img_lq, (int(w // scale), int(h // scale)), interpolation=cv2.INTER_LINEAR) - # noise - if self.noise_range is not None: - img_lq = degradations.random_add_gaussian_noise(img_lq, self.noise_range) - # jpeg compression - if self.jpeg_range is not None: - img_lq = degradations.random_add_jpg_compression(img_lq, self.jpeg_range) - - # resize to original size - img_lq = cv2.resize(img_lq, (int(w // self.opt['scale']), int(h // self.opt['scale'])), interpolation=cv2.INTER_LINEAR) - - # random color jitter (only for lq) - if self.color_jitter_prob is not None and (np.random.uniform() < self.color_jitter_prob): - img_lq = self.color_jitter(img_lq, self.color_jitter_shift) - # random to gray (only for lq) - if self.gray_prob and np.random.uniform() < self.gray_prob: - img_lq = cv2.cvtColor(img_lq, cv2.COLOR_BGR2GRAY) - img_lq = np.tile(img_lq[:, :, None], [1, 1, 3]) - if self.opt.get('gt_gray'): # whether convert GT to gray images - img_gt = cv2.cvtColor(img_gt, cv2.COLOR_BGR2GRAY) - img_gt = np.tile(img_gt[:, :, None], [1, 1, 3]) # repeat the color channels - - # BGR to RGB, HWC to CHW, numpy to tensor - #img_gt, img_lq = img2tensor([img_gt, img_lq], bgr2rgb=True, float32=True) - img_gt = img2tensor(img_gt, bgr2rgb=True, float32=True) - img_lq = img2tensor(img_lq, bgr2rgb=True, float32=True) - - # random color jitter (pytorch version) (only for lq) - if self.color_jitter_pt_prob is not None and (np.random.uniform() < self.color_jitter_pt_prob): - brightness = self.opt.get('brightness', (0.5, 1.5)) - contrast = self.opt.get('contrast', (0.5, 1.5)) - saturation = self.opt.get('saturation', (0, 1.5)) - hue = self.opt.get('hue', (-0.1, 0.1)) - img_lq = self.color_jitter_pt(img_lq, brightness, contrast, saturation, hue) - - # round and clip - img_lq = torch.clamp((img_lq * 255.0).round(), 0, 255) / 255. 
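-        # By this point img_lq has passed through the full synthetic degradation
-        # pipeline (blur -> downsample -> noise -> JPEG -> resize back -> optional
-        # color jitter / grayscale); both img_lq and img_gt are RGB, CHW, in [0, 1].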
- - # normalize - normalize(img_gt, self.mean, self.std, inplace=True) - normalize(img_lq, self.mean, self.std, inplace=True) - - ''' - if self.crop_components: - return_dict = { - 'lq': img_lq, - 'gt': img_gt, - 'gt_path': gt_path, - 'loc_left_eye': loc_left_eye, - 'loc_right_eye': loc_right_eye, - 'loc_mouth': loc_mouth - } - return return_dict - else: - return {'lq': img_lq, 'gt': img_gt, 'gt_path': gt_path} - ''' - return img_lq, img_gt - - def __len__(self): - return len(self.paths) \ No newline at end of file diff --git a/spaces/PKUWilliamYang/StyleGANEX/models/encoders/psp_encoders.py b/spaces/PKUWilliamYang/StyleGANEX/models/encoders/psp_encoders.py deleted file mode 100644 index b8ed6a10130312fa44923db44f953be90936f26d..0000000000000000000000000000000000000000 --- a/spaces/PKUWilliamYang/StyleGANEX/models/encoders/psp_encoders.py +++ /dev/null @@ -1,357 +0,0 @@ -import numpy as np -import torch -import torch.nn.functional as F -from torch import nn -from torch.nn import Linear, Conv2d, BatchNorm2d, PReLU, Sequential, Module - -from models.encoders.helpers import get_blocks, Flatten, bottleneck_IR, bottleneck_IR_SE -from models.stylegan2.model import EqualLinear - - -class GradualStyleBlock(Module): - def __init__(self, in_c, out_c, spatial, max_pooling=False): - super(GradualStyleBlock, self).__init__() - self.out_c = out_c - self.spatial = spatial - self.max_pooling = max_pooling - num_pools = int(np.log2(spatial)) - modules = [] - modules += [Conv2d(in_c, out_c, kernel_size=3, stride=2, padding=1), - nn.LeakyReLU()] - for i in range(num_pools - 1): - modules += [ - Conv2d(out_c, out_c, kernel_size=3, stride=2, padding=1), - nn.LeakyReLU() - ] - self.convs = nn.Sequential(*modules) - self.linear = EqualLinear(out_c, out_c, lr_mul=1) - - def forward(self, x): - x = self.convs(x) - # To make E accept more general H*W images, we add global average pooling to - # resize all features to 1*1*512 before mapping to latent codes - if self.max_pooling: - x = F.adaptive_max_pool2d(x, 1) ##### modified - else: - x = F.adaptive_avg_pool2d(x, 1) ##### modified - x = x.view(-1, self.out_c) - x = self.linear(x) - return x - -class AdaptiveInstanceNorm(nn.Module): - def __init__(self, fin, style_dim=512): - super().__init__() - - self.norm = nn.InstanceNorm2d(fin, affine=False) - self.style = nn.Linear(style_dim, fin * 2) - - self.style.bias.data[:fin] = 1 - self.style.bias.data[fin:] = 0 - - def forward(self, input, style): - style = self.style(style).unsqueeze(2).unsqueeze(3) - gamma, beta = style.chunk(2, 1) - out = self.norm(input) - out = gamma * out + beta - return out - - -class FusionLayer(Module): ##### modified - def __init__(self, inchannel, outchannel, use_skip_torgb=True, use_att=0): - super(FusionLayer, self).__init__() - - self.transform = nn.Sequential(nn.Conv2d(inchannel, outchannel, kernel_size=3, stride=1, padding=1), - nn.LeakyReLU()) - self.fusion_out = nn.Conv2d(outchannel*2, outchannel, kernel_size=3, stride=1, padding=1) - self.fusion_out.weight.data *= 0.01 - self.fusion_out.weight[:,0:outchannel,1,1].data += torch.eye(outchannel) - - self.use_skip_torgb = use_skip_torgb - if use_skip_torgb: - self.fusion_skip = nn.Conv2d(3+outchannel, 3, kernel_size=3, stride=1, padding=1) - self.fusion_skip.weight.data *= 0.01 - self.fusion_skip.weight[:,0:3,1,1].data += torch.eye(3) - - self.use_att = use_att - if use_att: - modules = [] - modules.append(nn.Linear(512, outchannel)) - for _ in range(use_att): - modules.append(nn.LeakyReLU(negative_slope=0.2, inplace=True)) - 
modules.append(nn.Linear(outchannel, outchannel)) - modules.append(nn.LeakyReLU(negative_slope=0.2, inplace=True)) - self.linear = Sequential(*modules) - self.norm = AdaptiveInstanceNorm(outchannel*2, outchannel) - self.conv = nn.Conv2d(outchannel*2, 1, 3, 1, 1, bias=True) - - def forward(self, feat, out, skip, editing_w=None): - x = self.transform(feat) - # similar to VToonify, use editing vector as condition - # fuse encoder feature and decoder feature with a predicted attention mask m_E - # if self.use_att = False, just fuse them with a simple conv layer - if self.use_att and editing_w is not None: - label = self.linear(editing_w) - m_E = (F.relu(self.conv(self.norm(torch.cat([out, abs(out-x)], dim=1), label)))).tanh() - x = x * m_E - out = self.fusion_out(torch.cat((out, x), dim=1)) - if self.use_skip_torgb: - skip = self.fusion_skip(torch.cat((skip, x), dim=1)) - return out, skip - - -class ResnetBlock(nn.Module): - def __init__(self, dim): - super(ResnetBlock, self).__init__() - - self.conv_block = nn.Sequential(Conv2d(dim, dim, 3, 1, 1), - nn.LeakyReLU(), - Conv2d(dim, dim, 3, 1, 1)) - self.relu = nn.LeakyReLU() - - def forward(self, x): - out = x + self.conv_block(x) - return self.relu(out) - -# trainable light-weight translation network T -# for sketch/mask-to-face translation, -# we add a trainable T to map y to an intermediate domain where E can more easily extract features. -class ResnetGenerator(nn.Module): - def __init__(self, in_channel=19, res_num=2): - super(ResnetGenerator, self).__init__() - - modules = [] - modules.append(Conv2d(in_channel, 16, 3, 2, 1)) - modules.append(nn.LeakyReLU()) - modules.append(Conv2d(16, 16, 3, 2, 1)) - modules.append(nn.LeakyReLU()) - for _ in range(res_num): - modules.append(ResnetBlock(16)) - for _ in range(2): - modules.append(nn.ConvTranspose2d(16, 16, 3, 2, 1, output_padding=1)) - modules.append(nn.LeakyReLU()) - modules.append(Conv2d(16, 64, 3, 1, 1, bias=False)) - modules.append(BatchNorm2d(64)) - modules.append(PReLU(64)) - self.model = Sequential(*modules) - - def forward(self, input): - return self.model(input) - -class GradualStyleEncoder(Module): - def __init__(self, num_layers, mode='ir', opts=None): - super(GradualStyleEncoder, self).__init__() - assert num_layers in [50, 100, 152], 'num_layers should be 50,100, or 152' - assert mode in ['ir', 'ir_se'], 'mode should be ir or ir_se' - blocks = get_blocks(num_layers) - if mode == 'ir': - unit_module = bottleneck_IR - elif mode == 'ir_se': - unit_module = bottleneck_IR_SE - - # for sketch/mask-to-face translation, add a new network T - if opts.input_nc != 3: - self.input_label_layer = ResnetGenerator(opts.input_nc, opts.res_num) - - self.input_layer = Sequential(Conv2d(3, 64, (3, 3), 1, 1, bias=False), - BatchNorm2d(64), - PReLU(64)) - modules = [] - for block in blocks: - for bottleneck in block: - modules.append(unit_module(bottleneck.in_channel, - bottleneck.depth, - bottleneck.stride)) - self.body = Sequential(*modules) - - self.styles = nn.ModuleList() - self.style_count = opts.n_styles - self.coarse_ind = 3 - self.middle_ind = 7 - for i in range(self.style_count): - if i < self.coarse_ind: - style = GradualStyleBlock(512, 512, 16, 'max_pooling' in opts and opts.max_pooling) - elif i < self.middle_ind: - style = GradualStyleBlock(512, 512, 32, 'max_pooling' in opts and opts.max_pooling) - else: - style = GradualStyleBlock(512, 512, 64, 'max_pooling' in opts and opts.max_pooling) - self.styles.append(style) - self.latlayer1 = nn.Conv2d(256, 512, kernel_size=1, stride=1, 
padding=0) - self.latlayer2 = nn.Conv2d(128, 512, kernel_size=1, stride=1, padding=0) - - # we concatenate pSp features in the middle layers and - # add a convolution layer to map the concatenated features to the first-layer input feature f of G. - self.featlayer = nn.Conv2d(768, 512, kernel_size=1, stride=1, padding=0) ##### modified - self.skiplayer = nn.Conv2d(768, 3, kernel_size=1, stride=1, padding=0) ##### modified - - # skip connection - if 'use_skip' in opts and opts.use_skip: ##### modified - self.fusion = nn.ModuleList() - channels = [[256,512], [256,512], [256,512], [256,512], [128,512], [64,256], [64,128]] - # opts.skip_max_layer: how many layers are skipped to the decoder - for inc, outc in channels[:max(1, min(7, opts.skip_max_layer))]: # from 4 to 256 - self.fusion.append(FusionLayer(inc, outc, opts.use_skip_torgb, opts.use_att)) - - def _upsample_add(self, x, y): - '''Upsample and add two feature maps. - Args: - x: (Variable) top feature map to be upsampled. - y: (Variable) lateral feature map. - Returns: - (Variable) added feature map. - Note in PyTorch, when input size is odd, the upsampled feature map - with `F.upsample(..., scale_factor=2, mode='nearest')` - maybe not equal to the lateral feature map size. - e.g. - original input size: [N,_,15,15] -> - conv2d feature map size: [N,_,8,8] -> - upsampled feature map size: [N,_,16,16] - So we choose bilinear upsample which supports arbitrary output sizes. - ''' - _, _, H, W = y.size() - return F.interpolate(x, size=(H, W), mode='bilinear', align_corners=True) + y - - # return_feat: return f - # return_full: return f and the skipped encoder features - # return [out, feats] - # out is the style latent code w+ - # feats[0] is f for the 1st conv layer, feats[1] is f for the 1st torgb layer - # feats[2-8] is the skipped encoder features - def forward(self, x, return_feat=False, return_full=False): ##### modified - if x.shape[1] != 3: - x = self.input_label_layer(x) - else: - x = self.input_layer(x) - c256 = x ##### modified - - latents = [] - modulelist = list(self.body._modules.values()) - for i, l in enumerate(modulelist): - x = l(x) - if i == 2: ##### modified - c128 = x - elif i == 6: - c1 = x - elif i == 10: ##### modified - c21 = x ##### modified - elif i == 15: ##### modified - c22 = x ##### modified - elif i == 20: - c2 = x - elif i == 23: - c3 = x - - for j in range(self.coarse_ind): - latents.append(self.styles[j](c3)) - - p2 = self._upsample_add(c3, self.latlayer1(c2)) - for j in range(self.coarse_ind, self.middle_ind): - latents.append(self.styles[j](p2)) - - p1 = self._upsample_add(p2, self.latlayer2(c1)) - for j in range(self.middle_ind, self.style_count): - latents.append(self.styles[j](p1)) - - out = torch.stack(latents, dim=1) - - if not return_feat: - return out - - feats = [self.featlayer(torch.cat((c21, c22, c2), dim=1)), self.skiplayer(torch.cat((c21, c22, c2), dim=1))] - - if return_full: ##### modified - feats += [c2, c2, c22, c21, c1, c128, c256] - - return out, feats - - - # only compute the first-layer feature f - # E_F in the paper - def get_feat(self, x): ##### modified - # for sketch/mask-to-face translation - # use a trainable light-weight translation network T - if x.shape[1] != 3: - x = self.input_label_layer(x) - else: - x = self.input_layer(x) - - latents = [] - modulelist = list(self.body._modules.values()) - for i, l in enumerate(modulelist): - x = l(x) - if i == 10: ##### modified - c21 = x ##### modified - elif i == 15: ##### modified - c22 = x ##### modified - elif i == 20: - c2 = x - break 
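-        # c21 (block 10), c22 (block 15) and c2 (block 20) are the mid-level encoder
-        # maps; featlayer fuses their concatenation into the first-layer feature f of G.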
- return self.featlayer(torch.cat((c21, c22, c2), dim=1)) - -class BackboneEncoderUsingLastLayerIntoW(Module): - def __init__(self, num_layers, mode='ir', opts=None): - super(BackboneEncoderUsingLastLayerIntoW, self).__init__() - print('Using BackboneEncoderUsingLastLayerIntoW') - assert num_layers in [50, 100, 152], 'num_layers should be 50,100, or 152' - assert mode in ['ir', 'ir_se'], 'mode should be ir or ir_se' - blocks = get_blocks(num_layers) - if mode == 'ir': - unit_module = bottleneck_IR - elif mode == 'ir_se': - unit_module = bottleneck_IR_SE - self.input_layer = Sequential(Conv2d(opts.input_nc, 64, (3, 3), 1, 1, bias=False), - BatchNorm2d(64), - PReLU(64)) - self.output_pool = torch.nn.AdaptiveAvgPool2d((1, 1)) - self.linear = EqualLinear(512, 512, lr_mul=1) - modules = [] - for block in blocks: - for bottleneck in block: - modules.append(unit_module(bottleneck.in_channel, - bottleneck.depth, - bottleneck.stride)) - self.body = Sequential(*modules) - - def forward(self, x): - x = self.input_layer(x) - x = self.body(x) - x = self.output_pool(x) - x = x.view(-1, 512) - x = self.linear(x) - return x - - -class BackboneEncoderUsingLastLayerIntoWPlus(Module): - def __init__(self, num_layers, mode='ir', opts=None): - super(BackboneEncoderUsingLastLayerIntoWPlus, self).__init__() - print('Using BackboneEncoderUsingLastLayerIntoWPlus') - assert num_layers in [50, 100, 152], 'num_layers should be 50,100, or 152' - assert mode in ['ir', 'ir_se'], 'mode should be ir or ir_se' - blocks = get_blocks(num_layers) - if mode == 'ir': - unit_module = bottleneck_IR - elif mode == 'ir_se': - unit_module = bottleneck_IR_SE - self.n_styles = opts.n_styles - self.input_layer = Sequential(Conv2d(opts.input_nc, 64, (3, 3), 1, 1, bias=False), - BatchNorm2d(64), - PReLU(64)) - self.output_layer_2 = Sequential(BatchNorm2d(512), - torch.nn.AdaptiveAvgPool2d((7, 7)), - Flatten(), - Linear(512 * 7 * 7, 512)) - self.linear = EqualLinear(512, 512 * self.n_styles, lr_mul=1) - modules = [] - for block in blocks: - for bottleneck in block: - modules.append(unit_module(bottleneck.in_channel, - bottleneck.depth, - bottleneck.stride)) - self.body = Sequential(*modules) - - def forward(self, x): - x = self.input_layer(x) - x = self.body(x) - x = self.output_layer_2(x) - x = self.linear(x) - x = x.view(-1, self.n_styles, 512) - return x diff --git a/spaces/PSLD/PSLD/stable-diffusion/debug/inverse_bip_ldm_laion.sh b/spaces/PSLD/PSLD/stable-diffusion/debug/inverse_bip_ldm_laion.sh deleted file mode 100644 index cb9e88f75398f2f3619dc07fe60c1f0a3c82852f..0000000000000000000000000000000000000000 --- a/spaces/PSLD/PSLD/stable-diffusion/debug/inverse_bip_ldm_laion.sh +++ /dev/null @@ -1,13 +0,0 @@ -export CUDA_VISIBLE_DEVICES='1' -python scripts/inverse.py \ - --file_id='00019.png' \ - --task_config='configs/box_inpainting_config.yaml' \ - --inpainting=1 \ - --general_inverse=0 \ - --gamma=1e-1 \ - --omega=1 \ - --W=256 \ - --H=256 \ - --scale=5.0 \ - --laion400m \ - --outdir="outputs/psld-ldm-laion400m-bip" diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/chord-entry.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/chord-entry.go deleted file mode 100644 index a335f09c8edaf7781bf70e935cab71412320cc0d..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/chord-entry.go and /dev/null differ diff --git a/spaces/Pinwheel/SuperGlue-Image-Matching/README.md 
b/spaces/Pinwheel/SuperGlue-Image-Matching/README.md deleted file mode 100644 index 83fe879b2b5fb99b8674299ffc3d03640b004409..0000000000000000000000000000000000000000 --- a/spaces/Pinwheel/SuperGlue-Image-Matching/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: SuperGlue Image Matching -emoji: 🧚‍♀️ -colorFrom: purple -colorTo: indigo -sdk: gradio -sdk_version: 3.8.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Prasanna18/AnatomyBOT/app.py b/spaces/Prasanna18/AnatomyBOT/app.py deleted file mode 100644 index fae7fc9642512518f74cf3653e9c032139f93764..0000000000000000000000000000000000000000 --- a/spaces/Prasanna18/AnatomyBOT/app.py +++ /dev/null @@ -1,94 +0,0 @@ -import streamlit as st -from streamlit_chat import message -from langchain.chains import ConversationalRetrievalChain -from langchain.document_loaders import PyPDFLoader, DirectoryLoader -from langchain.embeddings import HuggingFaceEmbeddings -from langchain.llms import CTransformers -from langchain.text_splitter import RecursiveCharacterTextSplitter -from langchain.vectorstores import FAISS -from langchain.memory import ConversationBufferMemory -from transformers import GPT2LMHeadModel, AutoTokenizer -import requests -import io -from PIL import Image - -# Set up Hugging Face API URL and headers -API_URL = "https://api-inference.huggingface.co/models/stabilityai/stable-diffusion-xl-base-1.0" -headers = {"Authorization": "Bearer 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"} - -# Load PDF files from a directory -loader = DirectoryLoader('data/', glob="*.pdf", loader_cls=PyPDFLoader) -documents = loader.load() - -# Split text into chunks -text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50) -text_chunks = text_splitter.split_documents(documents) - -# Create embeddings -embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2", model_kwargs={'device': "cpu"}) - -# Vector store -vector_store = FAISS.from_documents(text_chunks, embeddings) - -# Create the Language Model (LLM) -llm = CTransformers(model="model.bin", model_type="llama", - config={'max_new_tokens': 128, 'temperature': 0.01}) - -# Initialize ConversationBufferMemory -memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True) - -# Create a ConversationalRetrievalChain -chain = ConversationalRetrievalChain.from_llm(llm=llm, chain_type='stuff', - retriever=vector_store.as_retriever(search_kwargs={"k": 2}), - memory=memory) - -# Initialize Streamlit app -st.title("MedicoBOT⚕️! A Saviour for Medicos!") - -# Function for the chat conversation -def conversation_chat(query): - result = chain({"question": query, "chat_history": st.session_state['history']}) - st.session_state['history'].append((query, result["answer"])) - return result["answer"] - -# Initialize session state -def initialize_session_state(): - if 'history' not in st.session_state: - st.session_state['history'] = [] - - if 'generated' not in st.session_state: - st.session_state['generated'] = ["Hello! 
Ask me anything about Anatomy and Physiology!"] - - if 'past' not in st.session_state: - st.session_state['past'] = ["Hello!"] - -# Function to display the chat history -def display_chat_history(): - reply_container = st.container() - container = st.container() - - with container: - with st.form(key='my_form', clear_on_submit=True): - user_input = st.text_input("Question:", placeholder="Ask me anything about Anatomy and Physiology!", key='input') - submit_button = st.form_submit_button(label='Send') - - if submit_button and user_input: - output = conversation_chat(user_input) - - st.session_state['past'].append(user_input) - st.session_state['generated'].append(output) - - if st.session_state['generated']: - with reply_container: - for i in range(len(st.session_state['generated'])): - message(st.session_state["past"][i], is_user=True, key=str(i) + '_user', avatar_style='big-smile') - message(st.session_state["generated"][i], key=str(i), avatar_style='big-ears-neutral') - -# Function to query the Hugging Face model with user input -def query(payload): - response = requests.post(API_URL, headers=headers, json=payload) - return response.content - -if __name__ == "__main__": - initialize_session_state() - display_chat_history() diff --git a/spaces/RamAnanth1/T2I-Adapter/ldm/util.py b/spaces/RamAnanth1/T2I-Adapter/ldm/util.py deleted file mode 100644 index 8ba38853e7a07228cc2c187742b5c45d7359b3f9..0000000000000000000000000000000000000000 --- a/spaces/RamAnanth1/T2I-Adapter/ldm/util.py +++ /dev/null @@ -1,203 +0,0 @@ -import importlib - -import torch -import numpy as np -from collections import abc -from einops import rearrange -from functools import partial - -import multiprocessing as mp -from threading import Thread -from queue import Queue - -from inspect import isfunction -from PIL import Image, ImageDraw, ImageFont - - -def log_txt_as_img(wh, xc, size=10): - # wh a tuple of (width, height) - # xc a list of captions to plot - b = len(xc) - txts = list() - for bi in range(b): - txt = Image.new("RGB", wh, color="white") - draw = ImageDraw.Draw(txt) - font = ImageFont.truetype('data/DejaVuSans.ttf', size=size) - nc = int(40 * (wh[0] / 256)) - lines = "\n".join(xc[bi][start:start + nc] for start in range(0, len(xc[bi]), nc)) - - try: - draw.text((0, 0), lines, fill="black", font=font) - except UnicodeEncodeError: - print("Cant encode string for logging. Skipping.") - - txt = np.array(txt).transpose(2, 0, 1) / 127.5 - 1.0 - txts.append(txt) - txts = np.stack(txts) - txts = torch.tensor(txts) - return txts - - -def ismap(x): - if not isinstance(x, torch.Tensor): - return False - return (len(x.shape) == 4) and (x.shape[1] > 3) - - -def isimage(x): - if not isinstance(x, torch.Tensor): - return False - return (len(x.shape) == 4) and (x.shape[1] == 3 or x.shape[1] == 1) - - -def exists(x): - return x is not None - - -def default(val, d): - if exists(val): - return val - return d() if isfunction(d) else d - - -def mean_flat(tensor): - """ - https://github.com/openai/guided-diffusion/blob/27c20a8fab9cb472df5d6bdd6c8d11c8f430b924/guided_diffusion/nn.py#L86 - Take the mean over all non-batch dimensions. 
- """ - return tensor.mean(dim=list(range(1, len(tensor.shape)))) - - -def count_params(model, verbose=False): - total_params = sum(p.numel() for p in model.parameters()) - if verbose: - print(f"{model.__class__.__name__} has {total_params * 1.e-6:.2f} M params.") - return total_params - - -def instantiate_from_config(config): - if not "target" in config: - if config == '__is_first_stage__': - return None - elif config == "__is_unconditional__": - return None - raise KeyError("Expected key `target` to instantiate.") - return get_obj_from_str(config["target"])(**config.get("params", dict())) - - -def get_obj_from_str(string, reload=False): - module, cls = string.rsplit(".", 1) - if reload: - module_imp = importlib.import_module(module) - importlib.reload(module_imp) - return getattr(importlib.import_module(module, package=None), cls) - - -def _do_parallel_data_prefetch(func, Q, data, idx, idx_to_fn=False): - # create dummy dataset instance - - # run prefetching - if idx_to_fn: - res = func(data, worker_id=idx) - else: - res = func(data) - Q.put([idx, res]) - Q.put("Done") - - -def parallel_data_prefetch( - func: callable, data, n_proc, target_data_type="ndarray", cpu_intensive=True, use_worker_id=False -): - # if target_data_type not in ["ndarray", "list"]: - # raise ValueError( - # "Data, which is passed to parallel_data_prefetch has to be either of type list or ndarray." - # ) - if isinstance(data, np.ndarray) and target_data_type == "list": - raise ValueError("list expected but function got ndarray.") - elif isinstance(data, abc.Iterable): - if isinstance(data, dict): - print( - f'WARNING:"data" argument passed to parallel_data_prefetch is a dict: Using only its values and disregarding keys.' - ) - data = list(data.values()) - if target_data_type == "ndarray": - data = np.asarray(data) - else: - data = list(data) - else: - raise TypeError( - f"The data, that shall be processed parallel has to be either an np.ndarray or an Iterable, but is actually {type(data)}." - ) - - if cpu_intensive: - Q = mp.Queue(1000) - proc = mp.Process - else: - Q = Queue(1000) - proc = Thread - # spawn processes - if target_data_type == "ndarray": - arguments = [ - [func, Q, part, i, use_worker_id] - for i, part in enumerate(np.array_split(data, n_proc)) - ] - else: - step = ( - int(len(data) / n_proc + 1) - if len(data) % n_proc != 0 - else int(len(data) / n_proc) - ) - arguments = [ - [func, Q, part, i, use_worker_id] - for i, part in enumerate( - [data[i: i + step] for i in range(0, len(data), step)] - ) - ] - processes = [] - for i in range(n_proc): - p = proc(target=_do_parallel_data_prefetch, args=arguments[i]) - processes += [p] - - # start processes - print(f"Start prefetching...") - import time - - start = time.time() - gather_res = [[] for _ in range(n_proc)] - try: - for p in processes: - p.start() - - k = 0 - while k < n_proc: - # get result - res = Q.get() - if res == "Done": - k += 1 - else: - gather_res[res[0]] = res[1] - - except Exception as e: - print("Exception: ", e) - for p in processes: - p.terminate() - - raise e - finally: - for p in processes: - p.join() - print(f"Prefetching complete. 
[{time.time() - start} sec.]") - - if target_data_type == 'ndarray': - if not isinstance(gather_res[0], np.ndarray): - return np.concatenate([np.asarray(r) for r in gather_res], axis=0) - - # order outputs - return np.concatenate(gather_res, axis=0) - elif target_data_type == 'list': - out = [] - for r in gather_res: - out.extend(r) - return out - else: - return gather_res diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/rich/__init__.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/rich/__init__.py deleted file mode 100644 index d35875dbb817576dd3e4b6036eae37c21f91f192..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/rich/__init__.py +++ /dev/null @@ -1,176 +0,0 @@ -"""Rich text and beautiful formatting in the terminal.""" - -import os -from typing import IO, TYPE_CHECKING, Any, Callable, Optional, Union - -from ._extension import load_ipython_extension # noqa: F401 - -__all__ = ["get_console", "reconfigure", "print", "inspect"] - -if TYPE_CHECKING: - from .console import Console - -# Global console used by alternative print -_console: Optional["Console"] = None - -try: - _IMPORT_CWD = os.path.abspath(os.getcwd()) -except FileNotFoundError: - # Can happen if the cwd has been deleted - _IMPORT_CWD = "" - - -def get_console() -> "Console": - """Get a global :class:`~rich.console.Console` instance. This function is used when Rich requires a Console, - and hasn't been explicitly given one. - - Returns: - Console: A console instance. - """ - global _console - if _console is None: - from .console import Console - - _console = Console() - - return _console - - -def reconfigure(*args: Any, **kwargs: Any) -> None: - """Reconfigures the global console by replacing it with another. - - Args: - console (Console): Replacement console instance. - """ - from pip._vendor.rich.console import Console - - new_console = Console(*args, **kwargs) - _console = get_console() - _console.__dict__ = new_console.__dict__ - - -def print( - *objects: Any, - sep: str = " ", - end: str = "\n", - file: Optional[IO[str]] = None, - flush: bool = False, -) -> None: - r"""Print object(s) supplied via positional arguments. - This function has an identical signature to the built-in print. - For more advanced features, see the :class:`~rich.console.Console` class. - - Args: - sep (str, optional): Separator between printed objects. Defaults to " ". - end (str, optional): Character to write at end of output. Defaults to "\\n". - file (IO[str], optional): File to write to, or None for stdout. Defaults to None. - flush (bool, optional): Has no effect as Rich always flushes output. Defaults to False. - - """ - from .console import Console - - write_console = get_console() if file is None else Console(file=file) - return write_console.print(*objects, sep=sep, end=end) - - -def print_json( - json: Optional[str] = None, - *, - data: Any = None, - indent: Union[None, int, str] = 2, - highlight: bool = True, - skip_keys: bool = False, - ensure_ascii: bool = True, - check_circular: bool = True, - allow_nan: bool = True, - default: Optional[Callable[[Any], Any]] = None, - sort_keys: bool = False, -) -> None: - """Pretty prints JSON. Output will be valid JSON. - - Args: - json (str): A string containing JSON. - data (Any): If json is not supplied, then encode this data. - indent (int, optional): Number of spaces to indent. Defaults to 2. 
- highlight (bool, optional): Enable highlighting of output: Defaults to True. - skip_keys (bool, optional): Skip keys not of a basic type. Defaults to False. - ensure_ascii (bool, optional): Escape all non-ascii characters. Defaults to False. - check_circular (bool, optional): Check for circular references. Defaults to True. - allow_nan (bool, optional): Allow NaN and Infinity values. Defaults to True. - default (Callable, optional): A callable that converts values that can not be encoded - in to something that can be JSON encoded. Defaults to None. - sort_keys (bool, optional): Sort dictionary keys. Defaults to False. - """ - - get_console().print_json( - json, - data=data, - indent=indent, - highlight=highlight, - skip_keys=skip_keys, - ensure_ascii=ensure_ascii, - check_circular=check_circular, - allow_nan=allow_nan, - default=default, - sort_keys=sort_keys, - ) - - -def inspect( - obj: Any, - *, - console: Optional["Console"] = None, - title: Optional[str] = None, - help: bool = False, - methods: bool = False, - docs: bool = True, - private: bool = False, - dunder: bool = False, - sort: bool = True, - all: bool = False, - value: bool = True, -) -> None: - """Inspect any Python object. - - * inspect() to see summarized info. - * inspect(, methods=True) to see methods. - * inspect(, help=True) to see full (non-abbreviated) help. - * inspect(, private=True) to see private attributes (single underscore). - * inspect(, dunder=True) to see attributes beginning with double underscore. - * inspect(, all=True) to see all attributes. - - Args: - obj (Any): An object to inspect. - title (str, optional): Title to display over inspect result, or None use type. Defaults to None. - help (bool, optional): Show full help text rather than just first paragraph. Defaults to False. - methods (bool, optional): Enable inspection of callables. Defaults to False. - docs (bool, optional): Also render doc strings. Defaults to True. - private (bool, optional): Show private attributes (beginning with underscore). Defaults to False. - dunder (bool, optional): Show attributes starting with double underscore. Defaults to False. - sort (bool, optional): Sort attributes alphabetically. Defaults to True. - all (bool, optional): Show all attributes. Defaults to False. - value (bool, optional): Pretty print value. Defaults to True. - """ - _console = console or get_console() - from pip._vendor.rich._inspect import Inspect - - # Special case for inspect(inspect) - is_inspect = obj is inspect - - _inspect = Inspect( - obj, - title=title, - help=is_inspect or help, - methods=is_inspect or methods, - docs=is_inspect or docs, - private=private, - dunder=dunder, - sort=sort, - all=all, - value=value, - ) - _console.print(_inspect) - - -if __name__ == "__main__": # pragma: no cover - print("Hello, **World**") diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/tenacity/retry.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/tenacity/retry.py deleted file mode 100644 index 9ebeb62d5c9ff9368740ec2b85cd6b8e9e222e4c..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/tenacity/retry.py +++ /dev/null @@ -1,240 +0,0 @@ -# Copyright 2016–2021 Julien Danjou -# Copyright 2016 Joshua Harlow -# Copyright 2013-2014 Ray Holder -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import abc -import re -import typing - -if typing.TYPE_CHECKING: - from pip._vendor.tenacity import RetryCallState - - -class retry_base(abc.ABC): - """Abstract base class for retry strategies.""" - - @abc.abstractmethod - def __call__(self, retry_state: "RetryCallState") -> bool: - pass - - def __and__(self, other: "retry_base") -> "retry_all": - return retry_all(self, other) - - def __or__(self, other: "retry_base") -> "retry_any": - return retry_any(self, other) - - -class _retry_never(retry_base): - """Retry strategy that never rejects any result.""" - - def __call__(self, retry_state: "RetryCallState") -> bool: - return False - - -retry_never = _retry_never() - - -class _retry_always(retry_base): - """Retry strategy that always rejects any result.""" - - def __call__(self, retry_state: "RetryCallState") -> bool: - return True - - -retry_always = _retry_always() - - -class retry_if_exception(retry_base): - """Retry strategy that retries if an exception verifies a predicate.""" - - def __init__(self, predicate: typing.Callable[[BaseException], bool]) -> None: - self.predicate = predicate - - def __call__(self, retry_state: "RetryCallState") -> bool: - if retry_state.outcome.failed: - return self.predicate(retry_state.outcome.exception()) - else: - return False - - -class retry_if_exception_type(retry_if_exception): - """Retries if an exception has been raised of one or more types.""" - - def __init__( - self, - exception_types: typing.Union[ - typing.Type[BaseException], - typing.Tuple[typing.Type[BaseException], ...], - ] = Exception, - ) -> None: - self.exception_types = exception_types - super().__init__(lambda e: isinstance(e, exception_types)) - - -class retry_if_not_exception_type(retry_if_exception): - """Retries except an exception has been raised of one or more types.""" - - def __init__( - self, - exception_types: typing.Union[ - typing.Type[BaseException], - typing.Tuple[typing.Type[BaseException], ...], - ] = Exception, - ) -> None: - self.exception_types = exception_types - super().__init__(lambda e: not isinstance(e, exception_types)) - - -class retry_unless_exception_type(retry_if_exception): - """Retries until an exception is raised of one or more types.""" - - def __init__( - self, - exception_types: typing.Union[ - typing.Type[BaseException], - typing.Tuple[typing.Type[BaseException], ...], - ] = Exception, - ) -> None: - self.exception_types = exception_types - super().__init__(lambda e: not isinstance(e, exception_types)) - - def __call__(self, retry_state: "RetryCallState") -> bool: - # always retry if no exception was raised - if not retry_state.outcome.failed: - return True - return self.predicate(retry_state.outcome.exception()) - - -class retry_if_exception_cause_type(retry_base): - """Retries if any of the causes of the raised exception is of one or more types. 
- - The check on the type of the cause of the exception is done recursively (until finding - an exception in the chain that has no `__cause__`) - """ - - def __init__( - self, - exception_types: typing.Union[ - typing.Type[BaseException], - typing.Tuple[typing.Type[BaseException], ...], - ] = Exception, - ) -> None: - self.exception_cause_types = exception_types - - def __call__(self, retry_state: "RetryCallState") -> bool: - if retry_state.outcome.failed: - exc = retry_state.outcome.exception() - while exc is not None: - if isinstance(exc.__cause__, self.exception_cause_types): - return True - exc = exc.__cause__ - - return False - - -class retry_if_result(retry_base): - """Retries if the result verifies a predicate.""" - - def __init__(self, predicate: typing.Callable[[typing.Any], bool]) -> None: - self.predicate = predicate - - def __call__(self, retry_state: "RetryCallState") -> bool: - if not retry_state.outcome.failed: - return self.predicate(retry_state.outcome.result()) - else: - return False - - -class retry_if_not_result(retry_base): - """Retries if the result refutes a predicate.""" - - def __init__(self, predicate: typing.Callable[[typing.Any], bool]) -> None: - self.predicate = predicate - - def __call__(self, retry_state: "RetryCallState") -> bool: - if not retry_state.outcome.failed: - return not self.predicate(retry_state.outcome.result()) - else: - return False - - -class retry_if_exception_message(retry_if_exception): - """Retries if an exception message equals or matches.""" - - def __init__( - self, - message: typing.Optional[str] = None, - match: typing.Optional[str] = None, - ) -> None: - if message and match: - raise TypeError(f"{self.__class__.__name__}() takes either 'message' or 'match', not both") - - # set predicate - if message: - - def message_fnc(exception: BaseException) -> bool: - return message == str(exception) - - predicate = message_fnc - elif match: - prog = re.compile(match) - - def match_fnc(exception: BaseException) -> bool: - return bool(prog.match(str(exception))) - - predicate = match_fnc - else: - raise TypeError(f"{self.__class__.__name__}() missing 1 required argument 'message' or 'match'") - - super().__init__(predicate) - - -class retry_if_not_exception_message(retry_if_exception_message): - """Retries until an exception message equals or matches.""" - - def __init__( - self, - message: typing.Optional[str] = None, - match: typing.Optional[str] = None, - ) -> None: - super().__init__(message, match) - # invert predicate - if_predicate = self.predicate - self.predicate = lambda *args_, **kwargs_: not if_predicate(*args_, **kwargs_) - - def __call__(self, retry_state: "RetryCallState") -> bool: - if not retry_state.outcome.failed: - return True - return self.predicate(retry_state.outcome.exception()) - - -class retry_any(retry_base): - """Retries if any of the retries condition is valid.""" - - def __init__(self, *retries: retry_base) -> None: - self.retries = retries - - def __call__(self, retry_state: "RetryCallState") -> bool: - return any(r(retry_state) for r in self.retries) - - -class retry_all(retry_base): - """Retries if all the retries condition are valid.""" - - def __init__(self, *retries: retry_base) -> None: - self.retries = retries - - def __call__(self, retry_state: "RetryCallState") -> bool: - return all(r(retry_state) for r in self.retries) diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/semantic_version/__init__.py 
b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/semantic_version/__init__.py deleted file mode 100644 index 1528bda574c4e50f10e99a46551a4a4d9378a6b2..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/semantic_version/__init__.py +++ /dev/null @@ -1,18 +0,0 @@ -# -*- coding: utf-8 -*- -# Copyright (c) The python-semanticversion project -# This code is distributed under the two-clause BSD License. - - -from .base import compare, match, validate, SimpleSpec, NpmSpec, Spec, SpecItem, Version - - -__author__ = "Raphaël Barrois " -try: - # Python 3.8+ - from importlib.metadata import version - - __version__ = version("semantic_version") -except ImportError: - import pkg_resources - - __version__ = pkg_resources.get_distribution("semantic_version").version diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/command/easy_install.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/command/easy_install.py deleted file mode 100644 index 444d3b33110b65c14ff5a043d0ca4137e92b30eb..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/command/easy_install.py +++ /dev/null @@ -1,2312 +0,0 @@ -""" -Easy Install ------------- - -A tool for doing automatic download/extract/build of distutils-based Python -packages. For detailed documentation, see the accompanying EasyInstall.txt -file, or visit the `EasyInstall home page`__. - -__ https://setuptools.pypa.io/en/latest/deprecated/easy_install.html - -""" - -from glob import glob -from distutils.util import get_platform -from distutils.util import convert_path, subst_vars -from distutils.errors import ( - DistutilsArgError, DistutilsOptionError, - DistutilsError, DistutilsPlatformError, -) -from distutils import log, dir_util -from distutils.command.build_scripts import first_line_re -from distutils.spawn import find_executable -from distutils.command import install -import sys -import os -import zipimport -import shutil -import tempfile -import zipfile -import re -import stat -import random -import textwrap -import warnings -import site -import struct -import contextlib -import subprocess -import shlex -import io -import configparser -import sysconfig - - -from sysconfig import get_path - -from setuptools import SetuptoolsDeprecationWarning - -from setuptools import Command -from setuptools.sandbox import run_setup -from setuptools.command import setopt -from setuptools.archive_util import unpack_archive -from setuptools.package_index import ( - PackageIndex, parse_requirement_arg, URL_SCHEME, -) -from setuptools.command import bdist_egg, egg_info -from setuptools.wheel import Wheel -from pkg_resources import ( - normalize_path, resource_string, - get_distribution, find_distributions, Environment, Requirement, - Distribution, PathMetadata, EggMetadata, WorkingSet, DistributionNotFound, - VersionConflict, DEVELOP_DIST, -) -import pkg_resources -from .._path import ensure_directory -from ..extern.jaraco.text import yield_lines - - -# Turn on PEP440Warnings -warnings.filterwarnings("default", category=pkg_resources.PEP440Warning) - -__all__ = [ - 'easy_install', 'PthDistributions', 'extract_wininst_cfg', - 'get_exe_prefixes', -] - - -def is_64bit(): - return struct.calcsize("P") == 8 - - -def _to_bytes(s): - return s.encode('utf8') - - -def isascii(s): - try: - s.encode('ascii') - return True - except UnicodeError: - return False - - -def _one_liner(text): - return 
textwrap.dedent(text).strip().replace('\n', '; ') - - -class easy_install(Command): - """Manage a download/build/install process""" - description = "Find/get/install Python packages" - command_consumes_arguments = True - - user_options = [ - ('prefix=', None, "installation prefix"), - ("zip-ok", "z", "install package as a zipfile"), - ("multi-version", "m", "make apps have to require() a version"), - ("upgrade", "U", "force upgrade (searches PyPI for latest versions)"), - ("install-dir=", "d", "install package to DIR"), - ("script-dir=", "s", "install scripts to DIR"), - ("exclude-scripts", "x", "Don't install scripts"), - ("always-copy", "a", "Copy all needed packages to install dir"), - ("index-url=", "i", "base URL of Python Package Index"), - ("find-links=", "f", "additional URL(s) to search for packages"), - ("build-directory=", "b", - "download/extract/build in DIR; keep the results"), - ('optimize=', 'O', - "also compile with optimization: -O1 for \"python -O\", " - "-O2 for \"python -OO\", and -O0 to disable [default: -O0]"), - ('record=', None, - "filename in which to record list of installed files"), - ('always-unzip', 'Z', "don't install as a zipfile, no matter what"), - ('site-dirs=', 'S', "list of directories where .pth files work"), - ('editable', 'e', "Install specified packages in editable form"), - ('no-deps', 'N', "don't install dependencies"), - ('allow-hosts=', 'H', "pattern(s) that hostnames must match"), - ('local-snapshots-ok', 'l', - "allow building eggs from local checkouts"), - ('version', None, "print version information and exit"), - ('no-find-links', None, - "Don't load find-links defined in packages being installed"), - ('user', None, "install in user site-package '%s'" % site.USER_SITE) - ] - boolean_options = [ - 'zip-ok', 'multi-version', 'exclude-scripts', 'upgrade', 'always-copy', - 'editable', - 'no-deps', 'local-snapshots-ok', 'version', - 'user' - ] - - negative_opt = {'always-unzip': 'zip-ok'} - create_index = PackageIndex - - def initialize_options(self): - warnings.warn( - "easy_install command is deprecated. " - "Use build and pip and other standards-based tools.", - EasyInstallDeprecationWarning, - ) - - # the --user option seems to be an opt-in one, - # so the default should be False. - self.user = 0 - self.zip_ok = self.local_snapshots_ok = None - self.install_dir = self.script_dir = self.exclude_scripts = None - self.index_url = None - self.find_links = None - self.build_directory = None - self.args = None - self.optimize = self.record = None - self.upgrade = self.always_copy = self.multi_version = None - self.editable = self.no_deps = self.allow_hosts = None - self.root = self.prefix = self.no_report = None - self.version = None - self.install_purelib = None # for pure module distributions - self.install_platlib = None # non-pure (dists w/ extensions) - self.install_headers = None # for C/C++ headers - self.install_lib = None # set to either purelib or platlib - self.install_scripts = None - self.install_data = None - self.install_base = None - self.install_platbase = None - self.install_userbase = site.USER_BASE - self.install_usersite = site.USER_SITE - self.no_find_links = None - - # Options not specifiable via command line - self.package_index = None - self.pth_file = self.always_copy_from = None - self.site_dirs = None - self.installed_projects = {} - # Always read easy_install options, even if we are subclassed, or have - # an independent instance created. 
This ensures that defaults will - # always come from the standard configuration file(s)' "easy_install" - # section, even if this is a "develop" or "install" command, or some - # other embedding. - self._dry_run = None - self.verbose = self.distribution.verbose - self.distribution._set_command_options( - self, self.distribution.get_option_dict('easy_install') - ) - - def delete_blockers(self, blockers): - extant_blockers = ( - filename for filename in blockers - if os.path.exists(filename) or os.path.islink(filename) - ) - list(map(self._delete_path, extant_blockers)) - - def _delete_path(self, path): - log.info("Deleting %s", path) - if self.dry_run: - return - - is_tree = os.path.isdir(path) and not os.path.islink(path) - remover = rmtree if is_tree else os.unlink - remover(path) - - @staticmethod - def _render_version(): - """ - Render the Setuptools version and installation details, then exit. - """ - ver = '{}.{}'.format(*sys.version_info) - dist = get_distribution('setuptools') - tmpl = 'setuptools {dist.version} from {dist.location} (Python {ver})' - print(tmpl.format(**locals())) - raise SystemExit() - - def finalize_options(self): # noqa: C901 # is too complex (25) # FIXME - self.version and self._render_version() - - py_version = sys.version.split()[0] - - self.config_vars = dict(sysconfig.get_config_vars()) - - self.config_vars.update({ - 'dist_name': self.distribution.get_name(), - 'dist_version': self.distribution.get_version(), - 'dist_fullname': self.distribution.get_fullname(), - 'py_version': py_version, - 'py_version_short': f'{sys.version_info.major}.{sys.version_info.minor}', - 'py_version_nodot': f'{sys.version_info.major}{sys.version_info.minor}', - 'sys_prefix': self.config_vars['prefix'], - 'sys_exec_prefix': self.config_vars['exec_prefix'], - # Only python 3.2+ has abiflags - 'abiflags': getattr(sys, 'abiflags', ''), - 'platlibdir': getattr(sys, 'platlibdir', 'lib'), - }) - with contextlib.suppress(AttributeError): - # only for distutils outside stdlib - self.config_vars.update({ - 'implementation_lower': install._get_implementation().lower(), - 'implementation': install._get_implementation(), - }) - - # pypa/distutils#113 Python 3.9 compat - self.config_vars.setdefault( - 'py_version_nodot_plat', - getattr(sys, 'windir', '').replace('.', ''), - ) - - self.config_vars['userbase'] = self.install_userbase - self.config_vars['usersite'] = self.install_usersite - if self.user and not site.ENABLE_USER_SITE: - log.warn("WARNING: The user site-packages directory is disabled.") - - self._fix_install_dir_for_user_site() - - self.expand_basedirs() - self.expand_dirs() - - self._expand( - 'install_dir', 'script_dir', 'build_directory', - 'site_dirs', - ) - # If a non-default installation directory was specified, default the - # script directory to match it. - if self.script_dir is None: - self.script_dir = self.install_dir - - if self.no_find_links is None: - self.no_find_links = False - - # Let install_dir get set by install_lib command, which in turn - # gets its info from the install command, and takes into account - # --prefix and --home and all that other crud. 
- self.set_undefined_options( - 'install_lib', ('install_dir', 'install_dir') - ) - # Likewise, set default script_dir from 'install_scripts.install_dir' - self.set_undefined_options( - 'install_scripts', ('install_dir', 'script_dir') - ) - - if self.user and self.install_purelib: - self.install_dir = self.install_purelib - self.script_dir = self.install_scripts - # default --record from the install command - self.set_undefined_options('install', ('record', 'record')) - self.all_site_dirs = get_site_dirs() - self.all_site_dirs.extend(self._process_site_dirs(self.site_dirs)) - - if not self.editable: - self.check_site_dir() - default_index = os.getenv("__EASYINSTALL_INDEX", "https://pypi.org/simple/") - # ^ Private API for testing purposes only - self.index_url = self.index_url or default_index - self.shadow_path = self.all_site_dirs[:] - for path_item in self.install_dir, normalize_path(self.script_dir): - if path_item not in self.shadow_path: - self.shadow_path.insert(0, path_item) - - if self.allow_hosts is not None: - hosts = [s.strip() for s in self.allow_hosts.split(',')] - else: - hosts = ['*'] - if self.package_index is None: - self.package_index = self.create_index( - self.index_url, search_path=self.shadow_path, hosts=hosts, - ) - self.local_index = Environment(self.shadow_path + sys.path) - - if self.find_links is not None: - if isinstance(self.find_links, str): - self.find_links = self.find_links.split() - else: - self.find_links = [] - if self.local_snapshots_ok: - self.package_index.scan_egg_links(self.shadow_path + sys.path) - if not self.no_find_links: - self.package_index.add_find_links(self.find_links) - self.set_undefined_options('install_lib', ('optimize', 'optimize')) - self.optimize = self._validate_optimize(self.optimize) - - if self.editable and not self.build_directory: - raise DistutilsArgError( - "Must specify a build directory (-b) when using --editable" - ) - if not self.args: - raise DistutilsArgError( - "No urls, filenames, or requirements specified (see --help)") - - self.outputs = [] - - @staticmethod - def _process_site_dirs(site_dirs): - if site_dirs is None: - return - - normpath = map(normalize_path, sys.path) - site_dirs = [ - os.path.expanduser(s.strip()) for s in - site_dirs.split(',') - ] - for d in site_dirs: - if not os.path.isdir(d): - log.warn("%s (in --site-dirs) does not exist", d) - elif normalize_path(d) not in normpath: - raise DistutilsOptionError( - d + " (in --site-dirs) is not on sys.path" - ) - else: - yield normalize_path(d) - - @staticmethod - def _validate_optimize(value): - try: - value = int(value) - if value not in range(3): - raise ValueError - except ValueError as e: - raise DistutilsOptionError( - "--optimize must be 0, 1, or 2" - ) from e - - return value - - def _fix_install_dir_for_user_site(self): - """ - Fix the install_dir if "--user" was used. 
- """ - if not self.user: - return - - self.create_home_path() - if self.install_userbase is None: - msg = "User base directory is not specified" - raise DistutilsPlatformError(msg) - self.install_base = self.install_platbase = self.install_userbase - scheme_name = f'{os.name}_user' - self.select_scheme(scheme_name) - - def _expand_attrs(self, attrs): - for attr in attrs: - val = getattr(self, attr) - if val is not None: - if os.name == 'posix' or os.name == 'nt': - val = os.path.expanduser(val) - val = subst_vars(val, self.config_vars) - setattr(self, attr, val) - - def expand_basedirs(self): - """Calls `os.path.expanduser` on install_base, install_platbase and - root.""" - self._expand_attrs(['install_base', 'install_platbase', 'root']) - - def expand_dirs(self): - """Calls `os.path.expanduser` on install dirs.""" - dirs = [ - 'install_purelib', - 'install_platlib', - 'install_lib', - 'install_headers', - 'install_scripts', - 'install_data', - ] - self._expand_attrs(dirs) - - def run(self, show_deprecation=True): - if show_deprecation: - self.announce( - "WARNING: The easy_install command is deprecated " - "and will be removed in a future version.", - log.WARN, - ) - if self.verbose != self.distribution.verbose: - log.set_verbosity(self.verbose) - try: - for spec in self.args: - self.easy_install(spec, not self.no_deps) - if self.record: - outputs = self.outputs - if self.root: # strip any package prefix - root_len = len(self.root) - for counter in range(len(outputs)): - outputs[counter] = outputs[counter][root_len:] - from distutils import file_util - - self.execute( - file_util.write_file, (self.record, outputs), - "writing list of installed files to '%s'" % - self.record - ) - self.warn_deprecated_options() - finally: - log.set_verbosity(self.distribution.verbose) - - def pseudo_tempname(self): - """Return a pseudo-tempname base in the install directory. - This code is intentionally naive; if a malicious party can write to - the target directory you're already in deep doodoo. - """ - try: - pid = os.getpid() - except Exception: - pid = random.randint(0, sys.maxsize) - return os.path.join(self.install_dir, "test-easy-install-%s" % pid) - - def warn_deprecated_options(self): - pass - - def check_site_dir(self): # noqa: C901 # is too complex (12) # FIXME - """Verify that self.install_dir is .pth-capable dir, if needed""" - - instdir = normalize_path(self.install_dir) - pth_file = os.path.join(instdir, 'easy-install.pth') - - if not os.path.exists(instdir): - try: - os.makedirs(instdir) - except (OSError, IOError): - self.cant_write_to_target() - - # Is it a configured, PYTHONPATH, implicit, or explicit site dir? - is_site_dir = instdir in self.all_site_dirs - - if not is_site_dir and not self.multi_version: - # No? 
Then directly test whether it does .pth file processing - is_site_dir = self.check_pth_processing() - else: - # make sure we can write to target dir - testfile = self.pseudo_tempname() + '.write-test' - test_exists = os.path.exists(testfile) - try: - if test_exists: - os.unlink(testfile) - open(testfile, 'w').close() - os.unlink(testfile) - except (OSError, IOError): - self.cant_write_to_target() - - if not is_site_dir and not self.multi_version: - # Can't install non-multi to non-site dir with easy_install - pythonpath = os.environ.get('PYTHONPATH', '') - log.warn(self.__no_default_msg, self.install_dir, pythonpath) - - if is_site_dir: - if self.pth_file is None: - self.pth_file = PthDistributions(pth_file, self.all_site_dirs) - else: - self.pth_file = None - - if self.multi_version and not os.path.exists(pth_file): - self.pth_file = None # don't create a .pth file - self.install_dir = instdir - - __cant_write_msg = textwrap.dedent(""" - can't create or remove files in install directory - - The following error occurred while trying to add or remove files in the - installation directory: - - %s - - The installation directory you specified (via --install-dir, --prefix, or - the distutils default setting) was: - - %s - """).lstrip() # noqa - - __not_exists_id = textwrap.dedent(""" - This directory does not currently exist. Please create it and try again, or - choose a different installation directory (using the -d or --install-dir - option). - """).lstrip() # noqa - - __access_msg = textwrap.dedent(""" - Perhaps your account does not have write access to this directory? If the - installation directory is a system-owned directory, you may need to sign in - as the administrator or "root" account. If you do not have administrative - access to this machine, you may wish to choose a different installation - directory, preferably one that is listed in your PYTHONPATH environment - variable. - - For information on other options, you may wish to consult the - documentation at: - - https://setuptools.pypa.io/en/latest/deprecated/easy_install.html - - Please make the appropriate changes for your system and try again. - """).lstrip() # noqa - - def cant_write_to_target(self): - msg = self.__cant_write_msg % (sys.exc_info()[1], self.install_dir,) - - if not os.path.exists(self.install_dir): - msg += '\n' + self.__not_exists_id - else: - msg += '\n' + self.__access_msg - raise DistutilsError(msg) - - def check_pth_processing(self): - """Empirically verify whether .pth files are supported in inst. 
dir""" - instdir = self.install_dir - log.info("Checking .pth file support in %s", instdir) - pth_file = self.pseudo_tempname() + ".pth" - ok_file = pth_file + '.ok' - ok_exists = os.path.exists(ok_file) - tmpl = _one_liner(""" - import os - f = open({ok_file!r}, 'w') - f.write('OK') - f.close() - """) + '\n' - try: - if ok_exists: - os.unlink(ok_file) - dirname = os.path.dirname(ok_file) - os.makedirs(dirname, exist_ok=True) - f = open(pth_file, 'w') - except (OSError, IOError): - self.cant_write_to_target() - else: - try: - f.write(tmpl.format(**locals())) - f.close() - f = None - executable = sys.executable - if os.name == 'nt': - dirname, basename = os.path.split(executable) - alt = os.path.join(dirname, 'pythonw.exe') - use_alt = ( - basename.lower() == 'python.exe' and - os.path.exists(alt) - ) - if use_alt: - # use pythonw.exe to avoid opening a console window - executable = alt - - from distutils.spawn import spawn - - spawn([executable, '-E', '-c', 'pass'], 0) - - if os.path.exists(ok_file): - log.info( - "TEST PASSED: %s appears to support .pth files", - instdir - ) - return True - finally: - if f: - f.close() - if os.path.exists(ok_file): - os.unlink(ok_file) - if os.path.exists(pth_file): - os.unlink(pth_file) - if not self.multi_version: - log.warn("TEST FAILED: %s does NOT support .pth files", instdir) - return False - - def install_egg_scripts(self, dist): - """Write all the scripts for `dist`, unless scripts are excluded""" - if not self.exclude_scripts and dist.metadata_isdir('scripts'): - for script_name in dist.metadata_listdir('scripts'): - if dist.metadata_isdir('scripts/' + script_name): - # The "script" is a directory, likely a Python 3 - # __pycache__ directory, so skip it. - continue - self.install_script( - dist, script_name, - dist.get_metadata('scripts/' + script_name) - ) - self.install_wrapper_scripts(dist) - - def add_output(self, path): - if os.path.isdir(path): - for base, dirs, files in os.walk(path): - for filename in files: - self.outputs.append(os.path.join(base, filename)) - else: - self.outputs.append(path) - - def not_editable(self, spec): - if self.editable: - raise DistutilsArgError( - "Invalid argument %r: you can't use filenames or URLs " - "with --editable (except via the --find-links option)." 
- % (spec,) - ) - - def check_editable(self, spec): - if not self.editable: - return - - if os.path.exists(os.path.join(self.build_directory, spec.key)): - raise DistutilsArgError( - "%r already exists in %s; can't do a checkout there" % - (spec.key, self.build_directory) - ) - - @contextlib.contextmanager - def _tmpdir(self): - tmpdir = tempfile.mkdtemp(prefix=u"easy_install-") - try: - # cast to str as workaround for #709 and #710 and #712 - yield str(tmpdir) - finally: - os.path.exists(tmpdir) and rmtree(tmpdir) - - def easy_install(self, spec, deps=False): - with self._tmpdir() as tmpdir: - if not isinstance(spec, Requirement): - if URL_SCHEME(spec): - # It's a url, download it to tmpdir and process - self.not_editable(spec) - dl = self.package_index.download(spec, tmpdir) - return self.install_item(None, dl, tmpdir, deps, True) - - elif os.path.exists(spec): - # Existing file or directory, just process it directly - self.not_editable(spec) - return self.install_item(None, spec, tmpdir, deps, True) - else: - spec = parse_requirement_arg(spec) - - self.check_editable(spec) - dist = self.package_index.fetch_distribution( - spec, tmpdir, self.upgrade, self.editable, - not self.always_copy, self.local_index - ) - if dist is None: - msg = "Could not find suitable distribution for %r" % spec - if self.always_copy: - msg += " (--always-copy skips system and development eggs)" - raise DistutilsError(msg) - elif dist.precedence == DEVELOP_DIST: - # .egg-info dists don't need installing, just process deps - self.process_distribution(spec, dist, deps, "Using") - return dist - else: - return self.install_item(spec, dist.location, tmpdir, deps) - - def install_item(self, spec, download, tmpdir, deps, install_needed=False): - - # Installation is also needed if file in tmpdir or is not an egg - install_needed = install_needed or self.always_copy - install_needed = install_needed or os.path.dirname(download) == tmpdir - install_needed = install_needed or not download.endswith('.egg') - install_needed = install_needed or ( - self.always_copy_from is not None and - os.path.dirname(normalize_path(download)) == - normalize_path(self.always_copy_from) - ) - - if spec and not install_needed: - # at this point, we know it's a local .egg, we just don't know if - # it's already installed. 
- for dist in self.local_index[spec.project_name]: - if dist.location == download: - break - else: - install_needed = True # it's not in the local index - - log.info("Processing %s", os.path.basename(download)) - - if install_needed: - dists = self.install_eggs(spec, download, tmpdir) - for dist in dists: - self.process_distribution(spec, dist, deps) - else: - dists = [self.egg_distribution(download)] - self.process_distribution(spec, dists[0], deps, "Using") - - if spec is not None: - for dist in dists: - if dist in spec: - return dist - - def select_scheme(self, name): - try: - install._select_scheme(self, name) - except AttributeError: - # stdlib distutils - install.install.select_scheme(self, name.replace('posix', 'unix')) - - # FIXME: 'easy_install.process_distribution' is too complex (12) - def process_distribution( # noqa: C901 - self, requirement, dist, deps=True, *info, - ): - self.update_pth(dist) - self.package_index.add(dist) - if dist in self.local_index[dist.key]: - self.local_index.remove(dist) - self.local_index.add(dist) - self.install_egg_scripts(dist) - self.installed_projects[dist.key] = dist - log.info(self.installation_report(requirement, dist, *info)) - if (dist.has_metadata('dependency_links.txt') and - not self.no_find_links): - self.package_index.add_find_links( - dist.get_metadata_lines('dependency_links.txt') - ) - if not deps and not self.always_copy: - return - elif requirement is not None and dist.key != requirement.key: - log.warn("Skipping dependencies for %s", dist) - return # XXX this is not the distribution we were looking for - elif requirement is None or dist not in requirement: - # if we wound up with a different version, resolve what we've got - distreq = dist.as_requirement() - requirement = Requirement(str(distreq)) - log.info("Processing dependencies for %s", requirement) - try: - distros = WorkingSet([]).resolve( - [requirement], self.local_index, self.easy_install - ) - except DistributionNotFound as e: - raise DistutilsError(str(e)) from e - except VersionConflict as e: - raise DistutilsError(e.report()) from e - if self.always_copy or self.always_copy_from: - # Force all the relevant distros to be copied or activated - for dist in distros: - if dist.key not in self.installed_projects: - self.easy_install(dist.as_requirement()) - log.info("Finished processing dependencies for %s", requirement) - - def should_unzip(self, dist): - if self.zip_ok is not None: - return not self.zip_ok - if dist.has_metadata('not-zip-safe'): - return True - if not dist.has_metadata('zip-safe'): - return True - return False - - def maybe_move(self, spec, dist_filename, setup_base): - dst = os.path.join(self.build_directory, spec.key) - if os.path.exists(dst): - msg = ( - "%r already exists in %s; build directory %s will not be kept" - ) - log.warn(msg, spec.key, self.build_directory, setup_base) - return setup_base - if os.path.isdir(dist_filename): - setup_base = dist_filename - else: - if os.path.dirname(dist_filename) == setup_base: - os.unlink(dist_filename) # get it out of the tmp dir - contents = os.listdir(setup_base) - if len(contents) == 1: - dist_filename = os.path.join(setup_base, contents[0]) - if os.path.isdir(dist_filename): - # if the only thing there is a directory, move it instead - setup_base = dist_filename - ensure_directory(dst) - shutil.move(setup_base, dst) - return dst - - def install_wrapper_scripts(self, dist): - if self.exclude_scripts: - return - for args in ScriptWriter.best().get_args(dist): - self.write_script(*args) - - def 
install_script(self, dist, script_name, script_text, dev_path=None): - """Generate a legacy script wrapper and install it""" - spec = str(dist.as_requirement()) - is_script = is_python_script(script_text, script_name) - - if is_script: - body = self._load_template(dev_path) % locals() - script_text = ScriptWriter.get_header(script_text) + body - self.write_script(script_name, _to_bytes(script_text), 'b') - - @staticmethod - def _load_template(dev_path): - """ - There are a couple of template scripts in the package. This - function loads one of them and prepares it for use. - """ - # See https://github.com/pypa/setuptools/issues/134 for info - # on script file naming and downstream issues with SVR4 - name = 'script.tmpl' - if dev_path: - name = name.replace('.tmpl', ' (dev).tmpl') - - raw_bytes = resource_string('setuptools', name) - return raw_bytes.decode('utf-8') - - def write_script(self, script_name, contents, mode="t", blockers=()): - """Write an executable file to the scripts directory""" - self.delete_blockers( # clean up old .py/.pyw w/o a script - [os.path.join(self.script_dir, x) for x in blockers] - ) - log.info("Installing %s script to %s", script_name, self.script_dir) - target = os.path.join(self.script_dir, script_name) - self.add_output(target) - - if self.dry_run: - return - - mask = current_umask() - ensure_directory(target) - if os.path.exists(target): - os.unlink(target) - with open(target, "w" + mode) as f: - f.write(contents) - chmod(target, 0o777 - mask) - - def install_eggs(self, spec, dist_filename, tmpdir): - # .egg dirs or files are already built, so just return them - installer_map = { - '.egg': self.install_egg, - '.exe': self.install_exe, - '.whl': self.install_wheel, - } - try: - install_dist = installer_map[ - dist_filename.lower()[-4:] - ] - except KeyError: - pass - else: - return [install_dist(dist_filename, tmpdir)] - - # Anything else, try to extract and build - setup_base = tmpdir - if os.path.isfile(dist_filename) and not dist_filename.endswith('.py'): - unpack_archive(dist_filename, tmpdir, self.unpack_progress) - elif os.path.isdir(dist_filename): - setup_base = os.path.abspath(dist_filename) - - if (setup_base.startswith(tmpdir) # something we downloaded - and self.build_directory and spec is not None): - setup_base = self.maybe_move(spec, dist_filename, setup_base) - - # Find the setup.py file - setup_script = os.path.join(setup_base, 'setup.py') - - if not os.path.exists(setup_script): - setups = glob(os.path.join(setup_base, '*', 'setup.py')) - if not setups: - raise DistutilsError( - "Couldn't find a setup script in %s" % - os.path.abspath(dist_filename) - ) - if len(setups) > 1: - raise DistutilsError( - "Multiple setup scripts in %s" % - os.path.abspath(dist_filename) - ) - setup_script = setups[0] - - # Now run it, and return the result - if self.editable: - log.info(self.report_editable(spec, setup_script)) - return [] - else: - return self.build_and_install(setup_script, setup_base) - - def egg_distribution(self, egg_path): - if os.path.isdir(egg_path): - metadata = PathMetadata(egg_path, os.path.join(egg_path, - 'EGG-INFO')) - else: - metadata = EggMetadata(zipimport.zipimporter(egg_path)) - return Distribution.from_filename(egg_path, metadata=metadata) - - # FIXME: 'easy_install.install_egg' is too complex (11) - def install_egg(self, egg_path, tmpdir): # noqa: C901 - destination = os.path.join( - self.install_dir, - os.path.basename(egg_path), - ) - destination = os.path.abspath(destination) - if not self.dry_run: - 
ensure_directory(destination) - - dist = self.egg_distribution(egg_path) - if not ( - os.path.exists(destination) and os.path.samefile(egg_path, destination) - ): - if os.path.isdir(destination) and not os.path.islink(destination): - dir_util.remove_tree(destination, dry_run=self.dry_run) - elif os.path.exists(destination): - self.execute( - os.unlink, - (destination,), - "Removing " + destination, - ) - try: - new_dist_is_zipped = False - if os.path.isdir(egg_path): - if egg_path.startswith(tmpdir): - f, m = shutil.move, "Moving" - else: - f, m = shutil.copytree, "Copying" - elif self.should_unzip(dist): - self.mkpath(destination) - f, m = self.unpack_and_compile, "Extracting" - else: - new_dist_is_zipped = True - if egg_path.startswith(tmpdir): - f, m = shutil.move, "Moving" - else: - f, m = shutil.copy2, "Copying" - self.execute( - f, - (egg_path, destination), - (m + " %s to %s") % ( - os.path.basename(egg_path), - os.path.dirname(destination) - ), - ) - update_dist_caches( - destination, - fix_zipimporter_caches=new_dist_is_zipped, - ) - except Exception: - update_dist_caches(destination, fix_zipimporter_caches=False) - raise - - self.add_output(destination) - return self.egg_distribution(destination) - - def install_exe(self, dist_filename, tmpdir): - # See if it's valid, get data - cfg = extract_wininst_cfg(dist_filename) - if cfg is None: - raise DistutilsError( - "%s is not a valid distutils Windows .exe" % dist_filename - ) - # Create a dummy distribution object until we build the real distro - dist = Distribution( - None, - project_name=cfg.get('metadata', 'name'), - version=cfg.get('metadata', 'version'), platform=get_platform(), - ) - - # Convert the .exe to an unpacked egg - egg_path = os.path.join(tmpdir, dist.egg_name() + '.egg') - dist.location = egg_path - egg_tmp = egg_path + '.tmp' - _egg_info = os.path.join(egg_tmp, 'EGG-INFO') - pkg_inf = os.path.join(_egg_info, 'PKG-INFO') - ensure_directory(pkg_inf) # make sure EGG-INFO dir exists - dist._provider = PathMetadata(egg_tmp, _egg_info) # XXX - self.exe_to_egg(dist_filename, egg_tmp) - - # Write EGG-INFO/PKG-INFO - if not os.path.exists(pkg_inf): - f = open(pkg_inf, 'w') - f.write('Metadata-Version: 1.0\n') - for k, v in cfg.items('metadata'): - if k != 'target_version': - f.write('%s: %s\n' % (k.replace('_', '-').title(), v)) - f.close() - script_dir = os.path.join(_egg_info, 'scripts') - # delete entry-point scripts to avoid duping - self.delete_blockers([ - os.path.join(script_dir, args[0]) - for args in ScriptWriter.get_args(dist) - ]) - # Build .egg file from tmpdir - bdist_egg.make_zipfile( - egg_path, egg_tmp, verbose=self.verbose, dry_run=self.dry_run, - ) - # install the .egg - return self.install_egg(egg_path, tmpdir) - - # FIXME: 'easy_install.exe_to_egg' is too complex (12) - def exe_to_egg(self, dist_filename, egg_tmp): # noqa: C901 - """Extract a bdist_wininst to the directories an egg would use""" - # Check for .pth file and set up prefix translations - prefixes = get_exe_prefixes(dist_filename) - to_compile = [] - native_libs = [] - top_level = {} - - def process(src, dst): - s = src.lower() - for old, new in prefixes: - if s.startswith(old): - src = new + src[len(old):] - parts = src.split('/') - dst = os.path.join(egg_tmp, *parts) - dl = dst.lower() - if dl.endswith('.pyd') or dl.endswith('.dll'): - parts[-1] = bdist_egg.strip_module(parts[-1]) - top_level[os.path.splitext(parts[0])[0]] = 1 - native_libs.append(src) - elif dl.endswith('.py') and old != 'SCRIPTS/': - 
top_level[os.path.splitext(parts[0])[0]] = 1 - to_compile.append(dst) - return dst - if not src.endswith('.pth'): - log.warn("WARNING: can't process %s", src) - return None - - # extract, tracking .pyd/.dll->native_libs and .py -> to_compile - unpack_archive(dist_filename, egg_tmp, process) - stubs = [] - for res in native_libs: - if res.lower().endswith('.pyd'): # create stubs for .pyd's - parts = res.split('/') - resource = parts[-1] - parts[-1] = bdist_egg.strip_module(parts[-1]) + '.py' - pyfile = os.path.join(egg_tmp, *parts) - to_compile.append(pyfile) - stubs.append(pyfile) - bdist_egg.write_stub(resource, pyfile) - self.byte_compile(to_compile) # compile .py's - bdist_egg.write_safety_flag( - os.path.join(egg_tmp, 'EGG-INFO'), - bdist_egg.analyze_egg(egg_tmp, stubs)) # write zip-safety flag - - for name in 'top_level', 'native_libs': - if locals()[name]: - txt = os.path.join(egg_tmp, 'EGG-INFO', name + '.txt') - if not os.path.exists(txt): - f = open(txt, 'w') - f.write('\n'.join(locals()[name]) + '\n') - f.close() - - def install_wheel(self, wheel_path, tmpdir): - wheel = Wheel(wheel_path) - assert wheel.is_compatible() - destination = os.path.join(self.install_dir, wheel.egg_name()) - destination = os.path.abspath(destination) - if not self.dry_run: - ensure_directory(destination) - if os.path.isdir(destination) and not os.path.islink(destination): - dir_util.remove_tree(destination, dry_run=self.dry_run) - elif os.path.exists(destination): - self.execute( - os.unlink, - (destination,), - "Removing " + destination, - ) - try: - self.execute( - wheel.install_as_egg, - (destination,), - ("Installing %s to %s") % ( - os.path.basename(wheel_path), - os.path.dirname(destination) - ), - ) - finally: - update_dist_caches(destination, fix_zipimporter_caches=False) - self.add_output(destination) - return self.egg_distribution(destination) - - __mv_warning = textwrap.dedent(""" - Because this distribution was installed --multi-version, before you can - import modules from this package in an application, you will need to - 'import pkg_resources' and then use a 'require()' call similar to one of - these examples, in order to select the desired version: - - pkg_resources.require("%(name)s") # latest installed version - pkg_resources.require("%(name)s==%(version)s") # this exact version - pkg_resources.require("%(name)s>=%(version)s") # this version or higher - """).lstrip() # noqa - - __id_warning = textwrap.dedent(""" - Note also that the installation directory must be on sys.path at runtime for - this to work. (e.g. by being the application's script directory, by being on - PYTHONPATH, or by being added to sys.path by your code.) - """) # noqa - - def installation_report(self, req, dist, what="Installed"): - """Helpful installation message for display to package users""" - msg = "\n%(what)s %(eggloc)s%(extras)s" - if self.multi_version and not self.no_report: - msg += '\n' + self.__mv_warning - if self.install_dir not in map(normalize_path, sys.path): - msg += '\n' + self.__id_warning - - eggloc = dist.location - name = dist.project_name - version = dist.version - extras = '' # TODO: self.report_extras(req, dist) - return msg % locals() - - __editable_msg = textwrap.dedent(""" - Extracted editable version of %(spec)s to %(dirname)s - - If it uses setuptools in its setup script, you can activate it in - "development" mode by going to that directory and running:: - - %(python)s setup.py develop - - See the setuptools documentation for the "develop" command for more info. 
- """).lstrip() # noqa - - def report_editable(self, spec, setup_script): - dirname = os.path.dirname(setup_script) - python = sys.executable - return '\n' + self.__editable_msg % locals() - - def run_setup(self, setup_script, setup_base, args): - sys.modules.setdefault('distutils.command.bdist_egg', bdist_egg) - sys.modules.setdefault('distutils.command.egg_info', egg_info) - - args = list(args) - if self.verbose > 2: - v = 'v' * (self.verbose - 1) - args.insert(0, '-' + v) - elif self.verbose < 2: - args.insert(0, '-q') - if self.dry_run: - args.insert(0, '-n') - log.info( - "Running %s %s", setup_script[len(setup_base) + 1:], ' '.join(args) - ) - try: - run_setup(setup_script, args) - except SystemExit as v: - raise DistutilsError( - "Setup script exited with %s" % (v.args[0],) - ) from v - - def build_and_install(self, setup_script, setup_base): - args = ['bdist_egg', '--dist-dir'] - - dist_dir = tempfile.mkdtemp( - prefix='egg-dist-tmp-', dir=os.path.dirname(setup_script) - ) - try: - self._set_fetcher_options(os.path.dirname(setup_script)) - args.append(dist_dir) - - self.run_setup(setup_script, setup_base, args) - all_eggs = Environment([dist_dir]) - eggs = [] - for key in all_eggs: - for dist in all_eggs[key]: - eggs.append(self.install_egg(dist.location, setup_base)) - if not eggs and not self.dry_run: - log.warn("No eggs found in %s (setup script problem?)", - dist_dir) - return eggs - finally: - rmtree(dist_dir) - log.set_verbosity(self.verbose) # restore our log verbosity - - def _set_fetcher_options(self, base): - """ - When easy_install is about to run bdist_egg on a source dist, that - source dist might have 'setup_requires' directives, requiring - additional fetching. Ensure the fetcher options given to easy_install - are available to that command as well. - """ - # find the fetch options from easy_install and write them out - # to the setup.cfg file. - ei_opts = self.distribution.get_option_dict('easy_install').copy() - fetch_directives = ( - 'find_links', 'site_dirs', 'index_url', 'optimize', 'allow_hosts', - ) - fetch_options = {} - for key, val in ei_opts.items(): - if key not in fetch_directives: - continue - fetch_options[key] = val[1] - # create a settings dictionary suitable for `edit_config` - settings = dict(easy_install=fetch_options) - cfg_filename = os.path.join(base, 'setup.cfg') - setopt.edit_config(cfg_filename, settings) - - def update_pth(self, dist): # noqa: C901 # is too complex (11) # FIXME - if self.pth_file is None: - return - - for d in self.pth_file[dist.key]: # drop old entries - if not self.multi_version and d.location == dist.location: - continue - - log.info("Removing %s from easy-install.pth file", d) - self.pth_file.remove(d) - if d.location in self.shadow_path: - self.shadow_path.remove(d.location) - - if not self.multi_version: - if dist.location in self.pth_file.paths: - log.info( - "%s is already the active version in easy-install.pth", - dist, - ) - else: - log.info("Adding %s to easy-install.pth file", dist) - self.pth_file.add(dist) # add new entry - if dist.location not in self.shadow_path: - self.shadow_path.append(dist.location) - - if self.dry_run: - return - - self.pth_file.save() - - if dist.key != 'setuptools': - return - - # Ensure that setuptools itself never becomes unavailable! - # XXX should this check for latest version? 
- filename = os.path.join(self.install_dir, 'setuptools.pth') - if os.path.islink(filename): - os.unlink(filename) - with open(filename, 'wt') as f: - f.write(self.pth_file.make_relative(dist.location) + '\n') - - def unpack_progress(self, src, dst): - # Progress filter for unpacking - log.debug("Unpacking %s to %s", src, dst) - return dst # only unpack-and-compile skips files for dry run - - def unpack_and_compile(self, egg_path, destination): - to_compile = [] - to_chmod = [] - - def pf(src, dst): - if dst.endswith('.py') and not src.startswith('EGG-INFO/'): - to_compile.append(dst) - elif dst.endswith('.dll') or dst.endswith('.so'): - to_chmod.append(dst) - self.unpack_progress(src, dst) - return not self.dry_run and dst or None - - unpack_archive(egg_path, destination, pf) - self.byte_compile(to_compile) - if not self.dry_run: - for f in to_chmod: - mode = ((os.stat(f)[stat.ST_MODE]) | 0o555) & 0o7755 - chmod(f, mode) - - def byte_compile(self, to_compile): - if sys.dont_write_bytecode: - return - - from distutils.util import byte_compile - - try: - # try to make the byte compile messages quieter - log.set_verbosity(self.verbose - 1) - - byte_compile(to_compile, optimize=0, force=1, dry_run=self.dry_run) - if self.optimize: - byte_compile( - to_compile, optimize=self.optimize, force=1, - dry_run=self.dry_run, - ) - finally: - log.set_verbosity(self.verbose) # restore original verbosity - - __no_default_msg = textwrap.dedent(""" - bad install directory or PYTHONPATH - - You are attempting to install a package to a directory that is not - on PYTHONPATH and which Python does not read ".pth" files from. The - installation directory you specified (via --install-dir, --prefix, or - the distutils default setting) was: - - %s - - and your PYTHONPATH environment variable currently contains: - - %r - - Here are some of your options for correcting the problem: - - * You can choose a different installation directory, i.e., one that is - on PYTHONPATH or supports .pth files - - * You can add the installation directory to the PYTHONPATH environment - variable. (It must then also be on PYTHONPATH whenever you run - Python and want to use the package(s) you are installing.) - - * You can set up the installation directory to support ".pth" files by - using one of the approaches described here: - - https://setuptools.pypa.io/en/latest/deprecated/easy_install.html#custom-installation-locations - - - Please make the appropriate changes for your system and try again. 
- """).strip() - - def create_home_path(self): - """Create directories under ~.""" - if not self.user: - return - home = convert_path(os.path.expanduser("~")) - for path in only_strs(self.config_vars.values()): - if path.startswith(home) and not os.path.isdir(path): - self.debug_print("os.makedirs('%s', 0o700)" % path) - os.makedirs(path, 0o700) - - INSTALL_SCHEMES = dict( - posix=dict( - install_dir='$base/lib/python$py_version_short/site-packages', - script_dir='$base/bin', - ), - ) - - DEFAULT_SCHEME = dict( - install_dir='$base/Lib/site-packages', - script_dir='$base/Scripts', - ) - - def _expand(self, *attrs): - config_vars = self.get_finalized_command('install').config_vars - - if self.prefix: - # Set default install_dir/scripts from --prefix - config_vars = dict(config_vars) - config_vars['base'] = self.prefix - scheme = self.INSTALL_SCHEMES.get(os.name, self.DEFAULT_SCHEME) - for attr, val in scheme.items(): - if getattr(self, attr, None) is None: - setattr(self, attr, val) - - from distutils.util import subst_vars - - for attr in attrs: - val = getattr(self, attr) - if val is not None: - val = subst_vars(val, config_vars) - if os.name == 'posix': - val = os.path.expanduser(val) - setattr(self, attr, val) - - -def _pythonpath(): - items = os.environ.get('PYTHONPATH', '').split(os.pathsep) - return filter(None, items) - - -def get_site_dirs(): - """ - Return a list of 'site' dirs - """ - - sitedirs = [] - - # start with PYTHONPATH - sitedirs.extend(_pythonpath()) - - prefixes = [sys.prefix] - if sys.exec_prefix != sys.prefix: - prefixes.append(sys.exec_prefix) - for prefix in prefixes: - if not prefix: - continue - - if sys.platform in ('os2emx', 'riscos'): - sitedirs.append(os.path.join(prefix, "Lib", "site-packages")) - elif os.sep == '/': - sitedirs.extend([ - os.path.join( - prefix, - "lib", - "python{}.{}".format(*sys.version_info), - "site-packages", - ), - os.path.join(prefix, "lib", "site-python"), - ]) - else: - sitedirs.extend([ - prefix, - os.path.join(prefix, "lib", "site-packages"), - ]) - if sys.platform != 'darwin': - continue - - # for framework builds *only* we add the standard Apple - # locations. 
Currently only per-user, but /Library and - # /Network/Library could be added too - if 'Python.framework' not in prefix: - continue - - home = os.environ.get('HOME') - if not home: - continue - - home_sp = os.path.join( - home, - 'Library', - 'Python', - '{}.{}'.format(*sys.version_info), - 'site-packages', - ) - sitedirs.append(home_sp) - lib_paths = get_path('purelib'), get_path('platlib') - - sitedirs.extend(s for s in lib_paths if s not in sitedirs) - - if site.ENABLE_USER_SITE: - sitedirs.append(site.USER_SITE) - - with contextlib.suppress(AttributeError): - sitedirs.extend(site.getsitepackages()) - - sitedirs = list(map(normalize_path, sitedirs)) - - return sitedirs - - -def expand_paths(inputs): # noqa: C901 # is too complex (11) # FIXME - """Yield sys.path directories that might contain "old-style" packages""" - - seen = {} - - for dirname in inputs: - dirname = normalize_path(dirname) - if dirname in seen: - continue - - seen[dirname] = 1 - if not os.path.isdir(dirname): - continue - - files = os.listdir(dirname) - yield dirname, files - - for name in files: - if not name.endswith('.pth'): - # We only care about the .pth files - continue - if name in ('easy-install.pth', 'setuptools.pth'): - # Ignore .pth files that we control - continue - - # Read the .pth file - f = open(os.path.join(dirname, name)) - lines = list(yield_lines(f)) - f.close() - - # Yield existing non-dupe, non-import directory lines from it - for line in lines: - if line.startswith("import"): - continue - - line = normalize_path(line.rstrip()) - if line in seen: - continue - - seen[line] = 1 - if not os.path.isdir(line): - continue - - yield line, os.listdir(line) - - -def extract_wininst_cfg(dist_filename): - """Extract configuration data from a bdist_wininst .exe - - Returns a configparser.RawConfigParser, or None - """ - f = open(dist_filename, 'rb') - try: - endrec = zipfile._EndRecData(f) - if endrec is None: - return None - - prepended = (endrec[9] - endrec[5]) - endrec[6] - if prepended < 12: # no wininst data here - return None - f.seek(prepended - 12) - - tag, cfglen, bmlen = struct.unpack("egg path translations for a given .exe file""" - - prefixes = [ - ('PURELIB/', ''), - ('PLATLIB/pywin32_system32', ''), - ('PLATLIB/', ''), - ('SCRIPTS/', 'EGG-INFO/scripts/'), - ('DATA/lib/site-packages', ''), - ] - z = zipfile.ZipFile(exe_filename) - try: - for info in z.infolist(): - name = info.filename - parts = name.split('/') - if len(parts) == 3 and parts[2] == 'PKG-INFO': - if parts[1].endswith('.egg-info'): - prefixes.insert(0, ('/'.join(parts[:2]), 'EGG-INFO/')) - break - if len(parts) != 2 or not name.endswith('.pth'): - continue - if name.endswith('-nspkg.pth'): - continue - if parts[0].upper() in ('PURELIB', 'PLATLIB'): - contents = z.read(name).decode() - for pth in yield_lines(contents): - pth = pth.strip().replace('\\', '/') - if not pth.startswith('import'): - prefixes.append((('%s/%s/' % (parts[0], pth)), '')) - finally: - z.close() - prefixes = [(x.lower(), y) for x, y in prefixes] - prefixes.sort() - prefixes.reverse() - return prefixes - - -class PthDistributions(Environment): - """A .pth file with Distribution paths in it""" - - dirty = False - - def __init__(self, filename, sitedirs=()): - self.filename = filename - self.sitedirs = list(map(normalize_path, sitedirs)) - self.basedir = normalize_path(os.path.dirname(self.filename)) - self._load() - super().__init__([], None, None) - for path in yield_lines(self.paths): - list(map(self.add, find_distributions(path, True))) - - def _load(self): - 
self.paths = [] - saw_import = False - seen = dict.fromkeys(self.sitedirs) - if os.path.isfile(self.filename): - f = open(self.filename, 'rt') - for line in f: - if line.startswith('import'): - saw_import = True - continue - path = line.rstrip() - self.paths.append(path) - if not path.strip() or path.strip().startswith('#'): - continue - # skip non-existent paths, in case somebody deleted a package - # manually, and duplicate paths as well - path = self.paths[-1] = normalize_path( - os.path.join(self.basedir, path) - ) - if not os.path.exists(path) or path in seen: - self.paths.pop() # skip it - self.dirty = True # we cleaned up, so we're dirty now :) - continue - seen[path] = 1 - f.close() - - if self.paths and not saw_import: - self.dirty = True # ensure anything we touch has import wrappers - while self.paths and not self.paths[-1].strip(): - self.paths.pop() - - def save(self): - """Write changed .pth file back to disk""" - if not self.dirty: - return - - rel_paths = list(map(self.make_relative, self.paths)) - if rel_paths: - log.debug("Saving %s", self.filename) - lines = self._wrap_lines(rel_paths) - data = '\n'.join(lines) + '\n' - - if os.path.islink(self.filename): - os.unlink(self.filename) - with open(self.filename, 'wt') as f: - f.write(data) - - elif os.path.exists(self.filename): - log.debug("Deleting empty %s", self.filename) - os.unlink(self.filename) - - self.dirty = False - - @staticmethod - def _wrap_lines(lines): - return lines - - def add(self, dist): - """Add `dist` to the distribution map""" - new_path = ( - dist.location not in self.paths and ( - dist.location not in self.sitedirs or - # account for '.' being in PYTHONPATH - dist.location == os.getcwd() - ) - ) - if new_path: - self.paths.append(dist.location) - self.dirty = True - super().add(dist) - - def remove(self, dist): - """Remove `dist` from the distribution map""" - while dist.location in self.paths: - self.paths.remove(dist.location) - self.dirty = True - super().remove(dist) - - def make_relative(self, path): - npath, last = os.path.split(normalize_path(path)) - baselen = len(self.basedir) - parts = [last] - sep = os.altsep == '/' and '/' or os.sep - while len(npath) >= baselen: - if npath == self.basedir: - parts.append(os.curdir) - parts.reverse() - return sep.join(parts) - npath, last = os.path.split(npath) - parts.append(last) - else: - return path - - -class RewritePthDistributions(PthDistributions): - @classmethod - def _wrap_lines(cls, lines): - yield cls.prelude - for line in lines: - yield line - yield cls.postlude - - prelude = _one_liner(""" - import sys - sys.__plen = len(sys.path) - """) - postlude = _one_liner(""" - import sys - new = sys.path[sys.__plen:] - del sys.path[sys.__plen:] - p = getattr(sys, '__egginsert', 0) - sys.path[p:p] = new - sys.__egginsert = p + len(new) - """) - - -if os.environ.get('SETUPTOOLS_SYS_PATH_TECHNIQUE', 'raw') == 'rewrite': - PthDistributions = RewritePthDistributions - - -def _first_line_re(): - """ - Return a regular expression based on first_line_re suitable for matching - strings. - """ - if isinstance(first_line_re.pattern, str): - return first_line_re - - # first_line_re in Python >=3.1.4 and >=3.2.1 is a bytes pattern. - return re.compile(first_line_re.pattern.decode()) - - -def auto_chmod(func, arg, exc): - if func in [os.unlink, os.remove] and os.name == 'nt': - chmod(arg, stat.S_IWRITE) - return func(arg) - et, ev, _ = sys.exc_info() - # TODO: This code doesn't make sense. What is it trying to do? 
- raise (ev[0], ev[1] + (" %s %s" % (func, arg))) - - -def update_dist_caches(dist_path, fix_zipimporter_caches): - """ - Fix any globally cached `dist_path` related data - - `dist_path` should be a path of a newly installed egg distribution (zipped - or unzipped). - - sys.path_importer_cache contains finder objects that have been cached when - importing data from the original distribution. Any such finders need to be - cleared since the replacement distribution might be packaged differently, - e.g. a zipped egg distribution might get replaced with an unzipped egg - folder or vice versa. Having the old finders cached may then cause Python - to attempt loading modules from the replacement distribution using an - incorrect loader. - - zipimport.zipimporter objects are Python loaders charged with importing - data packaged inside zip archives. If stale loaders referencing the - original distribution, are left behind, they can fail to load modules from - the replacement distribution. E.g. if an old zipimport.zipimporter instance - is used to load data from a new zipped egg archive, it may cause the - operation to attempt to locate the requested data in the wrong location - - one indicated by the original distribution's zip archive directory - information. Such an operation may then fail outright, e.g. report having - read a 'bad local file header', or even worse, it may fail silently & - return invalid data. - - zipimport._zip_directory_cache contains cached zip archive directory - information for all existing zipimport.zipimporter instances and all such - instances connected to the same archive share the same cached directory - information. - - If asked, and the underlying Python implementation allows it, we can fix - all existing zipimport.zipimporter instances instead of having to track - them down and remove them one by one, by updating their shared cached zip - archive directory information. This, of course, assumes that the - replacement distribution is packaged as a zipped egg. - - If not asked to fix existing zipimport.zipimporter instances, we still do - our best to clear any remaining zipimport.zipimporter related cached data - that might somehow later get used when attempting to load data from the new - distribution and thus cause such load operations to fail. Note that when - tracking down such remaining stale data, we can not catch every conceivable - usage from here, and we clear only those that we know of and have found to - cause problems if left alive. Any remaining caches should be updated by - whomever is in charge of maintaining them, i.e. they should be ready to - handle us replacing their zip archives with new distributions at runtime. - - """ - # There are several other known sources of stale zipimport.zipimporter - # instances that we do not clear here, but might if ever given a reason to - # do so: - # * Global setuptools pkg_resources.working_set (a.k.a. 'master working - # set') may contain distributions which may in turn contain their - # zipimport.zipimporter loaders. - # * Several zipimport.zipimporter loaders held by local variables further - # up the function call stack when running the setuptools installation. - # * Already loaded modules may have their __loader__ attribute set to the - # exact loader instance used when importing them. Python 3.4 docs state - # that this information is intended mostly for introspection and so is - # not expected to cause us problems. 
- normalized_path = normalize_path(dist_path) - _uncache(normalized_path, sys.path_importer_cache) - if fix_zipimporter_caches: - _replace_zip_directory_cache_data(normalized_path) - else: - # Here, even though we do not want to fix existing and now stale - # zipimporter cache information, we still want to remove it. Related to - # Python's zip archive directory information cache, we clear each of - # its stale entries in two phases: - # 1. Clear the entry so attempting to access zip archive information - # via any existing stale zipimport.zipimporter instances fails. - # 2. Remove the entry from the cache so any newly constructed - # zipimport.zipimporter instances do not end up using old stale - # zip archive directory information. - # This whole stale data removal step does not seem strictly necessary, - # but has been left in because it was done before we started replacing - # the zip archive directory information cache content if possible, and - # there are no relevant unit tests that we can depend on to tell us if - # this is really needed. - _remove_and_clear_zip_directory_cache_data(normalized_path) - - -def _collect_zipimporter_cache_entries(normalized_path, cache): - """ - Return zipimporter cache entry keys related to a given normalized path. - - Alternative path spellings (e.g. those using different character case or - those using alternative path separators) related to the same path are - included. Any sub-path entries are included as well, i.e. those - corresponding to zip archives embedded in other zip archives. - - """ - result = [] - prefix_len = len(normalized_path) - for p in cache: - np = normalize_path(p) - if (np.startswith(normalized_path) and - np[prefix_len:prefix_len + 1] in (os.sep, '')): - result.append(p) - return result - - -def _update_zipimporter_cache(normalized_path, cache, updater=None): - """ - Update zipimporter cache data for a given normalized path. - - Any sub-path entries are processed as well, i.e. those corresponding to zip - archives embedded in other zip archives. - - Given updater is a callable taking a cache entry key and the original entry - (after already removing the entry from the cache), and expected to update - the entry and possibly return a new one to be inserted in its place. - Returning None indicates that the entry should not be replaced with a new - one. If no updater is given, the cache entries are simply removed without - any additional processing, the same as if the updater simply returned None. - - """ - for p in _collect_zipimporter_cache_entries(normalized_path, cache): - # N.B. pypy's custom zipimport._zip_directory_cache implementation does - # not support the complete dict interface: - # * Does not support item assignment, thus not allowing this function - # to be used only for removing existing cache entries. - # * Does not support the dict.pop() method, forcing us to use the - # get/del patterns instead. 
For more detailed information see the - # following links: - # https://github.com/pypa/setuptools/issues/202#issuecomment-202913420 - # http://bit.ly/2h9itJX - old_entry = cache[p] - del cache[p] - new_entry = updater and updater(p, old_entry) - if new_entry is not None: - cache[p] = new_entry - - -def _uncache(normalized_path, cache): - _update_zipimporter_cache(normalized_path, cache) - - -def _remove_and_clear_zip_directory_cache_data(normalized_path): - def clear_and_remove_cached_zip_archive_directory_data(path, old_entry): - old_entry.clear() - - _update_zipimporter_cache( - normalized_path, zipimport._zip_directory_cache, - updater=clear_and_remove_cached_zip_archive_directory_data) - - -# PyPy Python implementation does not allow directly writing to the -# zipimport._zip_directory_cache and so prevents us from attempting to correct -# its content. The best we can do there is clear the problematic cache content -# and have PyPy repopulate it as needed. The downside is that if there are any -# stale zipimport.zipimporter instances laying around, attempting to use them -# will fail due to not having its zip archive directory information available -# instead of being automatically corrected to use the new correct zip archive -# directory information. -if '__pypy__' in sys.builtin_module_names: - _replace_zip_directory_cache_data = \ - _remove_and_clear_zip_directory_cache_data -else: - - def _replace_zip_directory_cache_data(normalized_path): - def replace_cached_zip_archive_directory_data(path, old_entry): - # N.B. In theory, we could load the zip directory information just - # once for all updated path spellings, and then copy it locally and - # update its contained path strings to contain the correct - # spelling, but that seems like a way too invasive move (this cache - # structure is not officially documented anywhere and could in - # theory change with new Python releases) for no significant - # benefit. - old_entry.clear() - zipimport.zipimporter(path) - old_entry.update(zipimport._zip_directory_cache[path]) - return old_entry - - _update_zipimporter_cache( - normalized_path, zipimport._zip_directory_cache, - updater=replace_cached_zip_archive_directory_data) - - -def is_python(text, filename=''): - "Is this string a valid Python script?" - try: - compile(text, filename, 'exec') - except (SyntaxError, TypeError): - return False - else: - return True - - -def is_sh(executable): - """Determine if the specified executable is a .sh (contains a #! line)""" - try: - with io.open(executable, encoding='latin-1') as fp: - magic = fp.read(2) - except (OSError, IOError): - return executable - return magic == '#!' - - -def nt_quote_arg(arg): - """Quote a command line argument according to Windows parsing rules""" - return subprocess.list2cmdline([arg]) - - -def is_python_script(script_text, filename): - """Is this text, as a whole, a Python script? (as opposed to shell/bat/etc. - """ - if filename.endswith('.py') or filename.endswith('.pyw'): - return True # extension says it's Python - if is_python(script_text, filename): - return True # it's syntactically valid Python - if script_text.startswith('#!'): - # It begins with a '#!' 
line, so check if 'python' is in it somewhere - return 'python' in script_text.splitlines()[0].lower() - - return False # Not any Python I can recognize - - -try: - from os import chmod as _chmod -except ImportError: - # Jython compatibility - def _chmod(*args): - pass - - -def chmod(path, mode): - log.debug("changing mode of %s to %o", path, mode) - try: - _chmod(path, mode) - except os.error as e: - log.debug("chmod failed: %s", e) - - -class CommandSpec(list): - """ - A command spec for a #! header, specified as a list of arguments akin to - those passed to Popen. - """ - - options = [] - split_args = dict() - - @classmethod - def best(cls): - """ - Choose the best CommandSpec class based on environmental conditions. - """ - return cls - - @classmethod - def _sys_executable(cls): - _default = os.path.normpath(sys.executable) - return os.environ.get('__PYVENV_LAUNCHER__', _default) - - @classmethod - def from_param(cls, param): - """ - Construct a CommandSpec from a parameter to build_scripts, which may - be None. - """ - if isinstance(param, cls): - return param - if isinstance(param, list): - return cls(param) - if param is None: - return cls.from_environment() - # otherwise, assume it's a string. - return cls.from_string(param) - - @classmethod - def from_environment(cls): - return cls([cls._sys_executable()]) - - @classmethod - def from_string(cls, string): - """ - Construct a command spec from a simple string representing a command - line parseable by shlex.split. - """ - items = shlex.split(string, **cls.split_args) - return cls(items) - - def install_options(self, script_text): - self.options = shlex.split(self._extract_options(script_text)) - cmdline = subprocess.list2cmdline(self) - if not isascii(cmdline): - self.options[:0] = ['-x'] - - @staticmethod - def _extract_options(orig_script): - """ - Extract any options from the first line of the script. - """ - first = (orig_script + '\n').splitlines()[0] - match = _first_line_re().match(first) - options = match.group(1) or '' if match else '' - return options.strip() - - def as_header(self): - return self._render(self + list(self.options)) - - @staticmethod - def _strip_quotes(item): - _QUOTES = '"\'' - for q in _QUOTES: - if item.startswith(q) and item.endswith(q): - return item[1:-1] - return item - - @staticmethod - def _render(items): - cmdline = subprocess.list2cmdline( - CommandSpec._strip_quotes(item.strip()) for item in items) - return '#!' + cmdline + '\n' - - -# For pbr compat; will be removed in a future version. -sys_executable = CommandSpec._sys_executable() - - -class WindowsCommandSpec(CommandSpec): - split_args = dict(posix=False) - - -class ScriptWriter: - """ - Encapsulates behavior around writing entry point scripts for console and - gui apps. 
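    A minimal usage sketch (``dist`` is assumed to be a
    ``pkg_resources.Distribution`` that exposes console_scripts or
    gui_scripts entry points; writing the files out is left to the caller):

        writer = ScriptWriter.best()        # pick the writer suited to this platform
        header = writer.get_header()        # build the '#!' header line
        for args in writer.get_args(dist, header):
            name, text = args[0], args[1]   # any remaining tuple items are mode/blockers
            # write ``text`` out as the script file named ``name``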
- """ - - template = textwrap.dedent(r""" - # EASY-INSTALL-ENTRY-SCRIPT: %(spec)r,%(group)r,%(name)r - import re - import sys - - # for compatibility with easy_install; see #2198 - __requires__ = %(spec)r - - try: - from importlib.metadata import distribution - except ImportError: - try: - from importlib_metadata import distribution - except ImportError: - from pkg_resources import load_entry_point - - - def importlib_load_entry_point(spec, group, name): - dist_name, _, _ = spec.partition('==') - matches = ( - entry_point - for entry_point in distribution(dist_name).entry_points - if entry_point.group == group and entry_point.name == name - ) - return next(matches).load() - - - globals().setdefault('load_entry_point', importlib_load_entry_point) - - - if __name__ == '__main__': - sys.argv[0] = re.sub(r'(-script\.pyw?|\.exe)?$', '', sys.argv[0]) - sys.exit(load_entry_point(%(spec)r, %(group)r, %(name)r)()) - """).lstrip() - - command_spec_class = CommandSpec - - @classmethod - def get_script_args(cls, dist, executable=None, wininst=False): - # for backward compatibility - warnings.warn("Use get_args", EasyInstallDeprecationWarning) - writer = (WindowsScriptWriter if wininst else ScriptWriter).best() - header = cls.get_script_header("", executable, wininst) - return writer.get_args(dist, header) - - @classmethod - def get_script_header(cls, script_text, executable=None, wininst=False): - # for backward compatibility - warnings.warn( - "Use get_header", EasyInstallDeprecationWarning, stacklevel=2) - if wininst: - executable = "python.exe" - return cls.get_header(script_text, executable) - - @classmethod - def get_args(cls, dist, header=None): - """ - Yield write_script() argument tuples for a distribution's - console_scripts and gui_scripts entry points. - """ - if header is None: - header = cls.get_header() - spec = str(dist.as_requirement()) - for type_ in 'console', 'gui': - group = type_ + '_scripts' - for name, ep in dist.get_entry_map(group).items(): - cls._ensure_safe_name(name) - script_text = cls.template % locals() - args = cls._get_script_args(type_, name, header, script_text) - for res in args: - yield res - - @staticmethod - def _ensure_safe_name(name): - """ - Prevent paths in *_scripts entry point names. - """ - has_path_sep = re.search(r'[\\/]', name) - if has_path_sep: - raise ValueError("Path separators not allowed in script names") - - @classmethod - def get_writer(cls, force_windows): - # for backward compatibility - warnings.warn("Use best", EasyInstallDeprecationWarning) - return WindowsScriptWriter.best() if force_windows else cls.best() - - @classmethod - def best(cls): - """ - Select the best ScriptWriter for this environment. - """ - if sys.platform == 'win32' or (os.name == 'java' and os._name == 'nt'): - return WindowsScriptWriter.best() - else: - return cls - - @classmethod - def _get_script_args(cls, type_, name, header, script_text): - # Simply write the stub with no extension. - yield (name, header + script_text) - - @classmethod - def get_header(cls, script_text="", executable=None): - """Create a #! 
line, getting options (if any) from script_text""" - cmd = cls.command_spec_class.best().from_param(executable) - cmd.install_options(script_text) - return cmd.as_header() - - -class WindowsScriptWriter(ScriptWriter): - command_spec_class = WindowsCommandSpec - - @classmethod - def get_writer(cls): - # for backward compatibility - warnings.warn("Use best", EasyInstallDeprecationWarning) - return cls.best() - - @classmethod - def best(cls): - """ - Select the best ScriptWriter suitable for Windows - """ - writer_lookup = dict( - executable=WindowsExecutableLauncherWriter, - natural=cls, - ) - # for compatibility, use the executable launcher by default - launcher = os.environ.get('SETUPTOOLS_LAUNCHER', 'executable') - return writer_lookup[launcher] - - @classmethod - def _get_script_args(cls, type_, name, header, script_text): - "For Windows, add a .py extension" - ext = dict(console='.pya', gui='.pyw')[type_] - if ext not in os.environ['PATHEXT'].lower().split(';'): - msg = ( - "{ext} not listed in PATHEXT; scripts will not be " - "recognized as executables." - ).format(**locals()) - warnings.warn(msg, UserWarning) - old = ['.pya', '.py', '-script.py', '.pyc', '.pyo', '.pyw', '.exe'] - old.remove(ext) - header = cls._adjust_header(type_, header) - blockers = [name + x for x in old] - yield name + ext, header + script_text, 't', blockers - - @classmethod - def _adjust_header(cls, type_, orig_header): - """ - Make sure 'pythonw' is used for gui and 'python' is used for - console (regardless of what sys.executable is). - """ - pattern = 'pythonw.exe' - repl = 'python.exe' - if type_ == 'gui': - pattern, repl = repl, pattern - pattern_ob = re.compile(re.escape(pattern), re.IGNORECASE) - new_header = pattern_ob.sub(string=orig_header, repl=repl) - return new_header if cls._use_header(new_header) else orig_header - - @staticmethod - def _use_header(new_header): - """ - Should _adjust_header use the replaced header? - - On non-windows systems, always use. On - Windows systems, only use the replaced header if it resolves - to an executable on the system. - """ - clean_header = new_header[2:-1].strip('"') - return sys.platform != 'win32' or find_executable(clean_header) - - -class WindowsExecutableLauncherWriter(WindowsScriptWriter): - @classmethod - def _get_script_args(cls, type_, name, header, script_text): - """ - For Windows, add a .py extension and an .exe launcher - """ - if type_ == 'gui': - launcher_type = 'gui' - ext = '-script.pyw' - old = ['.pyw'] - else: - launcher_type = 'cli' - ext = '-script.py' - old = ['.py', '.pyc', '.pyo'] - hdr = cls._adjust_header(type_, header) - blockers = [name + x for x in old] - yield (name + ext, hdr + script_text, 't', blockers) - yield ( - name + '.exe', get_win_launcher(launcher_type), - 'b' # write in binary mode - ) - if not is_64bit(): - # install a manifest for the launcher to prevent Windows - # from detecting it as an installer (which it will for - # launchers like easy_install.exe). Consider only - # adding a manifest for launchers detected as installers. - # See Distribute #143 for details. - m_name = name + '.exe.manifest' - yield (m_name, load_launcher_manifest(name), 't') - - -# for backward-compatibility -get_script_args = ScriptWriter.get_script_args -get_script_header = ScriptWriter.get_script_header - - -def get_win_launcher(type): - """ - Load the Windows launcher (executable) suitable for launching a script. - - `type` should be either 'cli' or 'gui' - - Returns the executable as a byte string. 
- """ - launcher_fn = '%s.exe' % type - if is_64bit(): - if get_platform() == "win-arm64": - launcher_fn = launcher_fn.replace(".", "-arm64.") - else: - launcher_fn = launcher_fn.replace(".", "-64.") - else: - launcher_fn = launcher_fn.replace(".", "-32.") - return resource_string('setuptools', launcher_fn) - - -def load_launcher_manifest(name): - manifest = pkg_resources.resource_string(__name__, 'launcher manifest.xml') - return manifest.decode('utf-8') % vars() - - -def rmtree(path, ignore_errors=False, onerror=auto_chmod): - return shutil.rmtree(path, ignore_errors, onerror) - - -def current_umask(): - tmp = os.umask(0o022) - os.umask(tmp) - return tmp - - -def only_strs(values): - """ - Exclude non-str values. Ref #3063. - """ - return filter(lambda val: isinstance(val, str), values) - - -class EasyInstallDeprecationWarning(SetuptoolsDeprecationWarning): - """ - Warning for EasyInstall deprecations, bypassing suppression. - """ diff --git a/spaces/Razkaroth/incidencia-delictiva/README.md b/spaces/Razkaroth/incidencia-delictiva/README.md deleted file mode 100644 index cfbe8a8d07fde3e6b4a062e12799ec04bd02a666..0000000000000000000000000000000000000000 --- a/spaces/Razkaroth/incidencia-delictiva/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Incidencia Delictiva -emoji: 🔥 -colorFrom: pink -colorTo: indigo -sdk: streamlit -sdk_version: 1.25.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/RealTimeLiveAIForHealth/WebcamObjectRecognition/loss.py b/spaces/RealTimeLiveAIForHealth/WebcamObjectRecognition/loss.py deleted file mode 100644 index 4675441242d67a211ae1048df865fb006d5ec235..0000000000000000000000000000000000000000 --- a/spaces/RealTimeLiveAIForHealth/WebcamObjectRecognition/loss.py +++ /dev/null @@ -1,212 +0,0 @@ -#!/usr/bin/env python -# coding: utf-8 - -import numpy as np -import math -import tensorflow.keras.backend as K -import tensorflow as tf - - -def xywh_to_x1y1x2y2(boxes): - return tf.concat([boxes[..., :2] - boxes[..., 2:] * 0.5, boxes[..., :2] + boxes[..., 2:] * 0.5], axis=-1) - - -# x,y,w,h -def bbox_iou(boxes1, boxes2): - boxes1_area = boxes1[..., 2] * boxes1[..., 3] # w * h - boxes2_area = boxes2[..., 2] * boxes2[..., 3] - - # (x, y, w, h) -> (x0, y0, x1, y1) - boxes1 = xywh_to_x1y1x2y2(boxes1) - boxes2 = xywh_to_x1y1x2y2(boxes2) - - # coordinates of intersection - top_left = tf.maximum(boxes1[..., :2], boxes2[..., :2]) - bottom_right = tf.minimum(boxes1[..., 2:], boxes2[..., 2:]) - intersection_xy = tf.maximum(bottom_right - top_left, 0.0) - - intersection_area = intersection_xy[..., 0] * intersection_xy[..., 1] - union_area = boxes1_area + boxes2_area - intersection_area - - return 1.0 * intersection_area / (union_area + tf.keras.backend.epsilon()) - - -def bbox_giou(boxes1, boxes2): - boxes1_area = boxes1[..., 2] * boxes1[..., 3] # w*h - boxes2_area = boxes2[..., 2] * boxes2[..., 3] - - # (x, y, w, h) -> (x0, y0, x1, y1) - boxes1 = xywh_to_x1y1x2y2(boxes1) - boxes2 = xywh_to_x1y1x2y2(boxes2) - - top_left = tf.maximum(boxes1[..., :2], boxes2[..., :2]) - bottom_right = tf.minimum(boxes1[..., 2:], boxes2[..., 2:]) - - intersection_xy = tf.maximum(bottom_right - top_left, 0.0) - intersection_area = intersection_xy[..., 0] * intersection_xy[..., 1] - - union_area = boxes1_area + boxes2_area - intersection_area - - iou = 1.0 * intersection_area / (union_area + tf.keras.backend.epsilon()) - - enclose_top_left = tf.minimum(boxes1[..., :2], 
boxes2[..., :2]) - enclose_bottom_right = tf.maximum(boxes1[..., 2:], boxes2[..., 2:]) - - enclose_xy = enclose_bottom_right - enclose_top_left - enclose_area = enclose_xy[..., 0] * enclose_xy[..., 1] - - giou = iou - tf.math.divide_no_nan(enclose_area - union_area, enclose_area) - - return giou - - -def bbox_ciou(boxes1, boxes2): - ''' - ciou = iou - p2/c2 - av - :param boxes1: (8, 13, 13, 3, 4) pred_xywh - :param boxes2: (8, 13, 13, 3, 4) label_xywh - :return: - ''' - boxes1_x0y0x1y1 = tf.concat([boxes1[..., :2] - boxes1[..., 2:] * 0.5, - boxes1[..., :2] + boxes1[..., 2:] * 0.5], axis=-1) - boxes2_x0y0x1y1 = tf.concat([boxes2[..., :2] - boxes2[..., 2:] * 0.5, - boxes2[..., :2] + boxes2[..., 2:] * 0.5], axis=-1) - boxes1_x0y0x1y1 = tf.concat([tf.minimum(boxes1_x0y0x1y1[..., :2], boxes1_x0y0x1y1[..., 2:]), - tf.maximum(boxes1_x0y0x1y1[..., :2], boxes1_x0y0x1y1[..., 2:])], axis=-1) - boxes2_x0y0x1y1 = tf.concat([tf.minimum(boxes2_x0y0x1y1[..., :2], boxes2_x0y0x1y1[..., 2:]), - tf.maximum(boxes2_x0y0x1y1[..., :2], boxes2_x0y0x1y1[..., 2:])], axis=-1) - - # area - boxes1_area = (boxes1_x0y0x1y1[..., 2] - boxes1_x0y0x1y1[..., 0]) * ( - boxes1_x0y0x1y1[..., 3] - boxes1_x0y0x1y1[..., 1]) - boxes2_area = (boxes2_x0y0x1y1[..., 2] - boxes2_x0y0x1y1[..., 0]) * ( - boxes2_x0y0x1y1[..., 3] - boxes2_x0y0x1y1[..., 1]) - - # top-left and bottom-right coord, shape: (8, 13, 13, 3, 2) - left_up = tf.maximum(boxes1_x0y0x1y1[..., :2], boxes2_x0y0x1y1[..., :2]) - right_down = tf.minimum(boxes1_x0y0x1y1[..., 2:], boxes2_x0y0x1y1[..., 2:]) - - # intersection area and iou - inter_section = tf.maximum(right_down - left_up, 0.0) - inter_area = inter_section[..., 0] * inter_section[..., 1] - union_area = boxes1_area + boxes2_area - inter_area - iou = inter_area / (union_area + 1e-9) - - # top-left and bottom-right coord of the enclosing rectangle, shape: (8, 13, 13, 3, 2) - enclose_left_up = tf.minimum(boxes1_x0y0x1y1[..., :2], boxes2_x0y0x1y1[..., :2]) - enclose_right_down = tf.maximum(boxes1_x0y0x1y1[..., 2:], boxes2_x0y0x1y1[..., 2:]) - - # diagnal ** 2 - enclose_wh = enclose_right_down - enclose_left_up - enclose_c2 = K.pow(enclose_wh[..., 0], 2) + K.pow(enclose_wh[..., 1], 2) - - # center distances between two rectangles - p2 = K.pow(boxes1[..., 0] - boxes2[..., 0], 2) + K.pow(boxes1[..., 1] - boxes2[..., 1], 2) - - # add av - atan1 = tf.atan(boxes1[..., 2] / (boxes1[..., 3] + 1e-9)) - atan2 = tf.atan(boxes2[..., 2] / (boxes2[..., 3] + 1e-9)) - v = 4.0 * K.pow(atan1 - atan2, 2) / (math.pi ** 2) - a = v / (1 - iou + v) - - ciou = iou - 1.0 * p2 / enclose_c2 - 1.0 * a * v - return ciou - - -def yolo_loss(args, num_classes, iou_loss_thresh, anchors): - conv_lbbox = args[2] # (?, ?, ?, 3*(num_classes+5)) - conv_mbbox = args[1] # (?, ?, ?, 3*(num_classes+5)) - conv_sbbox = args[0] # (?, ?, ?, 3*(num_classes+5)) - label_sbbox = args[3] # (?, ?, ?, 3, num_classes+5) - label_mbbox = args[4] # (?, ?, ?, 3, num_classes+5) - label_lbbox = args[5] # (?, ?, ?, 3, num_classes+5) - true_bboxes = args[6] # (?, 50, 4) - pred_sbbox = decode(conv_sbbox, anchors[0], 8, num_classes) - pred_mbbox = decode(conv_mbbox, anchors[1], 16, num_classes) - pred_lbbox = decode(conv_lbbox, anchors[2], 32, num_classes) - sbbox_ciou_loss, sbbox_conf_loss, sbbox_prob_loss = loss_layer(conv_sbbox, pred_sbbox, label_sbbox, true_bboxes, 8, num_classes, iou_loss_thresh) - mbbox_ciou_loss, mbbox_conf_loss, mbbox_prob_loss = loss_layer(conv_mbbox, pred_mbbox, label_mbbox, true_bboxes, 16, num_classes, iou_loss_thresh) - lbbox_ciou_loss, 
lbbox_conf_loss, lbbox_prob_loss = loss_layer(conv_lbbox, pred_lbbox, label_lbbox, true_bboxes, 32, num_classes, iou_loss_thresh) - - ciou_loss = (lbbox_ciou_loss + sbbox_ciou_loss + mbbox_ciou_loss) * 3.54 - conf_loss = (lbbox_conf_loss + sbbox_conf_loss + mbbox_conf_loss) * 64.3 - prob_loss = (lbbox_prob_loss + sbbox_prob_loss + mbbox_prob_loss) * 1 - - return ciou_loss+conf_loss+prob_loss - - -def loss_layer(conv, pred, label, bboxes, stride, num_class, iou_loss_thresh): - conv_shape = tf.shape(conv) - batch_size = conv_shape[0] - output_size = conv_shape[1] - input_size = stride * output_size - conv = tf.reshape(conv, (batch_size, output_size, output_size, - 3, 5 + num_class)) - conv_raw_prob = conv[:, :, :, :, 5:] - conv_raw_conf = conv[:, :, :, :, 4:5] - - pred_xywh = pred[:, :, :, :, 0:4] - pred_conf = pred[:, :, :, :, 4:5] - - label_xywh = label[:, :, :, :, 0:4] - respond_bbox = label[:, :, :, :, 4:5] - label_prob = label[:, :, :, :, 5:] - - # Coordinate loss - ciou = tf.expand_dims(bbox_giou(pred_xywh, label_xywh), axis=-1) # (8, 13, 13, 3, 1) - # ciou = tf.expand_dims(bbox_ciou(pred_xywh, label_xywh), axis=-1) # (8, 13, 13, 3, 1) - input_size = tf.cast(input_size, tf.float32) - - # loss weight of the gt bbox: 2-(gt area/img area) - bbox_loss_scale = 2.0 - 1.0 * label_xywh[:, :, :, :, 2:3] * label_xywh[:, :, :, :, 3:4] / (input_size ** 2) - ciou_loss = respond_bbox * bbox_loss_scale * (1 - ciou) # iou loss for respond bbox - - # Classification loss for respond bbox - prob_loss = respond_bbox * tf.nn.sigmoid_cross_entropy_with_logits(labels=label_prob, logits=conv_raw_prob) - - expand_pred_xywh = pred_xywh[:, :, :, :, np.newaxis, :] # (?, grid_h, grid_w, 3, 1, 4) - expand_bboxes = bboxes[:, np.newaxis, np.newaxis, np.newaxis, :, :] # (?, 1, 1, 1, 70, 4) - iou = bbox_iou(expand_pred_xywh, expand_bboxes) # IoU between all pred bbox and all gt (?, grid_h, grid_w, 3, 70) - max_iou = tf.expand_dims(tf.reduce_max(iou, axis=-1), axis=-1) # max iou: (?, grid_h, grid_w, 3, 1) - - # ignore the bbox which is not respond bbox and max iou < threshold - respond_bgd = (1.0 - respond_bbox) * tf.cast(max_iou < iou_loss_thresh, tf.float32) - - # Confidence loss - conf_focal = tf.pow(respond_bbox - pred_conf, 2) - - conf_loss = conf_focal * ( - respond_bbox * tf.nn.sigmoid_cross_entropy_with_logits(labels=respond_bbox, logits=conv_raw_conf) - + - respond_bgd * tf.nn.sigmoid_cross_entropy_with_logits(labels=respond_bbox, logits=conv_raw_conf) - ) - - ciou_loss = tf.reduce_mean(tf.reduce_sum(ciou_loss, axis=[1, 2, 3, 4])) - conf_loss = tf.reduce_mean(tf.reduce_sum(conf_loss, axis=[1, 2, 3, 4])) - prob_loss = tf.reduce_mean(tf.reduce_sum(prob_loss, axis=[1, 2, 3, 4])) - - return ciou_loss, conf_loss, prob_loss - - -def decode(conv_output, anchors, stride, num_class): - conv_shape = tf.shape(conv_output) - batch_size = conv_shape[0] - output_size = conv_shape[1] - anchor_per_scale = len(anchors) - conv_output = tf.reshape(conv_output, (batch_size, output_size, output_size, anchor_per_scale, 5 + num_class)) - conv_raw_dxdy = conv_output[:, :, :, :, 0:2] - conv_raw_dwdh = conv_output[:, :, :, :, 2:4] - conv_raw_conf = conv_output[:, :, :, :, 4:5] - conv_raw_prob = conv_output[:, :, :, :, 5:] - y = tf.tile(tf.range(output_size, dtype=tf.int32)[:, tf.newaxis], [1, output_size]) - x = tf.tile(tf.range(output_size, dtype=tf.int32)[tf.newaxis, :], [output_size, 1]) - xy_grid = tf.concat([x[:, :, tf.newaxis], y[:, :, tf.newaxis]], axis=-1) - xy_grid = tf.tile(xy_grid[tf.newaxis, :, :, tf.newaxis, :], 
[batch_size, 1, 1, anchor_per_scale, 1]) - xy_grid = tf.cast(xy_grid, tf.float32) - pred_xy = (tf.sigmoid(conv_raw_dxdy) + xy_grid) * stride - pred_wh = (tf.exp(conv_raw_dwdh) * anchors) - pred_xywh = tf.concat([pred_xy, pred_wh], axis=-1) - pred_conf = tf.sigmoid(conv_raw_conf) - pred_prob = tf.sigmoid(conv_raw_prob) - return tf.concat([pred_xywh, pred_conf, pred_prob], axis=-1) - diff --git a/spaces/Realcat/image-matching-webui/hloc/pipelines/Aachen_v1_1/pipeline_loftr.py b/spaces/Realcat/image-matching-webui/hloc/pipelines/Aachen_v1_1/pipeline_loftr.py deleted file mode 100644 index ccfd62d1c04bbd58ec5e6e55ceab7626976d3dbc..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/hloc/pipelines/Aachen_v1_1/pipeline_loftr.py +++ /dev/null @@ -1,94 +0,0 @@ -from pathlib import Path -from pprint import pformat -import argparse - -from ... import extract_features, match_dense, triangulation -from ... import pairs_from_covisibility, pairs_from_retrieval, localize_sfm - - -parser = argparse.ArgumentParser() -parser.add_argument( - "--dataset", - type=Path, - default="datasets/aachen_v1.1", - help="Path to the dataset, default: %(default)s", -) -parser.add_argument( - "--outputs", - type=Path, - default="outputs/aachen_v1.1", - help="Path to the output directory, default: %(default)s", -) -parser.add_argument( - "--num_covis", - type=int, - default=20, - help="Number of image pairs for SfM, default: %(default)s", -) -parser.add_argument( - "--num_loc", - type=int, - default=50, - help="Number of image pairs for loc, default: %(default)s", -) -args = parser.parse_args() - -# Setup the paths -dataset = args.dataset -images = dataset / "images/images_upright/" -sift_sfm = dataset / "3D-models/aachen_v_1_1" - -outputs = args.outputs # where everything will be saved -outputs.mkdir() -reference_sfm = outputs / "sfm_loftr" # the SfM model we will build -sfm_pairs = ( - outputs / f"pairs-db-covis{args.num_covis}.txt" -) # top-k most covisible in SIFT model -loc_pairs = ( - outputs / f"pairs-query-netvlad{args.num_loc}.txt" -) # top-k retrieved by NetVLAD -results = outputs / f"Aachen-v1.1_hloc_loftr_netvlad{args.num_loc}.txt" - -# list the standard configurations available -print(f"Configs for dense feature matchers:\n{pformat(match_dense.confs)}") - -# pick one of the configurations for extraction and matching -retrieval_conf = extract_features.confs["netvlad"] -matcher_conf = match_dense.confs["loftr_aachen"] - -pairs_from_covisibility.main(sift_sfm, sfm_pairs, num_matched=args.num_covis) -features, sfm_matches = match_dense.main( - matcher_conf, sfm_pairs, images, outputs, max_kps=8192, overwrite=False -) - -triangulation.main( - reference_sfm, sift_sfm, images, sfm_pairs, features, sfm_matches -) - -global_descriptors = extract_features.main(retrieval_conf, images, outputs) -pairs_from_retrieval.main( - global_descriptors, - loc_pairs, - args.num_loc, - query_prefix="query", - db_model=reference_sfm, -) -features, loc_matches = match_dense.main( - matcher_conf, - loc_pairs, - images, - outputs, - features=features, - max_kps=None, - matches=sfm_matches, -) - -localize_sfm.main( - reference_sfm, - dataset / "queries/*_time_queries_with_intrinsics.txt", - loc_pairs, - features, - loc_matches, - results, - covisibility_clustering=False, -) # not required with loftr diff --git a/spaces/Realcat/image-matching-webui/third_party/ASpanFormer/configs/data/__init__.py b/spaces/Realcat/image-matching-webui/third_party/ASpanFormer/configs/data/__init__.py deleted file mode 100644 
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Realcat/image-matching-webui/third_party/TopicFM/visualization.py b/spaces/Realcat/image-matching-webui/third_party/TopicFM/visualization.py deleted file mode 100644 index 73ec7dd74e21ac72204484cf8d4f3c6fd56a72a2..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/third_party/TopicFM/visualization.py +++ /dev/null @@ -1,153 +0,0 @@ -#!/usr/bin/env python -# coding: utf-8 - -import os, glob, cv2 -import argparse -from argparse import Namespace -import yaml -from tqdm import tqdm -import torch -from torch.utils.data import Dataset, DataLoader, SequentialSampler - -from src.datasets.custom_dataloader import TestDataLoader -from src.utils.dataset import read_img_gray -from configs.data.base import cfg as data_cfg -import viz - - -def get_model_config(method_name, dataset_name, root_dir="viz"): - config_file = f"{root_dir}/configs/{method_name}.yml" - with open(config_file, "r") as f: - model_conf = yaml.load(f, Loader=yaml.FullLoader)[dataset_name] - return model_conf - - -class DemoDataset(Dataset): - def __init__(self, dataset_dir, img_file=None, resize=0, down_factor=16): - self.dataset_dir = dataset_dir - if img_file is None: - self.list_img_files = glob.glob(os.path.join(dataset_dir, "*.*")) - self.list_img_files.sort() - else: - with open(img_file) as f: - self.list_img_files = [ - os.path.join(dataset_dir, img_file.strip()) - for img_file in f.readlines() - ] - self.resize = resize - self.down_factor = down_factor - - def __len__(self): - return len(self.list_img_files) - - def __getitem__(self, idx): - img_path = self.list_img_files[ - idx - ] # os.path.join(self.dataset_dir, self.list_img_files[idx]) - img, scale = read_img_gray( - img_path, resize=self.resize, down_factor=self.down_factor - ) - return {"img": img, "id": idx, "img_path": img_path} - - -if __name__ == "__main__": - parser = argparse.ArgumentParser(description="Visualize matches") - parser.add_argument("--gpu", "-gpu", type=str, default="0") - parser.add_argument("--method", type=str, default=None) - parser.add_argument("--dataset_dir", type=str, default="data/aachen-day-night") - parser.add_argument("--pair_dir", type=str, default=None) - parser.add_argument( - "--dataset_name", - type=str, - choices=["megadepth", "scannet", "aachen_v1.1", "inloc"], - default="megadepth", - ) - parser.add_argument("--measure_time", action="store_true") - parser.add_argument("--no_viz", action="store_true") - parser.add_argument("--compute_eval_metrics", action="store_true") - parser.add_argument("--run_demo", action="store_true") - - args = parser.parse_args() - - model_cfg = get_model_config(args.method, args.dataset_name) - class_name = model_cfg["class"] - model = viz.__dict__[class_name](model_cfg) - # all_args = Namespace(**vars(args), **model_cfg) - if not args.run_demo: - if args.dataset_name == "megadepth": - from configs.data.megadepth_test_1500 import cfg - - data_cfg.merge_from_other_cfg(cfg) - elif args.dataset_name == "scannet": - from configs.data.scannet_test_1500 import cfg - - data_cfg.merge_from_other_cfg(cfg) - elif args.dataset_name == "aachen_v1.1": - data_cfg.merge_from_list( - [ - "DATASET.TEST_DATA_SOURCE", - "aachen_v1.1", - "DATASET.TEST_DATA_ROOT", - os.path.join(args.dataset_dir, "images/images_upright"), - "DATASET.TEST_LIST_PATH", - args.pair_dir, - "DATASET.TEST_IMGSIZE", - model_cfg["imsize"], - ] - ) - elif args.dataset_name == "inloc": - data_cfg.merge_from_list( 
- [ - "DATASET.TEST_DATA_SOURCE", - "inloc", - "DATASET.TEST_DATA_ROOT", - args.dataset_dir, - "DATASET.TEST_LIST_PATH", - args.pair_dir, - "DATASET.TEST_IMGSIZE", - model_cfg["imsize"], - ] - ) - - has_ground_truth = str(data_cfg.DATASET.TEST_DATA_SOURCE).lower() in [ - "megadepth", - "scannet", - ] - dataloader = TestDataLoader(data_cfg) - with torch.no_grad(): - for data_dict in tqdm(dataloader): - for k, v in data_dict.items(): - if isinstance(v, torch.Tensor): - data_dict[k] = v.cuda() if torch.cuda.is_available() else v - img_root_dir = data_cfg.DATASET.TEST_DATA_ROOT - model.match_and_draw( - data_dict, - root_dir=img_root_dir, - ground_truth=has_ground_truth, - measure_time=args.measure_time, - viz_matches=(not args.no_viz), - ) - - if args.measure_time: - print( - "Running time for each image is {} miliseconds".format( - model.measure_time() - ) - ) - if args.compute_eval_metrics and has_ground_truth: - model.compute_eval_metrics() - else: - demo_dataset = DemoDataset(args.dataset_dir, img_file=args.pair_dir, resize=640) - sampler = SequentialSampler(demo_dataset) - dataloader = DataLoader(demo_dataset, batch_size=1, sampler=sampler) - - writer = cv2.VideoWriter( - "topicfm_demo.mp4", - cv2.VideoWriter_fourcc(*"mp4v"), - 15, - (640 * 2 + 5, 480 * 2 + 10), - ) - - model.run_demo( - iter(dataloader), writer - ) # , output_dir="demo", no_display=True) diff --git a/spaces/Reeve/Ohayou_Face/torch_utils/training_stats.py b/spaces/Reeve/Ohayou_Face/torch_utils/training_stats.py deleted file mode 100644 index 26f467f9eaa074ee13de1cf2625cd7da44880847..0000000000000000000000000000000000000000 --- a/spaces/Reeve/Ohayou_Face/torch_utils/training_stats.py +++ /dev/null @@ -1,268 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Facilities for reporting and collecting training statistics across -multiple processes and devices. The interface is designed to minimize -synchronization overhead as well as the amount of boilerplate in user -code.""" - -import re -import numpy as np -import torch -import dnnlib - -from . import misc - -#---------------------------------------------------------------------------- - -_num_moments = 3 # [num_scalars, sum_of_scalars, sum_of_squares] -_reduce_dtype = torch.float32 # Data type to use for initial per-tensor reduction. -_counter_dtype = torch.float64 # Data type to use for the internal counters. -_rank = 0 # Rank of the current process. -_sync_device = None # Device to use for multiprocess communication. None = single-process. -_sync_called = False # Has _sync() been called yet? -_counters = dict() # Running counters on each device, updated by report(): name => device => torch.Tensor -_cumulative = dict() # Cumulative counters on the CPU, updated by _sync(): name => torch.Tensor - -#---------------------------------------------------------------------------- - -def init_multiprocessing(rank, sync_device): - r"""Initializes `torch_utils.training_stats` for collecting statistics - across multiple processes. - - This function must be called after - `torch.distributed.init_process_group()` and before `Collector.update()`. 
- The call is not necessary if multi-process collection is not needed. - - Args: - rank: Rank of the current process. - sync_device: PyTorch device to use for inter-process - communication, or None to disable multi-process - collection. Typically `torch.device('cuda', rank)`. - """ - global _rank, _sync_device - assert not _sync_called - _rank = rank - _sync_device = sync_device - -#---------------------------------------------------------------------------- - -@misc.profiled_function -def report(name, value): - r"""Broadcasts the given set of scalars to all interested instances of - `Collector`, across device and process boundaries. - - This function is expected to be extremely cheap and can be safely - called from anywhere in the training loop, loss function, or inside a - `torch.nn.Module`. - - Warning: The current implementation expects the set of unique names to - be consistent across processes. Please make sure that `report()` is - called at least once for each unique name by each process, and in the - same order. If a given process has no scalars to broadcast, it can do - `report(name, [])` (empty list). - - Args: - name: Arbitrary string specifying the name of the statistic. - Averages are accumulated separately for each unique name. - value: Arbitrary set of scalars. Can be a list, tuple, - NumPy array, PyTorch tensor, or Python scalar. - - Returns: - The same `value` that was passed in. - """ - if name not in _counters: - _counters[name] = dict() - - elems = torch.as_tensor(value) - if elems.numel() == 0: - return value - - elems = elems.detach().flatten().to(_reduce_dtype) - moments = torch.stack([ - torch.ones_like(elems).sum(), - elems.sum(), - elems.square().sum(), - ]) - assert moments.ndim == 1 and moments.shape[0] == _num_moments - moments = moments.to(_counter_dtype) - - device = moments.device - if device not in _counters[name]: - _counters[name][device] = torch.zeros_like(moments) - _counters[name][device].add_(moments) - return value - -#---------------------------------------------------------------------------- - -def report0(name, value): - r"""Broadcasts the given set of scalars by the first process (`rank = 0`), - but ignores any scalars provided by the other processes. - See `report()` for further details. - """ - report(name, value if _rank == 0 else []) - return value - -#---------------------------------------------------------------------------- - -class Collector: - r"""Collects the scalars broadcasted by `report()` and `report0()` and - computes their long-term averages (mean and standard deviation) over - user-defined periods of time. - - The averages are first collected into internal counters that are not - directly visible to the user. They are then copied to the user-visible - state as a result of calling `update()` and can then be queried using - `mean()`, `std()`, `as_dict()`, etc. Calling `update()` also resets the - internal counters for the next round, so that the user-visible state - effectively reflects averages collected between the last two calls to - `update()`. - - Args: - regex: Regular expression defining which statistics to - collect. The default is to collect everything. - keep_previous: Whether to retain the previous averages if no - scalars were collected on a given round - (default: True). 
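    A minimal usage sketch (``compute_loss()`` is a hypothetical stand-in for
    the real training step; ``report()`` and ``Collector`` are the
    module-level names defined in this file):

        collector = Collector(regex='Loss/.*')
        for step in range(1000):
            loss = compute_loss()               # hypothetical helper
            report('Loss/total', loss)          # cheap, callable from anywhere
            if step % 100 == 0:
                collector.update()              # snapshot and reset internal counters
                print(collector.as_dict())      # e.g. {'Loss/total': EasyDict(num=..., mean=..., std=...)}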
- """ - def __init__(self, regex='.*', keep_previous=True): - self._regex = re.compile(regex) - self._keep_previous = keep_previous - self._cumulative = dict() - self._moments = dict() - self.update() - self._moments.clear() - - def names(self): - r"""Returns the names of all statistics broadcasted so far that - match the regular expression specified at construction time. - """ - return [name for name in _counters if self._regex.fullmatch(name)] - - def update(self): - r"""Copies current values of the internal counters to the - user-visible state and resets them for the next round. - - If `keep_previous=True` was specified at construction time, the - operation is skipped for statistics that have received no scalars - since the last update, retaining their previous averages. - - This method performs a number of GPU-to-CPU transfers and one - `torch.distributed.all_reduce()`. It is intended to be called - periodically in the main training loop, typically once every - N training steps. - """ - if not self._keep_previous: - self._moments.clear() - for name, cumulative in _sync(self.names()): - if name not in self._cumulative: - self._cumulative[name] = torch.zeros([_num_moments], dtype=_counter_dtype) - delta = cumulative - self._cumulative[name] - self._cumulative[name].copy_(cumulative) - if float(delta[0]) != 0: - self._moments[name] = delta - - def _get_delta(self, name): - r"""Returns the raw moments that were accumulated for the given - statistic between the last two calls to `update()`, or zero if - no scalars were collected. - """ - assert self._regex.fullmatch(name) - if name not in self._moments: - self._moments[name] = torch.zeros([_num_moments], dtype=_counter_dtype) - return self._moments[name] - - def num(self, name): - r"""Returns the number of scalars that were accumulated for the given - statistic between the last two calls to `update()`, or zero if - no scalars were collected. - """ - delta = self._get_delta(name) - return int(delta[0]) - - def mean(self, name): - r"""Returns the mean of the scalars that were accumulated for the - given statistic between the last two calls to `update()`, or NaN if - no scalars were collected. - """ - delta = self._get_delta(name) - if int(delta[0]) == 0: - return float('nan') - return float(delta[1] / delta[0]) - - def std(self, name): - r"""Returns the standard deviation of the scalars that were - accumulated for the given statistic between the last two calls to - `update()`, or NaN if no scalars were collected. - """ - delta = self._get_delta(name) - if int(delta[0]) == 0 or not np.isfinite(float(delta[1])): - return float('nan') - if int(delta[0]) == 1: - return float(0) - mean = float(delta[1] / delta[0]) - raw_var = float(delta[2] / delta[0]) - return np.sqrt(max(raw_var - np.square(mean), 0)) - - def as_dict(self): - r"""Returns the averages accumulated between the last two calls to - `update()` as an `dnnlib.EasyDict`. The contents are as follows: - - dnnlib.EasyDict( - NAME = dnnlib.EasyDict(num=FLOAT, mean=FLOAT, std=FLOAT), - ... - ) - """ - stats = dnnlib.EasyDict() - for name in self.names(): - stats[name] = dnnlib.EasyDict(num=self.num(name), mean=self.mean(name), std=self.std(name)) - return stats - - def __getitem__(self, name): - r"""Convenience getter. - `collector[name]` is a synonym for `collector.mean(name)`. - """ - return self.mean(name) - -#---------------------------------------------------------------------------- - -def _sync(names): - r"""Synchronize the global cumulative counters across devices and - processes. 
Called internally by `Collector.update()`. - """ - if len(names) == 0: - return [] - global _sync_called - _sync_called = True - - # Collect deltas within current rank. - deltas = [] - device = _sync_device if _sync_device is not None else torch.device('cpu') - for name in names: - delta = torch.zeros([_num_moments], dtype=_counter_dtype, device=device) - for counter in _counters[name].values(): - delta.add_(counter.to(device)) - counter.copy_(torch.zeros_like(counter)) - deltas.append(delta) - deltas = torch.stack(deltas) - - # Sum deltas across ranks. - if _sync_device is not None: - torch.distributed.all_reduce(deltas) - - # Update cumulative values. - deltas = deltas.cpu() - for idx, name in enumerate(names): - if name not in _cumulative: - _cumulative[name] = torch.zeros([_num_moments], dtype=_counter_dtype) - _cumulative[name].add_(deltas[idx]) - - # Return name-value pairs. - return [(name, _cumulative[name]) for name in names] - -#---------------------------------------------------------------------------- diff --git a/spaces/RichardMB1217/blip/README.md b/spaces/RichardMB1217/blip/README.md deleted file mode 100644 index c6e5e637439a26c705b1ae4da9083326deb54423..0000000000000000000000000000000000000000 --- a/spaces/RichardMB1217/blip/README.md +++ /dev/null @@ -1,47 +0,0 @@ ---- -title: BLIP -emoji: 🦀 -colorFrom: red -colorTo: blue -sdk: gradio -sdk_version: 3.0.17 -app_file: app.py -pinned: false -license: bsd-3-clause ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio`, `streamlit`, or `static` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code). -Path is relative to the root of the repository. - -`models`: _List[string]_ -HF model IDs (like "gpt2" or "deepset/roberta-base-squad2") used in the Space. -Will be parsed automatically from your code if not specified here. - -`datasets`: _List[string]_ -HF dataset IDs (like "common_voice" or "oscar-corpus/OSCAR-2109") used in the Space. -Will be parsed automatically from your code if not specified here. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. 
diff --git a/spaces/Rii12/Test03/Dockerfile b/spaces/Rii12/Test03/Dockerfile deleted file mode 100644 index 6c01c09373883afcb4ea34ae2d316cd596e1737b..0000000000000000000000000000000000000000 --- a/spaces/Rii12/Test03/Dockerfile +++ /dev/null @@ -1,21 +0,0 @@ -FROM node:18-bullseye-slim - -RUN apt-get update && \ - -apt-get install -y git - -RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app - -WORKDIR /app - -RUN npm install - -COPY Dockerfile greeting.md* .env* ./ - -RUN npm run build - -EXPOSE 7860 - -ENV NODE_ENV=production - -CMD [ "npm", "start" ] \ No newline at end of file diff --git a/spaces/SRankChatGpt/Presentation-Assistant/presentation_assistant/__init__.py b/spaces/SRankChatGpt/Presentation-Assistant/presentation_assistant/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/SeViLA/SeViLA/app/calculate_coco_features.py b/spaces/SeViLA/SeViLA/app/calculate_coco_features.py deleted file mode 100644 index 168e8503e943b715fbc3e010444bfc57901b8ffc..0000000000000000000000000000000000000000 --- a/spaces/SeViLA/SeViLA/app/calculate_coco_features.py +++ /dev/null @@ -1,87 +0,0 @@ -""" - # Copyright (c) 2022, salesforce.com, inc. - # All rights reserved. - # SPDX-License-Identifier: BSD-3-Clause - # For full license text, see the LICENSE file in the repo root or https://opensource.org/licenses/BSD-3-Clause -""" - -from PIL import Image -import requests -import torch - -import os - -from lavis.common.registry import registry -from lavis.processors import * -from lavis.models import * -from lavis.common.utils import build_default_model - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - - -def load_demo_image(): - img_url = ( - "https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg" - ) - raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB") - - return raw_image - - -def read_img(filepath): - raw_image = Image.open(filepath).convert("RGB") - - return raw_image - - -# model -model_url = "https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base.pth" -feature_extractor = BlipFeatureExtractor(pretrained=model_url) - -feature_extractor.eval() -feature_extractor = feature_extractor.to(device) - -# preprocessors -vis_processor = BlipImageEvalProcessor(image_size=224) -text_processor = BlipCaptionProcessor() - -# files to process -# file_root = "/export/home/.cache/lavis/coco/images/val2014" -file_root = "/export/home/.cache/lavis/coco/images/train2014" -filepaths = os.listdir(file_root) - -print(len(filepaths)) - -caption = "dummy" - -path2feat = dict() -bsz = 256 - -images_in_batch = [] -filepaths_in_batch = [] - -for i, filename in enumerate(filepaths): - if i % bsz == 0 and i > 0: - images_in_batch = torch.cat(images_in_batch, dim=0).to(device) - with torch.no_grad(): - image_features = feature_extractor( - images_in_batch, caption, mode="image", normalized=True - )[:, 0] - - for filepath, image_feat in zip(filepaths_in_batch, image_features): - path2feat[os.path.basename(filepath)] = image_feat.detach().cpu() - - images_in_batch = [] - filepaths_in_batch = [] - - print(len(path2feat), image_features.shape) - else: - filepath = os.path.join(file_root, filename) - - image = read_img(filepath) - image = vis_processor(image).unsqueeze(0) - - images_in_batch.append(image) - filepaths_in_batch.append(filepath) - -torch.save(path2feat, "path2feat_coco_train2014.pth") diff --git 
a/spaces/SeViLA/SeViLA/lavis/models/clip_models/utils.py b/spaces/SeViLA/SeViLA/lavis/models/clip_models/utils.py deleted file mode 100644 index 9ba9191a8c8043ed07d96144b4c10fcffb08cc9c..0000000000000000000000000000000000000000 --- a/spaces/SeViLA/SeViLA/lavis/models/clip_models/utils.py +++ /dev/null @@ -1,49 +0,0 @@ -""" - Copyright (c) 2022, salesforce.com, inc. - All rights reserved. - SPDX-License-Identifier: BSD-3-Clause - For full license text, see the LICENSE file in the repo root or https://opensource.org/licenses/BSD-3-Clause - - Based on https://github.com/mlfoundations/open_clip -""" - -from torch import nn as nn -from torchvision.ops.misc import FrozenBatchNorm2d - - -def freeze_batch_norm_2d(module, module_match={}, name=""): - """ - Converts all `BatchNorm2d` and `SyncBatchNorm` layers of provided module into `FrozenBatchNorm2d`. If `module` is - itself an instance of either `BatchNorm2d` or `SyncBatchNorm`, it is converted into `FrozenBatchNorm2d` and - returned. Otherwise, the module is walked recursively and submodules are converted in place. - Args: - module (torch.nn.Module): Any PyTorch module. - module_match (dict): Dictionary of full module names to freeze (all if empty) - name (str): Full module name (prefix) - Returns: - torch.nn.Module: Resulting module - Inspired by https://github.com/pytorch/pytorch/blob/a5895f85be0f10212791145bfedc0261d364f103/torch/nn/modules/batchnorm.py#L762 - """ - res = module - is_match = True - if module_match: - is_match = name in module_match - if is_match and isinstance( - module, (nn.modules.batchnorm.BatchNorm2d, nn.modules.batchnorm.SyncBatchNorm) - ): - res = FrozenBatchNorm2d(module.num_features) - res.num_features = module.num_features - res.affine = module.affine - if module.affine: - res.weight.data = module.weight.data.clone().detach() - res.bias.data = module.bias.data.clone().detach() - res.running_mean.data = module.running_mean.data - res.running_var.data = module.running_var.data - res.eps = module.eps - else: - for child_name, child in module.named_children(): - full_child_name = ".".join([name, child_name]) if name else child_name - new_child = freeze_batch_norm_2d(child, module_match, full_child_name) - if new_child is not child: - res.add_module(child_name, new_child) - return res diff --git a/spaces/Shad0ws/STORYGPT/README.md b/spaces/Shad0ws/STORYGPT/README.md deleted file mode 100644 index da1ac10bb0908d32296317ded46406a0b7d80acc..0000000000000000000000000000000000000000 --- a/spaces/Shad0ws/STORYGPT/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: STORYGPT -emoji: ⚡ -colorFrom: yellow -colorTo: green -sdk: gradio -sdk_version: 3.28.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ShawnAI/Milvus-Embedding-Client/README.md b/spaces/ShawnAI/Milvus-Embedding-Client/README.md deleted file mode 100644 index 958ad0fde8c77759dda9ea5db5f6b789aa520457..0000000000000000000000000000000000000000 --- a/spaces/ShawnAI/Milvus-Embedding-Client/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Milvus Embedding Client -emoji: 📚 -colorFrom: gray -colorTo: red -sdk: docker -app_port: 7860 -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Silentlin/DiffSinger/utils/text_encoder.py b/spaces/Silentlin/DiffSinger/utils/text_encoder.py deleted file mode 100644 index 
d9e0758abc7b4e1f452481cba9715df08ceab543..0000000000000000000000000000000000000000 --- a/spaces/Silentlin/DiffSinger/utils/text_encoder.py +++ /dev/null @@ -1,304 +0,0 @@ -import re -import six -from six.moves import range # pylint: disable=redefined-builtin - -PAD = "" -EOS = "" -UNK = "" -SEG = "|" -RESERVED_TOKENS = [PAD, EOS, UNK] -NUM_RESERVED_TOKENS = len(RESERVED_TOKENS) -PAD_ID = RESERVED_TOKENS.index(PAD) # Normally 0 -EOS_ID = RESERVED_TOKENS.index(EOS) # Normally 1 -UNK_ID = RESERVED_TOKENS.index(UNK) # Normally 2 - -if six.PY2: - RESERVED_TOKENS_BYTES = RESERVED_TOKENS -else: - RESERVED_TOKENS_BYTES = [bytes(PAD, "ascii"), bytes(EOS, "ascii")] - -# Regular expression for unescaping token strings. -# '\u' is converted to '_' -# '\\' is converted to '\' -# '\213;' is converted to unichr(213) -_UNESCAPE_REGEX = re.compile(r"\\u|\\\\|\\([0-9]+);") -_ESCAPE_CHARS = set(u"\\_u;0123456789") - - -def strip_ids(ids, ids_to_strip): - """Strip ids_to_strip from the end ids.""" - ids = list(ids) - while ids and ids[-1] in ids_to_strip: - ids.pop() - return ids - - -class TextEncoder(object): - """Base class for converting from ints to/from human readable strings.""" - - def __init__(self, num_reserved_ids=NUM_RESERVED_TOKENS): - self._num_reserved_ids = num_reserved_ids - - @property - def num_reserved_ids(self): - return self._num_reserved_ids - - def encode(self, s): - """Transform a human-readable string into a sequence of int ids. - - The ids should be in the range [num_reserved_ids, vocab_size). Ids [0, - num_reserved_ids) are reserved. - - EOS is not appended. - - Args: - s: human-readable string to be converted. - - Returns: - ids: list of integers - """ - return [int(w) + self._num_reserved_ids for w in s.split()] - - def decode(self, ids, strip_extraneous=False): - """Transform a sequence of int ids into a human-readable string. - - EOS is not expected in ids. - - Args: - ids: list of integers to be converted. - strip_extraneous: bool, whether to strip off extraneous tokens - (EOS and PAD). - - Returns: - s: human-readable string. - """ - if strip_extraneous: - ids = strip_ids(ids, list(range(self._num_reserved_ids or 0))) - return " ".join(self.decode_list(ids)) - - def decode_list(self, ids): - """Transform a sequence of int ids into a their string versions. - - This method supports transforming individual input/output ids to their - string versions so that sequence to/from text conversions can be visualized - in a human readable format. - - Args: - ids: list of integers to be converted. - - Returns: - strs: list of human-readable string. - """ - decoded_ids = [] - for id_ in ids: - if 0 <= id_ < self._num_reserved_ids: - decoded_ids.append(RESERVED_TOKENS[int(id_)]) - else: - decoded_ids.append(id_ - self._num_reserved_ids) - return [str(d) for d in decoded_ids] - - @property - def vocab_size(self): - raise NotImplementedError() - - -class ByteTextEncoder(TextEncoder): - """Encodes each byte to an id. 
For 8-bit strings only.""" - - def encode(self, s): - numres = self._num_reserved_ids - if six.PY2: - if isinstance(s, unicode): - s = s.encode("utf-8") - return [ord(c) + numres for c in s] - # Python3: explicitly convert to UTF-8 - return [c + numres for c in s.encode("utf-8")] - - def decode(self, ids, strip_extraneous=False): - if strip_extraneous: - ids = strip_ids(ids, list(range(self._num_reserved_ids or 0))) - numres = self._num_reserved_ids - decoded_ids = [] - int2byte = six.int2byte - for id_ in ids: - if 0 <= id_ < numres: - decoded_ids.append(RESERVED_TOKENS_BYTES[int(id_)]) - else: - decoded_ids.append(int2byte(id_ - numres)) - if six.PY2: - return "".join(decoded_ids) - # Python3: join byte arrays and then decode string - return b"".join(decoded_ids).decode("utf-8", "replace") - - def decode_list(self, ids): - numres = self._num_reserved_ids - decoded_ids = [] - int2byte = six.int2byte - for id_ in ids: - if 0 <= id_ < numres: - decoded_ids.append(RESERVED_TOKENS_BYTES[int(id_)]) - else: - decoded_ids.append(int2byte(id_ - numres)) - # Python3: join byte arrays and then decode string - return decoded_ids - - @property - def vocab_size(self): - return 2**8 + self._num_reserved_ids - - -class ByteTextEncoderWithEos(ByteTextEncoder): - """Encodes each byte to an id and appends the EOS token.""" - - def encode(self, s): - return super(ByteTextEncoderWithEos, self).encode(s) + [EOS_ID] - - -class TokenTextEncoder(TextEncoder): - """Encoder based on a user-supplied vocabulary (file or list).""" - - def __init__(self, - vocab_filename, - reverse=False, - vocab_list=None, - replace_oov=None, - num_reserved_ids=NUM_RESERVED_TOKENS): - """Initialize from a file or list, one token per line. - - Handling of reserved tokens works as follows: - - When initializing from a list, we add reserved tokens to the vocab. - - When initializing from a file, we do not add reserved tokens to the vocab. - - When saving vocab files, we save reserved tokens to the file. - - Args: - vocab_filename: If not None, the full filename to read vocab from. If this - is not None, then vocab_list should be None. - reverse: Boolean indicating if tokens should be reversed during encoding - and decoding. - vocab_list: If not None, a list of elements of the vocabulary. If this is - not None, then vocab_filename should be None. - replace_oov: If not None, every out-of-vocabulary token seen when - encoding will be replaced by this string (which must be in vocab). - num_reserved_ids: Number of IDs to save for reserved tokens like . 
- """ - super(TokenTextEncoder, self).__init__(num_reserved_ids=num_reserved_ids) - self._reverse = reverse - self._replace_oov = replace_oov - if vocab_filename: - self._init_vocab_from_file(vocab_filename) - else: - assert vocab_list is not None - self._init_vocab_from_list(vocab_list) - self.pad_index = self._token_to_id[PAD] - self.eos_index = self._token_to_id[EOS] - self.unk_index = self._token_to_id[UNK] - self.seg_index = self._token_to_id[SEG] if SEG in self._token_to_id else self.eos_index - - def encode(self, s): - """Converts a space-separated string of tokens to a list of ids.""" - sentence = s - tokens = sentence.strip().split() - if self._replace_oov is not None: - tokens = [t if t in self._token_to_id else self._replace_oov - for t in tokens] - ret = [self._token_to_id[tok] for tok in tokens] - return ret[::-1] if self._reverse else ret - - def decode(self, ids, strip_eos=False, strip_padding=False): - if strip_padding and self.pad() in list(ids): - pad_pos = list(ids).index(self.pad()) - ids = ids[:pad_pos] - if strip_eos and self.eos() in list(ids): - eos_pos = list(ids).index(self.eos()) - ids = ids[:eos_pos] - return " ".join(self.decode_list(ids)) - - def decode_list(self, ids): - seq = reversed(ids) if self._reverse else ids - return [self._safe_id_to_token(i) for i in seq] - - @property - def vocab_size(self): - return len(self._id_to_token) - - def __len__(self): - return self.vocab_size - - def _safe_id_to_token(self, idx): - return self._id_to_token.get(idx, "ID_%d" % idx) - - def _init_vocab_from_file(self, filename): - """Load vocab from a file. - - Args: - filename: The file to load vocabulary from. - """ - with open(filename) as f: - tokens = [token.strip() for token in f.readlines()] - - def token_gen(): - for token in tokens: - yield token - - self._init_vocab(token_gen(), add_reserved_tokens=False) - - def _init_vocab_from_list(self, vocab_list): - """Initialize tokens from a list of tokens. - - It is ok if reserved tokens appear in the vocab list. They will be - removed. The set of tokens in vocab_list should be unique. - - Args: - vocab_list: A list of tokens. - """ - def token_gen(): - for token in vocab_list: - if token not in RESERVED_TOKENS: - yield token - - self._init_vocab(token_gen()) - - def _init_vocab(self, token_generator, add_reserved_tokens=True): - """Initialize vocabulary with tokens from token_generator.""" - - self._id_to_token = {} - non_reserved_start_index = 0 - - if add_reserved_tokens: - self._id_to_token.update(enumerate(RESERVED_TOKENS)) - non_reserved_start_index = len(RESERVED_TOKENS) - - self._id_to_token.update( - enumerate(token_generator, start=non_reserved_start_index)) - - # _token_to_id is the reverse of _id_to_token - self._token_to_id = dict((v, k) - for k, v in six.iteritems(self._id_to_token)) - - def pad(self): - return self.pad_index - - def eos(self): - return self.eos_index - - def unk(self): - return self.unk_index - - def seg(self): - return self.seg_index - - def store_to_file(self, filename): - """Write vocab file to disk. - - Vocab files have one token per line. The file ends in a newline. Reserved - tokens are written to the vocab file as well. - - Args: - filename: Full path of the file to store the vocab to. 
- """ - with open(filename, "w") as f: - for i in range(len(self._id_to_token)): - f.write(self._id_to_token[i] + "\n") - - def sil_phonemes(self): - return [p for p in self._id_to_token.values() if not p[0].isalpha()] diff --git a/spaces/SpacesExamples/Gradio-Docker-Template-nvidia-cuda/README.md b/spaces/SpacesExamples/Gradio-Docker-Template-nvidia-cuda/README.md deleted file mode 100644 index 2db2000df6b4397d9e2e46fb53d80901717c5ff0..0000000000000000000000000000000000000000 --- a/spaces/SpacesExamples/Gradio-Docker-Template-nvidia-cuda/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: Gradio Docker Template nvidia/cuda -emoji: 🐨 -colorFrom: purple -colorTo: pink -sdk: docker -pinned: false -duplicated_from: DockerTemplates/Gradio-Docker-Template ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/utils/tests/test_tempdir.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/utils/tests/test_tempdir.py deleted file mode 100644 index 9191b97acb8dc3283910e9c4d3fa749f22906704..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/utils/tests/test_tempdir.py +++ /dev/null @@ -1,29 +0,0 @@ -#----------------------------------------------------------------------------- -# Copyright (C) 2012- The IPython Development Team -# -# Distributed under the terms of the BSD License. The full license is in -# the file COPYING, distributed as part of this software. -#----------------------------------------------------------------------------- - -from pathlib import Path - -from IPython.utils.tempdir import NamedFileInTemporaryDirectory -from IPython.utils.tempdir import TemporaryWorkingDirectory - - -def test_named_file_in_temporary_directory(): - with NamedFileInTemporaryDirectory('filename') as file: - name = file.name - assert not file.closed - assert Path(name).exists() - file.write(b'test') - assert file.closed - assert not Path(name).exists() - -def test_temporary_working_directory(): - with TemporaryWorkingDirectory() as directory: - directory_path = Path(directory).resolve() - assert directory_path.exists() - assert Path.cwd().resolve() == directory_path - assert not directory_path.exists() - assert Path.cwd().resolve() != directory_path diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/dateutil/tz/tz.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/dateutil/tz/tz.py deleted file mode 100644 index c67f56d4659f17aab4540dfd42511bb850871a77..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/dateutil/tz/tz.py +++ /dev/null @@ -1,1849 +0,0 @@ -# -*- coding: utf-8 -*- -""" -This module offers timezone implementations subclassing the abstract -:py:class:`datetime.tzinfo` type. There are classes to handle tzfile format -files (usually are in :file:`/etc/localtime`, :file:`/usr/share/zoneinfo`, -etc), TZ environment string (in all known formats), given ranges (with help -from relative deltas), local machine timezone, fixed offset timezone, and UTC -timezone. 
-""" -import datetime -import struct -import time -import sys -import os -import bisect -import weakref -from collections import OrderedDict - -import six -from six import string_types -from six.moves import _thread -from ._common import tzname_in_python2, _tzinfo -from ._common import tzrangebase, enfold -from ._common import _validate_fromutc_inputs - -from ._factories import _TzSingleton, _TzOffsetFactory -from ._factories import _TzStrFactory -try: - from .win import tzwin, tzwinlocal -except ImportError: - tzwin = tzwinlocal = None - -# For warning about rounding tzinfo -from warnings import warn - -ZERO = datetime.timedelta(0) -EPOCH = datetime.datetime.utcfromtimestamp(0) -EPOCHORDINAL = EPOCH.toordinal() - - -@six.add_metaclass(_TzSingleton) -class tzutc(datetime.tzinfo): - """ - This is a tzinfo object that represents the UTC time zone. - - **Examples:** - - .. doctest:: - - >>> from datetime import * - >>> from dateutil.tz import * - - >>> datetime.now() - datetime.datetime(2003, 9, 27, 9, 40, 1, 521290) - - >>> datetime.now(tzutc()) - datetime.datetime(2003, 9, 27, 12, 40, 12, 156379, tzinfo=tzutc()) - - >>> datetime.now(tzutc()).tzname() - 'UTC' - - .. versionchanged:: 2.7.0 - ``tzutc()`` is now a singleton, so the result of ``tzutc()`` will - always return the same object. - - .. doctest:: - - >>> from dateutil.tz import tzutc, UTC - >>> tzutc() is tzutc() - True - >>> tzutc() is UTC - True - """ - def utcoffset(self, dt): - return ZERO - - def dst(self, dt): - return ZERO - - @tzname_in_python2 - def tzname(self, dt): - return "UTC" - - def is_ambiguous(self, dt): - """ - Whether or not the "wall time" of a given datetime is ambiguous in this - zone. - - :param dt: - A :py:class:`datetime.datetime`, naive or time zone aware. - - - :return: - Returns ``True`` if ambiguous, ``False`` otherwise. - - .. versionadded:: 2.6.0 - """ - return False - - @_validate_fromutc_inputs - def fromutc(self, dt): - """ - Fast track version of fromutc() returns the original ``dt`` object for - any valid :py:class:`datetime.datetime` object. - """ - return dt - - def __eq__(self, other): - if not isinstance(other, (tzutc, tzoffset)): - return NotImplemented - - return (isinstance(other, tzutc) or - (isinstance(other, tzoffset) and other._offset == ZERO)) - - __hash__ = None - - def __ne__(self, other): - return not (self == other) - - def __repr__(self): - return "%s()" % self.__class__.__name__ - - __reduce__ = object.__reduce__ - - -#: Convenience constant providing a :class:`tzutc()` instance -#: -#: .. versionadded:: 2.7.0 -UTC = tzutc() - - -@six.add_metaclass(_TzOffsetFactory) -class tzoffset(datetime.tzinfo): - """ - A simple class for representing a fixed offset from UTC. - - :param name: - The timezone name, to be returned when ``tzname()`` is called. - :param offset: - The time zone offset in seconds, or (since version 2.6.0, represented - as a :py:class:`datetime.timedelta` object). - """ - def __init__(self, name, offset): - self._name = name - - try: - # Allow a timedelta - offset = offset.total_seconds() - except (TypeError, AttributeError): - pass - - self._offset = datetime.timedelta(seconds=_get_supported_offset(offset)) - - def utcoffset(self, dt): - return self._offset - - def dst(self, dt): - return ZERO - - @tzname_in_python2 - def tzname(self, dt): - return self._name - - @_validate_fromutc_inputs - def fromutc(self, dt): - return dt + self._offset - - def is_ambiguous(self, dt): - """ - Whether or not the "wall time" of a given datetime is ambiguous in this - zone. 
- - :param dt: - A :py:class:`datetime.datetime`, naive or time zone aware. - :return: - Returns ``True`` if ambiguous, ``False`` otherwise. - - .. versionadded:: 2.6.0 - """ - return False - - def __eq__(self, other): - if not isinstance(other, tzoffset): - return NotImplemented - - return self._offset == other._offset - - __hash__ = None - - def __ne__(self, other): - return not (self == other) - - def __repr__(self): - return "%s(%s, %s)" % (self.__class__.__name__, - repr(self._name), - int(self._offset.total_seconds())) - - __reduce__ = object.__reduce__ - - -class tzlocal(_tzinfo): - """ - A :class:`tzinfo` subclass built around the ``time`` timezone functions. - """ - def __init__(self): - super(tzlocal, self).__init__() - - self._std_offset = datetime.timedelta(seconds=-time.timezone) - if time.daylight: - self._dst_offset = datetime.timedelta(seconds=-time.altzone) - else: - self._dst_offset = self._std_offset - - self._dst_saved = self._dst_offset - self._std_offset - self._hasdst = bool(self._dst_saved) - self._tznames = tuple(time.tzname) - - def utcoffset(self, dt): - if dt is None and self._hasdst: - return None - - if self._isdst(dt): - return self._dst_offset - else: - return self._std_offset - - def dst(self, dt): - if dt is None and self._hasdst: - return None - - if self._isdst(dt): - return self._dst_offset - self._std_offset - else: - return ZERO - - @tzname_in_python2 - def tzname(self, dt): - return self._tznames[self._isdst(dt)] - - def is_ambiguous(self, dt): - """ - Whether or not the "wall time" of a given datetime is ambiguous in this - zone. - - :param dt: - A :py:class:`datetime.datetime`, naive or time zone aware. - - - :return: - Returns ``True`` if ambiguous, ``False`` otherwise. - - .. versionadded:: 2.6.0 - """ - naive_dst = self._naive_is_dst(dt) - return (not naive_dst and - (naive_dst != self._naive_is_dst(dt - self._dst_saved))) - - def _naive_is_dst(self, dt): - timestamp = _datetime_to_timestamp(dt) - return time.localtime(timestamp + time.timezone).tm_isdst - - def _isdst(self, dt, fold_naive=True): - # We can't use mktime here. It is unstable when deciding if - # the hour near to a change is DST or not. 
- # - # timestamp = time.mktime((dt.year, dt.month, dt.day, dt.hour, - # dt.minute, dt.second, dt.weekday(), 0, -1)) - # return time.localtime(timestamp).tm_isdst - # - # The code above yields the following result: - # - # >>> import tz, datetime - # >>> t = tz.tzlocal() - # >>> datetime.datetime(2003,2,15,23,tzinfo=t).tzname() - # 'BRDT' - # >>> datetime.datetime(2003,2,16,0,tzinfo=t).tzname() - # 'BRST' - # >>> datetime.datetime(2003,2,15,23,tzinfo=t).tzname() - # 'BRST' - # >>> datetime.datetime(2003,2,15,22,tzinfo=t).tzname() - # 'BRDT' - # >>> datetime.datetime(2003,2,15,23,tzinfo=t).tzname() - # 'BRDT' - # - # Here is a more stable implementation: - # - if not self._hasdst: - return False - - # Check for ambiguous times: - dstval = self._naive_is_dst(dt) - fold = getattr(dt, 'fold', None) - - if self.is_ambiguous(dt): - if fold is not None: - return not self._fold(dt) - else: - return True - - return dstval - - def __eq__(self, other): - if isinstance(other, tzlocal): - return (self._std_offset == other._std_offset and - self._dst_offset == other._dst_offset) - elif isinstance(other, tzutc): - return (not self._hasdst and - self._tznames[0] in {'UTC', 'GMT'} and - self._std_offset == ZERO) - elif isinstance(other, tzoffset): - return (not self._hasdst and - self._tznames[0] == other._name and - self._std_offset == other._offset) - else: - return NotImplemented - - __hash__ = None - - def __ne__(self, other): - return not (self == other) - - def __repr__(self): - return "%s()" % self.__class__.__name__ - - __reduce__ = object.__reduce__ - - -class _ttinfo(object): - __slots__ = ["offset", "delta", "isdst", "abbr", - "isstd", "isgmt", "dstoffset"] - - def __init__(self): - for attr in self.__slots__: - setattr(self, attr, None) - - def __repr__(self): - l = [] - for attr in self.__slots__: - value = getattr(self, attr) - if value is not None: - l.append("%s=%s" % (attr, repr(value))) - return "%s(%s)" % (self.__class__.__name__, ", ".join(l)) - - def __eq__(self, other): - if not isinstance(other, _ttinfo): - return NotImplemented - - return (self.offset == other.offset and - self.delta == other.delta and - self.isdst == other.isdst and - self.abbr == other.abbr and - self.isstd == other.isstd and - self.isgmt == other.isgmt and - self.dstoffset == other.dstoffset) - - __hash__ = None - - def __ne__(self, other): - return not (self == other) - - def __getstate__(self): - state = {} - for name in self.__slots__: - state[name] = getattr(self, name, None) - return state - - def __setstate__(self, state): - for name in self.__slots__: - if name in state: - setattr(self, name, state[name]) - - -class _tzfile(object): - """ - Lightweight class for holding the relevant transition and time zone - information read from binary tzfiles. - """ - attrs = ['trans_list', 'trans_list_utc', 'trans_idx', 'ttinfo_list', - 'ttinfo_std', 'ttinfo_dst', 'ttinfo_before', 'ttinfo_first'] - - def __init__(self, **kwargs): - for attr in self.attrs: - setattr(self, attr, kwargs.get(attr, None)) - - -class tzfile(_tzinfo): - """ - This is a ``tzinfo`` subclass that allows one to use the ``tzfile(5)`` - format timezone files to extract current and historical zone information. - - :param fileobj: - This can be an opened file stream or a file name that the time zone - information can be read from. - - :param filename: - This is an optional parameter specifying the source of the time zone - information in the event that ``fileobj`` is a file object. 
If omitted - and ``fileobj`` is a file stream, this parameter will be set either to - ``fileobj``'s ``name`` attribute or to ``repr(fileobj)``. - - See `Sources for Time Zone and Daylight Saving Time Data - `_ for more information. - Time zone files can be compiled from the `IANA Time Zone database files - `_ with the `zic time zone compiler - `_ - - .. note:: - - Only construct a ``tzfile`` directly if you have a specific timezone - file on disk that you want to read into a Python ``tzinfo`` object. - If you want to get a ``tzfile`` representing a specific IANA zone, - (e.g. ``'America/New_York'``), you should call - :func:`dateutil.tz.gettz` with the zone identifier. - - - **Examples:** - - Using the US Eastern time zone as an example, we can see that a ``tzfile`` - provides time zone information for the standard Daylight Saving offsets: - - .. testsetup:: tzfile - - from dateutil.tz import gettz - from datetime import datetime - - .. doctest:: tzfile - - >>> NYC = gettz('America/New_York') - >>> NYC - tzfile('/usr/share/zoneinfo/America/New_York') - - >>> print(datetime(2016, 1, 3, tzinfo=NYC)) # EST - 2016-01-03 00:00:00-05:00 - - >>> print(datetime(2016, 7, 7, tzinfo=NYC)) # EDT - 2016-07-07 00:00:00-04:00 - - - The ``tzfile`` structure contains a fully history of the time zone, - so historical dates will also have the right offsets. For example, before - the adoption of the UTC standards, New York used local solar mean time: - - .. doctest:: tzfile - - >>> print(datetime(1901, 4, 12, tzinfo=NYC)) # LMT - 1901-04-12 00:00:00-04:56 - - And during World War II, New York was on "Eastern War Time", which was a - state of permanent daylight saving time: - - .. doctest:: tzfile - - >>> print(datetime(1944, 2, 7, tzinfo=NYC)) # EWT - 1944-02-07 00:00:00-04:00 - - """ - - def __init__(self, fileobj, filename=None): - super(tzfile, self).__init__() - - file_opened_here = False - if isinstance(fileobj, string_types): - self._filename = fileobj - fileobj = open(fileobj, 'rb') - file_opened_here = True - elif filename is not None: - self._filename = filename - elif hasattr(fileobj, "name"): - self._filename = fileobj.name - else: - self._filename = repr(fileobj) - - if fileobj is not None: - if not file_opened_here: - fileobj = _nullcontext(fileobj) - - with fileobj as file_stream: - tzobj = self._read_tzfile(file_stream) - - self._set_tzdata(tzobj) - - def _set_tzdata(self, tzobj): - """ Set the time zone data of this object from a _tzfile object """ - # Copy the relevant attributes over as private attributes - for attr in _tzfile.attrs: - setattr(self, '_' + attr, getattr(tzobj, attr)) - - def _read_tzfile(self, fileobj): - out = _tzfile() - - # From tzfile(5): - # - # The time zone information files used by tzset(3) - # begin with the magic characters "TZif" to identify - # them as time zone information files, followed by - # sixteen bytes reserved for future use, followed by - # six four-byte values of type long, written in a - # ``standard'' byte order (the high-order byte - # of the value is written first). - if fileobj.read(4).decode() != "TZif": - raise ValueError("magic not found") - - fileobj.read(16) - - ( - # The number of UTC/local indicators stored in the file. - ttisgmtcnt, - - # The number of standard/wall indicators stored in the file. - ttisstdcnt, - - # The number of leap seconds for which data is - # stored in the file. - leapcnt, - - # The number of "transition times" for which data - # is stored in the file. 
- timecnt, - - # The number of "local time types" for which data - # is stored in the file (must not be zero). - typecnt, - - # The number of characters of "time zone - # abbreviation strings" stored in the file. - charcnt, - - ) = struct.unpack(">6l", fileobj.read(24)) - - # The above header is followed by tzh_timecnt four-byte - # values of type long, sorted in ascending order. - # These values are written in ``standard'' byte order. - # Each is used as a transition time (as returned by - # time(2)) at which the rules for computing local time - # change. - - if timecnt: - out.trans_list_utc = list(struct.unpack(">%dl" % timecnt, - fileobj.read(timecnt*4))) - else: - out.trans_list_utc = [] - - # Next come tzh_timecnt one-byte values of type unsigned - # char; each one tells which of the different types of - # ``local time'' types described in the file is associated - # with the same-indexed transition time. These values - # serve as indices into an array of ttinfo structures that - # appears next in the file. - - if timecnt: - out.trans_idx = struct.unpack(">%dB" % timecnt, - fileobj.read(timecnt)) - else: - out.trans_idx = [] - - # Each ttinfo structure is written as a four-byte value - # for tt_gmtoff of type long, in a standard byte - # order, followed by a one-byte value for tt_isdst - # and a one-byte value for tt_abbrind. In each - # structure, tt_gmtoff gives the number of - # seconds to be added to UTC, tt_isdst tells whether - # tm_isdst should be set by localtime(3), and - # tt_abbrind serves as an index into the array of - # time zone abbreviation characters that follow the - # ttinfo structure(s) in the file. - - ttinfo = [] - - for i in range(typecnt): - ttinfo.append(struct.unpack(">lbb", fileobj.read(6))) - - abbr = fileobj.read(charcnt).decode() - - # Then there are tzh_leapcnt pairs of four-byte - # values, written in standard byte order; the - # first value of each pair gives the time (as - # returned by time(2)) at which a leap second - # occurs; the second gives the total number of - # leap seconds to be applied after the given time. - # The pairs of values are sorted in ascending order - # by time. - - # Not used, for now (but seek for correct file position) - if leapcnt: - fileobj.seek(leapcnt * 8, os.SEEK_CUR) - - # Then there are tzh_ttisstdcnt standard/wall - # indicators, each stored as a one-byte value; - # they tell whether the transition times associated - # with local time types were specified as standard - # time or wall clock time, and are used when - # a time zone file is used in handling POSIX-style - # time zone environment variables. - - if ttisstdcnt: - isstd = struct.unpack(">%db" % ttisstdcnt, - fileobj.read(ttisstdcnt)) - - # Finally, there are tzh_ttisgmtcnt UTC/local - # indicators, each stored as a one-byte value; - # they tell whether the transition times associated - # with local time types were specified as UTC or - # local time, and are used when a time zone file - # is used in handling POSIX-style time zone envi- - # ronment variables. 
- - if ttisgmtcnt: - isgmt = struct.unpack(">%db" % ttisgmtcnt, - fileobj.read(ttisgmtcnt)) - - # Build ttinfo list - out.ttinfo_list = [] - for i in range(typecnt): - gmtoff, isdst, abbrind = ttinfo[i] - gmtoff = _get_supported_offset(gmtoff) - tti = _ttinfo() - tti.offset = gmtoff - tti.dstoffset = datetime.timedelta(0) - tti.delta = datetime.timedelta(seconds=gmtoff) - tti.isdst = isdst - tti.abbr = abbr[abbrind:abbr.find('\x00', abbrind)] - tti.isstd = (ttisstdcnt > i and isstd[i] != 0) - tti.isgmt = (ttisgmtcnt > i and isgmt[i] != 0) - out.ttinfo_list.append(tti) - - # Replace ttinfo indexes for ttinfo objects. - out.trans_idx = [out.ttinfo_list[idx] for idx in out.trans_idx] - - # Set standard, dst, and before ttinfos. before will be - # used when a given time is before any transitions, - # and will be set to the first non-dst ttinfo, or to - # the first dst, if all of them are dst. - out.ttinfo_std = None - out.ttinfo_dst = None - out.ttinfo_before = None - if out.ttinfo_list: - if not out.trans_list_utc: - out.ttinfo_std = out.ttinfo_first = out.ttinfo_list[0] - else: - for i in range(timecnt-1, -1, -1): - tti = out.trans_idx[i] - if not out.ttinfo_std and not tti.isdst: - out.ttinfo_std = tti - elif not out.ttinfo_dst and tti.isdst: - out.ttinfo_dst = tti - - if out.ttinfo_std and out.ttinfo_dst: - break - else: - if out.ttinfo_dst and not out.ttinfo_std: - out.ttinfo_std = out.ttinfo_dst - - for tti in out.ttinfo_list: - if not tti.isdst: - out.ttinfo_before = tti - break - else: - out.ttinfo_before = out.ttinfo_list[0] - - # Now fix transition times to become relative to wall time. - # - # I'm not sure about this. In my tests, the tz source file - # is setup to wall time, and in the binary file isstd and - # isgmt are off, so it should be in wall time. OTOH, it's - # always in gmt time. Let me know if you have comments - # about this. - lastdst = None - lastoffset = None - lastdstoffset = None - lastbaseoffset = None - out.trans_list = [] - - for i, tti in enumerate(out.trans_idx): - offset = tti.offset - dstoffset = 0 - - if lastdst is not None: - if tti.isdst: - if not lastdst: - dstoffset = offset - lastoffset - - if not dstoffset and lastdstoffset: - dstoffset = lastdstoffset - - tti.dstoffset = datetime.timedelta(seconds=dstoffset) - lastdstoffset = dstoffset - - # If a time zone changes its base offset during a DST transition, - # then you need to adjust by the previous base offset to get the - # transition time in local time. Otherwise you use the current - # base offset. Ideally, I would have some mathematical proof of - # why this is true, but I haven't really thought about it enough. - baseoffset = offset - dstoffset - adjustment = baseoffset - if (lastbaseoffset is not None and baseoffset != lastbaseoffset - and tti.isdst != lastdst): - # The base DST has changed - adjustment = lastbaseoffset - - lastdst = tti.isdst - lastoffset = offset - lastbaseoffset = baseoffset - - out.trans_list.append(out.trans_list_utc[i] + adjustment) - - out.trans_idx = tuple(out.trans_idx) - out.trans_list = tuple(out.trans_list) - out.trans_list_utc = tuple(out.trans_list_utc) - - return out - - def _find_last_transition(self, dt, in_utc=False): - # If there's no list, there are no transitions to find - if not self._trans_list: - return None - - timestamp = _datetime_to_timestamp(dt) - - # Find where the timestamp fits in the transition list - if the - # timestamp is a transition time, it's part of the "after" period. 
- trans_list = self._trans_list_utc if in_utc else self._trans_list - idx = bisect.bisect_right(trans_list, timestamp) - - # We want to know when the previous transition was, so subtract off 1 - return idx - 1 - - def _get_ttinfo(self, idx): - # For no list or after the last transition, default to _ttinfo_std - if idx is None or (idx + 1) >= len(self._trans_list): - return self._ttinfo_std - - # If there is a list and the time is before it, return _ttinfo_before - if idx < 0: - return self._ttinfo_before - - return self._trans_idx[idx] - - def _find_ttinfo(self, dt): - idx = self._resolve_ambiguous_time(dt) - - return self._get_ttinfo(idx) - - def fromutc(self, dt): - """ - The ``tzfile`` implementation of :py:func:`datetime.tzinfo.fromutc`. - - :param dt: - A :py:class:`datetime.datetime` object. - - :raises TypeError: - Raised if ``dt`` is not a :py:class:`datetime.datetime` object. - - :raises ValueError: - Raised if this is called with a ``dt`` which does not have this - ``tzinfo`` attached. - - :return: - Returns a :py:class:`datetime.datetime` object representing the - wall time in ``self``'s time zone. - """ - # These isinstance checks are in datetime.tzinfo, so we'll preserve - # them, even if we don't care about duck typing. - if not isinstance(dt, datetime.datetime): - raise TypeError("fromutc() requires a datetime argument") - - if dt.tzinfo is not self: - raise ValueError("dt.tzinfo is not self") - - # First treat UTC as wall time and get the transition we're in. - idx = self._find_last_transition(dt, in_utc=True) - tti = self._get_ttinfo(idx) - - dt_out = dt + datetime.timedelta(seconds=tti.offset) - - fold = self.is_ambiguous(dt_out, idx=idx) - - return enfold(dt_out, fold=int(fold)) - - def is_ambiguous(self, dt, idx=None): - """ - Whether or not the "wall time" of a given datetime is ambiguous in this - zone. - - :param dt: - A :py:class:`datetime.datetime`, naive or time zone aware. - - - :return: - Returns ``True`` if ambiguous, ``False`` otherwise. - - .. versionadded:: 2.6.0 - """ - if idx is None: - idx = self._find_last_transition(dt) - - # Calculate the difference in offsets from current to previous - timestamp = _datetime_to_timestamp(dt) - tti = self._get_ttinfo(idx) - - if idx is None or idx <= 0: - return False - - od = self._get_ttinfo(idx - 1).offset - tti.offset - tt = self._trans_list[idx] # Transition time - - return timestamp < tt + od - - def _resolve_ambiguous_time(self, dt): - idx = self._find_last_transition(dt) - - # If we have no transitions, return the index - _fold = self._fold(dt) - if idx is None or idx == 0: - return idx - - # If it's ambiguous and we're in a fold, shift to a different index. - idx_offset = int(not _fold and self.is_ambiguous(dt, idx)) - - return idx - idx_offset - - def utcoffset(self, dt): - if dt is None: - return None - - if not self._ttinfo_std: - return ZERO - - return self._find_ttinfo(dt).delta - - def dst(self, dt): - if dt is None: - return None - - if not self._ttinfo_dst: - return ZERO - - tti = self._find_ttinfo(dt) - - if not tti.isdst: - return ZERO - - # The documentation says that utcoffset()-dst() must - # be constant for every dt. 
- return tti.dstoffset - - @tzname_in_python2 - def tzname(self, dt): - if not self._ttinfo_std or dt is None: - return None - return self._find_ttinfo(dt).abbr - - def __eq__(self, other): - if not isinstance(other, tzfile): - return NotImplemented - return (self._trans_list == other._trans_list and - self._trans_idx == other._trans_idx and - self._ttinfo_list == other._ttinfo_list) - - __hash__ = None - - def __ne__(self, other): - return not (self == other) - - def __repr__(self): - return "%s(%s)" % (self.__class__.__name__, repr(self._filename)) - - def __reduce__(self): - return self.__reduce_ex__(None) - - def __reduce_ex__(self, protocol): - return (self.__class__, (None, self._filename), self.__dict__) - - -class tzrange(tzrangebase): - """ - The ``tzrange`` object is a time zone specified by a set of offsets and - abbreviations, equivalent to the way the ``TZ`` variable can be specified - in POSIX-like systems, but using Python delta objects to specify DST - start, end and offsets. - - :param stdabbr: - The abbreviation for standard time (e.g. ``'EST'``). - - :param stdoffset: - An integer or :class:`datetime.timedelta` object or equivalent - specifying the base offset from UTC. - - If unspecified, +00:00 is used. - - :param dstabbr: - The abbreviation for DST / "Summer" time (e.g. ``'EDT'``). - - If specified, with no other DST information, DST is assumed to occur - and the default behavior or ``dstoffset``, ``start`` and ``end`` is - used. If unspecified and no other DST information is specified, it - is assumed that this zone has no DST. - - If this is unspecified and other DST information is *is* specified, - DST occurs in the zone but the time zone abbreviation is left - unchanged. - - :param dstoffset: - A an integer or :class:`datetime.timedelta` object or equivalent - specifying the UTC offset during DST. If unspecified and any other DST - information is specified, it is assumed to be the STD offset +1 hour. - - :param start: - A :class:`relativedelta.relativedelta` object or equivalent specifying - the time and time of year that daylight savings time starts. To - specify, for example, that DST starts at 2AM on the 2nd Sunday in - March, pass: - - ``relativedelta(hours=2, month=3, day=1, weekday=SU(+2))`` - - If unspecified and any other DST information is specified, the default - value is 2 AM on the first Sunday in April. - - :param end: - A :class:`relativedelta.relativedelta` object or equivalent - representing the time and time of year that daylight savings time - ends, with the same specification method as in ``start``. One note is - that this should point to the first time in the *standard* zone, so if - a transition occurs at 2AM in the DST zone and the clocks are set back - 1 hour to 1AM, set the ``hours`` parameter to +1. - - - **Examples:** - - .. testsetup:: tzrange - - from dateutil.tz import tzrange, tzstr - - .. doctest:: tzrange - - >>> tzstr('EST5EDT') == tzrange("EST", -18000, "EDT") - True - - >>> from dateutil.relativedelta import * - >>> range1 = tzrange("EST", -18000, "EDT") - >>> range2 = tzrange("EST", -18000, "EDT", -14400, - ... relativedelta(hours=+2, month=4, day=1, - ... weekday=SU(+1)), - ... relativedelta(hours=+1, month=10, day=31, - ... 
weekday=SU(-1))) - >>> tzstr('EST5EDT') == range1 == range2 - True - - """ - def __init__(self, stdabbr, stdoffset=None, - dstabbr=None, dstoffset=None, - start=None, end=None): - - global relativedelta - from dateutil import relativedelta - - self._std_abbr = stdabbr - self._dst_abbr = dstabbr - - try: - stdoffset = stdoffset.total_seconds() - except (TypeError, AttributeError): - pass - - try: - dstoffset = dstoffset.total_seconds() - except (TypeError, AttributeError): - pass - - if stdoffset is not None: - self._std_offset = datetime.timedelta(seconds=stdoffset) - else: - self._std_offset = ZERO - - if dstoffset is not None: - self._dst_offset = datetime.timedelta(seconds=dstoffset) - elif dstabbr and stdoffset is not None: - self._dst_offset = self._std_offset + datetime.timedelta(hours=+1) - else: - self._dst_offset = ZERO - - if dstabbr and start is None: - self._start_delta = relativedelta.relativedelta( - hours=+2, month=4, day=1, weekday=relativedelta.SU(+1)) - else: - self._start_delta = start - - if dstabbr and end is None: - self._end_delta = relativedelta.relativedelta( - hours=+1, month=10, day=31, weekday=relativedelta.SU(-1)) - else: - self._end_delta = end - - self._dst_base_offset_ = self._dst_offset - self._std_offset - self.hasdst = bool(self._start_delta) - - def transitions(self, year): - """ - For a given year, get the DST on and off transition times, expressed - always on the standard time side. For zones with no transitions, this - function returns ``None``. - - :param year: - The year whose transitions you would like to query. - - :return: - Returns a :class:`tuple` of :class:`datetime.datetime` objects, - ``(dston, dstoff)`` for zones with an annual DST transition, or - ``None`` for fixed offset zones. - """ - if not self.hasdst: - return None - - base_year = datetime.datetime(year, 1, 1) - - start = base_year + self._start_delta - end = base_year + self._end_delta - - return (start, end) - - def __eq__(self, other): - if not isinstance(other, tzrange): - return NotImplemented - - return (self._std_abbr == other._std_abbr and - self._dst_abbr == other._dst_abbr and - self._std_offset == other._std_offset and - self._dst_offset == other._dst_offset and - self._start_delta == other._start_delta and - self._end_delta == other._end_delta) - - @property - def _dst_base_offset(self): - return self._dst_base_offset_ - - -@six.add_metaclass(_TzStrFactory) -class tzstr(tzrange): - """ - ``tzstr`` objects are time zone objects specified by a time-zone string as - it would be passed to a ``TZ`` variable on POSIX-style systems (see - the `GNU C Library: TZ Variable`_ for more details). - - There is one notable exception, which is that POSIX-style time zones use an - inverted offset format, so normally ``GMT+3`` would be parsed as an offset - 3 hours *behind* GMT. The ``tzstr`` time zone object will parse this as an - offset 3 hours *ahead* of GMT. If you would like to maintain the POSIX - behavior, pass a ``True`` value to ``posix_offset``. - - The :class:`tzrange` object provides the same functionality, but is - specified using :class:`relativedelta.relativedelta` objects. rather than - strings. - - :param s: - A time zone string in ``TZ`` variable format. This can be a - :class:`bytes` (2.x: :class:`str`), :class:`str` (2.x: - :class:`unicode`) or a stream emitting unicode characters - (e.g. :class:`StringIO`). - - :param posix_offset: - Optional. 
If set to ``True``, interpret strings such as ``GMT+3`` or - ``UTC+3`` as being 3 hours *behind* UTC rather than ahead, per the - POSIX standard. - - .. caution:: - - Prior to version 2.7.0, this function also supported time zones - in the format: - - * ``EST5EDT,4,0,6,7200,10,0,26,7200,3600`` - * ``EST5EDT,4,1,0,7200,10,-1,0,7200,3600`` - - This format is non-standard and has been deprecated; this function - will raise a :class:`DeprecatedTZFormatWarning` until - support is removed in a future version. - - .. _`GNU C Library: TZ Variable`: - https://www.gnu.org/software/libc/manual/html_node/TZ-Variable.html - """ - def __init__(self, s, posix_offset=False): - global parser - from dateutil.parser import _parser as parser - - self._s = s - - res = parser._parsetz(s) - if res is None or res.any_unused_tokens: - raise ValueError("unknown string format") - - # Here we break the compatibility with the TZ variable handling. - # GMT-3 actually *means* the timezone -3. - if res.stdabbr in ("GMT", "UTC") and not posix_offset: - res.stdoffset *= -1 - - # We must initialize it first, since _delta() needs - # _std_offset and _dst_offset set. Use False in start/end - # to avoid building it two times. - tzrange.__init__(self, res.stdabbr, res.stdoffset, - res.dstabbr, res.dstoffset, - start=False, end=False) - - if not res.dstabbr: - self._start_delta = None - self._end_delta = None - else: - self._start_delta = self._delta(res.start) - if self._start_delta: - self._end_delta = self._delta(res.end, isend=1) - - self.hasdst = bool(self._start_delta) - - def _delta(self, x, isend=0): - from dateutil import relativedelta - kwargs = {} - if x.month is not None: - kwargs["month"] = x.month - if x.weekday is not None: - kwargs["weekday"] = relativedelta.weekday(x.weekday, x.week) - if x.week > 0: - kwargs["day"] = 1 - else: - kwargs["day"] = 31 - elif x.day: - kwargs["day"] = x.day - elif x.yday is not None: - kwargs["yearday"] = x.yday - elif x.jyday is not None: - kwargs["nlyearday"] = x.jyday - if not kwargs: - # Default is to start on first sunday of april, and end - # on last sunday of october. - if not isend: - kwargs["month"] = 4 - kwargs["day"] = 1 - kwargs["weekday"] = relativedelta.SU(+1) - else: - kwargs["month"] = 10 - kwargs["day"] = 31 - kwargs["weekday"] = relativedelta.SU(-1) - if x.time is not None: - kwargs["seconds"] = x.time - else: - # Default is 2AM. - kwargs["seconds"] = 7200 - if isend: - # Convert to standard time, to follow the documented way - # of working with the extra hour. See the documentation - # of the tzinfo class. 
- delta = self._dst_offset - self._std_offset - kwargs["seconds"] -= delta.seconds + delta.days * 86400 - return relativedelta.relativedelta(**kwargs) - - def __repr__(self): - return "%s(%s)" % (self.__class__.__name__, repr(self._s)) - - -class _tzicalvtzcomp(object): - def __init__(self, tzoffsetfrom, tzoffsetto, isdst, - tzname=None, rrule=None): - self.tzoffsetfrom = datetime.timedelta(seconds=tzoffsetfrom) - self.tzoffsetto = datetime.timedelta(seconds=tzoffsetto) - self.tzoffsetdiff = self.tzoffsetto - self.tzoffsetfrom - self.isdst = isdst - self.tzname = tzname - self.rrule = rrule - - -class _tzicalvtz(_tzinfo): - def __init__(self, tzid, comps=[]): - super(_tzicalvtz, self).__init__() - - self._tzid = tzid - self._comps = comps - self._cachedate = [] - self._cachecomp = [] - self._cache_lock = _thread.allocate_lock() - - def _find_comp(self, dt): - if len(self._comps) == 1: - return self._comps[0] - - dt = dt.replace(tzinfo=None) - - try: - with self._cache_lock: - return self._cachecomp[self._cachedate.index( - (dt, self._fold(dt)))] - except ValueError: - pass - - lastcompdt = None - lastcomp = None - - for comp in self._comps: - compdt = self._find_compdt(comp, dt) - - if compdt and (not lastcompdt or lastcompdt < compdt): - lastcompdt = compdt - lastcomp = comp - - if not lastcomp: - # RFC says nothing about what to do when a given - # time is before the first onset date. We'll look for the - # first standard component, or the first component, if - # none is found. - for comp in self._comps: - if not comp.isdst: - lastcomp = comp - break - else: - lastcomp = comp[0] - - with self._cache_lock: - self._cachedate.insert(0, (dt, self._fold(dt))) - self._cachecomp.insert(0, lastcomp) - - if len(self._cachedate) > 10: - self._cachedate.pop() - self._cachecomp.pop() - - return lastcomp - - def _find_compdt(self, comp, dt): - if comp.tzoffsetdiff < ZERO and self._fold(dt): - dt -= comp.tzoffsetdiff - - compdt = comp.rrule.before(dt, inc=True) - - return compdt - - def utcoffset(self, dt): - if dt is None: - return None - - return self._find_comp(dt).tzoffsetto - - def dst(self, dt): - comp = self._find_comp(dt) - if comp.isdst: - return comp.tzoffsetdiff - else: - return ZERO - - @tzname_in_python2 - def tzname(self, dt): - return self._find_comp(dt).tzname - - def __repr__(self): - return "" % repr(self._tzid) - - __reduce__ = object.__reduce__ - - -class tzical(object): - """ - This object is designed to parse an iCalendar-style ``VTIMEZONE`` structure - as set out in `RFC 5545`_ Section 4.6.5 into one or more `tzinfo` objects. - - :param `fileobj`: - A file or stream in iCalendar format, which should be UTF-8 encoded - with CRLF endings. - - .. _`RFC 5545`: https://tools.ietf.org/html/rfc5545 - """ - def __init__(self, fileobj): - global rrule - from dateutil import rrule - - if isinstance(fileobj, string_types): - self._s = fileobj - # ical should be encoded in UTF-8 with CRLF - fileobj = open(fileobj, 'r') - else: - self._s = getattr(fileobj, 'name', repr(fileobj)) - fileobj = _nullcontext(fileobj) - - self._vtz = {} - - with fileobj as fobj: - self._parse_rfc(fobj.read()) - - def keys(self): - """ - Retrieves the available time zones as a list. - """ - return list(self._vtz.keys()) - - def get(self, tzid=None): - """ - Retrieve a :py:class:`datetime.tzinfo` object by its ``tzid``. - - :param tzid: - If there is exactly one time zone available, omitting ``tzid`` - or passing :py:const:`None` value returns it. 
Otherwise a valid - key (which can be retrieved from :func:`keys`) is required. - - :raises ValueError: - Raised if ``tzid`` is not specified but there are either more - or fewer than 1 zone defined. - - :returns: - Returns either a :py:class:`datetime.tzinfo` object representing - the relevant time zone or :py:const:`None` if the ``tzid`` was - not found. - """ - if tzid is None: - if len(self._vtz) == 0: - raise ValueError("no timezones defined") - elif len(self._vtz) > 1: - raise ValueError("more than one timezone available") - tzid = next(iter(self._vtz)) - - return self._vtz.get(tzid) - - def _parse_offset(self, s): - s = s.strip() - if not s: - raise ValueError("empty offset") - if s[0] in ('+', '-'): - signal = (-1, +1)[s[0] == '+'] - s = s[1:] - else: - signal = +1 - if len(s) == 4: - return (int(s[:2]) * 3600 + int(s[2:]) * 60) * signal - elif len(s) == 6: - return (int(s[:2]) * 3600 + int(s[2:4]) * 60 + int(s[4:])) * signal - else: - raise ValueError("invalid offset: " + s) - - def _parse_rfc(self, s): - lines = s.splitlines() - if not lines: - raise ValueError("empty string") - - # Unfold - i = 0 - while i < len(lines): - line = lines[i].rstrip() - if not line: - del lines[i] - elif i > 0 and line[0] == " ": - lines[i-1] += line[1:] - del lines[i] - else: - i += 1 - - tzid = None - comps = [] - invtz = False - comptype = None - for line in lines: - if not line: - continue - name, value = line.split(':', 1) - parms = name.split(';') - if not parms: - raise ValueError("empty property name") - name = parms[0].upper() - parms = parms[1:] - if invtz: - if name == "BEGIN": - if value in ("STANDARD", "DAYLIGHT"): - # Process component - pass - else: - raise ValueError("unknown component: "+value) - comptype = value - founddtstart = False - tzoffsetfrom = None - tzoffsetto = None - rrulelines = [] - tzname = None - elif name == "END": - if value == "VTIMEZONE": - if comptype: - raise ValueError("component not closed: "+comptype) - if not tzid: - raise ValueError("mandatory TZID not found") - if not comps: - raise ValueError( - "at least one component is needed") - # Process vtimezone - self._vtz[tzid] = _tzicalvtz(tzid, comps) - invtz = False - elif value == comptype: - if not founddtstart: - raise ValueError("mandatory DTSTART not found") - if tzoffsetfrom is None: - raise ValueError( - "mandatory TZOFFSETFROM not found") - if tzoffsetto is None: - raise ValueError( - "mandatory TZOFFSETFROM not found") - # Process component - rr = None - if rrulelines: - rr = rrule.rrulestr("\n".join(rrulelines), - compatible=True, - ignoretz=True, - cache=True) - comp = _tzicalvtzcomp(tzoffsetfrom, tzoffsetto, - (comptype == "DAYLIGHT"), - tzname, rr) - comps.append(comp) - comptype = None - else: - raise ValueError("invalid component end: "+value) - elif comptype: - if name == "DTSTART": - # DTSTART in VTIMEZONE takes a subset of valid RRULE - # values under RFC 5545. 
- for parm in parms: - if parm != 'VALUE=DATE-TIME': - msg = ('Unsupported DTSTART param in ' + - 'VTIMEZONE: ' + parm) - raise ValueError(msg) - rrulelines.append(line) - founddtstart = True - elif name in ("RRULE", "RDATE", "EXRULE", "EXDATE"): - rrulelines.append(line) - elif name == "TZOFFSETFROM": - if parms: - raise ValueError( - "unsupported %s parm: %s " % (name, parms[0])) - tzoffsetfrom = self._parse_offset(value) - elif name == "TZOFFSETTO": - if parms: - raise ValueError( - "unsupported TZOFFSETTO parm: "+parms[0]) - tzoffsetto = self._parse_offset(value) - elif name == "TZNAME": - if parms: - raise ValueError( - "unsupported TZNAME parm: "+parms[0]) - tzname = value - elif name == "COMMENT": - pass - else: - raise ValueError("unsupported property: "+name) - else: - if name == "TZID": - if parms: - raise ValueError( - "unsupported TZID parm: "+parms[0]) - tzid = value - elif name in ("TZURL", "LAST-MODIFIED", "COMMENT"): - pass - else: - raise ValueError("unsupported property: "+name) - elif name == "BEGIN" and value == "VTIMEZONE": - tzid = None - comps = [] - invtz = True - - def __repr__(self): - return "%s(%s)" % (self.__class__.__name__, repr(self._s)) - - -if sys.platform != "win32": - TZFILES = ["/etc/localtime", "localtime"] - TZPATHS = ["/usr/share/zoneinfo", - "/usr/lib/zoneinfo", - "/usr/share/lib/zoneinfo", - "/etc/zoneinfo"] -else: - TZFILES = [] - TZPATHS = [] - - -def __get_gettz(): - tzlocal_classes = (tzlocal,) - if tzwinlocal is not None: - tzlocal_classes += (tzwinlocal,) - - class GettzFunc(object): - """ - Retrieve a time zone object from a string representation - - This function is intended to retrieve the :py:class:`tzinfo` subclass - that best represents the time zone that would be used if a POSIX - `TZ variable`_ were set to the same value. - - If no argument or an empty string is passed to ``gettz``, local time - is returned: - - .. code-block:: python3 - - >>> gettz() - tzfile('/etc/localtime') - - This function is also the preferred way to map IANA tz database keys - to :class:`tzfile` objects: - - .. code-block:: python3 - - >>> gettz('Pacific/Kiritimati') - tzfile('/usr/share/zoneinfo/Pacific/Kiritimati') - - On Windows, the standard is extended to include the Windows-specific - zone names provided by the operating system: - - .. code-block:: python3 - - >>> gettz('Egypt Standard Time') - tzwin('Egypt Standard Time') - - Passing a GNU ``TZ`` style string time zone specification returns a - :class:`tzstr` object: - - .. code-block:: python3 - - >>> gettz('AEST-10AEDT-11,M10.1.0/2,M4.1.0/3') - tzstr('AEST-10AEDT-11,M10.1.0/2,M4.1.0/3') - - :param name: - A time zone name (IANA, or, on Windows, Windows keys), location of - a ``tzfile(5)`` zoneinfo file or ``TZ`` variable style time zone - specifier. An empty string, no argument or ``None`` is interpreted - as local time. - - :return: - Returns an instance of one of ``dateutil``'s :py:class:`tzinfo` - subclasses. - - .. versionchanged:: 2.7.0 - - After version 2.7.0, any two calls to ``gettz`` using the same - input strings will return the same object: - - .. code-block:: python3 - - >>> tz.gettz('America/Chicago') is tz.gettz('America/Chicago') - True - - In addition to improving performance, this ensures that - `"same zone" semantics`_ are used for datetimes in the same zone. - - - .. _`TZ variable`: - https://www.gnu.org/software/libc/manual/html_node/TZ-Variable.html - - .. 
_`"same zone" semantics`: - https://blog.ganssle.io/articles/2018/02/aware-datetime-arithmetic.html - """ - def __init__(self): - - self.__instances = weakref.WeakValueDictionary() - self.__strong_cache_size = 8 - self.__strong_cache = OrderedDict() - self._cache_lock = _thread.allocate_lock() - - def __call__(self, name=None): - with self._cache_lock: - rv = self.__instances.get(name, None) - - if rv is None: - rv = self.nocache(name=name) - if not (name is None - or isinstance(rv, tzlocal_classes) - or rv is None): - # tzlocal is slightly more complicated than the other - # time zone providers because it depends on environment - # at construction time, so don't cache that. - # - # We also cannot store weak references to None, so we - # will also not store that. - self.__instances[name] = rv - else: - # No need for strong caching, return immediately - return rv - - self.__strong_cache[name] = self.__strong_cache.pop(name, rv) - - if len(self.__strong_cache) > self.__strong_cache_size: - self.__strong_cache.popitem(last=False) - - return rv - - def set_cache_size(self, size): - with self._cache_lock: - self.__strong_cache_size = size - while len(self.__strong_cache) > size: - self.__strong_cache.popitem(last=False) - - def cache_clear(self): - with self._cache_lock: - self.__instances = weakref.WeakValueDictionary() - self.__strong_cache.clear() - - @staticmethod - def nocache(name=None): - """A non-cached version of gettz""" - tz = None - if not name: - try: - name = os.environ["TZ"] - except KeyError: - pass - if name is None or name in ("", ":"): - for filepath in TZFILES: - if not os.path.isabs(filepath): - filename = filepath - for path in TZPATHS: - filepath = os.path.join(path, filename) - if os.path.isfile(filepath): - break - else: - continue - if os.path.isfile(filepath): - try: - tz = tzfile(filepath) - break - except (IOError, OSError, ValueError): - pass - else: - tz = tzlocal() - else: - try: - if name.startswith(":"): - name = name[1:] - except TypeError as e: - if isinstance(name, bytes): - new_msg = "gettz argument should be str, not bytes" - six.raise_from(TypeError(new_msg), e) - else: - raise - if os.path.isabs(name): - if os.path.isfile(name): - tz = tzfile(name) - else: - tz = None - else: - for path in TZPATHS: - filepath = os.path.join(path, name) - if not os.path.isfile(filepath): - filepath = filepath.replace(' ', '_') - if not os.path.isfile(filepath): - continue - try: - tz = tzfile(filepath) - break - except (IOError, OSError, ValueError): - pass - else: - tz = None - if tzwin is not None: - try: - tz = tzwin(name) - except (WindowsError, UnicodeEncodeError): - # UnicodeEncodeError is for Python 2.7 compat - tz = None - - if not tz: - from dateutil.zoneinfo import get_zonefile_instance - tz = get_zonefile_instance().get(name) - - if not tz: - for c in name: - # name is not a tzstr unless it has at least - # one offset. For short values of "name", an - # explicit for loop seems to be the fastest way - # To determine if a string contains a digit - if c in "0123456789": - try: - tz = tzstr(name) - except ValueError: - pass - break - else: - if name in ("GMT", "UTC"): - tz = UTC - elif name in time.tzname: - tz = tzlocal() - return tz - - return GettzFunc() - - -gettz = __get_gettz() -del __get_gettz - - -def datetime_exists(dt, tz=None): - """ - Given a datetime and a time zone, determine whether or not a given datetime - would fall in a gap. - - :param dt: - A :class:`datetime.datetime` (whose time zone will be ignored if ``tz`` - is provided.) 
- - :param tz: - A :class:`datetime.tzinfo` with support for the ``fold`` attribute. If - ``None`` or not provided, the datetime's own time zone will be used. - - :return: - Returns a boolean value whether or not the "wall time" exists in - ``tz``. - - .. versionadded:: 2.7.0 - """ - if tz is None: - if dt.tzinfo is None: - raise ValueError('Datetime is naive and no time zone provided.') - tz = dt.tzinfo - - dt = dt.replace(tzinfo=None) - - # This is essentially a test of whether or not the datetime can survive - # a round trip to UTC. - dt_rt = dt.replace(tzinfo=tz).astimezone(UTC).astimezone(tz) - dt_rt = dt_rt.replace(tzinfo=None) - - return dt == dt_rt - - -def datetime_ambiguous(dt, tz=None): - """ - Given a datetime and a time zone, determine whether or not a given datetime - is ambiguous (i.e if there are two times differentiated only by their DST - status). - - :param dt: - A :class:`datetime.datetime` (whose time zone will be ignored if ``tz`` - is provided.) - - :param tz: - A :class:`datetime.tzinfo` with support for the ``fold`` attribute. If - ``None`` or not provided, the datetime's own time zone will be used. - - :return: - Returns a boolean value whether or not the "wall time" is ambiguous in - ``tz``. - - .. versionadded:: 2.6.0 - """ - if tz is None: - if dt.tzinfo is None: - raise ValueError('Datetime is naive and no time zone provided.') - - tz = dt.tzinfo - - # If a time zone defines its own "is_ambiguous" function, we'll use that. - is_ambiguous_fn = getattr(tz, 'is_ambiguous', None) - if is_ambiguous_fn is not None: - try: - return tz.is_ambiguous(dt) - except Exception: - pass - - # If it doesn't come out and tell us it's ambiguous, we'll just check if - # the fold attribute has any effect on this particular date and time. - dt = dt.replace(tzinfo=tz) - wall_0 = enfold(dt, fold=0) - wall_1 = enfold(dt, fold=1) - - same_offset = wall_0.utcoffset() == wall_1.utcoffset() - same_dst = wall_0.dst() == wall_1.dst() - - return not (same_offset and same_dst) - - -def resolve_imaginary(dt): - """ - Given a datetime that may be imaginary, return an existing datetime. - - This function assumes that an imaginary datetime represents what the - wall time would be in a zone had the offset transition not occurred, so - it will always fall forward by the transition's change in offset. - - .. doctest:: - - >>> from dateutil import tz - >>> from datetime import datetime - >>> NYC = tz.gettz('America/New_York') - >>> print(tz.resolve_imaginary(datetime(2017, 3, 12, 2, 30, tzinfo=NYC))) - 2017-03-12 03:30:00-04:00 - - >>> KIR = tz.gettz('Pacific/Kiritimati') - >>> print(tz.resolve_imaginary(datetime(1995, 1, 1, 12, 30, tzinfo=KIR))) - 1995-01-02 12:30:00+14:00 - - As a note, :func:`datetime.astimezone` is guaranteed to produce a valid, - existing datetime, so a round-trip to and from UTC is sufficient to get - an extant datetime, however, this generally "falls back" to an earlier time - rather than falling forward to the STD side (though no guarantees are made - about this behavior). - - :param dt: - A :class:`datetime.datetime` which may or may not exist. - - :return: - Returns an existing :class:`datetime.datetime`. If ``dt`` was not - imaginary, the datetime returned is guaranteed to be the same object - passed to the function. - - .. 
versionadded:: 2.7.0 - """ - if dt.tzinfo is not None and not datetime_exists(dt): - - curr_offset = (dt + datetime.timedelta(hours=24)).utcoffset() - old_offset = (dt - datetime.timedelta(hours=24)).utcoffset() - - dt += curr_offset - old_offset - - return dt - - -def _datetime_to_timestamp(dt): - """ - Convert a :class:`datetime.datetime` object to an epoch timestamp in - seconds since January 1, 1970, ignoring the time zone. - """ - return (dt.replace(tzinfo=None) - EPOCH).total_seconds() - - -if sys.version_info >= (3, 6): - def _get_supported_offset(second_offset): - return second_offset -else: - def _get_supported_offset(second_offset): - # For python pre-3.6, round to full-minutes if that's not the case. - # Python's datetime doesn't accept sub-minute timezones. Check - # http://python.org/sf/1447945 or https://bugs.python.org/issue5288 - # for some information. - old_offset = second_offset - calculated_offset = 60 * ((second_offset + 30) // 60) - return calculated_offset - - -try: - # Python 3.7 feature - from contextlib import nullcontext as _nullcontext -except ImportError: - class _nullcontext(object): - """ - Class for wrapping contexts so that they are passed through in a - with statement. - """ - def __init__(self, context): - self.context = context - - def __enter__(self): - return self.context - - def __exit__(*args, **kwargs): - pass - -# vim:ts=4:sw=4:et diff --git a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/parallel/data_parallel.py b/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/parallel/data_parallel.py deleted file mode 100644 index 79b5f69b654cf647dc7ae9174223781ab5c607d2..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/parallel/data_parallel.py +++ /dev/null @@ -1,89 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from itertools import chain - -from torch.nn.parallel import DataParallel - -from .scatter_gather import scatter_kwargs - - -class MMDataParallel(DataParallel): - """The DataParallel module that supports DataContainer. - - MMDataParallel has two main differences with PyTorch DataParallel: - - - It supports a custom type :class:`DataContainer` which allows more - flexible control of input data during both GPU and CPU inference. - - It implement two more APIs ``train_step()`` and ``val_step()``. - - Args: - module (:class:`nn.Module`): Module to be encapsulated. - device_ids (list[int]): Device IDS of modules to be scattered to. - Defaults to None when GPU is not available. - output_device (str | int): Device ID for output. Defaults to None. - dim (int): Dimension used to scatter the data. Defaults to 0. - """ - - def __init__(self, *args, dim=0, **kwargs): - super(MMDataParallel, self).__init__(*args, dim=dim, **kwargs) - self.dim = dim - - def forward(self, *inputs, **kwargs): - """Override the original forward function. - - The main difference lies in the CPU inference where the data in - :class:`DataContainers` will still be gathered. 
- """ - if not self.device_ids: - # We add the following line thus the module could gather and - # convert data containers as those in GPU inference - inputs, kwargs = self.scatter(inputs, kwargs, [-1]) - return self.module(*inputs[0], **kwargs[0]) - else: - return super().forward(*inputs, **kwargs) - - def scatter(self, inputs, kwargs, device_ids): - return scatter_kwargs(inputs, kwargs, device_ids, dim=self.dim) - - def train_step(self, *inputs, **kwargs): - if not self.device_ids: - # We add the following line thus the module could gather and - # convert data containers as those in GPU inference - inputs, kwargs = self.scatter(inputs, kwargs, [-1]) - return self.module.train_step(*inputs[0], **kwargs[0]) - - assert len(self.device_ids) == 1, \ - ('MMDataParallel only supports single GPU training, if you need to' - ' train with multiple GPUs, please use MMDistributedDataParallel' - 'instead.') - - for t in chain(self.module.parameters(), self.module.buffers()): - if t.device != self.src_device_obj: - raise RuntimeError( - 'module must have its parameters and buffers ' - f'on device {self.src_device_obj} (device_ids[0]) but ' - f'found one of them on device: {t.device}') - - inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids) - return self.module.train_step(*inputs[0], **kwargs[0]) - - def val_step(self, *inputs, **kwargs): - if not self.device_ids: - # We add the following line thus the module could gather and - # convert data containers as those in GPU inference - inputs, kwargs = self.scatter(inputs, kwargs, [-1]) - return self.module.val_step(*inputs[0], **kwargs[0]) - - assert len(self.device_ids) == 1, \ - ('MMDataParallel only supports single GPU training, if you need to' - ' train with multiple GPUs, please use MMDistributedDataParallel' - ' instead.') - - for t in chain(self.module.parameters(), self.module.buffers()): - if t.device != self.src_device_obj: - raise RuntimeError( - 'module must have its parameters and buffers ' - f'on device {self.src_device_obj} (device_ids[0]) but ' - f'found one of them on device: {t.device}') - - inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids) - return self.module.val_step(*inputs[0], **kwargs[0]) diff --git a/spaces/TEnngal/bingo/src/components/chat-message.tsx b/spaces/TEnngal/bingo/src/components/chat-message.tsx deleted file mode 100644 index bf272d8d7005cfd06c53bd213e09ea217e803549..0000000000000000000000000000000000000000 --- a/spaces/TEnngal/bingo/src/components/chat-message.tsx +++ /dev/null @@ -1,93 +0,0 @@ -import remarkGfm from 'remark-gfm' -import remarkMath from 'remark-math' -import supersub from 'remark-supersub' -import remarkBreaks from 'remark-breaks' -import { cn } from '@/lib/utils' -import { CodeBlock } from '@/components/ui/codeblock' -import { MemoizedReactMarkdown } from '@/components/markdown' -import { LearnMore } from './learn-more' -import { ChatMessageModel } from '@/lib/bots/bing/types' -import { useEffect } from 'react' -import { TurnCounter } from './turn-counter' - -export interface ChatMessageProps { - message: ChatMessageModel -} - -export function ChatMessage({ message, ...props }: ChatMessageProps) { - useEffect(() => { - if (document.body.scrollHeight - window.innerHeight - window.scrollY - 200 < 0) { - window.scrollBy(0, 200) - } - }, [message.text]) - - return message.text ? ( -
-
- {obj.alt} - } - } catch (e) { - } - return {obj.alt} - }, - p({ children }) { - return

{children}

- }, - code({ node, inline, className, children, ...props }) { - if (children.length) { - if (children[0] == '▍') { - return ( - - ) - } - - children[0] = (children[0] as string).replace('`▍`', '▍') - } - - const match = /language-(\w+)/.exec(className || '') - - if (inline) { - return ( - - {children} - - ) - } - - return ( - - ) - } - }} - > - {message.text} -
-
-
- {message.author === 'bot' && } - {message.author === 'bot' && } -
-
- ) : null -} diff --git a/spaces/Tahnik/spreadsight-demo/app.py b/spaces/Tahnik/spreadsight-demo/app.py deleted file mode 100644 index 682f8cbef2bea263de6065f8696dafa224a693fb..0000000000000000000000000000000000000000 --- a/spaces/Tahnik/spreadsight-demo/app.py +++ /dev/null @@ -1,287 +0,0 @@ -import gradio as gr -import os -import time - -from langchain.document_loaders import OnlinePDFLoader -from langchain.text_splitter import CharacterTextSplitter -from langchain.llms import OpenAI -from langchain.embeddings import OpenAIEmbeddings -from langchain.vectorstores import Chroma -from langchain.chains import ConversationalRetrievalChain -from langchain import PromptTemplate -from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor -import requests -from PIL import Image -import torch - - - -# _template = """Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question. -# Chat History: -# {chat_history} -# Follow Up Input: {question} -# Standalone question:""" - -# CONDENSE_QUESTION_PROMPT = PromptTemplate.from_template(_template) - -# template = """ -# You are given the following extracted parts of a long document and a question. Provide a short structured answer. -# If you don't know the answer, look on the web. Don't try to make up an answer. -# Question: {question} -# ========= -# {context} -# ========= -# Answer in Markdown:""" - -torch.hub.download_url_to_file('https://raw.githubusercontent.com/vis-nlp/ChartQA/main/ChartQA%20Dataset/val/png/20294671002019.png', 'chart_example.png') -torch.hub.download_url_to_file('https://raw.githubusercontent.com/vis-nlp/ChartQA/main/ChartQA%20Dataset/test/png/multi_col_1081.png', 'chart_example_2.png') -torch.hub.download_url_to_file('https://raw.githubusercontent.com/vis-nlp/ChartQA/main/ChartQA%20Dataset/test/png/18143564004789.png', 'chart_example_3.png') -torch.hub.download_url_to_file('https://sharkcoder.com/files/article/matplotlib-bar-plot.png', 'chart_example_4.png') - - -model_name = "google/matcha-chartqa" -model = Pix2StructForConditionalGeneration.from_pretrained(model_name) -processor = Pix2StructProcessor.from_pretrained(model_name) -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") -model.to(device) - -def filter_output(output): - return output.replace("<0x0A>", "") - -def chart_qa(image, question): - inputs = processor(images=image, text=question, return_tensors="pt").to(device) - predictions = model.generate(**inputs, max_new_tokens=512) - return filter_output(processor.decode(predictions[0], skip_special_tokens=True)) - -def loading_pdf(): - return "Loading..." 
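# Illustrative sketch (editorial addition, not part of the original app.py): how the
# chart_qa() helper defined above can be exercised on its own. The image path and the
# question are placeholders; chart_qa() only wraps the processor()/model.generate()
# calls already shown above, so nothing new is assumed beyond a readable PNG on disk.
if False:  # kept inert so the sketch never runs as part of the app
    example_chart = Image.open("chart_example.png")  # one of the charts downloaded above
    example_answer = chart_qa(example_chart, "Which category has the highest value?")
    print(example_answer)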
- - -def pdf_changes(pdf_doc, open_ai_key): - if open_ai_key is not None: - os.environ['OPENAI_API_KEY'] = open_ai_key - loader = OnlinePDFLoader(pdf_doc.name) - documents = loader.load() - text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0) - texts = text_splitter.split_documents(documents) - embeddings = OpenAIEmbeddings() - db = Chroma.from_documents(texts, embeddings) - retriever = db.as_retriever() - global qa - qa = ConversationalRetrievalChain.from_llm( - llm=OpenAI(temperature=0.5), - retriever=retriever, - return_source_documents=True) - return "Ready" - else: - return "You forgot OpenAI API key" - -def add_text(history, text): - history = history + [(text, None)] - return history, "" - -def bot(history): - response = infer(history[-1][0], history) - history[-1][1] = "" - - for character in response: - history[-1][1] += character - time.sleep(0.05) - yield history - - -def infer(question, history): - res = [] - for human, ai in history[:-1]: - pair = (human, ai) - res.append(pair) - - chat_history = res - #print(chat_history) - query = question - result = qa({"question": query, "chat_history": chat_history}) - #print(result) - return result["answer"] - -css=""" -#col-container {max-width: 700px; margin-left: auto; margin-right: auto;} -""" - -title = """ -
-<div style="text-align: center;">
-    <h1>SpreadSight Demo</h1>
-    <p>Please specify OpenAI Key before use</p>
-</div>
-""" - - -# with gr.Blocks(css=css) as demo: -# with gr.Column(elem_id="col-container"): -# gr.HTML(title) - -# with gr.Column(): -# openai_key = gr.Textbox(label="You OpenAI API key", type="password") -# pdf_doc = gr.File(label="Load a pdf", file_types=['.pdf'], type="file") -# with gr.Row(): -# langchain_status = gr.Textbox(label="Status", placeholder="", interactive=False) -# load_pdf = gr.Button("Load pdf to langchain") - -# chatbot = gr.Chatbot([], elem_id="chatbot").style(height=350) -# question = gr.Textbox(label="Question", placeholder="Type your question and hit Enter ") -# submit_btn = gr.Button("Send Message") - -# load_pdf.click(loading_pdf, None, langchain_status, queue=False) -# load_pdf.click(pdf_changes, inputs=[pdf_doc, openai_key], outputs=[langchain_status], queue=False) -# question.submit(add_text, [chatbot, question], [chatbot, question]).then( -# bot, chatbot, chatbot -# ) -# submit_btn.click(add_text, [chatbot, question], [chatbot, question]).then( -# bot, chatbot, chatbot) - -# demo.launch() - - -"""functions""" - -def load_file(): - return "Loading..." - -def load_xlsx(name): - import pandas as pd - - xls_file = rf'{name}' - data = pd.read_excel(xls_file) - return data - -def table_loader(table_file, open_ai_key): - import os - from langchain.llms import OpenAI - from langchain.agents import create_pandas_dataframe_agent - from pandas import read_csv - - global agent - if open_ai_key is not None: - os.environ['OPENAI_API_KEY'] = open_ai_key - else: - return "Enter API" - - if table_file.name.endswith('.xlsx') or table_file.name.endswith('.xls'): - data = load_xlsx(table_file.name) - agent = create_pandas_dataframe_agent(OpenAI(temperature=0), data) - return "Ready!" - elif table_file.name.endswith('.csv'): - data = read_csv(table_file.name) - agent = create_pandas_dataframe_agent(OpenAI(temperature=0), data) - return "Ready!" - else: - return "Wrong file format! Upload excel file or csv!" 
- -def run(query): - from langchain.callbacks import get_openai_callback - - with get_openai_callback() as cb: - response = (agent.run(query)) - costs = (f"Total Cost (USD): ${cb.total_cost}") - output = f'{response} \n {costs}' - return output - -def respond(message, chat_history): - import time - - bot_message = run(message) - chat_history.append((message, bot_message)) - time.sleep(0.5) - return "", chat_history - - -with gr.Blocks() as demo: - with gr.Column(elem_id="col-container"): - gr.HTML(title) - key = gr.Textbox( - show_label=False, - placeholder="Your OpenAI key", - type = 'password', - ).style(container=False) - - # PDF processing tab - with gr.Tab("Files"): - - with gr.Row(): - - with gr.Column(scale=0.5): - langchain_status = gr.Textbox(label="Status", placeholder="", interactive=False) - load_pdf = gr.Button("Load pdf to Spreadsight") - - with gr.Column(scale=0.5): - pdf_doc = gr.File(label="Load a pdf", file_types=['.pdf'], type="file") - - - with gr.Row(): - - with gr.Column(scale=1): - chatbot = gr.Chatbot([], elem_id="chatbot").style(height=350) - - with gr.Row(): - - with gr.Column(scale=0.85): - question = gr.Textbox( - show_label=False, - placeholder="Enter text and press enter, or upload an image", - ).style(container=False) - - with gr.Column(scale=0.15, min_width=0): - clr_btn = gr.Button("Clear!") - - load_pdf.click(loading_pdf, None, langchain_status, queue=False) - load_pdf.click(pdf_changes, inputs=[pdf_doc, key], outputs=[langchain_status], queue=True) - question.submit(add_text, [chatbot, question], [chatbot, question]).then( - bot, chatbot, chatbot - ) - - # XLSX and CSV processing tab - with gr.Tab("Spreadsheets"): - with gr.Row(): - - with gr.Column(scale=0.5): - status_sh = gr.Textbox(label="Status", placeholder="", interactive=False) - load_table = gr.Button("Load csv|xlsx to langchain") - - with gr.Column(scale=0.5): - raw_table = gr.File(label="Load a table file (xls or csv)", file_types=['.csv, xlsx, xls'], type="file") - - - with gr.Row(): - - with gr.Column(scale=1): - chatbot_sh = gr.Chatbot([], elem_id="chatbot").style(height=350) - - - with gr.Row(): - - with gr.Column(scale=0.85): - question_sh = gr.Textbox( - show_label=False, - placeholder="Enter text and press enter, or upload an image", - ).style(container=False) - - with gr.Column(scale=0.15, min_width=0): - clr_btn = gr.Button("Clear!") - - load_table.click(load_file, None, status_sh, queue=False) - load_table.click(table_loader, inputs=[raw_table, key], outputs=[status_sh], queue=False) - - question_sh.submit(respond, [question_sh, chatbot_sh], [question_sh, chatbot_sh]) - clr_btn.click(lambda: None, None, chatbot_sh, queue=False) - - - with gr.Tab("Charts"): - image = gr.Image(type="pil", label="Chart") - question = gr.Textbox(label="Question") - load_chart = gr.Button("Load chart and question!") - answer = gr.Textbox(label="Model Output") - - load_chart.click(chart_qa, [image, question], answer) - - -demo.queue(concurrency_count=3) -demo.launch() \ No newline at end of file diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/chardet/euctwprober.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/chardet/euctwprober.py deleted file mode 100644 index a37ab18995822ad6b3372d56366becdccf9a4c26..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/chardet/euctwprober.py +++ /dev/null @@ -1,47 +0,0 @@ -######################## BEGIN 
LICENSE BLOCK ######################## -# The Original Code is mozilla.org code. -# -# The Initial Developer of the Original Code is -# Netscape Communications Corporation. -# Portions created by the Initial Developer are Copyright (C) 1998 -# the Initial Developer. All Rights Reserved. -# -# Contributor(s): -# Mark Pilgrim - port to Python -# -# This library is free software; you can redistribute it and/or -# modify it under the terms of the GNU Lesser General Public -# License as published by the Free Software Foundation; either -# version 2.1 of the License, or (at your option) any later version. -# -# This library is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU -# Lesser General Public License for more details. -# -# You should have received a copy of the GNU Lesser General Public -# License along with this library; if not, write to the Free Software -# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA -# 02110-1301 USA -######################### END LICENSE BLOCK ######################### - -from .chardistribution import EUCTWDistributionAnalysis -from .codingstatemachine import CodingStateMachine -from .mbcharsetprober import MultiByteCharSetProber -from .mbcssm import EUCTW_SM_MODEL - - -class EUCTWProber(MultiByteCharSetProber): - def __init__(self) -> None: - super().__init__() - self.coding_sm = CodingStateMachine(EUCTW_SM_MODEL) - self.distribution_analyzer = EUCTWDistributionAnalysis() - self.reset() - - @property - def charset_name(self) -> str: - return "EUC-TW" - - @property - def language(self) -> str: - return "Taiwan" diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/modeling/roi_heads/mask_head.py b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/modeling/roi_heads/mask_head.py deleted file mode 100644 index 5ac5c4b9aaa34653d6c50e512a5a4300da450c7f..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/modeling/roi_heads/mask_head.py +++ /dev/null @@ -1,292 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -from typing import List -import fvcore.nn.weight_init as weight_init -import torch -from torch import nn -from torch.nn import functional as F - -from detectron2.config import configurable -from detectron2.layers import Conv2d, ConvTranspose2d, ShapeSpec, cat, get_norm -from detectron2.structures import Instances -from detectron2.utils.events import get_event_storage -from detectron2.utils.registry import Registry - -__all__ = [ - "BaseMaskRCNNHead", - "MaskRCNNConvUpsampleHead", - "build_mask_head", - "ROI_MASK_HEAD_REGISTRY", -] - - -ROI_MASK_HEAD_REGISTRY = Registry("ROI_MASK_HEAD") -ROI_MASK_HEAD_REGISTRY.__doc__ = """ -Registry for mask heads, which predicts instance masks given -per-region features. - -The registered object will be called with `obj(cfg, input_shape)`. -""" - - -@torch.jit.unused -def mask_rcnn_loss(pred_mask_logits: torch.Tensor, instances: List[Instances], vis_period: int = 0): - """ - Compute the mask prediction loss defined in the Mask R-CNN paper. - - Args: - pred_mask_logits (Tensor): A tensor of shape (B, C, Hmask, Wmask) or (B, 1, Hmask, Wmask) - for class-specific or class-agnostic, where B is the total number of predicted masks - in all images, C is the number of foreground classes, and Hmask, Wmask are the height - and width of the mask predictions. 
The values are logits. - instances (list[Instances]): A list of N Instances, where N is the number of images - in the batch. These instances are in 1:1 - correspondence with the pred_mask_logits. The ground-truth labels (class, box, mask, - ...) associated with each instance are stored in fields. - vis_period (int): the period (in steps) to dump visualization. - - Returns: - mask_loss (Tensor): A scalar tensor containing the loss. - """ - cls_agnostic_mask = pred_mask_logits.size(1) == 1 - total_num_masks = pred_mask_logits.size(0) - mask_side_len = pred_mask_logits.size(2) - assert pred_mask_logits.size(2) == pred_mask_logits.size(3), "Mask prediction must be square!" - - gt_classes = [] - gt_masks = [] - for instances_per_image in instances: - if len(instances_per_image) == 0: - continue - if not cls_agnostic_mask: - gt_classes_per_image = instances_per_image.gt_classes.to(dtype=torch.int64) - gt_classes.append(gt_classes_per_image) - - gt_masks_per_image = instances_per_image.gt_masks.crop_and_resize( - instances_per_image.proposal_boxes.tensor, mask_side_len - ).to(device=pred_mask_logits.device) - # A tensor of shape (N, M, M), N=#instances in the image; M=mask_side_len - gt_masks.append(gt_masks_per_image) - - if len(gt_masks) == 0: - return pred_mask_logits.sum() * 0 - - gt_masks = cat(gt_masks, dim=0) - - if cls_agnostic_mask: - pred_mask_logits = pred_mask_logits[:, 0] - else: - indices = torch.arange(total_num_masks) - gt_classes = cat(gt_classes, dim=0) - pred_mask_logits = pred_mask_logits[indices, gt_classes] - - if gt_masks.dtype == torch.bool: - gt_masks_bool = gt_masks - else: - # Here we allow gt_masks to be float as well (depend on the implementation of rasterize()) - gt_masks_bool = gt_masks > 0.5 - gt_masks = gt_masks.to(dtype=torch.float32) - - # Log the training accuracy (using gt classes and 0.5 threshold) - mask_incorrect = (pred_mask_logits > 0.0) != gt_masks_bool - mask_accuracy = 1 - (mask_incorrect.sum().item() / max(mask_incorrect.numel(), 1.0)) - num_positive = gt_masks_bool.sum().item() - false_positive = (mask_incorrect & ~gt_masks_bool).sum().item() / max( - gt_masks_bool.numel() - num_positive, 1.0 - ) - false_negative = (mask_incorrect & gt_masks_bool).sum().item() / max(num_positive, 1.0) - - storage = get_event_storage() - storage.put_scalar("mask_rcnn/accuracy", mask_accuracy) - storage.put_scalar("mask_rcnn/false_positive", false_positive) - storage.put_scalar("mask_rcnn/false_negative", false_negative) - if vis_period > 0 and storage.iter % vis_period == 0: - pred_masks = pred_mask_logits.sigmoid() - vis_masks = torch.cat([pred_masks, gt_masks], axis=2) - name = "Left: mask prediction; Right: mask GT" - for idx, vis_mask in enumerate(vis_masks): - vis_mask = torch.stack([vis_mask] * 3, axis=0) - storage.put_image(name + f" ({idx})", vis_mask) - - mask_loss = F.binary_cross_entropy_with_logits(pred_mask_logits, gt_masks, reduction="mean") - return mask_loss - - -def mask_rcnn_inference(pred_mask_logits: torch.Tensor, pred_instances: List[Instances]): - """ - Convert pred_mask_logits to estimated foreground probability masks while also - extracting only the masks for the predicted classes in pred_instances. For each - predicted box, the mask of the same class is attached to the instance by adding a - new "pred_masks" field to pred_instances. 
- - Args: - pred_mask_logits (Tensor): A tensor of shape (B, C, Hmask, Wmask) or (B, 1, Hmask, Wmask) - for class-specific or class-agnostic, where B is the total number of predicted masks - in all images, C is the number of foreground classes, and Hmask, Wmask are the height - and width of the mask predictions. The values are logits. - pred_instances (list[Instances]): A list of N Instances, where N is the number of images - in the batch. Each Instances must have field "pred_classes". - - Returns: - None. pred_instances will contain an extra "pred_masks" field storing a mask of size (Hmask, - Wmask) for predicted class. Note that the masks are returned as a soft (non-quantized) - masks the resolution predicted by the network; post-processing steps, such as resizing - the predicted masks to the original image resolution and/or binarizing them, is left - to the caller. - """ - cls_agnostic_mask = pred_mask_logits.size(1) == 1 - - if cls_agnostic_mask: - mask_probs_pred = pred_mask_logits.sigmoid() - else: - # Select masks corresponding to the predicted classes - num_masks = pred_mask_logits.shape[0] - class_pred = cat([i.pred_classes for i in pred_instances]) - indices = torch.arange(num_masks, device=class_pred.device) - mask_probs_pred = pred_mask_logits[indices, class_pred][:, None].sigmoid() - # mask_probs_pred.shape: (B, 1, Hmask, Wmask) - - num_boxes_per_image = [len(i) for i in pred_instances] - mask_probs_pred = mask_probs_pred.split(num_boxes_per_image, dim=0) - - for prob, instances in zip(mask_probs_pred, pred_instances): - instances.pred_masks = prob # (1, Hmask, Wmask) - - -class BaseMaskRCNNHead(nn.Module): - """ - Implement the basic Mask R-CNN losses and inference logic described in :paper:`Mask R-CNN` - """ - - @configurable - def __init__(self, *, loss_weight: float = 1.0, vis_period: int = 0): - """ - NOTE: this interface is experimental. - - Args: - loss_weight (float): multiplier of the loss - vis_period (int): visualization period - """ - super().__init__() - self.vis_period = vis_period - self.loss_weight = loss_weight - - @classmethod - def from_config(cls, cfg, input_shape): - return {"vis_period": cfg.VIS_PERIOD} - - def forward(self, x, instances: List[Instances]): - """ - Args: - x: input region feature(s) provided by :class:`ROIHeads`. - instances (list[Instances]): contains the boxes & labels corresponding - to the input features. - Exact format is up to its caller to decide. - Typically, this is the foreground instances in training, with - "proposal_boxes" field and other gt annotations. - In inference, it contains boxes that are already predicted. - - Returns: - A dict of losses in training. The predicted "instances" in inference. - """ - x = self.layers(x) - if self.training: - return {"loss_mask": mask_rcnn_loss(x, instances, self.vis_period) * self.loss_weight} - else: - mask_rcnn_inference(x, instances) - return instances - - def layers(self, x): - """ - Neural network layers that makes predictions from input features. - """ - raise NotImplementedError - - -# To get torchscript support, we make the head a subclass of `nn.Sequential`. -# Therefore, to add new layers in this head class, please make sure they are -# added in the order they will be used in forward(). -@ROI_MASK_HEAD_REGISTRY.register() -class MaskRCNNConvUpsampleHead(BaseMaskRCNNHead, nn.Sequential): - """ - A mask head with several conv layers, plus an upsample layer (with `ConvTranspose2d`). - Predictions are made with a final 1x1 conv layer. 
- """ - - @configurable - def __init__(self, input_shape: ShapeSpec, *, num_classes, conv_dims, conv_norm="", **kwargs): - """ - NOTE: this interface is experimental. - - Args: - input_shape (ShapeSpec): shape of the input feature - num_classes (int): the number of foreground classes (i.e. background is not - included). 1 if using class agnostic prediction. - conv_dims (list[int]): a list of N>0 integers representing the output dimensions - of N-1 conv layers and the last upsample layer. - conv_norm (str or callable): normalization for the conv layers. - See :func:`detectron2.layers.get_norm` for supported types. - """ - super().__init__(**kwargs) - assert len(conv_dims) >= 1, "conv_dims have to be non-empty!" - - self.conv_norm_relus = [] - - cur_channels = input_shape.channels - for k, conv_dim in enumerate(conv_dims[:-1]): - conv = Conv2d( - cur_channels, - conv_dim, - kernel_size=3, - stride=1, - padding=1, - bias=not conv_norm, - norm=get_norm(conv_norm, conv_dim), - activation=nn.ReLU(), - ) - self.add_module("mask_fcn{}".format(k + 1), conv) - self.conv_norm_relus.append(conv) - cur_channels = conv_dim - - self.deconv = ConvTranspose2d( - cur_channels, conv_dims[-1], kernel_size=2, stride=2, padding=0 - ) - self.add_module("deconv_relu", nn.ReLU()) - cur_channels = conv_dims[-1] - - self.predictor = Conv2d(cur_channels, num_classes, kernel_size=1, stride=1, padding=0) - - for layer in self.conv_norm_relus + [self.deconv]: - weight_init.c2_msra_fill(layer) - # use normal distribution initialization for mask prediction layer - nn.init.normal_(self.predictor.weight, std=0.001) - if self.predictor.bias is not None: - nn.init.constant_(self.predictor.bias, 0) - - @classmethod - def from_config(cls, cfg, input_shape): - ret = super().from_config(cfg, input_shape) - conv_dim = cfg.MODEL.ROI_MASK_HEAD.CONV_DIM - num_conv = cfg.MODEL.ROI_MASK_HEAD.NUM_CONV - ret.update( - conv_dims=[conv_dim] * (num_conv + 1), # +1 for ConvTranspose - conv_norm=cfg.MODEL.ROI_MASK_HEAD.NORM, - input_shape=input_shape, - ) - if cfg.MODEL.ROI_MASK_HEAD.CLS_AGNOSTIC_MASK: - ret["num_classes"] = 1 - else: - ret["num_classes"] = cfg.MODEL.ROI_HEADS.NUM_CLASSES - return ret - - def layers(self, x): - for layer in self: - x = layer(x) - return x - - -def build_mask_head(cfg, input_shape): - """ - Build a mask head defined by `cfg.MODEL.ROI_MASK_HEAD.NAME`. 
- """ - name = cfg.MODEL.ROI_MASK_HEAD.NAME - return ROI_MASK_HEAD_REGISTRY.get(name)(cfg, input_shape) diff --git a/spaces/Toraong/color_textual_inversion/README.md b/spaces/Toraong/color_textual_inversion/README.md deleted file mode 100644 index 2e0189939b3f30ac47c4808b483763520504f6b8..0000000000000000000000000000000000000000 --- a/spaces/Toraong/color_textual_inversion/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: color_textual_inversion -emoji: 🖌️ -sdk: streamlit -python_version: 3.9 -sdk_version: 1.10.0 -app_file: app.py -duplicated_from: Bingsu/color_textual_inversion ---- - -# color_textual_inversion diff --git a/spaces/TusharNautiyal/BTC-Prediction/build_model.py b/spaces/TusharNautiyal/BTC-Prediction/build_model.py deleted file mode 100644 index 70940a616b0cdb381de080ac60e8860d368207e1..0000000000000000000000000000000000000000 --- a/spaces/TusharNautiyal/BTC-Prediction/build_model.py +++ /dev/null @@ -1,40 +0,0 @@ -import numpy as np -import pandas as pd -from tensorflow import keras -from keras import Sequential -from keras.layers import Dense,Dropout,LSTM - - -class model_builder: - def __init__(self): - self.model = None - - def preprocess_data(self,dataset,time_step): - dataX, dataY = [], [] - for i in range(len(dataset)-time_step-1): - a = dataset[i:(i+time_step), 0] - dataX.append(a) - dataY.append(dataset[i + time_step, 0]) - return np.array(dataX), np.array(dataY) - - def create_model(self,X_train): - model = Sequential() - model.add(LSTM(150,return_sequences=True,input_shape=(X_train.shape[1],X_train.shape[2]))) - model.add(Dropout(0.2)) - - model.add(LSTM(150,return_sequences=True)) - model.add(Dropout(0.2)) - - model.add(LSTM(150)) - model.add(Dropout(0.2)) - - model.add(Dense(1)) - model.compile(loss='mean_squared_error' , metrics = ['mse', 'mae'],optimizer='adam') - return model - - def fit(self,X_train,y_train): - model = self.create_model(X_train) - model.fit(X_train,y_train,epochs=300,batch_size=64,verbose=1) - self.model = model - return model - \ No newline at end of file diff --git a/spaces/TwoCH4/White-box-Cartoonization/wbc/guided_filter.py b/spaces/TwoCH4/White-box-Cartoonization/wbc/guided_filter.py deleted file mode 100644 index fd019d145efc7f308cd96de90f4e7b648f6820b4..0000000000000000000000000000000000000000 --- a/spaces/TwoCH4/White-box-Cartoonization/wbc/guided_filter.py +++ /dev/null @@ -1,87 +0,0 @@ -import tensorflow as tf -import numpy as np - - - - -def tf_box_filter(x, r): - k_size = int(2*r+1) - ch = x.get_shape().as_list()[-1] - weight = 1/(k_size**2) - box_kernel = weight*np.ones((k_size, k_size, ch, 1)) - box_kernel = np.array(box_kernel).astype(np.float32) - output = tf.nn.depthwise_conv2d(x, box_kernel, [1, 1, 1, 1], 'SAME') - return output - - - -def guided_filter(x, y, r, eps=1e-2): - - x_shape = tf.shape(x) - #y_shape = tf.shape(y) - - N = tf_box_filter(tf.ones((1, x_shape[1], x_shape[2], 1), dtype=x.dtype), r) - - mean_x = tf_box_filter(x, r) / N - mean_y = tf_box_filter(y, r) / N - cov_xy = tf_box_filter(x * y, r) / N - mean_x * mean_y - var_x = tf_box_filter(x * x, r) / N - mean_x * mean_x - - A = cov_xy / (var_x + eps) - b = mean_y - A * mean_x - - mean_A = tf_box_filter(A, r) / N - mean_b = tf_box_filter(b, r) / N - - output = mean_A * x + mean_b - - return output - - - -def fast_guided_filter(lr_x, lr_y, hr_x, r=1, eps=1e-8): - - #assert lr_x.shape.ndims == 4 and lr_y.shape.ndims == 4 and hr_x.shape.ndims == 4 - - lr_x_shape = tf.shape(lr_x) - #lr_y_shape = tf.shape(lr_y) - hr_x_shape = tf.shape(hr_x) - - N = 
tf_box_filter(tf.ones((1, lr_x_shape[1], lr_x_shape[2], 1), dtype=lr_x.dtype), r) - - mean_x = tf_box_filter(lr_x, r) / N - mean_y = tf_box_filter(lr_y, r) / N - cov_xy = tf_box_filter(lr_x * lr_y, r) / N - mean_x * mean_y - var_x = tf_box_filter(lr_x * lr_x, r) / N - mean_x * mean_x - - A = cov_xy / (var_x + eps) - b = mean_y - A * mean_x - - mean_A = tf.image.resize_images(A, hr_x_shape[1: 3]) - mean_b = tf.image.resize_images(b, hr_x_shape[1: 3]) - - output = mean_A * hr_x + mean_b - - return output - - -if __name__ == '__main__': - import cv2 - from tqdm import tqdm - - input_photo = tf.placeholder(tf.float32, [1, None, None, 3]) - #input_superpixel = tf.placeholder(tf.float32, [16, 256, 256, 3]) - output = guided_filter(input_photo, input_photo, 5, eps=1) - image = cv2.imread('output_figure1/cartoon2.jpg') - image = image/127.5 - 1 - image = np.expand_dims(image, axis=0) - - config = tf.ConfigProto() - config.gpu_options.allow_growth = True - sess = tf.Session(config=config) - sess.run(tf.global_variables_initializer()) - - out = sess.run(output, feed_dict={input_photo: image}) - out = (np.squeeze(out)+1)*127.5 - out = np.clip(out, 0, 255).astype(np.uint8) - cv2.imwrite('output_figure1/cartoon2_filter.jpg', out) diff --git a/spaces/Username47337/key/Dockerfile b/spaces/Username47337/key/Dockerfile deleted file mode 100644 index 6c01c09373883afcb4ea34ae2d316cd596e1737b..0000000000000000000000000000000000000000 --- a/spaces/Username47337/key/Dockerfile +++ /dev/null @@ -1,21 +0,0 @@ -FROM node:18-bullseye-slim - -RUN apt-get update && \ - -apt-get install -y git - -RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app - -WORKDIR /app - -RUN npm install - -COPY Dockerfile greeting.md* .env* ./ - -RUN npm run build - -EXPOSE 7860 - -ENV NODE_ENV=production - -CMD [ "npm", "start" ] \ No newline at end of file diff --git a/spaces/Username47337/key/README.md b/spaces/Username47337/key/README.md deleted file mode 100644 index 49ff53326b3046eef3019d2d75505ecc18c88345..0000000000000000000000000000000000000000 --- a/spaces/Username47337/key/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Key -emoji: 🐢 -colorFrom: yellow -colorTo: green -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Wayben/ChatGPT/utils.py b/spaces/Wayben/ChatGPT/utils.py deleted file mode 100644 index 8eeabfe5bfc3a80e4c875c778426608f66ce41da..0000000000000000000000000000000000000000 --- a/spaces/Wayben/ChatGPT/utils.py +++ /dev/null @@ -1,389 +0,0 @@ -# -*- coding:utf-8 -*- -from __future__ import annotations -from typing import TYPE_CHECKING, Any, Callable, Dict, List, Tuple, Type -import logging -import json -import os -import datetime -import hashlib -import csv -import requests -import re - -import gradio as gr -from pypinyin import lazy_pinyin -import tiktoken -import mdtex2html -from markdown import markdown -from pygments import highlight -from pygments.lexers import get_lexer_by_name -from pygments.formatters import HtmlFormatter - -from presets import * - -# logging.basicConfig(level=logging.INFO, format="%(asctime)s [%(levelname)s] [%(filename)s:%(lineno)d] %(message)s") - -if TYPE_CHECKING: - from typing import TypedDict - - class DataframeData(TypedDict): - headers: List[str] - data: List[List[str | int | bool]] - - -def count_token(message): - encoding = tiktoken.get_encoding("cl100k_base") - input_str = f"role: {message['role']}, content: {message['content']}" - length = 
len(encoding.encode(input_str)) - return length - - -def markdown_to_html_with_syntax_highlight(md_str): - def replacer(match): - lang = match.group(1) or "text" - code = match.group(2) - - try: - lexer = get_lexer_by_name(lang, stripall=True) - except ValueError: - lexer = get_lexer_by_name("text", stripall=True) - - formatter = HtmlFormatter() - highlighted_code = highlight(code, lexer, formatter) - - return f'
<pre><code class="{lang}">{highlighted_code}</code></pre>
' - - code_block_pattern = r"```(\w+)?\n([\s\S]+?)\n```" - md_str = re.sub(code_block_pattern, replacer, md_str, flags=re.MULTILINE) - - html_str = markdown(md_str) - return html_str - - -def normalize_markdown(md_text: str) -> str: - lines = md_text.split("\n") - normalized_lines = [] - inside_list = False - - for i, line in enumerate(lines): - if re.match(r"^(\d+\.|-|\*|\+)\s", line.strip()): - if not inside_list and i > 0 and lines[i - 1].strip() != "": - normalized_lines.append("") - inside_list = True - normalized_lines.append(line) - elif inside_list and line.strip() == "": - if i < len(lines) - 1 and not re.match( - r"^(\d+\.|-|\*|\+)\s", lines[i + 1].strip() - ): - normalized_lines.append(line) - continue - else: - inside_list = False - normalized_lines.append(line) - - return "\n".join(normalized_lines) - - -def convert_mdtext(md_text): - code_block_pattern = re.compile(r"```(.*?)(?:```|$)", re.DOTALL) - inline_code_pattern = re.compile(r"`(.*?)`", re.DOTALL) - code_blocks = code_block_pattern.findall(md_text) - non_code_parts = code_block_pattern.split(md_text)[::2] - - result = [] - for non_code, code in zip(non_code_parts, code_blocks + [""]): - if non_code.strip(): - non_code = normalize_markdown(non_code) - if inline_code_pattern.search(non_code): - result.append(markdown(non_code, extensions=["tables"])) - else: - result.append(mdtex2html.convert(non_code, extensions=["tables"])) - if code.strip(): - # _, code = detect_language(code) # 暂时去除代码高亮功能,因为在大段代码的情况下会出现问题 - # code = code.replace("\n\n", "\n") # 暂时去除代码中的空行,因为在大段代码的情况下会出现问题 - code = f"```{code}\n\n```" - code = markdown_to_html_with_syntax_highlight(code) - result.append(code) - result = "".join(result) - return result - -def convert_user(userinput): - userinput = userinput.replace("\n", "
") - return f"
{userinput}
" - -def detect_language(code): - if code.startswith("\n"): - first_line = "" - else: - first_line = code.strip().split("\n", 1)[0] - language = first_line.lower() if first_line else "" - code_without_language = code[len(first_line) :].lstrip() if first_line else code - return language, code_without_language - - -def construct_text(role, text): - return {"role": role, "content": text} - - -def construct_user(text): - return construct_text("user", text) - - -def construct_system(text): - return construct_text("system", text) - - -def construct_assistant(text): - return construct_text("assistant", text) - - -def construct_token_message(token, stream=False): - return f"Token 计数: {token}" - - -def delete_last_conversation(chatbot, history, previous_token_count): - if len(chatbot) > 0 and standard_error_msg in chatbot[-1][1]: - logging.info("由于包含报错信息,只删除chatbot记录") - chatbot.pop() - return chatbot, history - if len(history) > 0: - logging.info("删除了一组对话历史") - history.pop() - history.pop() - if len(chatbot) > 0: - logging.info("删除了一组chatbot对话") - chatbot.pop() - if len(previous_token_count) > 0: - logging.info("删除了一组对话的token计数记录") - previous_token_count.pop() - return ( - chatbot, - history, - previous_token_count, - construct_token_message(sum(previous_token_count)), - ) - - -def save_file(filename, system, history, chatbot): - logging.info("保存对话历史中……") - os.makedirs(HISTORY_DIR, exist_ok=True) - if filename.endswith(".json"): - json_s = {"system": system, "history": history, "chatbot": chatbot} - print(json_s) - with open(os.path.join(HISTORY_DIR, filename), "w") as f: - json.dump(json_s, f) - elif filename.endswith(".md"): - md_s = f"system: \n- {system} \n" - for data in history: - md_s += f"\n{data['role']}: \n- {data['content']} \n" - with open(os.path.join(HISTORY_DIR, filename), "w", encoding="utf8") as f: - f.write(md_s) - logging.info("保存对话历史完毕") - return os.path.join(HISTORY_DIR, filename) - - -def save_chat_history(filename, system, history, chatbot): - if filename == "": - return - if not filename.endswith(".json"): - filename += ".json" - return save_file(filename, system, history, chatbot) - - -def export_markdown(filename, system, history, chatbot): - if filename == "": - return - if not filename.endswith(".md"): - filename += ".md" - return save_file(filename, system, history, chatbot) - - -def load_chat_history(filename, system, history, chatbot): - logging.info("加载对话历史中……") - if type(filename) != str: - filename = filename.name - try: - with open(os.path.join(HISTORY_DIR, filename), "r") as f: - json_s = json.load(f) - try: - if type(json_s["history"][0]) == str: - logging.info("历史记录格式为旧版,正在转换……") - new_history = [] - for index, item in enumerate(json_s["history"]): - if index % 2 == 0: - new_history.append(construct_user(item)) - else: - new_history.append(construct_assistant(item)) - json_s["history"] = new_history - logging.info(new_history) - except: - # 没有对话历史 - pass - logging.info("加载对话历史完毕") - return filename, json_s["system"], json_s["history"], json_s["chatbot"] - except FileNotFoundError: - logging.info("没有找到对话历史文件,不执行任何操作") - return filename, system, history, chatbot - - -def sorted_by_pinyin(list): - return sorted(list, key=lambda char: lazy_pinyin(char)[0][0]) - - -def get_file_names(dir, plain=False, filetypes=[".json"]): - logging.info(f"获取文件名列表,目录为{dir},文件类型为{filetypes},是否为纯文本列表{plain}") - files = [] - try: - for type in filetypes: - files += [f for f in os.listdir(dir) if f.endswith(type)] - except FileNotFoundError: - files = [] - files = 
sorted_by_pinyin(files) - if files == []: - files = [""] - if plain: - return files - else: - return gr.Dropdown.update(choices=files) - - -def get_history_names(plain=False): - logging.info("获取历史记录文件名列表") - return get_file_names(HISTORY_DIR, plain) - - -def load_template(filename, mode=0): - logging.info(f"加载模板文件{filename},模式为{mode}(0为返回字典和下拉菜单,1为返回下拉菜单,2为返回字典)") - lines = [] - logging.info("Loading template...") - if filename.endswith(".json"): - with open(os.path.join(TEMPLATES_DIR, filename), "r", encoding="utf8") as f: - lines = json.load(f) - lines = [[i["act"], i["prompt"]] for i in lines] - else: - with open( - os.path.join(TEMPLATES_DIR, filename), "r", encoding="utf8" - ) as csvfile: - reader = csv.reader(csvfile) - lines = list(reader) - lines = lines[1:] - if mode == 1: - return sorted_by_pinyin([row[0] for row in lines]) - elif mode == 2: - return {row[0]: row[1] for row in lines} - else: - choices = sorted_by_pinyin([row[0] for row in lines]) - return {row[0]: row[1] for row in lines}, gr.Dropdown.update( - choices=choices, value=choices[0] - ) - - -def get_template_names(plain=False): - logging.info("获取模板文件名列表") - return get_file_names(TEMPLATES_DIR, plain, filetypes=[".csv", "json"]) - - -def get_template_content(templates, selection, original_system_prompt): - logging.info(f"应用模板中,选择为{selection},原始系统提示为{original_system_prompt}") - try: - return templates[selection] - except: - return original_system_prompt - - -def reset_state(): - logging.info("重置状态") - return [], [], [], construct_token_message(0) - - -def reset_textbox(): - return gr.update(value="") - - -def reset_default(): - global API_URL - API_URL = "https://api.openai.com/v1/chat/completions" - os.environ.pop("HTTPS_PROXY", None) - os.environ.pop("https_proxy", None) - return gr.update(value=API_URL), gr.update(value=""), "API URL 和代理已重置" - - -def change_api_url(url): - global API_URL - API_URL = url - msg = f"API地址更改为了{url}" - logging.info(msg) - return msg - - -def change_proxy(proxy): - os.environ["HTTPS_PROXY"] = proxy - msg = f"代理更改为了{proxy}" - logging.info(msg) - return msg - - -def hide_middle_chars(s): - if len(s) <= 8: - return s - else: - head = s[:4] - tail = s[-4:] - hidden = "*" * (len(s) - 8) - return head + hidden + tail - - -def submit_key(key): - key = key.strip() - msg = f"API密钥更改为了{hide_middle_chars(key)}" - logging.info(msg) - return key, msg - - -def sha1sum(filename): - sha1 = hashlib.sha1() - sha1.update(filename.encode("utf-8")) - return sha1.hexdigest() - - -def replace_today(prompt): - today = datetime.datetime.today().strftime("%Y-%m-%d") - return prompt.replace("{current_date}", today) - - -def get_geoip(): - response = requests.get("https://ipapi.co/json/", timeout=5) - try: - data = response.json() - except: - data = {"error": True, "reason": "连接ipapi失败"} - if "error" in data.keys(): - logging.warning(f"无法获取IP地址信息。\n{data}") - if data["reason"] == "RateLimited": - return ( - f"获取IP地理位置失败,因为达到了检测IP的速率限制。聊天功能可能仍然可用,但请注意,如果您的IP地址在不受支持的地区,您可能会遇到问题。" - ) - else: - return f"获取IP地理位置失败。原因:{data['reason']}。你仍然可以使用聊天功能。" - else: - country = data["country_name"] - if country == "China": - text = "**您的IP区域:中国。请立即检查代理设置,在不受支持的地区使用API可能导致账号被封禁。**" - else: - text = f"您的IP区域:{country}。" - logging.info(text) - return text - - -def find_n(lst, max_num): - n = len(lst) - total = sum(lst) - - if total < max_num: - return n - - for i in range(len(lst)): - if total - lst[i] < max_num: - return n - i -1 - total = total - lst[i] - return 1 diff --git a/spaces/Xenos14/XenoEngine-SD-webui/on_start.sh 
b/spaces/Xenos14/XenoEngine-SD-webui/on_start.sh deleted file mode 100644 index 516dfb23148f095309ab46d3d8aa6e8a0697aac1..0000000000000000000000000000000000000000 --- a/spaces/Xenos14/XenoEngine-SD-webui/on_start.sh +++ /dev/null @@ -1,286 +0,0 @@ -#!/bin/bash -set -euo pipefail - -function download-model() { - local _option=$1 - local _filename=$2 - local _url=$3 - local _dir -#!/bin/bash -set -euo pipefail - -function download-model() { - local _option=$1 - local _filename=$2 - local _url=$3 - local _dir - - ! [ $# -eq 3 ] && (echo "usage: "; for o in checkpoint lora vae control-net embedding; do echo " \$ download-model --$o "; done) || true - [ $# -eq 0 ] && return 0 || ! [ $# -eq 3 ] && (echo ""; echo "error - invalid number of arguments (expected 3, received $#)"; echo -n "\$ download-model $1"; (for arg in "${@: 2}"; do echo -n " \"${arg//\"/\\\"}\""; done) && echo "") && return 1 || true - - case ${_option,,} in - --checkpoint) _dir="/app/stable-diffusion-webui/models/Stable-diffusion";; - --lora) _dir="/app/stable-diffusion-webui/extensions/sd-webui-additional-networks/models/LoRA";; - --vae) _dir="/app/stable-diffusion-webui/models/VAE";; - --control-net) _dir="/app/stable-diffusion-webui/models/ControlNet";; - --embedding) _dir="/app/stable-diffusion-webui/embeddings";; - - *) echo "error - unknown first argument: '$1' (valid options are --checkpoint, --lora, --vae, --control-net or --embedding):"; echo "\$ download-model $1 \"$2\" \"$3\""; return 1;; - esac - - echo "\$ download-model $_option \"$2\" \"$3\"" ; echo "" - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M $_url -d $_dir -o $_filename && echo "" -} - -## ---------------------------- - -## Adds a header to the webui on Hugging Face Spaces. -sed -i -e '/demo:/r /app/stable-diffusion-webui/header_patch.py' /app/stable-diffusion-webui/modules/ui.py - -## ---------------------------- - -## Installing less models if $IS_SHARED_UI environment variable is set. 
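## Note: `${IS_SHARED_UI:-0}` expands to the variable's value, falling back to 0 when it is
## unset or empty, so the lightweight path below only runs when the Space explicitly sets
## IS_SHARED_UI. A minimal standalone form of the same guard (illustrative only):
##   [ "${IS_SHARED_UI:-0}" != 0 ] && echo "shared UI mode" || echo "full install"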
-if [ ${IS_SHARED_UI:-0} != 0 ]; then - download-model --checkpoint "v1-5-pruned-emaonly.safetensors" "https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/39593d5650112b4cc580433f6b0435385882d819/v1-5-pruned-emaonly.safetensors" - download-model --checkpoint "v1-5-pruned-emaonly.yaml" "https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/39593d5650112b4cc580433f6b0435385882d819/v1-inference.yaml" - download-model --control-net "cldm_v15.yaml" "https://huggingface.co/webui/ControlNet-modules-safetensors/resolve/87c3affbcad3baec52ffe39cac3a15a94902aed3/cldm_v15.yaml" - download-model --control-net "control_canny-fp16.safetensors" "https://huggingface.co/webui/ControlNet-modules-safetensors/resolve/87c3affbcad3baec52ffe39cac3a15a94902aed3/control_canny-fp16.safetensors" - download-model --control-net "control_depth-fp16.safetensors" "https://huggingface.co/webui/ControlNet-modules-safetensors/resolve/87c3affbcad3baec52ffe39cac3a15a94902aed3/control_depth-fp16.safetensors" - download-model --control-net "control_normal-fp16.safetensors" "https://huggingface.co/webui/ControlNet-modules-safetensors/resolve/87c3affbcad3baec52ffe39cac3a15a94902aed3/control_normal-fp16.safetensors" - download-model --control-net "control_openpose-fp16.safetensors" "https://huggingface.co/webui/ControlNet-modules-safetensors/resolve/87c3affbcad3baec52ffe39cac3a15a94902aed3/control_openpose-fp16.safetensors" - download-model --control-net "control_scribble-fp16.safetensors" "https://huggingface.co/webui/ControlNet-modules-safetensors/resolve/87c3affbcad3baec52ffe39cac3a15a94902aed3/control_scribble-fp16.safetensors" - download-model --checkpoint "AtoZovyaRPGArtistTools15_sd15V1.safetensors" "https://civitai.com/api/download/models/10185" - download-model --embedding "bad_prompt_version2.pt" "https://huggingface.co/datasets/Nerfgun3/bad_prompt/resolve/72fd9d6011c2ba87b5847b7e45e6603917e3cbed/bad_prompt_version2.pt" - sed -i -e '/(modelmerger_interface, \"Checkpoint Merger\", \"modelmerger\"),/d' /app/stable-diffusion-webui/modules/ui.py - sed -i -e '/(train_interface, \"Train\", \"ti\"),/d' /app/stable-diffusion-webui/modules/ui.py - sed -i -e '/extensions_interface, \"Extensions\", \"extensions\"/d' /app/stable-diffusion-webui/modules/ui.py - sed -i -e '/settings_interface, \"Settings\", \"settings\"/d' /app/stable-diffusion-webui/modules/ui.py - rm -rf /app/stable-diffusion-webui/scripts /app/stable-diffusion-webui/extensions/deforum-for-automatic1111-webui /app/stable-diffusion-webui/extensions/stable-diffusion-webui-images-browser /app/stable-diffusion-webui/extensions/sd-civitai-browser /app/stable-diffusion-webui/extensions/sd-webui-additional-networks - cp -f shared-config.json config.json - cp -f shared-ui-config.json ui-config.json - exit 0 -fi -## End of lightweight installation for $IS_SHARED_UI setup. 
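## Note on the helper above: every `download-model` call in this script resolves to a single
## aria2c download into the directory selected by its first flag. As a rough, illustrative
## sketch (the filename and URL below are placeholders, not models installed by this Space),
## a call such as
##   download-model --checkpoint "example.safetensors" "https://example.com/example.safetensors"
## runs approximately:
##   aria2c --console-log-level=error -c -x 16 -s 16 -k 1M "https://example.com/example.safetensors" \
##     -d /app/stable-diffusion-webui/models/Stable-diffusion -o "example.safetensors"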
- -## ---------------------------- -## env $IS_SHARED_UI is not set -## ---------------------------- - -## Stable Diffusion 2.1 · 768 base model: -#download-model --checkpoint "v2-1_768-ema-pruned.safetensors" "https://huggingface.co/stabilityai/stable-diffusion-2-1/resolve/36a01dc742066de2e8c91e7cf0b8f6b53ef53da1/v2-1_768-ema-pruned.safetensors" -#download-model --checkpoint "v2-1_768-ema-pruned.yaml" "https://raw.githubusercontent.com/Stability-AI/stablediffusion/fc1488421a2761937b9d54784194157882cbc3b1/configs/stable-diffusion/v2-inference-v.yaml" - -## Stable Diffusion 1.5 · 512 base model: -#download-model --checkpoint "v1-5-pruned-emaonly.safetensors" "https://huggingface.co/stabilityai/sd-vae-ft-mse-original/resolve/main/vae-ft-mse-840000-ema-pruned.safetensors" -#download-model --checkpoint "v1-5-pruned-emaonly.yaml" "https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/39593d5650112b4cc580433f6b0435385882d819/v1-inference.yaml" - -## ---------------------------- - -## LoRA (low-rank adaptation) · epi_noiseoffset v2: -#download-model --lora "hyperfusion100k_v4.safetensors" "https://civitai.com/api/download/models/19987" -#download-model --lora "jimLeeDCComicsMarvel_offset.safetensors" "https://civitai.com/api/download/models/10580" -#download-model --lora "epi_noiseoffset2.safetensors" "https://civitai.com/api/download/models/16576" -#download-model --lora "agneseInnocente_1.safetensors" "https://civitai.com/api/download/models/34144" -#download-model --lora "seethru_v10.safetensors" "https://civitai.com/api/download/models/32083" -#download-model --lora "CommunityLoraExtract_lora320comicbabesV1.safetensors" "https://civitai.com/api/download/models/33744" -#download-model --lora "CommunityLoraExtract_lora320revanimated.safetensors" "https://civitai.com/api/download/models/33787" - -## ---------------------------- - -## VAE (variational autoencoder) · VAE 840k EMA: -download-model --vae "vae-ft-mse-840000-ema-pruned.safetensors" "https://huggingface.co/stabilityai/sd-vae-ft-mse-original/resolve/main/vae-ft-mse-840000-ema-pruned.safetensors" -download-model --vae "Grapefruit.vae.pt" "https://huggingface.co/iZELX1/Grapefruit/resolve/main/Grapefruit.vae.pt" -download-model --vae "kl-f8-anime.ckpt" "https://huggingface.co/hakurei/waifu-diffusion-v1-4/resolve/main/vae/kl-f8-anime.ckpt" - -## ---------------------------- - -## ControlNet · Pre-extracted models: -download-model --control-net "cldm_v15.yaml" "https://huggingface.co/webui/ControlNet-modules-safetensors/resolve/87c3affbcad3baec52ffe39cac3a15a94902aed3/cldm_v15.yaml" -download-model --control-net "cldm_v21.yaml" "https://huggingface.co/webui/ControlNet-modules-safetensors/resolve/87c3affbcad3baec52ffe39cac3a15a94902aed3/cldm_v21.yaml" -download-model --control-net "control_canny-fp16.safetensors" "https://huggingface.co/webui/ControlNet-modules-safetensors/resolve/87c3affbcad3baec52ffe39cac3a15a94902aed3/control_canny-fp16.safetensors" -download-model --control-net "control_depth-fp16.safetensors" "https://huggingface.co/webui/ControlNet-modules-safetensors/resolve/87c3affbcad3baec52ffe39cac3a15a94902aed3/control_depth-fp16.safetensors" -download-model --control-net "control_hed-fp16.safetensors" "https://huggingface.co/webui/ControlNet-modules-safetensors/resolve/87c3affbcad3baec52ffe39cac3a15a94902aed3/control_hed-fp16.safetensors" -download-model --control-net "control_normal-fp16.safetensors" 
"https://huggingface.co/webui/ControlNet-modules-safetensors/resolve/87c3affbcad3baec52ffe39cac3a15a94902aed3/control_normal-fp16.safetensors" -download-model --control-net "control_openpose-fp16.safetensors" "https://huggingface.co/webui/ControlNet-modules-safetensors/resolve/87c3affbcad3baec52ffe39cac3a15a94902aed3/control_openpose-fp16.safetensors" -download-model --control-net "control_scribble-fp16.safetensors" "https://huggingface.co/webui/ControlNet-modules-safetensors/resolve/87c3affbcad3baec52ffe39cac3a15a94902aed3/control_scribble-fp16.safetensors" -download-model --control-net "control_v1p_sd15_qrcode.safetensors" "https://huggingface.co/DionTimmer/controlnet_qrcode/resolve/main/control_v1p_sd15_qrcode.safetensors" -download-model --control-net "control_v1p_sd15_qrcode.yaml" "https://huggingface.co/DionTimmer/controlnet_qrcode/resolve/main/control_v1p_sd15_qrcode.yaml" - - - -## ---------------------------- - -## Embedding -#download-model --embedding "bad_prompt_version2.pt" "https://huggingface.co/datasets/Nerfgun3/bad_prompt/resolve/72fd9d6011c2ba87b5847b7e45e6603917e3cbed/bad_prompt_version2.pt" -#download-model --embedding "easynegative.safetensors" "https://huggingface.co/embed/EasyNegative/resolve/main/EasyNegative.safetensors" -#download-model --embedding "microwaist_01bEmbedding.pt" "https://civitai.com/api/download/models/5246" - -## ---------------------------- - -## Checkpoints: -#download-model --checkpoint "babes_20.safetensors" "https://huggingface.co/Xenos14/zMine-TestModel/resolve/main/babes_20.safetensors" -#download-model --checkpoint "icbinpICantBelieveIts_final.safetensors" "https://huggingface.co/Xenos14/zMine-TestModel/resolve/main/icbinpICantBelieveIts_final.safetensors" -download-model --checkpoint "Kitsch-in-Sync-v1.safetensors" "https://huggingface.co/Xenos14/zMine-TestModel/resolve/main/Kitsch-in-Sync-v1.safetensors" -download-model --checkpoint "XenoGASM-v3.safetensors" "https://huggingface.co/Xenos14/XenoREALITY/resolve/main/XenoGASM-v3.safetensors" -download-model --checkpoint "XenoPHOBIA-v1.safetensors" "https://huggingface.co/Xenos14/XenoREALITY/resolve/main/XenoPHOBIA-v1.safetensors" -download-model --checkpoint "blankCanvas_v10.safetensors" "https://civitai.com/api/download/models/114414" -download-model --checkpoint "XenoENGINE-Artstyle.v4.8.safetensors" "https://huggingface.co/Xenos14/XenoREALITY/resolve/main/XenoENGINE-Artstyle.v4.8.safetensors" -download-model --checkpoint "XenoREALITY-v2.safetensors" "https://huggingface.co/Xenos14/XenoREALITY/resolve/main/XenoREALITY-v2.safetensors" - -## ---------------------------- - -## Add additional models that you want to install on startup. Replace URL and FILENAME from the examples below with your values. - -## Usage: -## download-model --checkpoint -## download-model --lora -## download-model --vae -## download-model --control-net -## download-model --embedding - -## ---------------------------- - - -## Checkpoint · Example: -# download-model --checkpoint "FILENAME" "URL" - -## LORA (low-rank adaptation) · Example: -# download-model --lora "FILENAME" "URL" - -## VAE (variational autoencoder) · Example: -# download-model --vae "FILENAME" "URL" - - ! [ $# -eq 3 ] && (echo "usage: "; for o in checkpoint lora vae control-net embedding; do echo " \$ download-model --$o "; done) || true - [ $# -eq 0 ] && return 0 || ! 
[ $# -eq 3 ] && (echo ""; echo "error - invalid number of arguments (expected 3, received $#)"; echo -n "\$ download-model $1"; (for arg in "${@: 2}"; do echo -n " \"${arg//\"/\\\"}\""; done) && echo "") && return 1 || true - - case ${_option,,} in - --checkpoint) _dir="/app/stable-diffusion-webui/models/Stable-diffusion";; - --lora) _dir="/app/stable-diffusion-webui/extensions/sd-webui-additional-networks/models/LoRA";; - --vae) _dir="/app/stable-diffusion-webui/models/VAE";; - --control-net) _dir="/app/stable-diffusion-webui/models/ControlNet";; - --embedding) _dir="/app/stable-diffusion-webui/embeddings";; - - *) echo "error - unknown first argument: '$1' (valid options are --checkpoint, --lora, --vae, --control-net or --embedding):"; echo "\$ download-model $1 \"$2\" \"$3\""; return 1;; - esac - - echo "\$ download-model $_option \"$2\" \"$3\"" ; echo "" - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M $_url -d $_dir -o $_filename && echo "" -} - -## ---------------------------- - -## Adds a header to the webui on Hugging Face Spaces. -sed -i -e '/demo:/r /app/stable-diffusion-webui/header_patch.py' /app/stable-diffusion-webui/modules/ui.py - -## ---------------------------- - -## Installing less models if $IS_SHARED_UI environment variable is set. -if [ ${IS_SHARED_UI:-0} != 0 ]; then - download-model --checkpoint "v1-5-pruned-emaonly.safetensors" "https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/39593d5650112b4cc580433f6b0435385882d819/v1-5-pruned-emaonly.safetensors" - download-model --checkpoint "v1-5-pruned-emaonly.yaml" "https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/39593d5650112b4cc580433f6b0435385882d819/v1-inference.yaml" - download-model --control-net "cldm_v15.yaml" "https://huggingface.co/webui/ControlNet-modules-safetensors/resolve/87c3affbcad3baec52ffe39cac3a15a94902aed3/cldm_v15.yaml" - download-model --control-net "control_canny-fp16.safetensors" "https://huggingface.co/webui/ControlNet-modules-safetensors/resolve/87c3affbcad3baec52ffe39cac3a15a94902aed3/control_canny-fp16.safetensors" - download-model --control-net "control_depth-fp16.safetensors" "https://huggingface.co/webui/ControlNet-modules-safetensors/resolve/87c3affbcad3baec52ffe39cac3a15a94902aed3/control_depth-fp16.safetensors" - download-model --control-net "control_normal-fp16.safetensors" "https://huggingface.co/webui/ControlNet-modules-safetensors/resolve/87c3affbcad3baec52ffe39cac3a15a94902aed3/control_normal-fp16.safetensors" - download-model --control-net "control_openpose-fp16.safetensors" "https://huggingface.co/webui/ControlNet-modules-safetensors/resolve/87c3affbcad3baec52ffe39cac3a15a94902aed3/control_openpose-fp16.safetensors" - download-model --control-net "control_scribble-fp16.safetensors" "https://huggingface.co/webui/ControlNet-modules-safetensors/resolve/87c3affbcad3baec52ffe39cac3a15a94902aed3/control_scribble-fp16.safetensors" - download-model --checkpoint "AtoZovyaRPGArtistTools15_sd15V1.safetensors" "https://civitai.com/api/download/models/10185" - download-model --embedding "bad_prompt_version2.pt" "https://huggingface.co/datasets/Nerfgun3/bad_prompt/resolve/72fd9d6011c2ba87b5847b7e45e6603917e3cbed/bad_prompt_version2.pt" - sed -i -e '/(modelmerger_interface, \"Checkpoint Merger\", \"modelmerger\"),/d' /app/stable-diffusion-webui/modules/ui.py - sed -i -e '/(train_interface, \"Train\", \"ti\"),/d' /app/stable-diffusion-webui/modules/ui.py - sed -i -e '/extensions_interface, \"Extensions\", \"extensions\"/d' /app/stable-diffusion-webui/modules/ui.py - 
sed -i -e '/settings_interface, \"Settings\", \"settings\"/d' /app/stable-diffusion-webui/modules/ui.py - rm -rf /app/stable-diffusion-webui/scripts /app/stable-diffusion-webui/extensions/deforum-for-automatic1111-webui /app/stable-diffusion-webui/extensions/stable-diffusion-webui-images-browser /app/stable-diffusion-webui/extensions/sd-civitai-browser /app/stable-diffusion-webui/extensions/sd-webui-additional-networks - cp -f shared-config.json config.json - cp -f shared-ui-config.json ui-config.json - exit 0 -fi -## End of lightweight installation for $IS_SHARED_UI setup. - -## ---------------------------- -## env $IS_SHARED_UI is not set -## ---------------------------- - -## Stable Diffusion 2.1 · 768 base model: -#download-model --checkpoint "v2-1_768-ema-pruned.safetensors" "https://huggingface.co/stabilityai/stable-diffusion-2-1/resolve/36a01dc742066de2e8c91e7cf0b8f6b53ef53da1/v2-1_768-ema-pruned.safetensors" -#download-model --checkpoint "v2-1_768-ema-pruned.yaml" "https://raw.githubusercontent.com/Stability-AI/stablediffusion/fc1488421a2761937b9d54784194157882cbc3b1/configs/stable-diffusion/v2-inference-v.yaml" - -## Stable Diffusion 1.5 · 512 base model: -#download-model --checkpoint "v1-5-pruned-emaonly.safetensors" "https://huggingface.co/stabilityai/sd-vae-ft-mse-original/resolve/main/vae-ft-mse-840000-ema-pruned.safetensors" -#download-model --checkpoint "v1-5-pruned-emaonly.yaml" "https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/39593d5650112b4cc580433f6b0435385882d819/v1-inference.yaml" - -## ---------------------------- - -## LoRA (low-rank adaptation) · epi_noiseoffset v2: -#download-model --lora "hyperfusion100k_v4.safetensors" "https://civitai.com/api/download/models/19987" -#download-model --lora "jimLeeDCComicsMarvel_offset.safetensors" "https://civitai.com/api/download/models/10580" -#download-model --lora "epi_noiseoffset2.safetensors" "https://civitai.com/api/download/models/16576" -#download-model --lora "agneseInnocente_1.safetensors" "https://civitai.com/api/download/models/34144" -#download-model --lora "seethru_v10.safetensors" "https://civitai.com/api/download/models/32083" -#download-model --lora "CommunityLoraExtract_lora320comicbabesV1.safetensors" "https://civitai.com/api/download/models/33744" -#download-model --lora "CommunityLoraExtract_lora320revanimated.safetensors" "https://civitai.com/api/download/models/33787" - -## ---------------------------- - -## VAE (variational autoencoder) · VAE 840k EMA: -download-model --vae "vae-ft-mse-840000-ema-pruned.safetensors" "https://huggingface.co/stabilityai/sd-vae-ft-mse-original/resolve/main/vae-ft-mse-840000-ema-pruned.safetensors" -download-model --vae "Grapefruit.vae.pt" "https://huggingface.co/iZELX1/Grapefruit/resolve/main/Grapefruit.vae.pt" -download-model --vae "kl-f8-anime.ckpt" "https://huggingface.co/hakurei/waifu-diffusion-v1-4/resolve/main/vae/kl-f8-anime.ckpt" - -## ---------------------------- - -## ControlNet · Pre-extracted models: -download-model --control-net "cldm_v15.yaml" "https://huggingface.co/webui/ControlNet-modules-safetensors/resolve/87c3affbcad3baec52ffe39cac3a15a94902aed3/cldm_v15.yaml" -download-model --control-net "cldm_v21.yaml" "https://huggingface.co/webui/ControlNet-modules-safetensors/resolve/87c3affbcad3baec52ffe39cac3a15a94902aed3/cldm_v21.yaml" -download-model --control-net "control_canny-fp16.safetensors" "https://huggingface.co/webui/ControlNet-modules-safetensors/resolve/87c3affbcad3baec52ffe39cac3a15a94902aed3/control_canny-fp16.safetensors" 
-download-model --control-net "control_depth-fp16.safetensors" "https://huggingface.co/webui/ControlNet-modules-safetensors/resolve/87c3affbcad3baec52ffe39cac3a15a94902aed3/control_depth-fp16.safetensors" -download-model --control-net "control_hed-fp16.safetensors" "https://huggingface.co/webui/ControlNet-modules-safetensors/resolve/87c3affbcad3baec52ffe39cac3a15a94902aed3/control_hed-fp16.safetensors" -download-model --control-net "control_normal-fp16.safetensors" "https://huggingface.co/webui/ControlNet-modules-safetensors/resolve/87c3affbcad3baec52ffe39cac3a15a94902aed3/control_normal-fp16.safetensors" -download-model --control-net "control_openpose-fp16.safetensors" "https://huggingface.co/webui/ControlNet-modules-safetensors/resolve/87c3affbcad3baec52ffe39cac3a15a94902aed3/control_openpose-fp16.safetensors" -download-model --control-net "control_scribble-fp16.safetensors" "https://huggingface.co/webui/ControlNet-modules-safetensors/resolve/87c3affbcad3baec52ffe39cac3a15a94902aed3/control_scribble-fp16.safetensors" -download-model --control-net "control_v1p_sd15_qrcode.safetensors" "https://huggingface.co/DionTimmer/controlnet_qrcode/resolve/main/control_v1p_sd15_qrcode.safetensors" -download-model --control-net "control_v1p_sd15_qrcode.yaml" "https://huggingface.co/DionTimmer/controlnet_qrcode/resolve/main/control_v1p_sd15_qrcode.yaml" - - - -## ---------------------------- - -## Embedding -download-model --embedding "bad_prompt_version2.pt" "https://huggingface.co/datasets/Nerfgun3/bad_prompt/resolve/72fd9d6011c2ba87b5847b7e45e6603917e3cbed/bad_prompt_version2.pt" -download-model --embedding "easynegative.safetensors" "https://huggingface.co/embed/EasyNegative/resolve/main/EasyNegative.safetensors" -#download-model --embedding "microwaist_01bEmbedding.pt" "https://civitai.com/api/download/models/5246" - -## ---------------------------- - -## Checkpoints: -download-model --checkpoint "babes_20.safetensors" "https://huggingface.co/Xenos14/zMine-TestModel/resolve/main/babes_20.safetensors" -download-model --checkpoint "icbinpICantBelieveIts_final.safetensors" "https://huggingface.co/Xenos14/zMine-TestModel/resolve/main/icbinpICantBelieveIts_final.safetensors" -#download-model --checkpoint "XenoGasmArt.safetensors" "https://huggingface.co/Xenos14/zMine-TestModel/resolve/main/XenoGasmArt.safetensors" -download-model --checkpoint "XenoGASM-v21.safetensors" "https://huggingface.co/Xenos14/XenoREALITY/resolve/main/XenoGASM-v21.safetensors" -#download-model --checkpoint "XenoEngine5th.1.safetensors" "https://huggingface.co/Xenos14/zMine-TestModel/resolve/main/XenoEngine5th.1.safetensors" -download-model --checkpoint "blankCanvas_v10.safetensors" "https://civitai.com/api/download/models/114414" -download-model --checkpoint "XenoENGINE-ArtStyle-v4.7.safetensors" "https://huggingface.co/Xenos14/XenoREALITY/resolve/main/XenoENGINE-Artstyle-v4.7.safetensors" -download-model --checkpoint "XenoREALITY-v1.safetensors" "https://huggingface.co/Xenos14/XenoREALITY/resolve/main/XenoREALITY-v1.safetensors" - -## ---------------------------- - -## Add additional models that you want to install on startup. Replace URL and FILENAME from the examples below with your values. 
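## For example, a hypothetical extra LoRA (placeholder filename and URL, shown only to
## illustrate the pattern) would be added with:
# download-model --lora "my-style-lora.safetensors" "https://huggingface.co/<user>/<repo>/resolve/main/my-style-lora.safetensors"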
- -## Usage: -## download-model --checkpoint -## download-model --lora -## download-model --vae -## download-model --control-net -## download-model --embedding - -## ---------------------------- - - -## Checkpoint · Example: -# download-model --checkpoint "FILENAME" "URL" - -## LORA (low-rank adaptation) · Example: -# download-model --lora "FILENAME" "URL" - -## VAE (variational autoencoder) · Example: -# download-model --vae "FILENAME" "URL" diff --git a/spaces/XzJosh/Gun-Bert-VITS2/utils.py b/spaces/XzJosh/Gun-Bert-VITS2/utils.py deleted file mode 100644 index c6aa6cfc64c33e2eed33e9845239e831fc1c4a1a..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Gun-Bert-VITS2/utils.py +++ /dev/null @@ -1,293 +0,0 @@ -import os -import glob -import sys -import argparse -import logging -import json -import subprocess -import numpy as np -from scipy.io.wavfile import read -import torch - -MATPLOTLIB_FLAG = False - -logging.basicConfig(stream=sys.stdout, level=logging.DEBUG) -logger = logging - - -def load_checkpoint(checkpoint_path, model, optimizer=None, skip_optimizer=False): - assert os.path.isfile(checkpoint_path) - checkpoint_dict = torch.load(checkpoint_path, map_location='cpu') - iteration = checkpoint_dict['iteration'] - learning_rate = checkpoint_dict['learning_rate'] - if optimizer is not None and not skip_optimizer and checkpoint_dict['optimizer'] is not None: - optimizer.load_state_dict(checkpoint_dict['optimizer']) - elif optimizer is None and not skip_optimizer: - #else: #Disable this line if Infer ,and enable the line upper - new_opt_dict = optimizer.state_dict() - new_opt_dict_params = new_opt_dict['param_groups'][0]['params'] - new_opt_dict['param_groups'] = checkpoint_dict['optimizer']['param_groups'] - new_opt_dict['param_groups'][0]['params'] = new_opt_dict_params - optimizer.load_state_dict(new_opt_dict) - saved_state_dict = checkpoint_dict['model'] - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - new_state_dict = {} - for k, v in state_dict.items(): - try: - #assert "emb_g" not in k - # print("load", k) - new_state_dict[k] = saved_state_dict[k] - assert saved_state_dict[k].shape == v.shape, (saved_state_dict[k].shape, v.shape) - except: - print("error, %s is not in the checkpoint" % k) - new_state_dict[k] = v - if hasattr(model, 'module'): - model.module.load_state_dict(new_state_dict, strict=False) - else: - model.load_state_dict(new_state_dict, strict=False) - print("load ") - logger.info("Loaded checkpoint '{}' (iteration {})".format( - checkpoint_path, iteration)) - return model, optimizer, learning_rate, iteration - - -def save_checkpoint(model, optimizer, learning_rate, iteration, checkpoint_path): - logger.info("Saving model and optimizer state at iteration {} to {}".format( - iteration, checkpoint_path)) - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - torch.save({'model': state_dict, - 'iteration': iteration, - 'optimizer': optimizer.state_dict(), - 'learning_rate': learning_rate}, checkpoint_path) - - -def summarize(writer, global_step, scalars={}, histograms={}, images={}, audios={}, audio_sampling_rate=22050): - for k, v in scalars.items(): - writer.add_scalar(k, v, global_step) - for k, v in histograms.items(): - writer.add_histogram(k, v, global_step) - for k, v in images.items(): - writer.add_image(k, v, global_step, dataformats='HWC') - for k, v in audios.items(): - writer.add_audio(k, v, global_step, audio_sampling_rate) 
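
# Rough usage sketch for the checkpoint/logging helpers in this module (names such as
# `hps`, `net_g`, `optim_g`, `writer` and `loss_g` are hypothetical placeholders, not
# defined here):
#
#   ckpt_path = latest_checkpoint_path(hps.model_dir, "G_*.pth")
#   net_g, optim_g, lr, global_step = load_checkpoint(ckpt_path, net_g, optim_g)
#   ...training steps...
#   summarize(writer, global_step, scalars={"loss/g/total": float(loss_g)})
#   save_checkpoint(net_g, optim_g, lr, global_step,
#                   os.path.join(hps.model_dir, f"G_{global_step}.pth"))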
- - -def latest_checkpoint_path(dir_path, regex="G_*.pth"): - f_list = glob.glob(os.path.join(dir_path, regex)) - f_list.sort(key=lambda f: int("".join(filter(str.isdigit, f)))) - x = f_list[-1] - print(x) - return x - - -def plot_spectrogram_to_numpy(spectrogram): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(10, 2)) - im = ax.imshow(spectrogram, aspect="auto", origin="lower", - interpolation='none') - plt.colorbar(im, ax=ax) - plt.xlabel("Frames") - plt.ylabel("Channels") - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def plot_alignment_to_numpy(alignment, info=None): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(6, 4)) - im = ax.imshow(alignment.transpose(), aspect='auto', origin='lower', - interpolation='none') - fig.colorbar(im, ax=ax) - xlabel = 'Decoder timestep' - if info is not None: - xlabel += '\n\n' + info - plt.xlabel(xlabel) - plt.ylabel('Encoder timestep') - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def load_wav_to_torch(full_path): - sampling_rate, data = read(full_path) - return torch.FloatTensor(data.astype(np.float32)), sampling_rate - - -def load_filepaths_and_text(filename, split="|"): - with open(filename, encoding='utf-8') as f: - filepaths_and_text = [line.strip().split(split) for line in f] - return filepaths_and_text - - -def get_hparams(init=True): - parser = argparse.ArgumentParser() - parser.add_argument('-c', '--config', type=str, default="./configs/base.json", - help='JSON file for configuration') - parser.add_argument('-m', '--model', type=str, default="./OUTPUT_MODEL", - help='Model name') - parser.add_argument('--cont', dest='cont', action="store_true", default=False, help="whether to continue training on the latest checkpoint") - - args = parser.parse_args() - model_dir = os.path.join("./logs", args.model) - - if not os.path.exists(model_dir): - os.makedirs(model_dir) - - config_path = args.config - config_save_path = os.path.join(model_dir, "config.json") - if init: - with open(config_path, "r") as f: - data = f.read() - with open(config_save_path, "w") as f: - f.write(data) - else: - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - hparams.cont = args.cont - return hparams - - -def clean_checkpoints(path_to_models='logs/44k/', n_ckpts_to_keep=2, sort_by_time=True): - """Freeing up space by deleting saved ckpts - - Arguments: - path_to_models -- Path to the model directory - n_ckpts_to_keep -- Number of ckpts to keep, excluding G_0.pth and D_0.pth - sort_by_time -- True -> chronologically delete ckpts - False -> lexicographically delete ckpts - """ - import re - ckpts_files = [f for f in os.listdir(path_to_models) if 
os.path.isfile(os.path.join(path_to_models, f))] - name_key = (lambda _f: int(re.compile('._(\d+)\.pth').match(_f).group(1))) - time_key = (lambda _f: os.path.getmtime(os.path.join(path_to_models, _f))) - sort_key = time_key if sort_by_time else name_key - x_sorted = lambda _x: sorted([f for f in ckpts_files if f.startswith(_x) and not f.endswith('_0.pth')], - key=sort_key) - to_del = [os.path.join(path_to_models, fn) for fn in - (x_sorted('G')[:-n_ckpts_to_keep] + x_sorted('D')[:-n_ckpts_to_keep])] - del_info = lambda fn: logger.info(f".. Free up space by deleting ckpt {fn}") - del_routine = lambda x: [os.remove(x), del_info(x)] - rs = [del_routine(fn) for fn in to_del] - -def get_hparams_from_dir(model_dir): - config_save_path = os.path.join(model_dir, "config.json") - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_file(config_path): - with open(config_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - return hparams - - -def check_git_hash(model_dir): - source_dir = os.path.dirname(os.path.realpath(__file__)) - if not os.path.exists(os.path.join(source_dir, ".git")): - logger.warn("{} is not a git repository, therefore hash value comparison will be ignored.".format( - source_dir - )) - return - - cur_hash = subprocess.getoutput("git rev-parse HEAD") - - path = os.path.join(model_dir, "githash") - if os.path.exists(path): - saved_hash = open(path).read() - if saved_hash != cur_hash: - logger.warn("git hash values are different. {}(saved) != {}(current)".format( - saved_hash[:8], cur_hash[:8])) - else: - open(path, "w").write(cur_hash) - - -def get_logger(model_dir, filename="train.log"): - global logger - logger = logging.getLogger(os.path.basename(model_dir)) - logger.setLevel(logging.DEBUG) - - formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s") - if not os.path.exists(model_dir): - os.makedirs(model_dir) - h = logging.FileHandler(os.path.join(model_dir, filename)) - h.setLevel(logging.DEBUG) - h.setFormatter(formatter) - logger.addHandler(h) - return logger - - -class HParams(): - def __init__(self, **kwargs): - for k, v in kwargs.items(): - if type(v) == dict: - v = HParams(**v) - self[k] = v - - def keys(self): - return self.__dict__.keys() - - def items(self): - return self.__dict__.items() - - def values(self): - return self.__dict__.values() - - def __len__(self): - return len(self.__dict__) - - def __getitem__(self, key): - return getattr(self, key) - - def __setitem__(self, key, value): - return setattr(self, key, value) - - def __contains__(self, key): - return key in self.__dict__ - - def __repr__(self): - return self.__dict__.__repr__() diff --git a/spaces/XzJosh/ShanBao-Bert-VITS2/monotonic_align/core.py b/spaces/XzJosh/ShanBao-Bert-VITS2/monotonic_align/core.py deleted file mode 100644 index dddc688d76172b880054e544b7a217acd013f14f..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/ShanBao-Bert-VITS2/monotonic_align/core.py +++ /dev/null @@ -1,35 +0,0 @@ -import numba - - -@numba.jit(numba.void(numba.int32[:,:,::1], numba.float32[:,:,::1], numba.int32[::1], numba.int32[::1]), nopython=True, nogil=True) -def maximum_path_jit(paths, values, t_ys, t_xs): - b = paths.shape[0] - max_neg_val=-1e9 - for i in range(int(b)): - path = paths[i] - value = values[i] - t_y = t_ys[i] - t_x = t_xs[i] - - v_prev = v_cur = 0.0 - index = t_x - 1 - - for y in 
range(t_y): - for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)): - if x == y: - v_cur = max_neg_val - else: - v_cur = value[y-1, x] - if x == 0: - if y == 0: - v_prev = 0. - else: - v_prev = max_neg_val - else: - v_prev = value[y-1, x-1] - value[y, x] += max(v_prev, v_cur) - - for y in range(t_y - 1, -1, -1): - path[y, index] = 1 - if index != 0 and (index == y or value[y-1, index] < value[y-1, index-1]): - index = index - 1 diff --git a/spaces/XzJosh/maimai-Bert-VITS2/bert_gen.py b/spaces/XzJosh/maimai-Bert-VITS2/bert_gen.py deleted file mode 100644 index 44814715396ffc3abe84a12c74d66293c356eb4f..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/maimai-Bert-VITS2/bert_gen.py +++ /dev/null @@ -1,53 +0,0 @@ -import torch -from torch.utils.data import DataLoader -from multiprocessing import Pool -import commons -import utils -from data_utils import TextAudioSpeakerLoader, TextAudioSpeakerCollate -from tqdm import tqdm -import warnings - -from text import cleaned_text_to_sequence, get_bert - -config_path = 'configs/config.json' -hps = utils.get_hparams_from_file(config_path) - -def process_line(line): - _id, spk, language_str, text, phones, tone, word2ph = line.strip().split("|") - phone = phones.split(" ") - tone = [int(i) for i in tone.split(" ")] - word2ph = [int(i) for i in word2ph.split(" ")] - w2pho = [i for i in word2ph] - word2ph = [i for i in word2ph] - phone, tone, language = cleaned_text_to_sequence(phone, tone, language_str) - - if hps.data.add_blank: - phone = commons.intersperse(phone, 0) - tone = commons.intersperse(tone, 0) - language = commons.intersperse(language, 0) - for i in range(len(word2ph)): - word2ph[i] = word2ph[i] * 2 - word2ph[0] += 1 - wav_path = f'{_id}' - - bert_path = wav_path.replace(".wav", ".bert.pt") - try: - bert = torch.load(bert_path) - assert bert.shape[-1] == len(phone) - except: - bert = get_bert(text, word2ph, language_str) - assert bert.shape[-1] == len(phone) - torch.save(bert, bert_path) - - -if __name__ == '__main__': - lines = [] - with open(hps.data.training_files, encoding='utf-8' ) as f: - lines.extend(f.readlines()) - - with open(hps.data.validation_files, encoding='utf-8' ) as f: - lines.extend(f.readlines()) - - with Pool(processes=12) as pool: #A100 40GB suitable config,if coom,please decrease the processess number. - for _ in tqdm(pool.imap_unordered(process_line, lines)): - pass diff --git a/spaces/XzJosh/maimai-Bert-VITS2/commons.py b/spaces/XzJosh/maimai-Bert-VITS2/commons.py deleted file mode 100644 index 9ad0444b61cbadaa388619986c2889c707d873ce..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/maimai-Bert-VITS2/commons.py +++ /dev/null @@ -1,161 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. 
* logs_q) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d( - length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = ( - math.log(float(max_timescale) / float(min_timescale)) / - (num_timescales - 1)) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2,3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): 
- parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1. / norm_type) - return total_norm diff --git a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion.py b/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion.py deleted file mode 100644 index 0e903cb836a32c85f442f30ccdea08cfc67425dd..0000000000000000000000000000000000000000 --- a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion.py +++ /dev/null @@ -1,711 +0,0 @@ -# Copyright 2022 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import inspect -from typing import List, Optional, Tuple, Union - -import torch -import torch.nn as nn -import torch.utils.checkpoint - -from transformers.activations import ACT2FN -from transformers.configuration_utils import PretrainedConfig -from transformers.modeling_outputs import BaseModelOutput -from transformers.modeling_utils import PreTrainedModel -from transformers.tokenization_utils import PreTrainedTokenizer -from transformers.utils import logging - -from ...models import AutoencoderKL, UNet2DConditionModel, UNet2DModel, VQModel -from ...pipeline_utils import DiffusionPipeline, ImagePipelineOutput -from ...schedulers import DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler - - -class LDMTextToImagePipeline(DiffusionPipeline): - r""" - This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the - library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) - - Parameters: - vqvae ([`VQModel`]): - Vector-quantized (VQ) Model to encode and decode images to and from latent representations. - bert ([`LDMBertModel`]): - Text-encoder model based on [BERT](https://huggingface.co/docs/transformers/model_doc/bert) architecture. - tokenizer (`transformers.BertTokenizer`): - Tokenizer of class - [BertTokenizer](https://huggingface.co/docs/transformers/model_doc/bert#transformers.BertTokenizer). - unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents. - scheduler ([`SchedulerMixin`]): - A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of - [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`]. 
- """ - - def __init__( - self, - vqvae: Union[VQModel, AutoencoderKL], - bert: PreTrainedModel, - tokenizer: PreTrainedTokenizer, - unet: Union[UNet2DModel, UNet2DConditionModel], - scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler], - ): - super().__init__() - self.register_modules(vqvae=vqvae, bert=bert, tokenizer=tokenizer, unet=unet, scheduler=scheduler) - self.vae_scale_factor = 2 ** (len(self.vqvae.config.block_out_channels) - 1) - - @torch.no_grad() - def __call__( - self, - prompt: Union[str, List[str]], - height: Optional[int] = None, - width: Optional[int] = None, - num_inference_steps: Optional[int] = 50, - guidance_scale: Optional[float] = 1.0, - eta: Optional[float] = 0.0, - generator: Optional[torch.Generator] = None, - output_type: Optional[str] = "pil", - return_dict: bool = True, - **kwargs, - ) -> Union[Tuple, ImagePipelineOutput]: - r""" - Args: - prompt (`str` or `List[str]`): - The prompt or prompts to guide the image generation. - height (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor): - The height in pixels of the generated image. - width (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor): - The width in pixels of the generated image. - num_inference_steps (`int`, *optional*, defaults to 50): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. - guidance_scale (`float`, *optional*, defaults to 1.0): - Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598). - `guidance_scale` is defined as `w` of equation 2. of [Imagen - Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale > - 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt` at - the, usually at the expense of lower image quality. - generator (`torch.Generator`, *optional*): - A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation - deterministic. - output_type (`str`, *optional*, defaults to `"pil"`): - The output format of the generate image. Choose between - [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`. - return_dict (`bool`, *optional*): - Whether or not to return a [`~pipeline_utils.ImagePipelineOutput`] instead of a plain tuple. - - Returns: - [`~pipeline_utils.ImagePipelineOutput`] or `tuple`: [`~pipelines.utils.ImagePipelineOutput`] if - `return_dict` is True, otherwise a `tuple. When returning a tuple, the first element is a list with the - generated images. - """ - # 0. 
Default height and width to unet - height = height or self.unet.config.sample_size * self.vae_scale_factor - width = width or self.unet.config.sample_size * self.vae_scale_factor - - if isinstance(prompt, str): - batch_size = 1 - elif isinstance(prompt, list): - batch_size = len(prompt) - else: - raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") - - if height % 8 != 0 or width % 8 != 0: - raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.") - - # get unconditional embeddings for classifier free guidance - if guidance_scale != 1.0: - uncond_input = self.tokenizer([""] * batch_size, padding="max_length", max_length=77, return_tensors="pt") - uncond_embeddings = self.bert(uncond_input.input_ids.to(self.device))[0] - - # get prompt text embeddings - text_input = self.tokenizer(prompt, padding="max_length", max_length=77, return_tensors="pt") - text_embeddings = self.bert(text_input.input_ids.to(self.device))[0] - - latents = torch.randn( - (batch_size, self.unet.in_channels, height // 8, width // 8), - generator=generator, - ) - latents = latents.to(self.device) - - self.scheduler.set_timesteps(num_inference_steps) - - # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature - accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) - - extra_kwargs = {} - if accepts_eta: - extra_kwargs["eta"] = eta - - for t in self.progress_bar(self.scheduler.timesteps): - if guidance_scale == 1.0: - # guidance_scale of 1 means no guidance - latents_input = latents - context = text_embeddings - else: - # For classifier free guidance, we need to do two forward passes. - # Here we concatenate the unconditional and text embeddings into a single batch - # to avoid doing two forward passes - latents_input = torch.cat([latents] * 2) - context = torch.cat([uncond_embeddings, text_embeddings]) - - # predict the noise residual - noise_pred = self.unet(latents_input, t, encoder_hidden_states=context).sample - # perform guidance - if guidance_scale != 1.0: - noise_pred_uncond, noise_prediction_text = noise_pred.chunk(2) - noise_pred = noise_pred_uncond + guidance_scale * (noise_prediction_text - noise_pred_uncond) - - # compute the previous noisy sample x_t -> x_t-1 - latents = self.scheduler.step(noise_pred, t, latents, **extra_kwargs).prev_sample - - # scale and decode the image latents with vae - latents = 1 / 0.18215 * latents - image = self.vqvae.decode(latents).sample - - image = (image / 2 + 0.5).clamp(0, 1) - image = image.cpu().permute(0, 2, 3, 1).numpy() - if output_type == "pil": - image = self.numpy_to_pil(image) - - if not return_dict: - return (image,) - - return ImagePipelineOutput(images=image) - - -################################################################################ -# Code for the text transformer model -################################################################################ -""" PyTorch LDMBERT model.""" - - -logger = logging.get_logger(__name__) - -LDMBERT_PRETRAINED_MODEL_ARCHIVE_LIST = [ - "ldm-bert", - # See all LDMBert models at https://huggingface.co/models?filter=ldmbert -] - - -LDMBERT_PRETRAINED_CONFIG_ARCHIVE_MAP = { - "ldm-bert": "https://huggingface.co/valhalla/ldm-bert/blob/main/config.json", -} - - -""" LDMBERT model configuration""" - - -class LDMBertConfig(PretrainedConfig): - model_type = "ldmbert" - keys_to_ignore_at_inference = ["past_key_values"] - attribute_map = {"num_attention_heads": "encoder_attention_heads", 
"hidden_size": "d_model"} - - def __init__( - self, - vocab_size=30522, - max_position_embeddings=77, - encoder_layers=32, - encoder_ffn_dim=5120, - encoder_attention_heads=8, - head_dim=64, - encoder_layerdrop=0.0, - activation_function="gelu", - d_model=1280, - dropout=0.1, - attention_dropout=0.0, - activation_dropout=0.0, - init_std=0.02, - classifier_dropout=0.0, - scale_embedding=False, - use_cache=True, - pad_token_id=0, - **kwargs, - ): - self.vocab_size = vocab_size - self.max_position_embeddings = max_position_embeddings - self.d_model = d_model - self.encoder_ffn_dim = encoder_ffn_dim - self.encoder_layers = encoder_layers - self.encoder_attention_heads = encoder_attention_heads - self.head_dim = head_dim - self.dropout = dropout - self.attention_dropout = attention_dropout - self.activation_dropout = activation_dropout - self.activation_function = activation_function - self.init_std = init_std - self.encoder_layerdrop = encoder_layerdrop - self.classifier_dropout = classifier_dropout - self.use_cache = use_cache - self.num_hidden_layers = encoder_layers - self.scale_embedding = scale_embedding # scale factor will be sqrt(d_model) if True - - super().__init__(pad_token_id=pad_token_id, **kwargs) - - -def _expand_mask(mask: torch.Tensor, dtype: torch.dtype, tgt_len: Optional[int] = None): - """ - Expands attention_mask from `[bsz, seq_len]` to `[bsz, 1, tgt_seq_len, src_seq_len]`. - """ - bsz, src_len = mask.size() - tgt_len = tgt_len if tgt_len is not None else src_len - - expanded_mask = mask[:, None, None, :].expand(bsz, 1, tgt_len, src_len).to(dtype) - - inverted_mask = 1.0 - expanded_mask - - return inverted_mask.masked_fill(inverted_mask.to(torch.bool), torch.finfo(dtype).min) - - -# Copied from transformers.models.bart.modeling_bart.BartAttention with Bart->LDMBert -class LDMBertAttention(nn.Module): - """Multi-headed attention from 'Attention Is All You Need' paper""" - - def __init__( - self, - embed_dim: int, - num_heads: int, - head_dim: int, - dropout: float = 0.0, - is_decoder: bool = False, - bias: bool = False, - ): - super().__init__() - self.embed_dim = embed_dim - self.num_heads = num_heads - self.dropout = dropout - self.head_dim = head_dim - self.inner_dim = head_dim * num_heads - - self.scaling = self.head_dim**-0.5 - self.is_decoder = is_decoder - - self.k_proj = nn.Linear(embed_dim, self.inner_dim, bias=bias) - self.v_proj = nn.Linear(embed_dim, self.inner_dim, bias=bias) - self.q_proj = nn.Linear(embed_dim, self.inner_dim, bias=bias) - self.out_proj = nn.Linear(self.inner_dim, embed_dim) - - def _shape(self, tensor: torch.Tensor, seq_len: int, bsz: int): - return tensor.view(bsz, seq_len, self.num_heads, self.head_dim).transpose(1, 2).contiguous() - - def forward( - self, - hidden_states: torch.Tensor, - key_value_states: Optional[torch.Tensor] = None, - past_key_value: Optional[Tuple[torch.Tensor]] = None, - attention_mask: Optional[torch.Tensor] = None, - layer_head_mask: Optional[torch.Tensor] = None, - output_attentions: bool = False, - ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]: - """Input shape: Batch x Time x Channel""" - - # if key_value_states are provided this layer is used as a cross-attention layer - # for the decoder - is_cross_attention = key_value_states is not None - - bsz, tgt_len, _ = hidden_states.size() - - # get query proj - query_states = self.q_proj(hidden_states) * self.scaling - # get key, value proj - if is_cross_attention and past_key_value is not None: - # reuse k,v, cross_attentions - 
key_states = past_key_value[0] - value_states = past_key_value[1] - elif is_cross_attention: - # cross_attentions - key_states = self._shape(self.k_proj(key_value_states), -1, bsz) - value_states = self._shape(self.v_proj(key_value_states), -1, bsz) - elif past_key_value is not None: - # reuse k, v, self_attention - key_states = self._shape(self.k_proj(hidden_states), -1, bsz) - value_states = self._shape(self.v_proj(hidden_states), -1, bsz) - key_states = torch.cat([past_key_value[0], key_states], dim=2) - value_states = torch.cat([past_key_value[1], value_states], dim=2) - else: - # self_attention - key_states = self._shape(self.k_proj(hidden_states), -1, bsz) - value_states = self._shape(self.v_proj(hidden_states), -1, bsz) - - if self.is_decoder: - # if cross_attention save Tuple(torch.Tensor, torch.Tensor) of all cross attention key/value_states. - # Further calls to cross_attention layer can then reuse all cross-attention - # key/value_states (first "if" case) - # if uni-directional self-attention (decoder) save Tuple(torch.Tensor, torch.Tensor) of - # all previous decoder key/value_states. Further calls to uni-directional self-attention - # can concat previous decoder key/value_states to current projected key/value_states (third "elif" case) - # if encoder bi-directional self-attention `past_key_value` is always `None` - past_key_value = (key_states, value_states) - - proj_shape = (bsz * self.num_heads, -1, self.head_dim) - query_states = self._shape(query_states, tgt_len, bsz).view(*proj_shape) - key_states = key_states.view(*proj_shape) - value_states = value_states.view(*proj_shape) - - src_len = key_states.size(1) - attn_weights = torch.bmm(query_states, key_states.transpose(1, 2)) - - if attn_weights.size() != (bsz * self.num_heads, tgt_len, src_len): - raise ValueError( - f"Attention weights should be of size {(bsz * self.num_heads, tgt_len, src_len)}, but is" - f" {attn_weights.size()}" - ) - - if attention_mask is not None: - if attention_mask.size() != (bsz, 1, tgt_len, src_len): - raise ValueError( - f"Attention mask should be of size {(bsz, 1, tgt_len, src_len)}, but is {attention_mask.size()}" - ) - attn_weights = attn_weights.view(bsz, self.num_heads, tgt_len, src_len) + attention_mask - attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len) - - attn_weights = nn.functional.softmax(attn_weights, dim=-1) - - if layer_head_mask is not None: - if layer_head_mask.size() != (self.num_heads,): - raise ValueError( - f"Head mask for a single layer should be of size {(self.num_heads,)}, but is" - f" {layer_head_mask.size()}" - ) - attn_weights = layer_head_mask.view(1, -1, 1, 1) * attn_weights.view(bsz, self.num_heads, tgt_len, src_len) - attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len) - - if output_attentions: - # this operation is a bit awkward, but it's required to - # make sure that attn_weights keeps its gradient. 
- # In order to do so, attn_weights have to be reshaped - # twice and have to be reused in the following - attn_weights_reshaped = attn_weights.view(bsz, self.num_heads, tgt_len, src_len) - attn_weights = attn_weights_reshaped.view(bsz * self.num_heads, tgt_len, src_len) - else: - attn_weights_reshaped = None - - attn_probs = nn.functional.dropout(attn_weights, p=self.dropout, training=self.training) - - attn_output = torch.bmm(attn_probs, value_states) - - if attn_output.size() != (bsz * self.num_heads, tgt_len, self.head_dim): - raise ValueError( - f"`attn_output` should be of size {(bsz, self.num_heads, tgt_len, self.head_dim)}, but is" - f" {attn_output.size()}" - ) - - attn_output = attn_output.view(bsz, self.num_heads, tgt_len, self.head_dim) - attn_output = attn_output.transpose(1, 2) - - # Use the `embed_dim` from the config (stored in the class) rather than `hidden_state` because `attn_output` can be - # partitioned across GPUs when using tensor-parallelism. - attn_output = attn_output.reshape(bsz, tgt_len, self.inner_dim) - - attn_output = self.out_proj(attn_output) - - return attn_output, attn_weights_reshaped, past_key_value - - -class LDMBertEncoderLayer(nn.Module): - def __init__(self, config: LDMBertConfig): - super().__init__() - self.embed_dim = config.d_model - self.self_attn = LDMBertAttention( - embed_dim=self.embed_dim, - num_heads=config.encoder_attention_heads, - head_dim=config.head_dim, - dropout=config.attention_dropout, - ) - self.self_attn_layer_norm = nn.LayerNorm(self.embed_dim) - self.dropout = config.dropout - self.activation_fn = ACT2FN[config.activation_function] - self.activation_dropout = config.activation_dropout - self.fc1 = nn.Linear(self.embed_dim, config.encoder_ffn_dim) - self.fc2 = nn.Linear(config.encoder_ffn_dim, self.embed_dim) - self.final_layer_norm = nn.LayerNorm(self.embed_dim) - - def forward( - self, - hidden_states: torch.FloatTensor, - attention_mask: torch.FloatTensor, - layer_head_mask: torch.FloatTensor, - output_attentions: Optional[bool] = False, - ) -> Tuple[torch.FloatTensor, Optional[torch.FloatTensor]]: - """ - Args: - hidden_states (`torch.FloatTensor`): input to the layer of shape `(seq_len, batch, embed_dim)` - attention_mask (`torch.FloatTensor`): attention mask of size - `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. - layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size - `(encoder_attention_heads,)`. - output_attentions (`bool`, *optional*): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under - returned tensors for more detail. 
- """ - residual = hidden_states - hidden_states = self.self_attn_layer_norm(hidden_states) - hidden_states, attn_weights, _ = self.self_attn( - hidden_states=hidden_states, - attention_mask=attention_mask, - layer_head_mask=layer_head_mask, - output_attentions=output_attentions, - ) - hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training) - hidden_states = residual + hidden_states - - residual = hidden_states - hidden_states = self.final_layer_norm(hidden_states) - hidden_states = self.activation_fn(self.fc1(hidden_states)) - hidden_states = nn.functional.dropout(hidden_states, p=self.activation_dropout, training=self.training) - hidden_states = self.fc2(hidden_states) - hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training) - hidden_states = residual + hidden_states - - if hidden_states.dtype == torch.float16 and ( - torch.isinf(hidden_states).any() or torch.isnan(hidden_states).any() - ): - clamp_value = torch.finfo(hidden_states.dtype).max - 1000 - hidden_states = torch.clamp(hidden_states, min=-clamp_value, max=clamp_value) - - outputs = (hidden_states,) - - if output_attentions: - outputs += (attn_weights,) - - return outputs - - -# Copied from transformers.models.bart.modeling_bart.BartPretrainedModel with Bart->LDMBert -class LDMBertPreTrainedModel(PreTrainedModel): - config_class = LDMBertConfig - base_model_prefix = "model" - _supports_gradient_checkpointing = True - _keys_to_ignore_on_load_unexpected = [r"encoder\.version", r"decoder\.version"] - - def _init_weights(self, module): - std = self.config.init_std - if isinstance(module, nn.Linear): - module.weight.data.normal_(mean=0.0, std=std) - if module.bias is not None: - module.bias.data.zero_() - elif isinstance(module, nn.Embedding): - module.weight.data.normal_(mean=0.0, std=std) - if module.padding_idx is not None: - module.weight.data[module.padding_idx].zero_() - - def _set_gradient_checkpointing(self, module, value=False): - if isinstance(module, (LDMBertEncoder,)): - module.gradient_checkpointing = value - - @property - def dummy_inputs(self): - pad_token = self.config.pad_token_id - input_ids = torch.tensor([[0, 6, 10, 4, 2], [0, 8, 12, 2, pad_token]], device=self.device) - dummy_inputs = { - "attention_mask": input_ids.ne(pad_token), - "input_ids": input_ids, - } - return dummy_inputs - - -class LDMBertEncoder(LDMBertPreTrainedModel): - """ - Transformer encoder consisting of *config.encoder_layers* self attention layers. Each layer is a - [`LDMBertEncoderLayer`]. 
- - Args: - config: LDMBertConfig - embed_tokens (nn.Embedding): output embedding - """ - - def __init__(self, config: LDMBertConfig): - super().__init__(config) - - self.dropout = config.dropout - - embed_dim = config.d_model - self.padding_idx = config.pad_token_id - self.max_source_positions = config.max_position_embeddings - - self.embed_tokens = nn.Embedding(config.vocab_size, embed_dim) - self.embed_positions = nn.Embedding(config.max_position_embeddings, embed_dim) - self.layers = nn.ModuleList([LDMBertEncoderLayer(config) for _ in range(config.encoder_layers)]) - self.layer_norm = nn.LayerNorm(embed_dim) - - self.gradient_checkpointing = False - # Initialize weights and apply final processing - self.post_init() - - def get_input_embeddings(self): - return self.embed_tokens - - def set_input_embeddings(self, value): - self.embed_tokens = value - - def forward( - self, - input_ids: torch.LongTensor = None, - attention_mask: Optional[torch.Tensor] = None, - position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.Tensor] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, BaseModelOutput]: - r""" - Args: - input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`): - Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you - provide it. - - Indices can be obtained using [`BartTokenizer`]. See [`PreTrainedTokenizer.encode`] and - [`PreTrainedTokenizer.__call__`] for details. - - [What are input IDs?](../glossary#input-ids) - attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*): - Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - - [What are attention masks?](../glossary#attention-mask) - head_mask (`torch.Tensor` of shape `(encoder_layers, encoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - - inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*): - Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. - This is useful if you want more control over how to convert `input_ids` indices into associated vectors - than the model's internal embedding lookup matrix. - output_attentions (`bool`, *optional*): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under - returned tensors for more detail. - output_hidden_states (`bool`, *optional*): - Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors - for more detail. - return_dict (`bool`, *optional*): - Whether or not to return a [`~utils.BaseModelOutput`] instead of a plain tuple. 
- """ - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - # retrieve input_ids and inputs_embeds - if input_ids is not None and inputs_embeds is not None: - raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time") - elif input_ids is not None: - input_shape = input_ids.size() - input_ids = input_ids.view(-1, input_shape[-1]) - elif inputs_embeds is not None: - input_shape = inputs_embeds.size()[:-1] - else: - raise ValueError("You have to specify either input_ids or inputs_embeds") - - if inputs_embeds is None: - inputs_embeds = self.embed_tokens(input_ids) - - seq_len = input_shape[1] - if position_ids is None: - position_ids = torch.arange(seq_len, dtype=torch.long, device=inputs_embeds.device).expand((1, -1)) - embed_pos = self.embed_positions(position_ids) - - hidden_states = inputs_embeds + embed_pos - hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training) - - # expand attention_mask - if attention_mask is not None: - # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] - attention_mask = _expand_mask(attention_mask, inputs_embeds.dtype) - - encoder_states = () if output_hidden_states else None - all_attentions = () if output_attentions else None - - # check if head_mask has a correct number of layers specified if desired - if head_mask is not None: - if head_mask.size()[0] != (len(self.layers)): - raise ValueError( - f"The head_mask should be specified for {len(self.layers)} layers, but it is for" - f" {head_mask.size()[0]}." 
- ) - - for idx, encoder_layer in enumerate(self.layers): - if output_hidden_states: - encoder_states = encoder_states + (hidden_states,) - if self.gradient_checkpointing and self.training: - - def create_custom_forward(module): - def custom_forward(*inputs): - return module(*inputs, output_attentions) - - return custom_forward - - layer_outputs = torch.utils.checkpoint.checkpoint( - create_custom_forward(encoder_layer), - hidden_states, - attention_mask, - (head_mask[idx] if head_mask is not None else None), - ) - else: - layer_outputs = encoder_layer( - hidden_states, - attention_mask, - layer_head_mask=(head_mask[idx] if head_mask is not None else None), - output_attentions=output_attentions, - ) - - hidden_states = layer_outputs[0] - - if output_attentions: - all_attentions = all_attentions + (layer_outputs[1],) - - hidden_states = self.layer_norm(hidden_states) - - if output_hidden_states: - encoder_states = encoder_states + (hidden_states,) - - if not return_dict: - return tuple(v for v in [hidden_states, encoder_states, all_attentions] if v is not None) - return BaseModelOutput( - last_hidden_state=hidden_states, hidden_states=encoder_states, attentions=all_attentions - ) - - -class LDMBertModel(LDMBertPreTrainedModel): - _no_split_modules = [] - - def __init__(self, config: LDMBertConfig): - super().__init__(config) - self.model = LDMBertEncoder(config) - self.to_logits = nn.Linear(config.hidden_size, config.vocab_size) - - def forward( - self, - input_ids=None, - attention_mask=None, - position_ids=None, - head_mask=None, - inputs_embeds=None, - output_attentions=None, - output_hidden_states=None, - return_dict=None, - ): - outputs = self.model( - input_ids, - attention_mask=attention_mask, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - return outputs diff --git a/spaces/Zulqrnain/NewsSummarizer/app.py b/spaces/Zulqrnain/NewsSummarizer/app.py deleted file mode 100644 index f007d81dbbdb65b216f152f61a1f4c076d00d492..0000000000000000000000000000000000000000 --- a/spaces/Zulqrnain/NewsSummarizer/app.py +++ /dev/null @@ -1,48 +0,0 @@ -import pickle -from transformers import (T5TokenizerFast as T5Tokenizer) - -import gradio as gr - - -MODEL_NAME = 't5-base' -tokenizer = T5Tokenizer.from_pretrained(MODEL_NAME) - - -model = pickle.load(open('text_summarization_model.pkl', 'rb')) - -def summarizeText(text): - text_encoding = tokenizer( - text, - max_length=512, - padding='max_length', - truncation=True, - return_attention_mask=True, - add_special_tokens=True, - return_tensors='pt' - ) - generated_ids = model.generate( - input_ids=text_encoding['input_ids'], - attention_mask=text_encoding['attention_mask'], - max_length=150, - num_beams=2, - repetition_penalty=2.5, - length_penalty=1.0, - early_stopping=True - ) - - preds = [ - tokenizer.decode(gen_id, skip_special_tokens=True, clean_up_tokenization_spaces=True) - for gen_id in generated_ids - ] - return "".join(preds) - - -interface = gr.Interface(fn=summarizeText, - inputs=["text"], - outputs=[gr.inputs.Textbox(label='News Summary Output')], - title='News Summary Generator') - - -interface.launch(inline=False) - - diff --git a/spaces/abhishek/first-order-motion-model/demo.py b/spaces/abhishek/first-order-motion-model/demo.py deleted file mode 100644 index 948c24ccdd714baaf3af98a4f594a0536e8bfffe..0000000000000000000000000000000000000000 --- 
a/spaces/abhishek/first-order-motion-model/demo.py +++ /dev/null @@ -1,157 +0,0 @@ -import matplotlib -matplotlib.use('Agg') -import os, sys -import yaml -from argparse import ArgumentParser -from tqdm import tqdm - -import imageio -import numpy as np -from skimage.transform import resize -from skimage import img_as_ubyte -import torch -from sync_batchnorm import DataParallelWithCallback - -from modules.generator import OcclusionAwareGenerator -from modules.keypoint_detector import KPDetector -from animate import normalize_kp -from scipy.spatial import ConvexHull - - -if sys.version_info[0] < 3: - raise Exception("You must use Python 3 or higher. Recommended version is Python 3.7") - -def load_checkpoints(config_path, checkpoint_path, cpu=False): - - with open(config_path) as f: - config = yaml.load(f) - - generator = OcclusionAwareGenerator(**config['model_params']['generator_params'], - **config['model_params']['common_params']) - if not cpu: - generator.cuda() - - kp_detector = KPDetector(**config['model_params']['kp_detector_params'], - **config['model_params']['common_params']) - if not cpu: - kp_detector.cuda() - - if cpu: - checkpoint = torch.load(checkpoint_path, map_location=torch.device('cpu')) - else: - checkpoint = torch.load(checkpoint_path) - - generator.load_state_dict(checkpoint['generator']) - kp_detector.load_state_dict(checkpoint['kp_detector']) - - if not cpu: - generator = DataParallelWithCallback(generator) - kp_detector = DataParallelWithCallback(kp_detector) - - generator.eval() - kp_detector.eval() - - return generator, kp_detector - - -def make_animation(source_image, driving_video, generator, kp_detector, relative=True, adapt_movement_scale=True, cpu=False): - with torch.no_grad(): - predictions = [] - source = torch.tensor(source_image[np.newaxis].astype(np.float32)).permute(0, 3, 1, 2) - if not cpu: - source = source.cuda() - driving = torch.tensor(np.array(driving_video)[np.newaxis].astype(np.float32)).permute(0, 4, 1, 2, 3) - kp_source = kp_detector(source) - kp_driving_initial = kp_detector(driving[:, :, 0]) - - for frame_idx in tqdm(range(driving.shape[2])): - driving_frame = driving[:, :, frame_idx] - if not cpu: - driving_frame = driving_frame.cuda() - kp_driving = kp_detector(driving_frame) - kp_norm = normalize_kp(kp_source=kp_source, kp_driving=kp_driving, - kp_driving_initial=kp_driving_initial, use_relative_movement=relative, - use_relative_jacobian=relative, adapt_movement_scale=adapt_movement_scale) - out = generator(source, kp_source=kp_source, kp_driving=kp_norm) - - predictions.append(np.transpose(out['prediction'].data.cpu().numpy(), [0, 2, 3, 1])[0]) - return predictions - -def find_best_frame(source, driving, cpu=False): - import face_alignment - - def normalize_kp(kp): - kp = kp - kp.mean(axis=0, keepdims=True) - area = ConvexHull(kp[:, :2]).volume - area = np.sqrt(area) - kp[:, :2] = kp[:, :2] / area - return kp - - fa = face_alignment.FaceAlignment(face_alignment.LandmarksType._2D, flip_input=True, - device='cpu' if cpu else 'cuda') - kp_source = fa.get_landmarks(255 * source)[0] - kp_source = normalize_kp(kp_source) - norm = float('inf') - frame_num = 0 - for i, image in tqdm(enumerate(driving)): - kp_driving = fa.get_landmarks(255 * image)[0] - kp_driving = normalize_kp(kp_driving) - new_norm = (np.abs(kp_source - kp_driving) ** 2).sum() - if new_norm < norm: - norm = new_norm - frame_num = i - return frame_num - -if __name__ == "__main__": - parser = ArgumentParser() - parser.add_argument("--config", required=True, help="path to 
config") - parser.add_argument("--checkpoint", default='vox-cpk.pth.tar', help="path to checkpoint to restore") - - parser.add_argument("--source_image", default='sup-mat/source.png', help="path to source image") - parser.add_argument("--driving_video", default='sup-mat/source.png', help="path to driving video") - parser.add_argument("--result_video", default='result.mp4', help="path to output") - - parser.add_argument("--relative", dest="relative", action="store_true", help="use relative or absolute keypoint coordinates") - parser.add_argument("--adapt_scale", dest="adapt_scale", action="store_true", help="adapt movement scale based on convex hull of keypoints") - - parser.add_argument("--find_best_frame", dest="find_best_frame", action="store_true", - help="Generate from the frame that is the most alligned with source. (Only for faces, requires face_aligment lib)") - - parser.add_argument("--best_frame", dest="best_frame", type=int, default=None, - help="Set frame to start from.") - - parser.add_argument("--cpu", dest="cpu", action="store_true", help="cpu mode.") - - - parser.set_defaults(relative=False) - parser.set_defaults(adapt_scale=False) - - opt = parser.parse_args() - - source_image = imageio.imread(opt.source_image) - reader = imageio.get_reader(opt.driving_video) - fps = reader.get_meta_data()['fps'] - driving_video = [] - try: - for im in reader: - driving_video.append(im) - except RuntimeError: - pass - reader.close() - - source_image = resize(source_image, (256, 256))[..., :3] - driving_video = [resize(frame, (256, 256))[..., :3] for frame in driving_video] - generator, kp_detector = load_checkpoints(config_path=opt.config, checkpoint_path=opt.checkpoint, cpu=opt.cpu) - - if opt.find_best_frame or opt.best_frame is not None: - i = opt.best_frame if opt.best_frame is not None else find_best_frame(source_image, driving_video, cpu=opt.cpu) - print ("Best frame: " + str(i)) - driving_forward = driving_video[i:] - driving_backward = driving_video[:(i+1)][::-1] - predictions_forward = make_animation(source_image, driving_forward, generator, kp_detector, relative=opt.relative, adapt_movement_scale=opt.adapt_scale, cpu=opt.cpu) - predictions_backward = make_animation(source_image, driving_backward, generator, kp_detector, relative=opt.relative, adapt_movement_scale=opt.adapt_scale, cpu=opt.cpu) - predictions = predictions_backward[::-1] + predictions_forward[1:] - else: - predictions = make_animation(source_image, driving_video, generator, kp_detector, relative=opt.relative, adapt_movement_scale=opt.adapt_scale, cpu=opt.cpu) - imageio.mimsave(opt.result_video, [img_as_ubyte(frame) for frame in predictions], fps=fps) - diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/roi_heads/bbox_heads/scnet_bbox_head.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/roi_heads/bbox_heads/scnet_bbox_head.py deleted file mode 100644 index 35758f4f4e3b2bddd460edb8a7f482b3a9da2919..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/roi_heads/bbox_heads/scnet_bbox_head.py +++ /dev/null @@ -1,76 +0,0 @@ -from mmdet.models.builder import HEADS -from .convfc_bbox_head import ConvFCBBoxHead - - -@HEADS.register_module() -class SCNetBBoxHead(ConvFCBBoxHead): - """BBox head for `SCNet `_. - - This inherits ``ConvFCBBoxHead`` with modified forward() function, allow us - to get intermediate shared feature. 
- """ - - def _forward_shared(self, x): - """Forward function for shared part.""" - if self.num_shared_convs > 0: - for conv in self.shared_convs: - x = conv(x) - - if self.num_shared_fcs > 0: - if self.with_avg_pool: - x = self.avg_pool(x) - - x = x.flatten(1) - - for fc in self.shared_fcs: - x = self.relu(fc(x)) - - return x - - def _forward_cls_reg(self, x): - """Forward function for classification and regression parts.""" - x_cls = x - x_reg = x - - for conv in self.cls_convs: - x_cls = conv(x_cls) - if x_cls.dim() > 2: - if self.with_avg_pool: - x_cls = self.avg_pool(x_cls) - x_cls = x_cls.flatten(1) - for fc in self.cls_fcs: - x_cls = self.relu(fc(x_cls)) - - for conv in self.reg_convs: - x_reg = conv(x_reg) - if x_reg.dim() > 2: - if self.with_avg_pool: - x_reg = self.avg_pool(x_reg) - x_reg = x_reg.flatten(1) - for fc in self.reg_fcs: - x_reg = self.relu(fc(x_reg)) - - cls_score = self.fc_cls(x_cls) if self.with_cls else None - bbox_pred = self.fc_reg(x_reg) if self.with_reg else None - - return cls_score, bbox_pred - - def forward(self, x, return_shared_feat=False): - """Forward function. - - Args: - x (Tensor): input features - return_shared_feat (bool): If True, return cls-reg-shared feature. - - Return: - out (tuple[Tensor]): contain ``cls_score`` and ``bbox_pred``, - if ``return_shared_feat`` is True, append ``x_shared`` to the - returned tuple. - """ - x_shared = self._forward_shared(x) - out = self._forward_cls_reg(x_shared) - - if return_shared_feat: - out += (x_shared, ) - - return out diff --git a/spaces/akhaliq/Kapao/utils/plots.py b/spaces/akhaliq/Kapao/utils/plots.py deleted file mode 100644 index d87539d51416de2fc0b2f5a389d5c7a9d86e389d..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/Kapao/utils/plots.py +++ /dev/null @@ -1,437 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -""" -Plotting utils -""" - -import math -from copy import copy -from pathlib import Path - -import cv2 -import matplotlib -import matplotlib.pyplot as plt -import numpy as np -import pandas as pd -import seaborn as sn -import torch -from PIL import Image, ImageDraw, ImageFont - -from utils.general import is_ascii, xyxy2xywh, xywh2xyxy -from utils.metrics import fitness - -# Settings -matplotlib.rc('font', **{'size': 11}) -matplotlib.use('Agg') # for writing to files only - -FILE = Path(__file__).absolute() -ROOT = FILE.parents[1] # yolov5/ dir - - -class Colors: - # Ultralytics color palette https://ultralytics.com/ - def __init__(self): - # hex = matplotlib.colors.TABLEAU_COLORS.values() - hex = ('FF3838', 'FF9D97', 'FF701F', 'FFB21D', 'CFD231', '48F90A', '92CC17', '3DDB86', '1A9334', '00D4BB', - '2C99A8', '00C2FF', '344593', '6473FF', '0018EC', '8438FF', '520085', 'CB38FF', 'FF95C8', 'FF37C7') - self.palette = [self.hex2rgb('#' + c) for c in hex] - self.n = len(self.palette) - - def __call__(self, i, bgr=False): - c = self.palette[int(i) % self.n] - return (c[2], c[1], c[0]) if bgr else c - - @staticmethod - def hex2rgb(h): # rgb order (PIL) - return tuple(int(h[1 + i:1 + i + 2], 16) for i in (0, 2, 4)) - - -colors = Colors() # create instance for 'from utils.plots import colors' - - -def check_font(font='Arial.ttf', size=10): - # Return a PIL TrueType Font, downloading to ROOT dir if necessary - font = Path(font) - font = font if font.exists() else (ROOT / font.name) - try: - return ImageFont.truetype(str(font) if font.exists() else font.name, size) - except Exception as e: # download if missing - url = "https://ultralytics.com/assets/" + font.name - print(f'Downloading 
{url} to {font}...') - torch.hub.download_url_to_file(url, str(font)) - return ImageFont.truetype(str(font), size) - - -class Annotator: - check_font() # download TTF if necessary - - # YOLOv5 Annotator for train/val mosaics and jpgs and detect/hub inference annotations - def __init__(self, im, line_width=None, font_size=None, font='Arial.ttf', pil=True): - assert im.data.contiguous, 'Image not contiguous. Apply np.ascontiguousarray(im) to Annotator() input images.' - self.pil = pil - if self.pil: # use PIL - self.im = im if isinstance(im, Image.Image) else Image.fromarray(im) - self.draw = ImageDraw.Draw(self.im) - self.font = check_font(font, size=font_size or max(round(sum(self.im.size) / 2 * 0.035), 12)) - self.fh = self.font.getsize('a')[1] - 3 # font height - else: # use cv2 - self.im = im - self.lw = line_width or max(round(sum(im.shape) / 2 * 0.003), 2) # line width - - def box_label(self, box, label='', color=(128, 128, 128), txt_color=(255, 255, 255)): - # Add one xyxy box to image with label - if self.pil or not is_ascii(label): - self.draw.rectangle(box, width=self.lw, outline=color) # box - if label: - w = self.font.getsize(label)[0] # text width - self.draw.rectangle([box[0], box[1] - self.fh, box[0] + w + 1, box[1] + 1], fill=color) - self.draw.text((box[0], box[1]), label, fill=txt_color, font=self.font, anchor='ls') - else: # cv2 - c1, c2 = (int(box[0]), int(box[1])), (int(box[2]), int(box[3])) - cv2.rectangle(self.im, c1, c2, color, thickness=self.lw, lineType=cv2.LINE_AA) - if label: - tf = max(self.lw - 1, 1) # font thickness - w, h = cv2.getTextSize(label, 0, fontScale=self.lw / 3, thickness=tf)[0] - c2 = c1[0] + w, c1[1] - h - 3 - cv2.rectangle(self.im, c1, c2, color, -1, cv2.LINE_AA) # filled - cv2.putText(self.im, label, (c1[0], c1[1] - 2), 0, self.lw / 3, txt_color, thickness=tf, - lineType=cv2.LINE_AA) - - def rectangle(self, xy, fill=None, outline=None, width=1): - # Add rectangle to image (PIL-only) - self.draw.rectangle(xy, fill, outline, width) - - def text(self, xy, text, txt_color=(255, 255, 255)): - # Add text to image (PIL-only) - w, h = self.font.getsize(text) # text width, height - self.draw.text((xy[0], xy[1] - h + 1), text, fill=txt_color, font=self.font) - - def result(self): - # Return annotated image as array - return np.asarray(self.im) - - -def hist2d(x, y, n=100): - # 2d histogram used in labels.png and evolve.png - xedges, yedges = np.linspace(x.min(), x.max(), n), np.linspace(y.min(), y.max(), n) - hist, xedges, yedges = np.histogram2d(x, y, (xedges, yedges)) - xidx = np.clip(np.digitize(x, xedges) - 1, 0, hist.shape[0] - 1) - yidx = np.clip(np.digitize(y, yedges) - 1, 0, hist.shape[1] - 1) - return np.log(hist[xidx, yidx]) - - -def butter_lowpass_filtfilt(data, cutoff=1500, fs=50000, order=5): - from scipy.signal import butter, filtfilt - - # https://stackoverflow.com/questions/28536191/how-to-filter-smooth-with-scipy-numpy - def butter_lowpass(cutoff, fs, order): - nyq = 0.5 * fs - normal_cutoff = cutoff / nyq - return butter(order, normal_cutoff, btype='low', analog=False) - - b, a = butter_lowpass(cutoff, fs, order=order) - return filtfilt(b, a, data) # forward-backward filter - - -def output_to_target(output): - # Convert model output to target format [batch_id, class_id, x, y, w, h, conf] - targets = [] - for i, o in enumerate(output): - for *box, conf, cls in o.cpu().numpy(): - targets.append([i, cls, *list(*xyxy2xywh(np.array(box)[None])), conf]) - return np.array(targets) - - -def plot_images(images, targets, paths=None, 
fname='images.jpg', names=None, max_size=1920, max_subplots=16): - # Plot image grid with labels - if isinstance(images, torch.Tensor): - images = images.cpu().float().numpy() - if isinstance(targets, torch.Tensor): - targets = targets.cpu().numpy() - if np.max(images[0]) <= 1: - images *= 255.0 # de-normalise (optional) - bs, _, h, w = images.shape # batch size, _, height, width - bs = min(bs, max_subplots) # limit plot images - ns = np.ceil(bs ** 0.5) # number of subplots (square) - - # Build Image - mosaic = np.full((int(ns * h), int(ns * w), 3), 255, dtype=np.uint8) # init - for i, im in enumerate(images): - if i == max_subplots: # if last batch has fewer images than we expect - break - x, y = int(w * (i // ns)), int(h * (i % ns)) # block origin - im = im.transpose(1, 2, 0) - mosaic[y:y + h, x:x + w, :] = im - - # Resize (optional) - scale = max_size / ns / max(h, w) - if scale < 1: - h = math.ceil(scale * h) - w = math.ceil(scale * w) - mosaic = cv2.resize(mosaic, tuple(int(x * ns) for x in (w, h))) - - # Annotate - fs = int((h + w) * ns * 0.01) # font size - annotator = Annotator(mosaic, line_width=round(fs / 10), font_size=fs) - for i in range(i + 1): - x, y = int(w * (i // ns)), int(h * (i % ns)) # block origin - annotator.rectangle([x, y, x + w, y + h], None, (255, 255, 255), width=2) # borders - if paths: - annotator.text((x + 5, y + 5 + h), text=Path(paths[i]).name[:40], txt_color=(220, 220, 220)) # filenames - if len(targets) > 0: - ti = targets[targets[:, 0] == i] # image targets - boxes = xywh2xyxy(ti[:, 2:6]).T - classes = ti[:, 1].astype('int') - labels = ti.shape[1] == 6 or ti.shape[1] > 7 # labels if no conf column or pose objects - conf = None if labels else ti[:, 6] # check for confidence presence (label vs pred) - - if boxes.shape[1]: - if boxes.max() <= 1.01: # if normalized with tolerance 0.01 - boxes[[0, 2]] *= w # scale to pixels - boxes[[1, 3]] *= h - elif scale < 1: # absolute coords need scale if image scales - boxes *= scale - boxes[[0, 2]] += x - boxes[[1, 3]] += y - for j, box in enumerate(boxes.T.tolist()): - cls = classes[j] - color = colors(cls) - cls = names[cls] if names else cls - if labels or conf[j] > 0.25: # 0.25 conf thresh - label = f'{cls}' if labels else f'{cls} {conf[j]:.1f}' - annotator.box_label(box, label, color=color) - annotator.im.save(fname) # save - - -def plot_lr_scheduler(optimizer, scheduler, epochs=300, save_dir=''): - # Plot LR simulating training for full epochs - optimizer, scheduler = copy(optimizer), copy(scheduler) # do not modify originals - y = [] - for _ in range(epochs): - scheduler.step() - y.append(optimizer.param_groups[0]['lr']) - plt.plot(y, '.-', label='LR') - plt.xlabel('epoch') - plt.ylabel('LR') - plt.grid() - plt.xlim(0, epochs) - plt.ylim(0) - plt.savefig(Path(save_dir) / 'LR.png', dpi=200) - plt.close() - - -def plot_val_txt(): # from utils.plots import *; plot_val() - # Plot val.txt histograms - x = np.loadtxt('val.txt', dtype=np.float32) - box = xyxy2xywh(x[:, :4]) - cx, cy = box[:, 0], box[:, 1] - - fig, ax = plt.subplots(1, 1, figsize=(6, 6), tight_layout=True) - ax.hist2d(cx, cy, bins=600, cmax=10, cmin=0) - ax.set_aspect('equal') - plt.savefig('hist2d.png', dpi=300) - - fig, ax = plt.subplots(1, 2, figsize=(12, 6), tight_layout=True) - ax[0].hist(cx, bins=600) - ax[1].hist(cy, bins=600) - plt.savefig('hist1d.png', dpi=200) - - -def plot_targets_txt(): # from utils.plots import *; plot_targets_txt() - # Plot targets.txt histograms - x = np.loadtxt('targets.txt', dtype=np.float32).T - s = ['x targets', 'y 
targets', 'width targets', 'height targets'] - fig, ax = plt.subplots(2, 2, figsize=(8, 8), tight_layout=True) - ax = ax.ravel() - for i in range(4): - ax[i].hist(x[i], bins=100, label='%.3g +/- %.3g' % (x[i].mean(), x[i].std())) - ax[i].legend() - ax[i].set_title(s[i]) - plt.savefig('targets.jpg', dpi=200) - - -def plot_study_txt(path='', x=None): # from utils.plots import *; plot_study_txt() - # Plot study.txt generated by val.py - plot2 = False # plot additional results - if plot2: - ax = plt.subplots(2, 4, figsize=(10, 6), tight_layout=True)[1].ravel() - - fig2, ax2 = plt.subplots(1, 1, figsize=(8, 4), tight_layout=True) - # for f in [Path(path) / f'study_coco_{x}.txt' for x in ['yolov5s6', 'yolov5m6', 'yolov5l6', 'yolov5x6']]: - for f in sorted(Path(path).glob('study*.txt')): - y = np.loadtxt(f, dtype=np.float32, usecols=[0, 1, 2, 3, 7, 8, 9], ndmin=2).T - x = np.arange(y.shape[1]) if x is None else np.array(x) - if plot2: - s = ['P', 'R', 'mAP@.5', 'mAP@.5:.95', 't_preprocess (ms/img)', 't_inference (ms/img)', 't_NMS (ms/img)'] - for i in range(7): - ax[i].plot(x, y[i], '.-', linewidth=2, markersize=8) - ax[i].set_title(s[i]) - - j = y[3].argmax() + 1 - ax2.plot(y[5, 1:j], y[3, 1:j] * 1E2, '.-', linewidth=2, markersize=8, - label=f.stem.replace('study_coco_', '').replace('yolo', 'YOLO')) - - ax2.plot(1E3 / np.array([209, 140, 97, 58, 35, 18]), [34.6, 40.5, 43.0, 47.5, 49.7, 51.5], - 'k.-', linewidth=2, markersize=8, alpha=.25, label='EfficientDet') - - ax2.grid(alpha=0.2) - ax2.set_yticks(np.arange(20, 60, 5)) - ax2.set_xlim(0, 57) - ax2.set_ylim(30, 55) - ax2.set_xlabel('GPU Speed (ms/img)') - ax2.set_ylabel('COCO AP val') - ax2.legend(loc='lower right') - plt.savefig(str(Path(path).name) + '.png', dpi=300) - - -def plot_labels(labels, names=(), save_dir=Path('')): - # plot dataset labels - print('Plotting labels... ') - c, b = labels[:, 0], labels[:, 1:].transpose() # classes, boxes - nc = int(c.max() + 1) # number of classes - x = pd.DataFrame(b.transpose(), columns=['x', 'y', 'width', 'height']) - - # seaborn correlogram - sn.pairplot(x, corner=True, diag_kind='auto', kind='hist', diag_kws=dict(bins=50), plot_kws=dict(pmax=0.9)) - plt.savefig(save_dir / 'labels_correlogram.jpg', dpi=200) - plt.close() - - # matplotlib labels - matplotlib.use('svg') # faster - ax = plt.subplots(2, 2, figsize=(8, 8), tight_layout=True)[1].ravel() - y = ax[0].hist(c, bins=np.linspace(0, nc, nc + 1) - 0.5, rwidth=0.8) - # [y[2].patches[i].set_color([x / 255 for x in colors(i)]) for i in range(nc)] # update colors bug #3195 - ax[0].set_ylabel('instances') - if 0 < len(names) < 30: - ax[0].set_xticks(range(len(names))) - ax[0].set_xticklabels(names, rotation=90, fontsize=10) - else: - ax[0].set_xlabel('classes') - sn.histplot(x, x='x', y='y', ax=ax[2], bins=50, pmax=0.9) - sn.histplot(x, x='width', y='height', ax=ax[3], bins=50, pmax=0.9) - - # rectangles - labels[:, 1:3] = 0.5 # center - labels[:, 1:] = xywh2xyxy(labels[:, 1:]) * 2000 - img = Image.fromarray(np.ones((2000, 2000, 3), dtype=np.uint8) * 255) - for cls, *box in labels[:1000]: - ImageDraw.Draw(img).rectangle(box, width=1, outline=colors(cls)) # plot - ax[1].imshow(img) - ax[1].axis('off') - - for a in [0, 1, 2, 3]: - for s in ['top', 'right', 'left', 'bottom']: - ax[a].spines[s].set_visible(False) - - plt.savefig(save_dir / 'labels.jpg', dpi=200) - matplotlib.use('Agg') - plt.close() - - -def profile_idetection(start=0, stop=0, labels=(), save_dir=''): - # Plot iDetection '*.txt' per-image logs. 
from utils.plots import *; profile_idetection() - ax = plt.subplots(2, 4, figsize=(12, 6), tight_layout=True)[1].ravel() - s = ['Images', 'Free Storage (GB)', 'RAM Usage (GB)', 'Battery', 'dt_raw (ms)', 'dt_smooth (ms)', 'real-world FPS'] - files = list(Path(save_dir).glob('frames*.txt')) - for fi, f in enumerate(files): - try: - results = np.loadtxt(f, ndmin=2).T[:, 90:-30] # clip first and last rows - n = results.shape[1] # number of rows - x = np.arange(start, min(stop, n) if stop else n) - results = results[:, x] - t = (results[0] - results[0].min()) # set t0=0s - results[0] = x - for i, a in enumerate(ax): - if i < len(results): - label = labels[fi] if len(labels) else f.stem.replace('frames_', '') - a.plot(t, results[i], marker='.', label=label, linewidth=1, markersize=5) - a.set_title(s[i]) - a.set_xlabel('time (s)') - # if fi == len(files) - 1: - # a.set_ylim(bottom=0) - for side in ['top', 'right']: - a.spines[side].set_visible(False) - else: - a.remove() - except Exception as e: - print('Warning: Plotting error for %s; %s' % (f, e)) - ax[1].legend() - plt.savefig(Path(save_dir) / 'idetection_profile.png', dpi=200) - - -def plot_evolve(evolve_csv='path/to/evolve.csv'): # from utils.plots import *; plot_evolve() - # Plot evolve.csv hyp evolution results - evolve_csv = Path(evolve_csv) - data = pd.read_csv(evolve_csv) - keys = [x.strip() for x in data.columns] - x = data.values - f = fitness(x) - j = np.argmax(f) # max fitness index - plt.figure(figsize=(10, 12), tight_layout=True) - matplotlib.rc('font', **{'size': 8}) - for i, k in enumerate(keys[7:]): - v = x[:, 7 + i] - mu = v[j] # best single result - plt.subplot(6, 5, i + 1) - plt.scatter(v, f, c=hist2d(v, f, 20), cmap='viridis', alpha=.8, edgecolors='none') - plt.plot(mu, f.max(), 'k+', markersize=15) - plt.title('%s = %.3g' % (k, mu), fontdict={'size': 9}) # limit to 40 characters - if i % 5 != 0: - plt.yticks([]) - print('%15s: %.3g' % (k, mu)) - f = evolve_csv.with_suffix('.png') # filename - plt.savefig(f, dpi=200) - plt.close() - print(f'Saved {f}') - - -def plot_results(file='path/to/results.csv', dir=''): - # Plot training results.csv. Usage: from utils.plots import *; plot_results('path/to/results.csv') - save_dir = Path(file).parent if file else Path(dir) - fig, ax = plt.subplots(2, 5, figsize=(12, 6), tight_layout=True) - ax = ax.ravel() - files = list(save_dir.glob('results*.csv')) - assert len(files), f'No results.csv files found in {save_dir.resolve()}, nothing to plot.' 
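- # Each results*.csv holds one run: column 0 is the x-axis and columns [1, 2, 3, 4, 5, 8, 9, 10, 6, 7] are plotted as the ten metric panels below.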
- for fi, f in enumerate(files): - try: - data = pd.read_csv(f) - s = [x.strip() for x in data.columns] - x = data.values[:, 0] - for i, j in enumerate([1, 2, 3, 4, 5, 8, 9, 10, 6, 7]): - y = data.values[:, j] - # y[y == 0] = np.nan # don't show zero values - ax[i].plot(x, y, marker='.', label=f.stem, linewidth=2, markersize=8) - ax[i].set_title(s[j], fontsize=12) - # if j in [8, 9, 10]: # share train and val loss y axes - # ax[i].get_shared_y_axes().join(ax[i], ax[i - 5]) - except Exception as e: - print(f'Warning: Plotting error for {f}: {e}') - ax[1].legend() - fig.savefig(save_dir / 'results.png', dpi=200) - plt.close() - - -def feature_visualization(x, module_type, stage, n=32, save_dir=Path('runs/detect/exp')): - """ - x: Features to be visualized - module_type: Module type - stage: Module stage within model - n: Maximum number of feature maps to plot - save_dir: Directory to save results - """ - if 'Detect' not in module_type: - batch, channels, height, width = x.shape # batch, channels, height, width - if height > 1 and width > 1: - f = f"stage{stage}_{module_type.split('.')[-1]}_features.png" # filename - - blocks = torch.chunk(x[0].cpu(), channels, dim=0) # select batch index 0, block by channels - n = min(n, channels) # number of plots - fig, ax = plt.subplots(math.ceil(n / 8), 8, tight_layout=True) # 8 rows x n/8 cols - ax = ax.ravel() - plt.subplots_adjust(wspace=0.05, hspace=0.05) - for i in range(n): - ax[i].imshow(blocks[i].squeeze()) # cmap='gray' - ax[i].axis('off') - - print(f'Saving {save_dir / f}... ({n}/{channels})') - plt.savefig(save_dir / f, dpi=300, bbox_inches='tight') - plt.close() diff --git a/spaces/akhaliq/Pop_Music_Transformer/utils.py b/spaces/akhaliq/Pop_Music_Transformer/utils.py deleted file mode 100644 index 4a5ffa88ff036abbac97aeca395b9b08a0589c04..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/Pop_Music_Transformer/utils.py +++ /dev/null @@ -1,348 +0,0 @@ -import chord_recognition -import numpy as np -import miditoolkit -import copy - -# parameters for input -DEFAULT_VELOCITY_BINS = np.linspace(0, 128, 32+1, dtype=np.int) -DEFAULT_FRACTION = 16 -DEFAULT_DURATION_BINS = np.arange(60, 3841, 60, dtype=int) -DEFAULT_TEMPO_INTERVALS = [range(30, 90), range(90, 150), range(150, 210)] - -# parameters for output -DEFAULT_RESOLUTION = 480 - -# define "Item" for general storage -class Item(object): - def __init__(self, name, start, end, velocity, pitch): - self.name = name - self.start = start - self.end = end - self.velocity = velocity - self.pitch = pitch - - def __repr__(self): - return 'Item(name={}, start={}, end={}, velocity={}, pitch={})'.format( - self.name, self.start, self.end, self.velocity, self.pitch) - -# read notes and tempo changes from midi (assume there is only one track) -def read_items(file_path): - midi_obj = miditoolkit.midi.parser.MidiFile(file_path) - # note - note_items = [] - notes = midi_obj.instruments[0].notes - notes.sort(key=lambda x: (x.start, x.pitch)) - for note in notes: - note_items.append(Item( - name='Note', - start=note.start, - end=note.end, - velocity=note.velocity, - pitch=note.pitch)) - note_items.sort(key=lambda x: x.start) - # tempo - tempo_items = [] - for tempo in midi_obj.tempo_changes: - tempo_items.append(Item( - name='Tempo', - start=tempo.time, - end=None, - velocity=None, - pitch=int(tempo.tempo))) - tempo_items.sort(key=lambda x: x.start) - # expand to all beat - max_tick = tempo_items[-1].start - existing_ticks = {item.start: item.pitch for item in tempo_items} - wanted_ticks = np.arange(0, 
max_tick+1, DEFAULT_RESOLUTION) - output = [] - for tick in wanted_ticks: - if tick in existing_ticks: - output.append(Item( - name='Tempo', - start=tick, - end=None, - velocity=None, - pitch=existing_ticks[tick])) - else: - output.append(Item( - name='Tempo', - start=tick, - end=None, - velocity=None, - pitch=output[-1].pitch)) - tempo_items = output - return note_items, tempo_items - -# quantize items -def quantize_items(items, ticks=120): - # grid - grids = np.arange(0, items[-1].start, ticks, dtype=int) - # process - for item in items: - index = np.argmin(abs(grids - item.start)) - shift = grids[index] - item.start - item.start += shift - item.end += shift - return items - -# extract chord -def extract_chords(items): - method = chord_recognition.MIDIChord() - chords = method.extract(notes=items) - output = [] - for chord in chords: - output.append(Item( - name='Chord', - start=chord[0], - end=chord[1], - velocity=None, - pitch=chord[2].split('/')[0])) - return output - -# group items -def group_items(items, max_time, ticks_per_bar=DEFAULT_RESOLUTION*4): - items.sort(key=lambda x: x.start) - downbeats = np.arange(0, max_time+ticks_per_bar, ticks_per_bar) - groups = [] - for db1, db2 in zip(downbeats[:-1], downbeats[1:]): - insiders = [] - for item in items: - if (item.start >= db1) and (item.start < db2): - insiders.append(item) - overall = [db1] + insiders + [db2] - groups.append(overall) - return groups - -# define "Event" for event storage -class Event(object): - def __init__(self, name, time, value, text): - self.name = name - self.time = time - self.value = value - self.text = text - - def __repr__(self): - return 'Event(name={}, time={}, value={}, text={})'.format( - self.name, self.time, self.value, self.text) - -# item to event -def item2event(groups): - events = [] - n_downbeat = 0 - for i in range(len(groups)): - if 'Note' not in [item.name for item in groups[i][1:-1]]: - continue - bar_st, bar_et = groups[i][0], groups[i][-1] - n_downbeat += 1 - events.append(Event( - name='Bar', - time=None, - value=None, - text='{}'.format(n_downbeat))) - for item in groups[i][1:-1]: - # position - flags = np.linspace(bar_st, bar_et, DEFAULT_FRACTION, endpoint=False) - index = np.argmin(abs(flags-item.start)) - events.append(Event( - name='Position', - time=item.start, - value='{}/{}'.format(index+1, DEFAULT_FRACTION), - text='{}'.format(item.start))) - if item.name == 'Note': - # velocity - velocity_index = np.searchsorted( - DEFAULT_VELOCITY_BINS, - item.velocity, - side='right') - 1 - events.append(Event( - name='Note Velocity', - time=item.start, - value=velocity_index, - text='{}/{}'.format(item.velocity, DEFAULT_VELOCITY_BINS[velocity_index]))) - # pitch - events.append(Event( - name='Note On', - time=item.start, - value=item.pitch, - text='{}'.format(item.pitch))) - # duration - duration = item.end - item.start - index = np.argmin(abs(DEFAULT_DURATION_BINS-duration)) - events.append(Event( - name='Note Duration', - time=item.start, - value=index, - text='{}/{}'.format(duration, DEFAULT_DURATION_BINS[index]))) - elif item.name == 'Chord': - events.append(Event( - name='Chord', - time=item.start, - value=item.pitch, - text='{}'.format(item.pitch))) - elif item.name == 'Tempo': - tempo = item.pitch - if tempo in DEFAULT_TEMPO_INTERVALS[0]: - tempo_style = Event('Tempo Class', item.start, 'slow', None) - tempo_value = Event('Tempo Value', item.start, - tempo-DEFAULT_TEMPO_INTERVALS[0].start, None) - elif tempo in DEFAULT_TEMPO_INTERVALS[1]: - tempo_style = Event('Tempo Class', 
item.start, 'mid', None) - tempo_value = Event('Tempo Value', item.start, - tempo-DEFAULT_TEMPO_INTERVALS[1].start, None) - elif tempo in DEFAULT_TEMPO_INTERVALS[2]: - tempo_style = Event('Tempo Class', item.start, 'fast', None) - tempo_value = Event('Tempo Value', item.start, - tempo-DEFAULT_TEMPO_INTERVALS[2].start, None) - elif tempo < DEFAULT_TEMPO_INTERVALS[0].start: - tempo_style = Event('Tempo Class', item.start, 'slow', None) - tempo_value = Event('Tempo Value', item.start, 0, None) - elif tempo > DEFAULT_TEMPO_INTERVALS[2].stop: - tempo_style = Event('Tempo Class', item.start, 'fast', None) - tempo_value = Event('Tempo Value', item.start, 59, None) - events.append(tempo_style) - events.append(tempo_value) - return events - -############################################################################################# -# WRITE MIDI -############################################################################################# -def word_to_event(words, word2event): - events = [] - for word in words: - event_name, event_value = word2event.get(word).split('_') - events.append(Event(event_name, None, event_value, None)) - return events - -def write_midi(words, word2event, output_path, prompt_path=None): - events = word_to_event(words, word2event) - # get downbeat and note (no time) - temp_notes = [] - temp_chords = [] - temp_tempos = [] - for i in range(len(events)-3): - if events[i].name == 'Bar' and i > 0: - temp_notes.append('Bar') - temp_chords.append('Bar') - temp_tempos.append('Bar') - elif events[i].name == 'Position' and \ - events[i+1].name == 'Note Velocity' and \ - events[i+2].name == 'Note On' and \ - events[i+3].name == 'Note Duration': - # start time and end time from position - position = int(events[i].value.split('/')[0]) - 1 - # velocity - index = int(events[i+1].value) - velocity = int(DEFAULT_VELOCITY_BINS[index]) - # pitch - pitch = int(events[i+2].value) - # duration - index = int(events[i+3].value) - duration = DEFAULT_DURATION_BINS[index] - # adding - temp_notes.append([position, velocity, pitch, duration]) - elif events[i].name == 'Position' and events[i+1].name == 'Chord': - position = int(events[i].value.split('/')[0]) - 1 - temp_chords.append([position, events[i+1].value]) - elif events[i].name == 'Position' and \ - events[i+1].name == 'Tempo Class' and \ - events[i+2].name == 'Tempo Value': - position = int(events[i].value.split('/')[0]) - 1 - if events[i+1].value == 'slow': - tempo = DEFAULT_TEMPO_INTERVALS[0].start + int(events[i+2].value) - elif events[i+1].value == 'mid': - tempo = DEFAULT_TEMPO_INTERVALS[1].start + int(events[i+2].value) - elif events[i+1].value == 'fast': - tempo = DEFAULT_TEMPO_INTERVALS[2].start + int(events[i+2].value) - temp_tempos.append([position, tempo]) - # get specific time for notes - ticks_per_beat = DEFAULT_RESOLUTION - ticks_per_bar = DEFAULT_RESOLUTION * 4 # assume 4/4 - notes = [] - current_bar = 0 - for note in temp_notes: - if note == 'Bar': - current_bar += 1 - else: - position, velocity, pitch, duration = note - # position (start time) - current_bar_st = current_bar * ticks_per_bar - current_bar_et = (current_bar + 1) * ticks_per_bar - flags = np.linspace(current_bar_st, current_bar_et, DEFAULT_FRACTION, endpoint=False, dtype=int) - st = flags[position] - # duration (end time) - et = st + duration - notes.append(miditoolkit.Note(velocity, pitch, st, et)) - # get specific time for chords - if len(temp_chords) > 0: - chords = [] - current_bar = 0 - for chord in temp_chords: - if chord == 'Bar': - current_bar += 1 - else: 
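- # a non-'Bar' entry is [position, chord]; convert its bar-relative position index into absolute ticks within the current bar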
- position, value = chord - # position (start time) - current_bar_st = current_bar * ticks_per_bar - current_bar_et = (current_bar + 1) * ticks_per_bar - flags = np.linspace(current_bar_st, current_bar_et, DEFAULT_FRACTION, endpoint=False, dtype=int) - st = flags[position] - chords.append([st, value]) - # get specific time for tempos - tempos = [] - current_bar = 0 - for tempo in temp_tempos: - if tempo == 'Bar': - current_bar += 1 - else: - position, value = tempo - # position (start time) - current_bar_st = current_bar * ticks_per_bar - current_bar_et = (current_bar + 1) * ticks_per_bar - flags = np.linspace(current_bar_st, current_bar_et, DEFAULT_FRACTION, endpoint=False, dtype=int) - st = flags[position] - tempos.append([int(st), value]) - # write - if prompt_path: - midi = miditoolkit.midi.parser.MidiFile(prompt_path) - # - last_time = DEFAULT_RESOLUTION * 4 * 4 - # note shift - for note in notes: - note.start += last_time - note.end += last_time - midi.instruments[0].notes.extend(notes) - # tempo changes - temp_tempos = [] - for tempo in midi.tempo_changes: - if tempo.time < DEFAULT_RESOLUTION*4*4: - temp_tempos.append(tempo) - else: - break - for st, bpm in tempos: - st += last_time - temp_tempos.append(miditoolkit.midi.containers.TempoChange(bpm, st)) - midi.tempo_changes = temp_tempos - # write chord into marker - if len(temp_chords) > 0: - for c in chords: - midi.markers.append( - miditoolkit.midi.containers.Marker(text=c[1], time=c[0]+last_time)) - else: - midi = miditoolkit.midi.parser.MidiFile() - midi.ticks_per_beat = DEFAULT_RESOLUTION - # write instrument - inst = miditoolkit.midi.containers.Instrument(0, is_drum=False) - inst.notes = notes - midi.instruments.append(inst) - # write tempo - tempo_changes = [] - for st, bpm in tempos: - tempo_changes.append(miditoolkit.midi.containers.TempoChange(bpm, st)) - midi.tempo_changes = tempo_changes - # write chord into marker - if len(temp_chords) > 0: - for c in chords: - midi.markers.append( - miditoolkit.midi.containers.Marker(text=c[1], time=c[0])) - # write - midi.dump(output_path) diff --git a/spaces/akhaliq/stylegan3_clip/gen_images.py b/spaces/akhaliq/stylegan3_clip/gen_images.py deleted file mode 100644 index f8a4b11b2fcdbad986a21753a29d7fee2fc26dbd..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/stylegan3_clip/gen_images.py +++ /dev/null @@ -1,144 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Generate images using pretrained network pickle.""" - -import os -import re -from typing import List, Optional, Tuple, Union - -import click -import dnnlib -import numpy as np -import PIL.Image -import torch - -import legacy - -#---------------------------------------------------------------------------- - -def parse_range(s: Union[str, List]) -> List[int]: - '''Parse a comma separated list of numbers or ranges and return a list of ints. 
- - Example: '1,2,5-10' returns [1, 2, 5, 6, 7] - ''' - if isinstance(s, list): return s - ranges = [] - range_re = re.compile(r'^(\d+)-(\d+)$') - for p in s.split(','): - if m := range_re.match(p): - ranges.extend(range(int(m.group(1)), int(m.group(2))+1)) - else: - ranges.append(int(p)) - return ranges - -#---------------------------------------------------------------------------- - -def parse_vec2(s: Union[str, Tuple[float, float]]) -> Tuple[float, float]: - '''Parse a floating point 2-vector of syntax 'a,b'. - - Example: - '0,1' returns (0,1) - ''' - if isinstance(s, tuple): return s - parts = s.split(',') - if len(parts) == 2: - return (float(parts[0]), float(parts[1])) - raise ValueError(f'cannot parse 2-vector {s}') - -#---------------------------------------------------------------------------- - -def make_transform(translate: Tuple[float,float], angle: float): - m = np.eye(3) - s = np.sin(angle/360.0*np.pi*2) - c = np.cos(angle/360.0*np.pi*2) - m[0][0] = c - m[0][1] = s - m[0][2] = translate[0] - m[1][0] = -s - m[1][1] = c - m[1][2] = translate[1] - return m - -#---------------------------------------------------------------------------- - -@click.command() -@click.option('--network', 'network_pkl', help='Network pickle filename', required=True) -@click.option('--seeds', type=parse_range, help='List of random seeds (e.g., \'0,1,4-6\')', required=True) -@click.option('--trunc', 'truncation_psi', type=float, help='Truncation psi', default=1, show_default=True) -@click.option('--class', 'class_idx', type=int, help='Class label (unconditional if not specified)') -@click.option('--noise-mode', help='Noise mode', type=click.Choice(['const', 'random', 'none']), default='const', show_default=True) -@click.option('--translate', help='Translate XY-coordinate (e.g. \'0.3,1\')', type=parse_vec2, default='0,0', show_default=True, metavar='VEC2') -@click.option('--rotate', help='Rotation angle in degrees', type=float, default=0, show_default=True, metavar='ANGLE') -@click.option('--outdir', help='Where to save the output images', type=str, required=True, metavar='DIR') -def generate_images( - network_pkl: str, - seeds: List[int], - truncation_psi: float, - noise_mode: str, - outdir: str, - translate: Tuple[float,float], - rotate: float, - class_idx: Optional[int] -): - """Generate images using pretrained network pickle. - - Examples: - - \b - # Generate an image using pre-trained AFHQv2 model ("Ours" in Figure 1, left). - python gen_images.py --outdir=out --trunc=1 --seeds=2 \\ - --network=https://api.ngc.nvidia.com/v2/models/nvidia/research/stylegan3/versions/1/files/stylegan3-r-afhqv2-512x512.pkl - - \b - # Generate uncurated images with truncation using the MetFaces-U dataset - python gen_images.py --outdir=out --trunc=0.7 --seeds=600-605 \\ - --network=https://api.ngc.nvidia.com/v2/models/nvidia/research/stylegan3/versions/1/files/stylegan3-t-metfacesu-1024x1024.pkl - """ - - print('Loading networks from "%s"...' % network_pkl) - device = torch.device('cuda') - with dnnlib.util.open_url(network_pkl) as f: - G = legacy.load_network_pkl(f)['G_ema'].to(device) # type: ignore - - os.makedirs(outdir, exist_ok=True) - - # Labels. - label = torch.zeros([1, G.c_dim], device=device) - if G.c_dim != 0: - if class_idx is None: - raise click.ClickException('Must specify class label with --class when using a conditional network') - label[:, class_idx] = 1 - else: - if class_idx is not None: - print ('warn: --class=lbl ignored when running on an unconditional network') - - # Generate images. 
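- # For each seed: draw z from a fixed-seed standard normal in the generator's latent space, optionally copy the inverse rotation/translation matrix into the synthesis input, then run the generator and save the image as seed{seed:04d}.png.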
- for seed_idx, seed in enumerate(seeds): - print('Generating image for seed %d (%d/%d) ...' % (seed, seed_idx, len(seeds))) - z = torch.from_numpy(np.random.RandomState(seed).randn(1, G.z_dim)).to(device) - - # Construct an inverse rotation/translation matrix and pass to the generator. The - # generator expects this matrix as an inverse to avoid potentially failing numerical - # operations in the network. - if hasattr(G.synthesis, 'input'): - m = make_transform(translate, rotate) - m = np.linalg.inv(m) - G.synthesis.input.transform.copy_(torch.from_numpy(m)) - - img = G(z, label, truncation_psi=truncation_psi, noise_mode=noise_mode) - img = (img.permute(0, 2, 3, 1) * 127.5 + 128).clamp(0, 255).to(torch.uint8) - PIL.Image.fromarray(img[0].cpu().numpy(), 'RGB').save(f'{outdir}/seed{seed:04d}.png') - - -#---------------------------------------------------------------------------- - -if __name__ == "__main__": - generate_images() # pylint: disable=no-value-for-parameter - -#---------------------------------------------------------------------------- diff --git a/spaces/alan-chen-intel/dagan-demo/depth/__init__.py b/spaces/alan-chen-intel/dagan-demo/depth/__init__.py deleted file mode 100644 index 8be4160b19b945b2e0dbadb01f5a826ede32fedc..0000000000000000000000000000000000000000 --- a/spaces/alan-chen-intel/dagan-demo/depth/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -from .resnet_encoder import ResnetEncoder -from .depth_decoder import DepthDecoder -from .pose_decoder import PoseDecoder -from .pose_cnn import PoseCNN - diff --git "a/spaces/alexrame/rewardedsoups/pages/01_\342\234\215\357\270\217_News_summarization.py" "b/spaces/alexrame/rewardedsoups/pages/01_\342\234\215\357\270\217_News_summarization.py" deleted file mode 100644 index e7260b907afd3b62352dad3fd17ebf98c86259f7..0000000000000000000000000000000000000000 --- "a/spaces/alexrame/rewardedsoups/pages/01_\342\234\215\357\270\217_News_summarization.py" +++ /dev/null @@ -1,296 +0,0 @@ -import streamlit as st -from PIL import Image -import codecs -import streamlit.components.v1 as components -from utils import inject_custom_css -import streamlit as st -from streamlit_plotly_events import plotly_events -import pickle -import matplotlib.pyplot as plt -import plotly.graph_objects as go -import typing as tp -import colorsys - -plt.style.use('default') -plt.rcParams['text.usetex'] = True -plt.rcParams['font.family'] = 'serif' - - -def interpolate_color(color1, color2, factor): - """Interpolates between two RGB colors. 
Factor is between 0 and 1.""" - color1 = colorsys.rgb_to_hls( - int(color1[1:3], 16) / 255.0, - int(color1[3:5], 16) / 255.0, - int(color1[5:], 16) / 255.0 - ) - color2 = colorsys.rgb_to_hls( - int(color2[1:3], 16) / 255.0, - int(color2[3:5], 16) / 255.0, - int(color2[5:], 16) / 255.0 - ) - new_color = [color1[i] * (1 - factor) + color2[i] * factor for i in range(3)] - new_color = colorsys.hls_to_rgb(*new_color) - return '#{:02x}{:02x}{:02x}'.format( - int(new_color[0] * 255), int(new_color[1] * 255), int(new_color[2] * 255) - ) - - -color1 = "#fa7659" -color2 = "#6dafd7" - -shapes = [ - dict( - type="rect", - xref="paper", - yref="paper", - x0=0, - y0=0, - x1=1, - y1=1, - line=dict( - color="Black", - width=2, - ), - ) -] - -shapes = [ - dict( - type="rect", - xref="paper", - yref="paper", - x0=0, - y0=0, - x1=1, - y1=1, - line=dict( - color="Black", - width=2, - ), - ) -] - - -def plot_pareto(dict_results: tp.Dict): - - reward1_key = "R1" - reward2_key = "R2" - - # Series for "wa" - dict_results["wa_d"] = [x for i, x in enumerate(dict_results["wa_d"]) if i % 2 == 0] - lambda_values_wa = [ - round(i / (len(dict_results["wa_d"]) - 1), 2) for i in range(len(dict_results["wa_d"])) - ][::-1] - reward1_values_wa = [item[reward1_key] for item in dict_results["wa_d"]] - reward2_values_wa = [item[reward2_key] for item in dict_results["wa_d"]] - - # Series for "morl" - # Series for "init" - reward1_values_morl = [dict_results["morl"][reward1_key]] - reward2_values_morl = [dict_results["morl"][reward2_key]] - - # Series for "init" - reward1_values_init = [dict_results["init"][reward1_key]] - reward2_values_init = [dict_results["init"][reward2_key]] - - layout = go.Layout(autosize=False, width=1000, height=1000) - fig = go.Figure(layout=layout) - - for i in range(len(reward1_values_wa) - 1): - fig.add_trace( - go.Scatter( - x=reward1_values_wa[i:i + 2], - y=reward2_values_wa[i:i + 2], - mode='lines', - hoverinfo='skip', - line=dict( - color=interpolate_color(color1, color2, i / (len(reward1_values_wa) - 1)), - width=2 - ), - showlegend=False - ) - ) - - # Plot for "wa" - fig.add_trace( - go.Scatter( - x=reward1_values_wa, - y=reward2_values_wa, - mode='markers', - name='Rewarded soups: 0≤λ≤1', - hoverinfo='text', - hovertext=[f'λ={lmbda}' for lmbda in lambda_values_wa], - marker=dict( - color=[ - interpolate_color(color1, color2, i / len(lambda_values_wa)) - for i in range(len(lambda_values_wa)) - ], - size=10 - ) - ) - ) - - # Plot for "morl" - fig.add_trace( - go.Scatter( - x=reward1_values_morl, - y=reward2_values_morl, - mode='markers', - name='MORL: μ=0.5', - hoverinfo='skip', - marker=dict(color='#A45EE9', size=15, symbol="star"), - ) - ) - - # Plot for "init" - fig.add_trace( - go.Scatter( - x=reward1_values_init, - y=reward2_values_init, - mode='markers', - name='Pre-trained init', - hoverinfo='skip', - marker=dict(color='#9f9bc8', size=15, symbol="star"), - ) - ) - - fig.update_layout( - xaxis=dict( - #range = [5.21,5.31], - #nticks=6, - showticklabels=True, - ticks='outside', - tickfont=dict(size=18,), - title=dict(text="R1", font=dict(size=18), standoff=10), - showgrid=False, - zeroline=False, - hoverformat='.2f' - ), - yaxis=dict( - #range = [0.78,0.825], - #nticks=7, - showticklabels=True, - ticks='outside', - tickfont=dict(size=18,), - title=dict(text="R2", font=dict(size=18), standoff=10), - showgrid=False, - zeroline=False, - hoverformat='.2f' - ), - font=dict(family="Roboto", size=12, color="Black"), - hovermode='x unified', - autosize=False, - width=500, - height=500, - 
margin=dict(l=100, r=50, b=150, t=20, pad=0), - paper_bgcolor="White", - plot_bgcolor="White", - shapes=shapes, - legend=dict( - x=0.5, - y=0.03, - traceorder="normal", - font=dict(family="Roboto", size=12, color="black"), - bgcolor="White", - bordercolor="Black", - borderwidth=1 - ) - ) - - return fig - - -def run(): - - st.write( - f""" - - - -

RLHF of LLaMA for diverse news summarization

""", - unsafe_allow_html=True - ) - - st.markdown( - r""" -Given the importance of RLHF to train LLMs, we begin our experiments with text-to-text generation. -Our pre-trained network is LLaMA-7b, instruction fine-tuned on Alpaca. -For RL training with PPO, we employ the trl package and the setup from with low-rank adapters (LoRA) for efficiency. -Here we consider summarization on Reuter news. -To evaluate the summary in the absence of supervision, we utilized two different reward models, available on HuggingFace: [$R_1$](https://huggingface.co/Tristan/gpt2_reward_summarization) follows the Summarize from Human Feedback paper while [$R_2$](https://huggingface.co/CogComp/bart-faithful-summary-detector) leverages contrast candidate generation. - -Our results below reveal the following insights. The front defined by rewarded soups between the two weights specialized on $R_1$ (i.e., $\lambda=0.0$) and $R_2$ (i.e., $\lambda=1.0$) is above the straight line connecting those two points; this validates what we call in the paper *the linear mode connectivity hypothesis*. Moreover, the front intersects the point obtained by multi-objective RL (MORL) fine-tuning on $(1-\mu) \times R_1 + \mu \times R_2$ for $\mu=0.5$ (i.e., the average of the two rewards). Interestingly, when we compare both full fronts in the paper, they exhibit qualitatively the same shape. The qualitative visual inspections of the generations show that increasing $\lambda$ leads to shorter but more factual summaries; this is because $R_2$ evaluates faithfulness in priority.""", - unsafe_allow_html=True - ) - st.markdown( - """

Click on a rewarded soup point on the left and select a subject on the right!

""", - unsafe_allow_html=True - ) - - files = [] - - with open("streamlit_app/data/textgen/data.pkl", "rb") as f: - data = pickle.load(f) - with open("streamlit_app/data/textgen/data_prompt.pkl", "rb") as f: - data_prompt = pickle.load(f) - with open("streamlit_app/data/textgen/data_title.pkl", "rb") as f: - data_title = pickle.load(f) - - left, right = st.columns((2, 2)) - with left: - fig = plot_pareto(data) - onclick = plotly_events(fig, click_event=True) - with right: - option = st.selectbox('', data_title.keys()) - - subject = data_title[option] - st.markdown( - f""" -
-
- Text to summarize: -
-
- {data_prompt[subject]['query']} -
-
- """, - unsafe_allow_html=True - ) - st.markdown("
", unsafe_allow_html=True) - - summary1 = data_prompt[subject]['outs'][0]["out"] - summary3 = data_prompt[subject]['outs'][-1]["out"] - nb_summaries = len(data_prompt[subject]['outs']) - if len(onclick) > 0: - idx = onclick[0]["pointIndex"] - else: - idx = 5 - lambda2 = round(1 - idx / (len(data["wa_d"]) - 1), 2) - summary2 = data_prompt[subject]['outs'][idx]["out"] - bgcolor = interpolate_color(color2, color1, lambda2) - - st.markdown( - f""" -
-
- Generated summaries: -
-
-
λ=0.0
{summary3}
-
λ={lambda2}
{summary2}

-
λ=1.0
{summary1}

-
-
- """, - unsafe_allow_html=True - ) - - -if __name__ == "__main__": - img = Image.open("streamlit_app/assets/images/icon.png") - st.set_page_config(page_title="Rewarded soups", page_icon=img, layout="wide") - inject_custom_css("streamlit_app/assets/styles.css") - st.set_option('deprecation.showPyplotGlobalUse', False) - run() diff --git a/spaces/alexrame/rewardedsoups/streamlit_app/data/locomotion/trajectories/7.html b/spaces/alexrame/rewardedsoups/streamlit_app/data/locomotion/trajectories/7.html deleted file mode 100644 index 3658ddf63dfdfcd4348cff7ba9d601d6a28f2b77..0000000000000000000000000000000000000000 --- a/spaces/alexrame/rewardedsoups/streamlit_app/data/locomotion/trajectories/7.html +++ /dev/null @@ -1,48 +0,0 @@ - - - - brax visualizer - - - - -
- - - diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/chardet/chardistribution.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/chardet/chardistribution.py deleted file mode 100644 index c0395f4a45aaa5c4ba1824a81d8ef8f69b46dc60..0000000000000000000000000000000000000000 --- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/chardet/chardistribution.py +++ /dev/null @@ -1,233 +0,0 @@ -######################## BEGIN LICENSE BLOCK ######################## -# The Original Code is Mozilla Communicator client code. -# -# The Initial Developer of the Original Code is -# Netscape Communications Corporation. -# Portions created by the Initial Developer are Copyright (C) 1998 -# the Initial Developer. All Rights Reserved. -# -# Contributor(s): -# Mark Pilgrim - port to Python -# -# This library is free software; you can redistribute it and/or -# modify it under the terms of the GNU Lesser General Public -# License as published by the Free Software Foundation; either -# version 2.1 of the License, or (at your option) any later version. -# -# This library is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU -# Lesser General Public License for more details. -# -# You should have received a copy of the GNU Lesser General Public -# License along with this library; if not, write to the Free Software -# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA -# 02110-1301 USA -######################### END LICENSE BLOCK ######################### - -from .euctwfreq import (EUCTW_CHAR_TO_FREQ_ORDER, EUCTW_TABLE_SIZE, - EUCTW_TYPICAL_DISTRIBUTION_RATIO) -from .euckrfreq import (EUCKR_CHAR_TO_FREQ_ORDER, EUCKR_TABLE_SIZE, - EUCKR_TYPICAL_DISTRIBUTION_RATIO) -from .gb2312freq import (GB2312_CHAR_TO_FREQ_ORDER, GB2312_TABLE_SIZE, - GB2312_TYPICAL_DISTRIBUTION_RATIO) -from .big5freq import (BIG5_CHAR_TO_FREQ_ORDER, BIG5_TABLE_SIZE, - BIG5_TYPICAL_DISTRIBUTION_RATIO) -from .jisfreq import (JIS_CHAR_TO_FREQ_ORDER, JIS_TABLE_SIZE, - JIS_TYPICAL_DISTRIBUTION_RATIO) - - -class CharDistributionAnalysis(object): - ENOUGH_DATA_THRESHOLD = 1024 - SURE_YES = 0.99 - SURE_NO = 0.01 - MINIMUM_DATA_THRESHOLD = 3 - - def __init__(self): - # Mapping table to get frequency order from char order (get from - # GetOrder()) - self._char_to_freq_order = None - self._table_size = None # Size of above table - # This is a constant value which varies from language to language, - # used in calculating confidence. See - # http://www.mozilla.org/projects/intl/UniversalCharsetDetection.html - # for further detail. 
- self.typical_distribution_ratio = None - self._done = None - self._total_chars = None - self._freq_chars = None - self.reset() - - def reset(self): - """reset analyser, clear any state""" - # If this flag is set to True, detection is done and conclusion has - # been made - self._done = False - self._total_chars = 0 # Total characters encountered - # The number of characters whose frequency order is less than 512 - self._freq_chars = 0 - - def feed(self, char, char_len): - """feed a character with known length""" - if char_len == 2: - # we only care about 2-bytes character in our distribution analysis - order = self.get_order(char) - else: - order = -1 - if order >= 0: - self._total_chars += 1 - # order is valid - if order < self._table_size: - if 512 > self._char_to_freq_order[order]: - self._freq_chars += 1 - - def get_confidence(self): - """return confidence based on existing data""" - # if we didn't receive any character in our consideration range, - # return negative answer - if self._total_chars <= 0 or self._freq_chars <= self.MINIMUM_DATA_THRESHOLD: - return self.SURE_NO - - if self._total_chars != self._freq_chars: - r = (self._freq_chars / ((self._total_chars - self._freq_chars) - * self.typical_distribution_ratio)) - if r < self.SURE_YES: - return r - - # normalize confidence (we don't want to be 100% sure) - return self.SURE_YES - - def got_enough_data(self): - # It is not necessary to receive all data to draw conclusion. - # For charset detection, certain amount of data is enough - return self._total_chars > self.ENOUGH_DATA_THRESHOLD - - def get_order(self, byte_str): - # We do not handle characters based on the original encoding string, - # but convert this encoding string to a number, here called order. - # This allows multiple encodings of a language to share one frequency - # table. - return -1 - - -class EUCTWDistributionAnalysis(CharDistributionAnalysis): - def __init__(self): - super(EUCTWDistributionAnalysis, self).__init__() - self._char_to_freq_order = EUCTW_CHAR_TO_FREQ_ORDER - self._table_size = EUCTW_TABLE_SIZE - self.typical_distribution_ratio = EUCTW_TYPICAL_DISTRIBUTION_RATIO - - def get_order(self, byte_str): - # for euc-TW encoding, we are interested - # first byte range: 0xc4 -- 0xfe - # second byte range: 0xa1 -- 0xfe - # no validation needed here. State machine has done that - first_char = byte_str[0] - if first_char >= 0xC4: - return 94 * (first_char - 0xC4) + byte_str[1] - 0xA1 - else: - return -1 - - -class EUCKRDistributionAnalysis(CharDistributionAnalysis): - def __init__(self): - super(EUCKRDistributionAnalysis, self).__init__() - self._char_to_freq_order = EUCKR_CHAR_TO_FREQ_ORDER - self._table_size = EUCKR_TABLE_SIZE - self.typical_distribution_ratio = EUCKR_TYPICAL_DISTRIBUTION_RATIO - - def get_order(self, byte_str): - # for euc-KR encoding, we are interested - # first byte range: 0xb0 -- 0xfe - # second byte range: 0xa1 -- 0xfe - # no validation needed here. 
State machine has done that - first_char = byte_str[0] - if first_char >= 0xB0: - return 94 * (first_char - 0xB0) + byte_str[1] - 0xA1 - else: - return -1 - - -class GB2312DistributionAnalysis(CharDistributionAnalysis): - def __init__(self): - super(GB2312DistributionAnalysis, self).__init__() - self._char_to_freq_order = GB2312_CHAR_TO_FREQ_ORDER - self._table_size = GB2312_TABLE_SIZE - self.typical_distribution_ratio = GB2312_TYPICAL_DISTRIBUTION_RATIO - - def get_order(self, byte_str): - # for GB2312 encoding, we are interested - # first byte range: 0xb0 -- 0xfe - # second byte range: 0xa1 -- 0xfe - # no validation needed here. State machine has done that - first_char, second_char = byte_str[0], byte_str[1] - if (first_char >= 0xB0) and (second_char >= 0xA1): - return 94 * (first_char - 0xB0) + second_char - 0xA1 - else: - return -1 - - -class Big5DistributionAnalysis(CharDistributionAnalysis): - def __init__(self): - super(Big5DistributionAnalysis, self).__init__() - self._char_to_freq_order = BIG5_CHAR_TO_FREQ_ORDER - self._table_size = BIG5_TABLE_SIZE - self.typical_distribution_ratio = BIG5_TYPICAL_DISTRIBUTION_RATIO - - def get_order(self, byte_str): - # for big5 encoding, we are interested - # first byte range: 0xa4 -- 0xfe - # second byte range: 0x40 -- 0x7e , 0xa1 -- 0xfe - # no validation needed here. State machine has done that - first_char, second_char = byte_str[0], byte_str[1] - if first_char >= 0xA4: - if second_char >= 0xA1: - return 157 * (first_char - 0xA4) + second_char - 0xA1 + 63 - else: - return 157 * (first_char - 0xA4) + second_char - 0x40 - else: - return -1 - - -class SJISDistributionAnalysis(CharDistributionAnalysis): - def __init__(self): - super(SJISDistributionAnalysis, self).__init__() - self._char_to_freq_order = JIS_CHAR_TO_FREQ_ORDER - self._table_size = JIS_TABLE_SIZE - self.typical_distribution_ratio = JIS_TYPICAL_DISTRIBUTION_RATIO - - def get_order(self, byte_str): - # for sjis encoding, we are interested - # first byte range: 0x81 -- 0x9f , 0xe0 -- 0xfe - # second byte range: 0x40 -- 0x7e, 0x81 -- oxfe - # no validation needed here. State machine has done that - first_char, second_char = byte_str[0], byte_str[1] - if (first_char >= 0x81) and (first_char <= 0x9F): - order = 188 * (first_char - 0x81) - elif (first_char >= 0xE0) and (first_char <= 0xEF): - order = 188 * (first_char - 0xE0 + 31) - else: - return -1 - order = order + second_char - 0x40 - if second_char > 0x7F: - order = -1 - return order - - -class EUCJPDistributionAnalysis(CharDistributionAnalysis): - def __init__(self): - super(EUCJPDistributionAnalysis, self).__init__() - self._char_to_freq_order = JIS_CHAR_TO_FREQ_ORDER - self._table_size = JIS_TABLE_SIZE - self.typical_distribution_ratio = JIS_TYPICAL_DISTRIBUTION_RATIO - - def get_order(self, byte_str): - # for euc-JP encoding, we are interested - # first byte range: 0xa0 -- 0xfe - # second byte range: 0xa1 -- 0xfe - # no validation needed here. 
State machine has done that - char = byte_str[0] - if char >= 0xA0: - return 94 * (char - 0xA1) + byte_str[1] - 0xa1 - else: - return -1 diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/rich/layout.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/rich/layout.py deleted file mode 100644 index 22a4c54786d753c4600e3a969a95c02883e50e3e..0000000000000000000000000000000000000000 --- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/rich/layout.py +++ /dev/null @@ -1,444 +0,0 @@ -from abc import ABC, abstractmethod -from itertools import islice -from operator import itemgetter -from threading import RLock -from typing import ( - TYPE_CHECKING, - Dict, - Iterable, - List, - NamedTuple, - Optional, - Sequence, - Tuple, - Union, -) - -from ._ratio import ratio_resolve -from .align import Align -from .console import Console, ConsoleOptions, RenderableType, RenderResult -from .highlighter import ReprHighlighter -from .panel import Panel -from .pretty import Pretty -from .repr import rich_repr, Result -from .region import Region -from .segment import Segment -from .style import StyleType - -if TYPE_CHECKING: - from pip._vendor.rich.tree import Tree - - -class LayoutRender(NamedTuple): - """An individual layout render.""" - - region: Region - render: List[List[Segment]] - - -RegionMap = Dict["Layout", Region] -RenderMap = Dict["Layout", LayoutRender] - - -class LayoutError(Exception): - """Layout related error.""" - - -class NoSplitter(LayoutError): - """Requested splitter does not exist.""" - - -class _Placeholder: - """An internal renderable used as a Layout placeholder.""" - - highlighter = ReprHighlighter() - - def __init__(self, layout: "Layout", style: StyleType = "") -> None: - self.layout = layout - self.style = style - - def __rich_console__( - self, console: Console, options: ConsoleOptions - ) -> RenderResult: - width = options.max_width - height = options.height or options.size.height - layout = self.layout - title = ( - f"{layout.name!r} ({width} x {height})" - if layout.name - else f"({width} x {height})" - ) - yield Panel( - Align.center(Pretty(layout), vertical="middle"), - style=self.style, - title=self.highlighter(title), - border_style="blue", - ) - - -class Splitter(ABC): - """Base class for a splitter.""" - - name: str = "" - - @abstractmethod - def get_tree_icon(self) -> str: - """Get the icon (emoji) used in layout.tree""" - - @abstractmethod - def divide( - self, children: Sequence["Layout"], region: Region - ) -> Iterable[Tuple["Layout", Region]]: - """Divide a region amongst several child layouts. - - Args: - children (Sequence(Layout)): A number of child layouts. - region (Region): A rectangular region to divide. 
- """ - - -class RowSplitter(Splitter): - """Split a layout region in to rows.""" - - name = "row" - - def get_tree_icon(self) -> str: - return "[layout.tree.row]⬌" - - def divide( - self, children: Sequence["Layout"], region: Region - ) -> Iterable[Tuple["Layout", Region]]: - x, y, width, height = region - render_widths = ratio_resolve(width, children) - offset = 0 - _Region = Region - for child, child_width in zip(children, render_widths): - yield child, _Region(x + offset, y, child_width, height) - offset += child_width - - -class ColumnSplitter(Splitter): - """Split a layout region in to columns.""" - - name = "column" - - def get_tree_icon(self) -> str: - return "[layout.tree.column]⬍" - - def divide( - self, children: Sequence["Layout"], region: Region - ) -> Iterable[Tuple["Layout", Region]]: - x, y, width, height = region - render_heights = ratio_resolve(height, children) - offset = 0 - _Region = Region - for child, child_height in zip(children, render_heights): - yield child, _Region(x, y + offset, width, child_height) - offset += child_height - - -@rich_repr -class Layout: - """A renderable to divide a fixed height in to rows or columns. - - Args: - renderable (RenderableType, optional): Renderable content, or None for placeholder. Defaults to None. - name (str, optional): Optional identifier for Layout. Defaults to None. - size (int, optional): Optional fixed size of layout. Defaults to None. - minimum_size (int, optional): Minimum size of layout. Defaults to 1. - ratio (int, optional): Optional ratio for flexible layout. Defaults to 1. - visible (bool, optional): Visibility of layout. Defaults to True. - """ - - splitters = {"row": RowSplitter, "column": ColumnSplitter} - - def __init__( - self, - renderable: Optional[RenderableType] = None, - *, - name: Optional[str] = None, - size: Optional[int] = None, - minimum_size: int = 1, - ratio: int = 1, - visible: bool = True, - height: Optional[int] = None, - ) -> None: - self._renderable = renderable or _Placeholder(self) - self.size = size - self.minimum_size = minimum_size - self.ratio = ratio - self.name = name - self.visible = visible - self.height = height - self.splitter: Splitter = self.splitters["column"]() - self._children: List[Layout] = [] - self._render_map: RenderMap = {} - self._lock = RLock() - - def __rich_repr__(self) -> Result: - yield "name", self.name, None - yield "size", self.size, None - yield "minimum_size", self.minimum_size, 1 - yield "ratio", self.ratio, 1 - - @property - def renderable(self) -> RenderableType: - """Layout renderable.""" - return self if self._children else self._renderable - - @property - def children(self) -> List["Layout"]: - """Gets (visible) layout children.""" - return [child for child in self._children if child.visible] - - @property - def map(self) -> RenderMap: - """Get a map of the last render.""" - return self._render_map - - def get(self, name: str) -> Optional["Layout"]: - """Get a named layout, or None if it doesn't exist. - - Args: - name (str): Name of layout. - - Returns: - Optional[Layout]: Layout instance or None if no layout was found. 
- """ - if self.name == name: - return self - else: - for child in self._children: - named_layout = child.get(name) - if named_layout is not None: - return named_layout - return None - - def __getitem__(self, name: str) -> "Layout": - layout = self.get(name) - if layout is None: - raise KeyError(f"No layout with name {name!r}") - return layout - - @property - def tree(self) -> "Tree": - """Get a tree renderable to show layout structure.""" - from pip._vendor.rich.styled import Styled - from pip._vendor.rich.table import Table - from pip._vendor.rich.tree import Tree - - def summary(layout: "Layout") -> Table: - - icon = layout.splitter.get_tree_icon() - - table = Table.grid(padding=(0, 1, 0, 0)) - - text: RenderableType = ( - Pretty(layout) if layout.visible else Styled(Pretty(layout), "dim") - ) - table.add_row(icon, text) - _summary = table - return _summary - - layout = self - tree = Tree( - summary(layout), - guide_style=f"layout.tree.{layout.splitter.name}", - highlight=True, - ) - - def recurse(tree: "Tree", layout: "Layout") -> None: - for child in layout._children: - recurse( - tree.add( - summary(child), - guide_style=f"layout.tree.{child.splitter.name}", - ), - child, - ) - - recurse(tree, self) - return tree - - def split( - self, - *layouts: Union["Layout", RenderableType], - splitter: Union[Splitter, str] = "column", - ) -> None: - """Split the layout in to multiple sub-layouts. - - Args: - *layouts (Layout): Positional arguments should be (sub) Layout instances. - splitter (Union[Splitter, str]): Splitter instance or name of splitter. - """ - _layouts = [ - layout if isinstance(layout, Layout) else Layout(layout) - for layout in layouts - ] - try: - self.splitter = ( - splitter - if isinstance(splitter, Splitter) - else self.splitters[splitter]() - ) - except KeyError: - raise NoSplitter(f"No splitter called {splitter!r}") - self._children[:] = _layouts - - def add_split(self, *layouts: Union["Layout", RenderableType]) -> None: - """Add a new layout(s) to existing split. - - Args: - *layouts (Union[Layout, RenderableType]): Positional arguments should be renderables or (sub) Layout instances. - - """ - _layouts = ( - layout if isinstance(layout, Layout) else Layout(layout) - for layout in layouts - ) - self._children.extend(_layouts) - - def split_row(self, *layouts: Union["Layout", RenderableType]) -> None: - """Split the layout in tow a row (Layouts side by side). - - Args: - *layouts (Layout): Positional arguments should be (sub) Layout instances. - """ - self.split(*layouts, splitter="row") - - def split_column(self, *layouts: Union["Layout", RenderableType]) -> None: - """Split the layout in to a column (layouts stacked on top of each other). - - Args: - *layouts (Layout): Positional arguments should be (sub) Layout instances. - """ - self.split(*layouts, splitter="column") - - def unsplit(self) -> None: - """Reset splits to initial state.""" - del self._children[:] - - def update(self, renderable: RenderableType) -> None: - """Update renderable. - - Args: - renderable (RenderableType): New renderable object. - """ - with self._lock: - self._renderable = renderable - - def refresh_screen(self, console: "Console", layout_name: str) -> None: - """Refresh a sub-layout. - - Args: - console (Console): Console instance where Layout is to be rendered. - layout_name (str): Name of layout. 
- """ - with self._lock: - layout = self[layout_name] - region, _lines = self._render_map[layout] - (x, y, width, height) = region - lines = console.render_lines( - layout, console.options.update_dimensions(width, height) - ) - self._render_map[layout] = LayoutRender(region, lines) - console.update_screen_lines(lines, x, y) - - def _make_region_map(self, width: int, height: int) -> RegionMap: - """Create a dict that maps layout on to Region.""" - stack: List[Tuple[Layout, Region]] = [(self, Region(0, 0, width, height))] - push = stack.append - pop = stack.pop - layout_regions: List[Tuple[Layout, Region]] = [] - append_layout_region = layout_regions.append - while stack: - append_layout_region(pop()) - layout, region = layout_regions[-1] - children = layout.children - if children: - for child_and_region in layout.splitter.divide(children, region): - push(child_and_region) - - region_map = { - layout: region - for layout, region in sorted(layout_regions, key=itemgetter(1)) - } - return region_map - - def render(self, console: Console, options: ConsoleOptions) -> RenderMap: - """Render the sub_layouts. - - Args: - console (Console): Console instance. - options (ConsoleOptions): Console options. - - Returns: - RenderMap: A dict that maps Layout on to a tuple of Region, lines - """ - render_width = options.max_width - render_height = options.height or console.height - region_map = self._make_region_map(render_width, render_height) - layout_regions = [ - (layout, region) - for layout, region in region_map.items() - if not layout.children - ] - render_map: Dict["Layout", "LayoutRender"] = {} - render_lines = console.render_lines - update_dimensions = options.update_dimensions - - for layout, region in layout_regions: - lines = render_lines( - layout.renderable, update_dimensions(region.width, region.height) - ) - render_map[layout] = LayoutRender(region, lines) - return render_map - - def __rich_console__( - self, console: Console, options: ConsoleOptions - ) -> RenderResult: - with self._lock: - width = options.max_width or console.width - height = options.height or console.height - render_map = self.render(console, options.update_dimensions(width, height)) - self._render_map = render_map - layout_lines: List[List[Segment]] = [[] for _ in range(height)] - _islice = islice - for (region, lines) in render_map.values(): - _x, y, _layout_width, layout_height = region - for row, line in zip( - _islice(layout_lines, y, y + layout_height), lines - ): - row.extend(line) - - new_line = Segment.line() - for layout_row in layout_lines: - yield from layout_row - yield new_line - - -if __name__ == "__main__": - from pip._vendor.rich.console import Console - - console = Console() - layout = Layout() - - layout.split_column( - Layout(name="header", size=3), - Layout(ratio=1, name="main"), - Layout(size=10, name="footer"), - ) - - layout["main"].split_row(Layout(name="side"), Layout(name="body", ratio=2)) - - layout["body"].split_row(Layout(name="content", ratio=2), Layout(name="s2")) - - layout["s2"].split_column( - Layout(name="top"), Layout(name="middle"), Layout(name="bottom") - ) - - layout["side"].split_column(Layout(layout.tree, name="left1"), Layout(name="left2")) - - layout["content"].update("foo") - - console.print(layout) diff --git a/spaces/aliabd/SummerTime/model/dialogue/hmnet_model.py b/spaces/aliabd/SummerTime/model/dialogue/hmnet_model.py deleted file mode 100644 index 54385d7cd14c723ee99aa7282ee0d6c30802f2eb..0000000000000000000000000000000000000000 --- 
a/spaces/aliabd/SummerTime/model/dialogue/hmnet_model.py +++ /dev/null @@ -1,483 +0,0 @@ -from model.base_model import SummModel -import argparse -import os -import torch -import gzip -import json -from model.third_party.HMNet.Models.Trainers.HMNetTrainer import HMNetTrainer -from model.third_party.HMNet.Utils.Arguments import Arguments - -import spacy - -nlp = spacy.load("en_core_web_sm", disable=["parser"]) -# tagger = nlp.get_pipe('tagger') -# ner = nlp.get_pipe('ner') -# POS = {w: i for i, w in enumerate([''] + list(tagger.labels))} -# ENT = {w: i for i, w in enumerate([''] + list(ner.move_names))} -# These two dicts are adapted from SpaCy 2.3.1, since HMNet's embedding for POS and ENT is fixed -POS = { - "": 0, - "$": 1, - "''": 2, - ",": 3, - "-LRB-": 4, - "-RRB-": 5, - ".": 6, - ":": 7, - "ADD": 8, - "AFX": 9, - "CC": 10, - "CD": 11, - "DT": 12, - "EX": 13, - "FW": 14, - "HYPH": 15, - "IN": 16, - "JJ": 17, - "JJR": 18, - "JJS": 19, - "LS": 20, - "MD": 21, - "NFP": 22, - "NN": 23, - "NNP": 24, - "NNPS": 25, - "NNS": 26, - "PDT": 27, - "POS": 28, - "PRP": 29, - "PRP$": 30, - "RB": 31, - "RBR": 32, - "RBS": 33, - "RP": 34, - "SYM": 35, - "TO": 36, - "UH": 37, - "VB": 38, - "VBD": 39, - "VBG": 40, - "VBN": 41, - "VBP": 42, - "VBZ": 43, - "WDT": 44, - "WP": 45, - "WP$": 46, - "WRB": 47, - "XX": 48, - "_SP": 49, - "``": 50, -} -ENT = { - "": 0, - "B-ORG": 1, - "B-DATE": 2, - "B-PERSON": 3, - "B-GPE": 4, - "B-MONEY": 5, - "B-CARDINAL": 6, - "B-NORP": 7, - "B-PERCENT": 8, - "B-WORK_OF_ART": 9, - "B-LOC": 10, - "B-TIME": 11, - "B-QUANTITY": 12, - "B-FAC": 13, - "B-EVENT": 14, - "B-ORDINAL": 15, - "B-PRODUCT": 16, - "B-LAW": 17, - "B-LANGUAGE": 18, - "I-ORG": 19, - "I-DATE": 20, - "I-PERSON": 21, - "I-GPE": 22, - "I-MONEY": 23, - "I-CARDINAL": 24, - "I-NORP": 25, - "I-PERCENT": 26, - "I-WORK_OF_ART": 27, - "I-LOC": 28, - "I-TIME": 29, - "I-QUANTITY": 30, - "I-FAC": 31, - "I-EVENT": 32, - "I-ORDINAL": 33, - "I-PRODUCT": 34, - "I-LAW": 35, - "I-LANGUAGE": 36, - "L-ORG": 37, - "L-DATE": 38, - "L-PERSON": 39, - "L-GPE": 40, - "L-MONEY": 41, - "L-CARDINAL": 42, - "L-NORP": 43, - "L-PERCENT": 44, - "L-WORK_OF_ART": 45, - "L-LOC": 46, - "L-TIME": 47, - "L-QUANTITY": 48, - "L-FAC": 49, - "L-EVENT": 50, - "L-ORDINAL": 51, - "L-PRODUCT": 52, - "L-LAW": 53, - "L-LANGUAGE": 54, - "U-ORG": 55, - "U-DATE": 56, - "U-PERSON": 57, - "U-GPE": 58, - "U-MONEY": 59, - "U-CARDINAL": 60, - "U-NORP": 61, - "U-PERCENT": 62, - "U-WORK_OF_ART": 63, - "U-LOC": 64, - "U-TIME": 65, - "U-QUANTITY": 66, - "U-FAC": 67, - "U-EVENT": 68, - "U-ORDINAL": 69, - "U-PRODUCT": 70, - "U-LAW": 71, - "U-LANGUAGE": 72, - "O": 73, -} - - -class HMNetModel(SummModel): - # static variables - model_name = "HMNET" - is_extractive = False - is_neural = True - is_dialogue_based = True - - def __init__( - self, - min_gen_length: int = 10, - max_gen_length: int = 300, - beam_width: int = 6, - **kwargs, - ): - """ - Create a summarization model with HMNet backbone. In the default setting, the inference speed will be - 10s/sample (on one GPU), however, if one can tune these three parameters properly, e.g. min_gen_length=10, - max_gen_length=100, and beam_width=2, the inference speed will increase to 2s/sample (on one GPU). - - Args: - min_gen_length (int): minimum generation length of the decoder - max_gen_length (int): maximum generation length of the decoder - beam_width (int): width of the beam when doing beam search in the decoding process - kwargs: the other valid parameters. 
The valid parameters can be found in - model/dialogue/hmnet/config/dialogue.conf . You can use either lower case or upper case for parameter - name. The valid parameter name is one of the following args, however, we do not encourage you to modify - them, since some unexpected, untested errors might be triggered: - ['MODEL', 'TASK', 'CRITERION', 'SEED', 'MAX_NUM_EPOCHS', 'EVAL_PER_UPDATE_NUM' - , 'UPDATES_PER_EPOCH', 'OPTIMIZER', 'START_LEARNING_RATE', 'LR_SCHEDULER', 'WARMUP_STEPS', - 'WARMUP_INIT_LR', 'WARMUP_END_LR', 'GRADIENT_ACCUMULATE_STEP', 'GRAD_CLIPPING', 'USE_REL_DATA_PATH', - 'TRAIN_FILE', 'DEV_FILE', 'TEST_FILE', 'ROLE_DICT_FILE', 'MINI_BATCH', 'MAX_PADDING_RATIO', - 'BATCH_READ_AHEAD', 'DOC_SHUFFLE_BUF_SIZE', 'SAMPLE_SHUFFLE_BUFFER_SIZE', 'BATCH_SHUFFLE_BUFFER_SIZE', - 'MAX_TRANSCRIPT_WORD', 'MAX_SENT_LEN', 'MAX_SENT_NUM', 'DROPOUT', 'VOCAB_DIM', 'ROLE_SIZE', 'ROLE_DIM', - 'POS_DIM', 'ENT_DIM', 'USE_ROLE', 'USE_POSENT', 'USE_BOS_TOKEN', 'USE_EOS_TOKEN', - 'TRANSFORMER_EMBED_DROPOUT', 'TRANSFORMER_RESIDUAL_DROPOUT', 'TRANSFORMER_ATTENTION_DROPOUT', - 'TRANSFORMER_LAYER', 'TRANSFORMER_HEAD', 'TRANSFORMER_POS_DISCOUNT', 'PRE_TOKENIZER', - 'PRE_TOKENIZER_PATH', 'PYLEARN_MODEL', 'EXTRA_IDS', 'BEAM_WIDTH', 'EVAL_TOKENIZED', 'EVAL_LOWERCASE', - 'MAX_GEN_LENGTH', 'MIN_GEN_LENGTH', 'NO_REPEAT_NGRAM_SIZE'] - - Return an instance of HMNet model for dialogue summarization. - """ - super(HMNetModel, self).__init__() - self.root_path = self._get_root() - - # we leave the most influential params with prompt and the others as hidden kwargs - kwargs["MIN_GEN_LENGTH"] = min_gen_length - kwargs["MAX_GEN_LENGTH"] = max_gen_length - kwargs["BEAM_WIDTH"] = beam_width - self.opt = self._parse_args(kwargs) - self.model = HMNetTrainer(self.opt) - - def _get_root(self): - root_path = os.getcwd() - while "model" not in os.listdir(root_path): - root_path = os.path.dirname(root_path) - root_path = os.path.join(root_path, "model/dialogue") - return root_path - - def _parse_args(self, kwargs): - parser = argparse.ArgumentParser( - description="HMNet: Pretrain or fine-tune models for HMNet model." - ) - parser.add_argument( - "--command", default="evaluate", help="Command: train/evaluate" - ) - parser.add_argument( - "--conf_file", - default=os.path.join(self.root_path, "hmnet/config/dialogue.conf"), - help="Path to the BigLearn conf file.", - ) - parser.add_argument( - "--PYLEARN_MODEL", help="Overrides this option from the conf file." - ) - parser.add_argument( - "--master_port", help="Overrides this option default", default=None - ) - parser.add_argument("--cluster", help="local, philly or aml", default="local") - parser.add_argument( - "--dist_init_path", help="Distributed init path for AML", default="./tmp" - ) - parser.add_argument( - "--fp16", - action="store_true", - help="Whether to use 16-bit float precision instead of 32-bit", - ) - parser.add_argument( - "--fp16_opt_level", - type=str, - default="O1", - help="For fp16: Apex AMP optimization level selected in ['O0', 'O1', 'O2', and 'O3']." 
- "See details at https://nvidia.github.io/apex/amp.html", - ) - parser.add_argument("--no_cuda", action="store_true", help="Disable cuda.") - parser.add_argument( - "--config_overrides", - help="Override parameters on config, VAR=val;VAR=val;...", - ) - - cmdline_args = parser.parse_args() - command = cmdline_args.command - conf_file = cmdline_args.conf_file - conf_args = Arguments(conf_file) - opt = conf_args.readArguments() - - if cmdline_args.config_overrides: - for config_override in cmdline_args.config_overrides.split(";"): - config_override = config_override.strip() - if config_override: - var_val = config_override.split("=") - assert ( - len(var_val) == 2 - ), f"Config override '{var_val}' does not have the form 'VAR=val'" - conf_args.add_opt(opt, var_val[0], var_val[1], force_override=True) - - opt["cuda"] = torch.cuda.is_available() and not cmdline_args.no_cuda - opt["confFile"] = conf_file - if "datadir" not in opt: - opt["datadir"] = os.path.dirname( - conf_file - ) # conf_file specifies where the data folder is - opt["basename"] = os.path.basename( - conf_file - ) # conf_file specifies where the name of save folder is - opt["command"] = command - - # combine cmdline_args into opt dictionary - for key, val in cmdline_args.__dict__.items(): - # if val is not None and key not in ['command', 'conf_file']: - if val is not None: - opt[key] = val - - # combine kwargs into opt dictionary (we allow lower case) - for key, val in kwargs.items(): - valid_keys = [x for x in opt.keys() if x.upper() == x] - if key.upper() not in valid_keys: - print("WARNING: {} is not a valid key in HMNet.".format(key)) - print("The valid keys are:", valid_keys) - continue - if val is not None: - opt[key.upper()] = val - - return opt - - def summarize(self, corpus, queries=None): - print(f"HMNet model: processing document of {corpus.__len__()} samples") - # transform the original dataset to "dialogue" input - # we only use test set path for evaluation - data_folder = os.path.join( - os.path.dirname(self.opt["datadir"]), - "ExampleRawData/meeting_summarization/AMI_proprec/test", - ) - - self._create_datafolder(data_folder) - self._preprocess(corpus, data_folder) - - # return self.model.eval() - results = self._evaluate() - - return results - - def _evaluate(self): - if self.opt["rank"] == 0: - self.model.log("-----------------------------------------------") - self.model.log("Evaluating model ... ") - - self.model.set_up_model() - - eval_dataset = "test" - batch_generator_eval = self.model.get_batch_generator(eval_dataset) - predictions = self._eval_batches( - self.model.module, batch_generator_eval, self.model.saveFolder, eval_dataset - ) - - return predictions - - def _eval_batches(self, module, dev_batches, save_folder, label=""): - max_sent_len = int(self.opt["MAX_GEN_LENGTH"]) - - print("Decoding current model ... 
\nSaving folder is {}".format(save_folder)) - print("Each sample will cost about 10 second.") - import time - - start_time = time.time() - predictions = [] # prediction of tokens from model - if not isinstance(module.tokenizer, list): - decoder_tokenizer = module.tokenizer - elif len(module.tokenizer) == 1: - decoder_tokenizer = module.tokenizer[0] - elif len(module.tokenizer) == 2: - decoder_tokenizer = module.tokenizer[1] - else: - assert False, "len(module.tokenizer) > 2" - - with torch.no_grad(): - for j, dev_batch in enumerate(dev_batches): - for b in dev_batch: - if torch.is_tensor(dev_batch[b]): - dev_batch[b] = dev_batch[b].to(self.opt["device"]) - - beam_search_res = module( - dev_batch, beam_search=True, max_sent_len=max_sent_len - ) - pred = [ - [t[0] for t in x] if len(x) > 0 else [[]] for x in beam_search_res - ] - predictions.extend( - [ - [ - self._convert_tokens_to_string(decoder_tokenizer, tt) - for tt in t - ] - for t in pred - ] - ) - - if ( - "DEBUG" in self.opt and j >= 10 - ) or j >= self.model.task.evaluator.eval_batches_num: - # in debug mode (decode first 10 batches) ortherwise decode first self.eval_batches_num bathes - break - - top1_predictions = [x[0] for x in predictions] - - print("Total time for inference:", time.time() - start_time) - return top1_predictions - - def _convert_tokens_to_string(self, tokenizer, tokens): - if "EVAL_TOKENIZED" in self.opt: - tokens = [t for t in tokens if t not in tokenizer.all_special_tokens] - if "EVAL_LOWERCASE" in self.opt: - tokens = [t.lower() for t in tokens] - if "EVAL_TOKENIZED" in self.opt: - return " ".join(tokens) - else: - return tokenizer.decode( - tokenizer.convert_tokens_to_ids(tokens), skip_special_tokens=True - ) - - def _preprocess(self, corpus, test_path): - samples = [] - for i, sample in enumerate(corpus): - new_sample = {"id": i, "meeting": [], "summary": []} - if isinstance(sample, str): - raise RuntimeError( - "Error: the input of HMNet should be dialogues, rather than documents." - ) - - # add all the turns one by one - for turn in sample: - turn = [x.strip() for x in turn.split(":")] - if len(turn) < 2: - continue - tokenized_turn = nlp(turn[1]) - # In case we can't find proper entity in move_names - ent_id = [] - pos_id = [] - for token in tokenized_turn: - ent = ( - token.ent_iob_ + "-" + token.ent_type_ - if token.ent_iob_ != "O" - else "O" - ) - ent_id.append(ENT[ent] if ent in ENT else ENT[""]) - - pos = token.tag_ - pos_id.append(POS[pos] if pos in POS else POS[""]) - - new_sample["meeting"].append( - { - "speaker": turn[0], - "role": "", - "utt": { - "word": [str(token) for token in tokenized_turn], - "pos_id": pos_id, - "ent_id": ent_id, - }, - } - ) - new_sample["summary"].append( - "This is a dummy summary. HMNet will filter out the sample w/o summary!" 
- ) - samples.append(new_sample) - # save to the gzip - file_path = os.path.join(test_path, "split_{}.jsonl.gz".format(i)) - with gzip.open(file_path, "wt", encoding="utf-8") as file: - file.write(json.dumps(new_sample)) - - def _clean_datafolder(self, data_folder): - for name in os.listdir(data_folder): - name = os.path.join(data_folder, name) - if ".gz" in name: - os.remove(name) - - def _create_datafolder(self, data_folder): - if os.path.exists(data_folder): - self._clean_datafolder(data_folder) - else: - os.makedirs(data_folder) - with open( - os.path.join(os.path.dirname(data_folder), "test_ami.json"), - "w", - encoding="utf-8", - ) as file: - json.dump( - [ - { - "source": { - "dataset": "../ExampleRawData/meeting_summarization/AMI_proprec/test/" - }, - "task": "meeting", - "name": "ami", - } - ], - file, - ) - - with open( - os.path.join( - os.path.dirname(os.path.dirname(data_folder)), "role_dict_ext.json" - ), - "w", - ) as file: - json.dump({}, file) - - @classmethod - def show_capability(cls) -> None: - basic_description = cls.generate_basic_description() - more_details = ( - "A HMNet model finetuned on CNN-DM dataset for summarization.\n\n" - "Strengths:\n - High performance on dialogue summarization task.\n\n" - "Weaknesses:\n - Not suitable for datasets other than dialogues.\n\n" - "Initialization arguments:\n " - " - `corpus`: Unlabelled corpus of documents.\n" - ) - print(f"{basic_description} \n {'#' * 20} \n {more_details}") diff --git a/spaces/allknowingroger/AI.Dashboard.Gradio.Streamlit.HTML5/style.css b/spaces/allknowingroger/AI.Dashboard.Gradio.Streamlit.HTML5/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/AI.Dashboard.Gradio.Streamlit.HTML5/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/allknowingroger/Image-Models-Test105/app.py b/spaces/allknowingroger/Image-Models-Test105/app.py deleted file mode 100644 index 4905d7fe7ff1248f40b5148be030c5e3dada1d21..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/Image-Models-Test105/app.py +++ /dev/null @@ -1,144 +0,0 @@ -import gradio as gr -# import os -# import sys -# from pathlib import Path -import time - -models =[ - "tobin2003/stuart-little", - "vcolamatteo/pokemon-text_to_image_lora_1", - "acondess/lineartv1.1", - "sunyijia97/lora-trained-xl-colab-doll-v2", - "Yntec/DeliShaper", - "Arshojojills/cheems-chm-memedog", - "rishabh063/lora-trained-xl-car", - "newsyctw/res_AFRICAN_PYGMY_GOOSE_sdv2-1", - "livingbox/incremental-test-02", -] - - -model_functions = {} -model_idx = 1 -for model_path in models: - try: - model_functions[model_idx] = gr.Interface.load(f"models/{model_path}", live=False, preprocess=True, postprocess=False) - except Exception as error: - def the_fn(txt): - return None - model_functions[model_idx] = gr.Interface(fn=the_fn, inputs=["text"], outputs=["image"]) - model_idx+=1 - - -def send_it_idx(idx): - def send_it_fn(prompt): - output = (model_functions.get(str(idx)) or model_functions.get(str(1)))(prompt) - return output - return send_it_fn - -def 
get_prompts(prompt_text): - return prompt_text - -def clear_it(val): - if int(val) != 0: - val = 0 - else: - val = 0 - pass - return val - -def all_task_end(cnt,t_stamp): - to = t_stamp + 60 - et = time.time() - if et > to and t_stamp != 0: - d = gr.update(value=0) - tog = gr.update(value=1) - #print(f'to: {to} et: {et}') - else: - if cnt != 0: - d = gr.update(value=et) - else: - d = gr.update(value=0) - tog = gr.update(value=0) - #print (f'passing: to: {to} et: {et}') - pass - return d, tog - -def all_task_start(): - print("\n\n\n\n\n\n\n") - t = time.gmtime() - t_stamp = time.time() - current_time = time.strftime("%H:%M:%S", t) - return gr.update(value=t_stamp), gr.update(value=t_stamp), gr.update(value=0) - -def clear_fn(): - nn = len(models) - return tuple([None, *[None for _ in range(nn)]]) - - - -with gr.Blocks(title="SD Models") as my_interface: - with gr.Column(scale=12): - # with gr.Row(): - # gr.Markdown("""- Primary prompt: 你想画的内容(英文单词,如 a cat, 加英文逗号效果更好;点 Improve 按钮进行完善)\n- Real prompt: 完善后的提示词,出现后再点右边的 Run 按钮开始运行""") - with gr.Row(): - with gr.Row(scale=6): - primary_prompt=gr.Textbox(label="Prompt", value="") - # real_prompt=gr.Textbox(label="Real prompt") - with gr.Row(scale=6): - # improve_prompts_btn=gr.Button("Improve") - with gr.Row(): - run=gr.Button("Run",variant="primary") - clear_btn=gr.Button("Clear") - with gr.Row(): - sd_outputs = {} - model_idx = 1 - for model_path in models: - with gr.Column(scale=3, min_width=320): - with gr.Box(): - sd_outputs[model_idx] = gr.Image(label=model_path) - pass - model_idx += 1 - pass - pass - - with gr.Row(visible=False): - start_box=gr.Number(interactive=False) - end_box=gr.Number(interactive=False) - tog_box=gr.Textbox(value=0,interactive=False) - - start_box.change( - all_task_end, - [start_box, end_box], - [start_box, tog_box], - every=1, - show_progress=False) - - primary_prompt.submit(all_task_start, None, [start_box, end_box, tog_box]) - run.click(all_task_start, None, [start_box, end_box, tog_box]) - runs_dict = {} - model_idx = 1 - for model_path in models: - runs_dict[model_idx] = run.click(model_functions[model_idx], inputs=[primary_prompt], outputs=[sd_outputs[model_idx]]) - model_idx += 1 - pass - pass - - # improve_prompts_btn_clicked=improve_prompts_btn.click( - # get_prompts, - # inputs=[primary_prompt], - # outputs=[primary_prompt], - # cancels=list(runs_dict.values())) - clear_btn.click( - clear_fn, - None, - [primary_prompt, *list(sd_outputs.values())], - cancels=[*list(runs_dict.values())]) - tog_box.change( - clear_it, - tog_box, - tog_box, - cancels=[*list(runs_dict.values())]) - -my_interface.queue(concurrency_count=600, status_update_rate=1) -my_interface.launch(inline=True, show_api=False) - \ No newline at end of file diff --git a/spaces/allknowingroger/Image-Models-Test107/README.md b/spaces/allknowingroger/Image-Models-Test107/README.md deleted file mode 100644 index 98533d322f9b146b96635822b2ea5b2c3aef1801..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/Image-Models-Test107/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: More Image Models -emoji: 😻 -colorFrom: red -colorTo: gray -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -duplicated_from: allknowingroger/Image-Models-Test104 ---- - - \ No newline at end of file diff --git a/spaces/allknowingroger/Image-Models-Test64/app.py b/spaces/allknowingroger/Image-Models-Test64/app.py deleted file mode 100644 index 2e5f8471a6c3b46c3c657bda2336099bb0f8958d..0000000000000000000000000000000000000000 --- 
a/spaces/allknowingroger/Image-Models-Test64/app.py +++ /dev/null @@ -1,144 +0,0 @@ -import gradio as gr -# import os -# import sys -# from pathlib import Path -import time - -models =[ - "kresenty77/profile1", - "Falah/fashion-model", - "gokulk1804/my-pet-cat", - "Ardra05/my-pet-dog", - "Yntec/CartoonStyleClassic", - "ajulkjose/my-thanos", - "Yacong/lora-trained-xl", - "Muhammadreza/mann-e-comics-revised-2", - "fjcorrales/diego_sdxl", -] - - -model_functions = {} -model_idx = 1 -for model_path in models: - try: - model_functions[model_idx] = gr.Interface.load(f"models/{model_path}", live=False, preprocess=True, postprocess=False) - except Exception as error: - def the_fn(txt): - return None - model_functions[model_idx] = gr.Interface(fn=the_fn, inputs=["text"], outputs=["image"]) - model_idx+=1 - - -def send_it_idx(idx): - def send_it_fn(prompt): - output = (model_functions.get(str(idx)) or model_functions.get(str(1)))(prompt) - return output - return send_it_fn - -def get_prompts(prompt_text): - return prompt_text - -def clear_it(val): - if int(val) != 0: - val = 0 - else: - val = 0 - pass - return val - -def all_task_end(cnt,t_stamp): - to = t_stamp + 60 - et = time.time() - if et > to and t_stamp != 0: - d = gr.update(value=0) - tog = gr.update(value=1) - #print(f'to: {to} et: {et}') - else: - if cnt != 0: - d = gr.update(value=et) - else: - d = gr.update(value=0) - tog = gr.update(value=0) - #print (f'passing: to: {to} et: {et}') - pass - return d, tog - -def all_task_start(): - print("\n\n\n\n\n\n\n") - t = time.gmtime() - t_stamp = time.time() - current_time = time.strftime("%H:%M:%S", t) - return gr.update(value=t_stamp), gr.update(value=t_stamp), gr.update(value=0) - -def clear_fn(): - nn = len(models) - return tuple([None, *[None for _ in range(nn)]]) - - - -with gr.Blocks(title="SD Models") as my_interface: - with gr.Column(scale=12): - # with gr.Row(): - # gr.Markdown("""- Primary prompt: 你想画的内容(英文单词,如 a cat, 加英文逗号效果更好;点 Improve 按钮进行完善)\n- Real prompt: 完善后的提示词,出现后再点右边的 Run 按钮开始运行""") - with gr.Row(): - with gr.Row(scale=6): - primary_prompt=gr.Textbox(label="Prompt", value="") - # real_prompt=gr.Textbox(label="Real prompt") - with gr.Row(scale=6): - # improve_prompts_btn=gr.Button("Improve") - with gr.Row(): - run=gr.Button("Run",variant="primary") - clear_btn=gr.Button("Clear") - with gr.Row(): - sd_outputs = {} - model_idx = 1 - for model_path in models: - with gr.Column(scale=3, min_width=320): - with gr.Box(): - sd_outputs[model_idx] = gr.Image(label=model_path) - pass - model_idx += 1 - pass - pass - - with gr.Row(visible=False): - start_box=gr.Number(interactive=False) - end_box=gr.Number(interactive=False) - tog_box=gr.Textbox(value=0,interactive=False) - - start_box.change( - all_task_end, - [start_box, end_box], - [start_box, tog_box], - every=1, - show_progress=False) - - primary_prompt.submit(all_task_start, None, [start_box, end_box, tog_box]) - run.click(all_task_start, None, [start_box, end_box, tog_box]) - runs_dict = {} - model_idx = 1 - for model_path in models: - runs_dict[model_idx] = run.click(model_functions[model_idx], inputs=[primary_prompt], outputs=[sd_outputs[model_idx]]) - model_idx += 1 - pass - pass - - # improve_prompts_btn_clicked=improve_prompts_btn.click( - # get_prompts, - # inputs=[primary_prompt], - # outputs=[primary_prompt], - # cancels=list(runs_dict.values())) - clear_btn.click( - clear_fn, - None, - [primary_prompt, *list(sd_outputs.values())], - cancels=[*list(runs_dict.values())]) - tog_box.change( - clear_it, - tog_box, - tog_box, - 
cancels=[*list(runs_dict.values())]) - -my_interface.queue(concurrency_count=600, status_update_rate=1) -my_interface.launch(inline=True, show_api=False) - \ No newline at end of file diff --git a/spaces/allknowingroger/Image-Models-Test82/README.md b/spaces/allknowingroger/Image-Models-Test82/README.md deleted file mode 100644 index 67bf3f174dddb19e628fa2009398e2ba6c9f29be..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/Image-Models-Test82/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: More Image Models -emoji: 😻 -colorFrom: red -colorTo: gray -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: true -duplicated_from: allknowingroger/Image-Models-Test81 ---- - - \ No newline at end of file diff --git a/spaces/amankishore/sjc/sd1/ldm/data/__init__.py b/spaces/amankishore/sjc/sd1/ldm/data/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/andryMLOPS/ASTA-GPT-3.8_web_ui/g4f/__init__.py b/spaces/andryMLOPS/ASTA-GPT-3.8_web_ui/g4f/__init__.py deleted file mode 100644 index a0b4bac6aa4de9c0449095a3874c2cb9716169d7..0000000000000000000000000000000000000000 --- a/spaces/andryMLOPS/ASTA-GPT-3.8_web_ui/g4f/__init__.py +++ /dev/null @@ -1,39 +0,0 @@ -import sys -from . import Provider -from g4f.models import Model, ModelUtils - - -class ChatCompletion: - @staticmethod - def create(model: Model.model or str, messages: list, provider: Provider.Provider = None, stream: bool = False, auth: str = False, **kwargs): - kwargs['auth'] = auth - - if provider and provider.needs_auth and not auth: - print( - f'ValueError: {provider.__name__} requires authentication (use auth="cookie or token or jwt ..." param)', file=sys.stderr) - sys.exit(1) - - try: - if isinstance(model, str): - try: - model = ModelUtils.convert[model] - except KeyError: - raise Exception(f'The model: {model} does not exist') - - engine = model.best_provider if not provider else provider - - if not engine.supports_stream and stream == True: - print( - f"ValueError: {engine.__name__} does not support 'stream' argument", file=sys.stderr) - sys.exit(1) - - print(f'Using {engine.__name__} provider') - - return (engine._create_completion(model.name, messages, stream, **kwargs) - if stream else ''.join(engine._create_completion(model.name, messages, stream, **kwargs))) - except TypeError as e: - print(e) - arg: str = str(e).split("'")[1] - print( - f"ValueError: {engine.__name__} does not support '{arg}' argument", file=sys.stderr) - sys.exit(1) diff --git a/spaces/aodianyun/panoptic-segment-anything/GroundingDINO/groundingdino/models/GroundingDINO/ms_deform_attn.py b/spaces/aodianyun/panoptic-segment-anything/GroundingDINO/groundingdino/models/GroundingDINO/ms_deform_attn.py deleted file mode 100644 index 489d501bef364020212306d81e9b85c8daa27491..0000000000000000000000000000000000000000 --- a/spaces/aodianyun/panoptic-segment-anything/GroundingDINO/groundingdino/models/GroundingDINO/ms_deform_attn.py +++ /dev/null @@ -1,413 +0,0 @@ -# ------------------------------------------------------------------------ -# Grounding DINO -# url: https://github.com/IDEA-Research/GroundingDINO -# Copyright (c) 2023 IDEA. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------ -# Deformable DETR -# Copyright (c) 2020 SenseTime. All Rights Reserved. 
-# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------------------------------ -# Modified from: -# https://github.com/fundamentalvision/Deformable-DETR/blob/main/models/ops/functions/ms_deform_attn_func.py -# https://github.com/fundamentalvision/Deformable-DETR/blob/main/models/ops/modules/ms_deform_attn.py -# https://github.com/open-mmlab/mmcv/blob/master/mmcv/ops/multi_scale_deform_attn.py -# ------------------------------------------------------------------------------------------------ - -import math -import warnings -from typing import Optional - -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch.autograd import Function -from torch.autograd.function import once_differentiable -from torch.nn.init import constant_, xavier_uniform_ - -try: - from groundingdino import _C -except: - warnings.warn("Failed to load custom C++ ops. Running on CPU mode Only!") - - -# helpers -def _is_power_of_2(n): - if (not isinstance(n, int)) or (n < 0): - raise ValueError("invalid input for _is_power_of_2: {} (type: {})".format(n, type(n))) - return (n & (n - 1) == 0) and n != 0 - - -class MultiScaleDeformableAttnFunction(Function): - @staticmethod - def forward( - ctx, - value, - value_spatial_shapes, - value_level_start_index, - sampling_locations, - attention_weights, - im2col_step, - ): - ctx.im2col_step = im2col_step - output = _C.ms_deform_attn_forward( - value, - value_spatial_shapes, - value_level_start_index, - sampling_locations, - attention_weights, - ctx.im2col_step, - ) - ctx.save_for_backward( - value, - value_spatial_shapes, - value_level_start_index, - sampling_locations, - attention_weights, - ) - return output - - @staticmethod - @once_differentiable - def backward(ctx, grad_output): - ( - value, - value_spatial_shapes, - value_level_start_index, - sampling_locations, - attention_weights, - ) = ctx.saved_tensors - grad_value, grad_sampling_loc, grad_attn_weight = _C.ms_deform_attn_backward( - value, - value_spatial_shapes, - value_level_start_index, - sampling_locations, - attention_weights, - grad_output, - ctx.im2col_step, - ) - - return grad_value, None, None, grad_sampling_loc, grad_attn_weight, None - - -def multi_scale_deformable_attn_pytorch( - value: torch.Tensor, - value_spatial_shapes: torch.Tensor, - sampling_locations: torch.Tensor, - attention_weights: torch.Tensor, -) -> torch.Tensor: - - bs, _, num_heads, embed_dims = value.shape - _, num_queries, num_heads, num_levels, num_points, _ = sampling_locations.shape - value_list = value.split([H_ * W_ for H_, W_ in value_spatial_shapes], dim=1) - sampling_grids = 2 * sampling_locations - 1 - sampling_value_list = [] - for level, (H_, W_) in enumerate(value_spatial_shapes): - # bs, H_*W_, num_heads, embed_dims -> - # bs, H_*W_, num_heads*embed_dims -> - # bs, num_heads*embed_dims, H_*W_ -> - # bs*num_heads, embed_dims, H_, W_ - value_l_ = ( - value_list[level].flatten(2).transpose(1, 2).reshape(bs * num_heads, embed_dims, H_, W_) - ) - # bs, num_queries, num_heads, num_points, 2 -> - # bs, num_heads, num_queries, num_points, 2 -> - # bs*num_heads, num_queries, num_points, 2 - sampling_grid_l_ = sampling_grids[:, :, :, level].transpose(1, 2).flatten(0, 1) - # bs*num_heads, embed_dims, num_queries, num_points - sampling_value_l_ = F.grid_sample( - value_l_, sampling_grid_l_, mode="bilinear", padding_mode="zeros", align_corners=False - ) - sampling_value_list.append(sampling_value_l_) - # (bs, num_queries, 
num_heads, num_levels, num_points) -> - # (bs, num_heads, num_queries, num_levels, num_points) -> - # (bs, num_heads, 1, num_queries, num_levels*num_points) - attention_weights = attention_weights.transpose(1, 2).reshape( - bs * num_heads, 1, num_queries, num_levels * num_points - ) - output = ( - (torch.stack(sampling_value_list, dim=-2).flatten(-2) * attention_weights) - .sum(-1) - .view(bs, num_heads * embed_dims, num_queries) - ) - return output.transpose(1, 2).contiguous() - - -class MultiScaleDeformableAttention(nn.Module): - """Multi-Scale Deformable Attention Module used in Deformable-DETR - - `Deformable DETR: Deformable Transformers for End-to-End Object Detection. - `_. - - Args: - embed_dim (int): The embedding dimension of Attention. Default: 256. - num_heads (int): The number of attention heads. Default: 8. - num_levels (int): The number of feature map used in Attention. Default: 4. - num_points (int): The number of sampling points for each query - in each head. Default: 4. - img2col_steps (int): The step used in image_to_column. Defualt: 64. - dropout (float): Dropout layer used in output. Default: 0.1. - batch_first (bool): if ``True``, then the input and output tensor will be - provided as `(bs, n, embed_dim)`. Default: False. `(n, bs, embed_dim)` - """ - - def __init__( - self, - embed_dim: int = 256, - num_heads: int = 8, - num_levels: int = 4, - num_points: int = 4, - img2col_step: int = 64, - batch_first: bool = False, - ): - super().__init__() - if embed_dim % num_heads != 0: - raise ValueError( - "embed_dim must be divisible by num_heads, but got {} and {}".format( - embed_dim, num_heads - ) - ) - head_dim = embed_dim // num_heads - - self.batch_first = batch_first - - if not _is_power_of_2(head_dim): - warnings.warn( - """ - You'd better set d_model in MSDeformAttn to make sure that - each dim of the attention head a power of 2, which is more efficient. - """ - ) - - self.im2col_step = img2col_step - self.embed_dim = embed_dim - self.num_heads = num_heads - self.num_levels = num_levels - self.num_points = num_points - self.sampling_offsets = nn.Linear(embed_dim, num_heads * num_levels * num_points * 2) - self.attention_weights = nn.Linear(embed_dim, num_heads * num_levels * num_points) - self.value_proj = nn.Linear(embed_dim, embed_dim) - self.output_proj = nn.Linear(embed_dim, embed_dim) - - self.init_weights() - - def _reset_parameters(self): - return self.init_weights() - - def init_weights(self): - """ - Default initialization for Parameters of Module. 
- """ - constant_(self.sampling_offsets.weight.data, 0.0) - thetas = torch.arange(self.num_heads, dtype=torch.float32) * ( - 2.0 * math.pi / self.num_heads - ) - grid_init = torch.stack([thetas.cos(), thetas.sin()], -1) - grid_init = ( - (grid_init / grid_init.abs().max(-1, keepdim=True)[0]) - .view(self.num_heads, 1, 1, 2) - .repeat(1, self.num_levels, self.num_points, 1) - ) - for i in range(self.num_points): - grid_init[:, :, i, :] *= i + 1 - with torch.no_grad(): - self.sampling_offsets.bias = nn.Parameter(grid_init.view(-1)) - constant_(self.attention_weights.weight.data, 0.0) - constant_(self.attention_weights.bias.data, 0.0) - xavier_uniform_(self.value_proj.weight.data) - constant_(self.value_proj.bias.data, 0.0) - xavier_uniform_(self.output_proj.weight.data) - constant_(self.output_proj.bias.data, 0.0) - - def freeze_sampling_offsets(self): - print("Freeze sampling offsets") - self.sampling_offsets.weight.requires_grad = False - self.sampling_offsets.bias.requires_grad = False - - def freeze_attention_weights(self): - print("Freeze attention weights") - self.attention_weights.weight.requires_grad = False - self.attention_weights.bias.requires_grad = False - - def forward( - self, - query: torch.Tensor, - key: Optional[torch.Tensor] = None, - value: Optional[torch.Tensor] = None, - query_pos: Optional[torch.Tensor] = None, - key_padding_mask: Optional[torch.Tensor] = None, - reference_points: Optional[torch.Tensor] = None, - spatial_shapes: Optional[torch.Tensor] = None, - level_start_index: Optional[torch.Tensor] = None, - **kwargs - ) -> torch.Tensor: - - """Forward Function of MultiScaleDeformableAttention - - Args: - query (torch.Tensor): Query embeddings with shape - `(num_query, bs, embed_dim)` - key (torch.Tensor): Key embeddings with shape - `(num_key, bs, embed_dim)` - value (torch.Tensor): Value embeddings with shape - `(num_key, bs, embed_dim)` - query_pos (torch.Tensor): The position embedding for `query`. Default: None. - key_padding_mask (torch.Tensor): ByteTensor for `query`, with shape `(bs, num_key)`, - indicating which elements within `key` to be ignored in attention. - reference_points (torch.Tensor): The normalized reference points - with shape `(bs, num_query, num_levels, 2)`, - all elements is range in [0, 1], top-left (0, 0), - bottom-right (1, 1), including padding are. - or `(N, Length_{query}, num_levels, 4)`, add additional - two dimensions `(h, w)` to form reference boxes. - spatial_shapes (torch.Tensor): Spatial shape of features in different levels. - With shape `(num_levels, 2)`, last dimension represents `(h, w)`. - level_start_index (torch.Tensor): The start index of each level. A tensor with - shape `(num_levels, )` which can be represented as - `[0, h_0 * w_0, h_0 * w_0 + h_1 * w_1, ...]`. 
- - Returns: - torch.Tensor: forward results with shape `(num_query, bs, embed_dim)` - """ - - if value is None: - value = query - - if query_pos is not None: - query = query + query_pos - - if not self.batch_first: - # change to (bs, num_query ,embed_dims) - query = query.permute(1, 0, 2) - value = value.permute(1, 0, 2) - - bs, num_query, _ = query.shape - bs, num_value, _ = value.shape - - assert (spatial_shapes[:, 0] * spatial_shapes[:, 1]).sum() == num_value - - value = self.value_proj(value) - if key_padding_mask is not None: - value = value.masked_fill(key_padding_mask[..., None], float(0)) - value = value.view(bs, num_value, self.num_heads, -1) - sampling_offsets = self.sampling_offsets(query).view( - bs, num_query, self.num_heads, self.num_levels, self.num_points, 2 - ) - attention_weights = self.attention_weights(query).view( - bs, num_query, self.num_heads, self.num_levels * self.num_points - ) - attention_weights = attention_weights.softmax(-1) - attention_weights = attention_weights.view( - bs, - num_query, - self.num_heads, - self.num_levels, - self.num_points, - ) - - # bs, num_query, num_heads, num_levels, num_points, 2 - if reference_points.shape[-1] == 2: - offset_normalizer = torch.stack([spatial_shapes[..., 1], spatial_shapes[..., 0]], -1) - sampling_locations = ( - reference_points[:, :, None, :, None, :] - + sampling_offsets / offset_normalizer[None, None, None, :, None, :] - ) - elif reference_points.shape[-1] == 4: - sampling_locations = ( - reference_points[:, :, None, :, None, :2] - + sampling_offsets - / self.num_points - * reference_points[:, :, None, :, None, 2:] - * 0.5 - ) - else: - raise ValueError( - "Last dim of reference_points must be 2 or 4, but get {} instead.".format( - reference_points.shape[-1] - ) - ) - - if torch.cuda.is_available() and value.is_cuda: - halffloat = False - if value.dtype == torch.float16: - halffloat = True - value = value.float() - sampling_locations = sampling_locations.float() - attention_weights = attention_weights.float() - - output = MultiScaleDeformableAttnFunction.apply( - value, - spatial_shapes, - level_start_index, - sampling_locations, - attention_weights, - self.im2col_step, - ) - - if halffloat: - output = output.half() - else: - output = multi_scale_deformable_attn_pytorch( - value, spatial_shapes, sampling_locations, attention_weights - ) - - output = self.output_proj(output) - - if not self.batch_first: - output = output.permute(1, 0, 2) - - return output - - -def create_dummy_class(klass, dependency, message=""): - """ - When a dependency of a class is not available, create a dummy class which throws ImportError - when used. - - Args: - klass (str): name of the class. - dependency (str): name of the dependency. - message: extra message to print - Returns: - class: a class object - """ - err = "Cannot import '{}', therefore '{}' is not available.".format(dependency, klass) - if message: - err = err + " " + message - - class _DummyMetaClass(type): - # throw error on class attribute access - def __getattr__(_, __): # noqa: B902 - raise ImportError(err) - - class _Dummy(object, metaclass=_DummyMetaClass): - # throw error on constructor - def __init__(self, *args, **kwargs): - raise ImportError(err) - - return _Dummy - - -def create_dummy_func(func, dependency, message=""): - """ - When a dependency of a function is not available, create a dummy function which throws - ImportError when used. - - Args: - func (str): name of the function. - dependency (str or list[str]): name(s) of the dependency. 
- message: extra message to print - Returns: - function: a function object - """ - err = "Cannot import '{}', therefore '{}' is not available.".format(dependency, func) - if message: - err = err + " " + message - - if isinstance(dependency, (list, tuple)): - dependency = ",".join(dependency) - - def _dummy(*args, **kwargs): - raise ImportError(err) - - return _dummy diff --git a/spaces/aodianyun/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/src/rife/rife_new_gen/IFNet_HDv3.py b/spaces/aodianyun/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/src/rife/rife_new_gen/IFNet_HDv3.py deleted file mode 100644 index 2360c9e7d15ad4c73e8bb34112999e3d46aeb8c2..0000000000000000000000000000000000000000 --- a/spaces/aodianyun/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/src/rife/rife_new_gen/IFNet_HDv3.py +++ /dev/null @@ -1,129 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -from ..model.warplayer import warp -# from train_log.refine import * - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - -def conv(in_planes, out_planes, kernel_size=3, stride=1, padding=1, dilation=1): - return nn.Sequential( - nn.Conv2d(in_planes, out_planes, kernel_size=kernel_size, stride=stride, - padding=padding, dilation=dilation, bias=True), - nn.LeakyReLU(0.2, True) - ) - -def conv_bn(in_planes, out_planes, kernel_size=3, stride=1, padding=1, dilation=1): - return nn.Sequential( - nn.Conv2d(in_planes, out_planes, kernel_size=kernel_size, stride=stride, - padding=padding, dilation=dilation, bias=False), - nn.BatchNorm2d(out_planes), - nn.LeakyReLU(0.2, True) - ) - -class ResConv(nn.Module): - def __init__(self, c, dilation=1): - super(ResConv, self).__init__() - self.conv = nn.Conv2d(c, c, 3, 1, dilation, dilation=dilation, groups=1\ -) - self.beta = nn.Parameter(torch.ones((1, c, 1, 1)), requires_grad=True) - self.relu = nn.LeakyReLU(0.2, True) - - def forward(self, x): - return self.relu(self.conv(x) * self.beta + x) - -class IFBlock(nn.Module): - def __init__(self, in_planes, c=64): - super(IFBlock, self).__init__() - self.conv0 = nn.Sequential( - conv(in_planes, c//2, 3, 2, 1), - conv(c//2, c, 3, 2, 1), - ) - self.convblock = nn.Sequential( - ResConv(c), - ResConv(c), - ResConv(c), - ResConv(c), - ResConv(c), - ResConv(c), - ResConv(c), - ResConv(c), - ) - self.lastconv = nn.Sequential( - nn.ConvTranspose2d(c, 4*6, 4, 2, 1), - nn.PixelShuffle(2) - ) - - def forward(self, x, flow=None, scale=1): - x = F.interpolate(x, scale_factor= 1. / scale, mode="bilinear", align_corners=False) - if flow is not None: - flow = F.interpolate(flow, scale_factor= 1. / scale, mode="bilinear", align_corners=False) * 1. 
/ scale - x = torch.cat((x, flow), 1) - feat = self.conv0(x) - feat = self.convblock(feat) - tmp = self.lastconv(feat) - tmp = F.interpolate(tmp, scale_factor=scale, mode="bilinear", align_corners=False) - flow = tmp[:, :4] * scale - mask = tmp[:, 4:5] - return flow, mask - -class IFNet(nn.Module): - def __init__(self): - super(IFNet, self).__init__() - self.block0 = IFBlock(7, c=192) - self.block1 = IFBlock(8+4, c=128) - self.block2 = IFBlock(8+4, c=96) - self.block3 = IFBlock(8+4, c=64) - # self.contextnet = Contextnet() - # self.unet = Unet() - - def forward( self, x, timestep=0.5, scale_list=[8, 4, 2, 1], training=False, fastmode=True, ensemble=False): - if training == False: - channel = x.shape[1] // 2 - img0 = x[:, :channel] - img1 = x[:, channel:] - if not torch.is_tensor(timestep): - timestep = (x[:, :1].clone() * 0 + 1) * timestep - else: - timestep = timestep.repeat(1, 1, img0.shape[2], img0.shape[3]) - flow_list = [] - merged = [] - mask_list = [] - warped_img0 = img0 - warped_img1 = img1 - flow = None - mask = None - loss_cons = 0 - block = [self.block0, self.block1, self.block2, self.block3] - for i in range(4): - if flow is None: - flow, mask = block[i](torch.cat((img0[:, :3], img1[:, :3], timestep), 1), None, scale=scale_list[i]) - if ensemble: - f1, m1 = block[i](torch.cat((img1[:, :3], img0[:, :3], 1-timestep), 1), None, scale=scale_list[i]) - flow = (flow + torch.cat((f1[:, 2:4], f1[:, :2]), 1)) / 2 - mask = (mask + (-m1)) / 2 - else: - f0, m0 = block[i](torch.cat((warped_img0[:, :3], warped_img1[:, :3], timestep, mask), 1), flow, scale=scale_list[i]) - if ensemble: - f1, m1 = block[i](torch.cat((warped_img1[:, :3], warped_img0[:, :3], 1-timestep, -mask), 1), torch.cat((flow[:, 2:4], flow[:, :2]), 1), scale=scale_list[i]) - f0 = (f0 + torch.cat((f1[:, 2:4], f1[:, :2]), 1)) / 2 - m0 = (m0 + (-m1)) / 2 - flow = flow + f0 - mask = mask + m0 - mask_list.append(mask) - flow_list.append(flow) - warped_img0 = warp(img0, flow[:, :2]) - warped_img1 = warp(img1, flow[:, 2:4]) - merged.append((warped_img0, warped_img1)) - mask_list[3] = torch.sigmoid(mask_list[3]) - merged[3] = merged[3][0] * mask_list[3] + merged[3][1] * (1 - mask_list[3]) - if not fastmode: - print('contextnet is removed') - ''' - c0 = self.contextnet(img0, flow[:, :2]) - c1 = self.contextnet(img1, flow[:, 2:4]) - tmp = self.unet(img0, img1, warped_img0, warped_img1, mask, flow, c0, c1) - res = tmp[:, :3] * 2 - 1 - merged[3] = torch.clamp(merged[3] + res, 0, 1) - ''' - return flow_list, mask_list[3], merged diff --git a/spaces/arch-123/bingo/src/components/toaster.tsx b/spaces/arch-123/bingo/src/components/toaster.tsx deleted file mode 100644 index 4d2693460b61307a1d4c127fd01df9bee16e59ff..0000000000000000000000000000000000000000 --- a/spaces/arch-123/bingo/src/components/toaster.tsx +++ /dev/null @@ -1,3 +0,0 @@ -'use client' - -export { Toaster } from 'react-hot-toast' diff --git a/spaces/artificialguybr/video-dubbing/TTS/TTS/encoder/models/resnet.py b/spaces/artificialguybr/video-dubbing/TTS/TTS/encoder/models/resnet.py deleted file mode 100644 index 5eafcd6005739fcdc454fb20def3e66791766a53..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/TTS/encoder/models/resnet.py +++ /dev/null @@ -1,198 +0,0 @@ -import torch -from torch import nn - -# from TTS.utils.audio.torch_transforms import TorchSTFT -from TTS.encoder.models.base_encoder import BaseEncoder - - -class SELayer(nn.Module): - def __init__(self, channel, reduction=8): - super(SELayer, self).__init__() - self.avg_pool 
= nn.AdaptiveAvgPool2d(1) - self.fc = nn.Sequential( - nn.Linear(channel, channel // reduction), - nn.ReLU(inplace=True), - nn.Linear(channel // reduction, channel), - nn.Sigmoid(), - ) - - def forward(self, x): - b, c, _, _ = x.size() - y = self.avg_pool(x).view(b, c) - y = self.fc(y).view(b, c, 1, 1) - return x * y - - -class SEBasicBlock(nn.Module): - expansion = 1 - - def __init__(self, inplanes, planes, stride=1, downsample=None, reduction=8): - super(SEBasicBlock, self).__init__() - self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=3, stride=stride, padding=1, bias=False) - self.bn1 = nn.BatchNorm2d(planes) - self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, padding=1, bias=False) - self.bn2 = nn.BatchNorm2d(planes) - self.relu = nn.ReLU(inplace=True) - self.se = SELayer(planes, reduction) - self.downsample = downsample - self.stride = stride - - def forward(self, x): - residual = x - - out = self.conv1(x) - out = self.relu(out) - out = self.bn1(out) - - out = self.conv2(out) - out = self.bn2(out) - out = self.se(out) - - if self.downsample is not None: - residual = self.downsample(x) - - out += residual - out = self.relu(out) - return out - - -class ResNetSpeakerEncoder(BaseEncoder): - """Implementation of the model H/ASP without batch normalization in speaker embedding. This model was proposed in: https://arxiv.org/abs/2009.14153 - Adapted from: https://github.com/clovaai/voxceleb_trainer - """ - - # pylint: disable=W0102 - def __init__( - self, - input_dim=64, - proj_dim=512, - layers=[3, 4, 6, 3], - num_filters=[32, 64, 128, 256], - encoder_type="ASP", - log_input=False, - use_torch_spec=False, - audio_config=None, - ): - super(ResNetSpeakerEncoder, self).__init__() - - self.encoder_type = encoder_type - self.input_dim = input_dim - self.log_input = log_input - self.use_torch_spec = use_torch_spec - self.audio_config = audio_config - self.proj_dim = proj_dim - - self.conv1 = nn.Conv2d(1, num_filters[0], kernel_size=3, stride=1, padding=1) - self.relu = nn.ReLU(inplace=True) - self.bn1 = nn.BatchNorm2d(num_filters[0]) - - self.inplanes = num_filters[0] - self.layer1 = self.create_layer(SEBasicBlock, num_filters[0], layers[0]) - self.layer2 = self.create_layer(SEBasicBlock, num_filters[1], layers[1], stride=(2, 2)) - self.layer3 = self.create_layer(SEBasicBlock, num_filters[2], layers[2], stride=(2, 2)) - self.layer4 = self.create_layer(SEBasicBlock, num_filters[3], layers[3], stride=(2, 2)) - - self.instancenorm = nn.InstanceNorm1d(input_dim) - - if self.use_torch_spec: - self.torch_spec = self.get_torch_mel_spectrogram_class(audio_config) - else: - self.torch_spec = None - - outmap_size = int(self.input_dim / 8) - - self.attention = nn.Sequential( - nn.Conv1d(num_filters[3] * outmap_size, 128, kernel_size=1), - nn.ReLU(), - nn.BatchNorm1d(128), - nn.Conv1d(128, num_filters[3] * outmap_size, kernel_size=1), - nn.Softmax(dim=2), - ) - - if self.encoder_type == "SAP": - out_dim = num_filters[3] * outmap_size - elif self.encoder_type == "ASP": - out_dim = num_filters[3] * outmap_size * 2 - else: - raise ValueError("Undefined encoder") - - self.fc = nn.Linear(out_dim, proj_dim) - - self._init_layers() - - def _init_layers(self): - for m in self.modules(): - if isinstance(m, nn.Conv2d): - nn.init.kaiming_normal_(m.weight, mode="fan_out", nonlinearity="relu") - elif isinstance(m, nn.BatchNorm2d): - nn.init.constant_(m.weight, 1) - nn.init.constant_(m.bias, 0) - - def create_layer(self, block, planes, blocks, stride=1): - downsample = None - if stride != 1 or self.inplanes != 
planes * block.expansion: - downsample = nn.Sequential( - nn.Conv2d(self.inplanes, planes * block.expansion, kernel_size=1, stride=stride, bias=False), - nn.BatchNorm2d(planes * block.expansion), - ) - - layers = [] - layers.append(block(self.inplanes, planes, stride, downsample)) - self.inplanes = planes * block.expansion - for _ in range(1, blocks): - layers.append(block(self.inplanes, planes)) - - return nn.Sequential(*layers) - - # pylint: disable=R0201 - def new_parameter(self, *size): - out = nn.Parameter(torch.FloatTensor(*size)) - nn.init.xavier_normal_(out) - return out - - def forward(self, x, l2_norm=False): - """Forward pass of the model. - - Args: - x (Tensor): Raw waveform signal or spectrogram frames. If input is a waveform, `torch_spec` must be `True` - to compute the spectrogram on-the-fly. - l2_norm (bool): Whether to L2-normalize the outputs. - - Shapes: - - x: :math:`(N, 1, T_{in})` or :math:`(N, D_{spec}, T_{in})` - """ - x.squeeze_(1) - # if you torch spec compute it otherwise use the mel spec computed by the AP - if self.use_torch_spec: - x = self.torch_spec(x) - - if self.log_input: - x = (x + 1e-6).log() - x = self.instancenorm(x).unsqueeze(1) - - x = self.conv1(x) - x = self.relu(x) - x = self.bn1(x) - - x = self.layer1(x) - x = self.layer2(x) - x = self.layer3(x) - x = self.layer4(x) - - x = x.reshape(x.size()[0], -1, x.size()[-1]) - - w = self.attention(x) - - if self.encoder_type == "SAP": - x = torch.sum(x * w, dim=2) - elif self.encoder_type == "ASP": - mu = torch.sum(x * w, dim=2) - sg = torch.sqrt((torch.sum((x**2) * w, dim=2) - mu**2).clamp(min=1e-5)) - x = torch.cat((mu, sg), 1) - - x = x.view(x.size()[0], -1) - x = self.fc(x) - - if l2_norm: - x = torch.nn.functional.normalize(x, p=2, dim=1) - return x diff --git a/spaces/artificialguybr/video-dubbing/TTS/docs/source/implementing_a_new_model.md b/spaces/artificialguybr/video-dubbing/TTS/docs/source/implementing_a_new_model.md deleted file mode 100644 index e2a0437e9ac976838e398f45f4c738d163050697..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/docs/source/implementing_a_new_model.md +++ /dev/null @@ -1,206 +0,0 @@ -# Implementing a Model - -1. Implement layers. - - You can either implement the layers under `TTS/tts/layers/new_model.py` or in the model file `TTS/tts/model/new_model.py`. - You can also reuse layers already implemented. - -2. Test layers. - - We keep tests under `tests` folder. You can add `tts` layers tests under `tts_tests` folder. - Basic tests are checking input-output tensor shapes and output values for a given input. Consider testing extreme cases that are more likely to cause problems like `zero` tensors. - -3. Implement a loss function. - - We keep loss functions under `TTS/tts/layers/losses.py`. You can also mix-and-match implemented loss functions as you like. - - A loss function returns a dictionary in a format ```{’loss’: loss, ‘loss1’:loss1 ...}``` and the dictionary must at least define the `loss` key which is the actual value used by the optimizer. All the items in the dictionary are automatically logged on the terminal and the Tensorboard. - -4. Test the loss function. - - As we do for the layers, you need to test the loss functions too. You need to check input/output tensor shapes, - expected output values for a given input tensor. For instance, certain loss functions have upper and lower limits and - it is a wise practice to test with the inputs that should produce these limits. - -5. Implement `MyModel`. 
- - In 🐸TTS, a model class is a self-sufficient implementation of a model directing all the interactions with the other - components. It is enough to implement the API provided by the `BaseModel` class to comply. - - A model interacts with the `Trainer API` for training, `Synthesizer API` for inference and testing. - - A 🐸TTS model must return a dictionary by the `forward()` and `inference()` functions. This dictionary must `model_outputs` key that is considered as the main model output by the `Trainer` and `Synthesizer`. - - You can place your `tts` model implementation under `TTS/tts/models/new_model.py` then inherit and implement the `BaseTTS`. - - There is also the `callback` interface by which you can manipulate both the model and the `Trainer` states. Callbacks give you - an infinite flexibility to add custom behaviours for your model and training routines. - - For more details, see {ref}`BaseTTS ` and :obj:`TTS.utils.callbacks`. - -6. Optionally, define `MyModelArgs`. - - `MyModelArgs` is a 👨‍✈️Coqpit class that sets all the class arguments of the `MyModel`. `MyModelArgs` must have - all the fields necessary to instantiate the `MyModel`. However, for training, you need to pass `MyModelConfig` to - the model. - -7. Test `MyModel`. - - As the layers and the loss functions, it is recommended to test your model. One smart way for testing is that you - create two models with the exact same weights. Then we run a training loop with one of these models and - compare the weights with the other model. All the weights need to be different in a passing test. Otherwise, it - is likely that a part of the model is malfunctioning or not even attached to the model's computational graph. - -8. Define `MyModelConfig`. - - Place `MyModelConfig` file under `TTS/models/configs`. It is enough to inherit the `BaseTTSConfig` to make your - config compatible with the `Trainer`. You should also include `MyModelArgs` as a field if defined. The rest of the fields should define the model - specific values and parameters. - -9. Write Docstrings. - - We love you more when you document your code. ❤️ - - -# Template 🐸TTS Model implementation - -You can start implementing your model by copying the following base class. - -```python -from TTS.tts.models.base_tts import BaseTTS - - -class MyModel(BaseTTS): - """ - Notes on input/output tensor shapes: - Any input or output tensor of the model must be shaped as - - - 3D tensors `batch x time x channels` - - 2D tensors `batch x channels` - - 1D tensors `batch x 1` - """ - - def __init__(self, config: Coqpit): - super().__init__() - self._set_model_args(config) - - def _set_model_args(self, config: Coqpit): - """Set model arguments from the config. Override this.""" - pass - - def forward(self, input: torch.Tensor, *args, aux_input={}, **kwargs) -> Dict: - """Forward pass for the model mainly used in training. - - You can be flexible here and use different number of arguments and argument names since it is intended to be - used by `train_step()` without exposing it out of the model. - - Args: - input (torch.Tensor): Input tensor. - aux_input (Dict): Auxiliary model inputs like embeddings, durations or any other sorts of inputs. - - Returns: - Dict: Model outputs. Main model output must be named as "model_outputs". - """ - outputs_dict = {"model_outputs": None} - ... - return outputs_dict - - def inference(self, input: torch.Tensor, aux_input={}) -> Dict: - """Forward pass for inference. - - We don't use `*kwargs` since it is problematic with the TorchScript API. 
- - Args: - input (torch.Tensor): [description] - aux_input (Dict): Auxiliary inputs like speaker embeddings, durations etc. - - Returns: - Dict: [description] - """ - outputs_dict = {"model_outputs": None} - ... - return outputs_dict - - def train_step(self, batch: Dict, criterion: nn.Module) -> Tuple[Dict, Dict]: - """Perform a single training step. Run the model forward pass and compute losses. - - Args: - batch (Dict): Input tensors. - criterion (nn.Module): Loss layer designed for the model. - - Returns: - Tuple[Dict, Dict]: Model ouputs and computed losses. - """ - outputs_dict = {} - loss_dict = {} # this returns from the criterion - ... - return outputs_dict, loss_dict - - def train_log(self, batch: Dict, outputs: Dict, logger: "Logger", assets:Dict, steps:int) -> None: - """Create visualizations and waveform examples for training. - - For example, here you can plot spectrograms and generate sample sample waveforms from these spectrograms to - be projected onto Tensorboard. - - Args: - ap (AudioProcessor): audio processor used at training. - batch (Dict): Model inputs used at the previous training step. - outputs (Dict): Model outputs generated at the previoud training step. - - Returns: - Tuple[Dict, np.ndarray]: training plots and output waveform. - """ - pass - - def eval_step(self, batch: Dict, criterion: nn.Module) -> Tuple[Dict, Dict]: - """Perform a single evaluation step. Run the model forward pass and compute losses. In most cases, you can - call `train_step()` with no changes. - - Args: - batch (Dict): Input tensors. - criterion (nn.Module): Loss layer designed for the model. - - Returns: - Tuple[Dict, Dict]: Model ouputs and computed losses. - """ - outputs_dict = {} - loss_dict = {} # this returns from the criterion - ... - return outputs_dict, loss_dict - - def eval_log(self, batch: Dict, outputs: Dict, logger: "Logger", assets:Dict, steps:int) -> None: - """The same as `train_log()`""" - pass - - def load_checkpoint(self, config: Coqpit, checkpoint_path: str, eval: bool = False) -> None: - """Load a checkpoint and get ready for training or inference. - - Args: - config (Coqpit): Model configuration. - checkpoint_path (str): Path to the model checkpoint file. - eval (bool, optional): If true, init model for inference else for training. Defaults to False. - """ - ... - - def get_optimizer(self) -> Union["Optimizer", List["Optimizer"]]: - """Setup an return optimizer or optimizers.""" - pass - - def get_lr(self) -> Union[float, List[float]]: - """Return learning rate(s). - - Returns: - Union[float, List[float]]: Model's initial learning rates. - """ - pass - - def get_scheduler(self, optimizer: torch.optim.Optimizer): - pass - - def get_criterion(self): - pass - - def format_batch(self): - pass - -``` diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/aiohttp/payload_streamer.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/aiohttp/payload_streamer.py deleted file mode 100644 index 9f8b8bc57cc22fc693da1646bf806c2a6ca8d797..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/aiohttp/payload_streamer.py +++ /dev/null @@ -1,75 +0,0 @@ -""" -Payload implemenation for coroutines as data provider. 
- -As a simple case, you can upload data from file:: - - @aiohttp.streamer - async def file_sender(writer, file_name=None): - with open(file_name, 'rb') as f: - chunk = f.read(2**16) - while chunk: - await writer.write(chunk) - - chunk = f.read(2**16) - -Then you can use `file_sender` like this: - - async with session.post('http://httpbin.org/post', - data=file_sender(file_name='huge_file')) as resp: - print(await resp.text()) - -..note:: Coroutine must accept `writer` as first argument - -""" - -import types -import warnings -from typing import Any, Awaitable, Callable, Dict, Tuple - -from .abc import AbstractStreamWriter -from .payload import Payload, payload_type - -__all__ = ("streamer",) - - -class _stream_wrapper: - def __init__( - self, - coro: Callable[..., Awaitable[None]], - args: Tuple[Any, ...], - kwargs: Dict[str, Any], - ) -> None: - self.coro = types.coroutine(coro) - self.args = args - self.kwargs = kwargs - - async def __call__(self, writer: AbstractStreamWriter) -> None: - await self.coro(writer, *self.args, **self.kwargs) # type: ignore[operator] - - -class streamer: - def __init__(self, coro: Callable[..., Awaitable[None]]) -> None: - warnings.warn( - "@streamer is deprecated, use async generators instead", - DeprecationWarning, - stacklevel=2, - ) - self.coro = coro - - def __call__(self, *args: Any, **kwargs: Any) -> _stream_wrapper: - return _stream_wrapper(self.coro, args, kwargs) - - -@payload_type(_stream_wrapper) -class StreamWrapperPayload(Payload): - async def write(self, writer: AbstractStreamWriter) -> None: - await self._value(writer) - - -@payload_type(streamer) -class StreamPayload(StreamWrapperPayload): - def __init__(self, value: Any, *args: Any, **kwargs: Any) -> None: - super().__init__(value(), *args, **kwargs) - - async def write(self, writer: AbstractStreamWriter) -> None: - await self._value(writer) diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/antlr4/StdinStream.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/antlr4/StdinStream.py deleted file mode 100644 index f044fc4d770b4bc86cce2a5578f2c2fa00fc7602..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/antlr4/StdinStream.py +++ /dev/null @@ -1,11 +0,0 @@ -import codecs -import sys - -from antlr4.InputStream import InputStream - - -class StdinStream(InputStream): - def __init__(self, encoding:str='ascii', errors:str='strict') -> None: - bytes = sys.stdin.buffer.read() - data = codecs.decode(bytes, encoding, errors) - super().__init__(data) diff --git a/spaces/asdasdasdasd/Face-forgery-detection/model_core.py b/spaces/asdasdasdasd/Face-forgery-detection/model_core.py deleted file mode 100644 index bf2c7bbbeba58c457685c7c886ec86069455aafa..0000000000000000000000000000000000000000 --- a/spaces/asdasdasdasd/Face-forgery-detection/model_core.py +++ /dev/null @@ -1,153 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F - -from attention import ChannelAttention, SpatialAttention, DualCrossModalAttention -from srm_conv import SRMConv2d_simple, SRMConv2d_Separate -from xception import TransferModel - - -class SRMPixelAttention(nn.Module): - def __init__(self, in_channels): - super(SRMPixelAttention, self).__init__() - # self.srm = SRMConv2d_simple() - self.conv = nn.Sequential( - nn.Conv2d(in_channels, 32, 3, 2, 0, bias=False), - nn.BatchNorm2d(32), - nn.ReLU(inplace=True), - nn.Conv2d(32, 64, 3, bias=False), - nn.BatchNorm2d(64), - nn.ReLU(inplace=True), - ) - - self.pa = SpatialAttention() 
- - for m in self.modules(): - if isinstance(m, nn.Conv2d): - nn.init.kaiming_normal_(m.weight, a=1) - if not m.bias is None: - nn.init.constant_(m.bias, 0) - - def forward(self, x_srm): - # x_srm = self.srm(x) - fea = self.conv(x_srm) - att_map = self.pa(fea) - - return att_map - - -class FeatureFusionModule(nn.Module): - def __init__(self, in_chan=2048*2, out_chan=2048, *args, **kwargs): - super(FeatureFusionModule, self).__init__() - self.convblk = nn.Sequential( - nn.Conv2d(in_chan, out_chan, 1, 1, 0, bias=False), - nn.BatchNorm2d(out_chan), - nn.ReLU() - ) - self.ca = ChannelAttention(out_chan, ratio=16) - self.init_weight() - - def forward(self, x, y): - fuse_fea = self.convblk(torch.cat((x, y), dim=1)) - fuse_fea = fuse_fea + fuse_fea * self.ca(fuse_fea) - return fuse_fea - - def init_weight(self): - for ly in self.children(): - if isinstance(ly, nn.Conv2d): - nn.init.kaiming_normal_(ly.weight, a=1) - if not ly.bias is None: - nn.init.constant_(ly.bias, 0) - - -class Two_Stream_Net(nn.Module): - def __init__(self): - super().__init__() - self.xception_rgb = TransferModel( - 'xception', dropout=0.5, inc=3, return_fea=True) - self.xception_srm = TransferModel( - 'xception', dropout=0.5, inc=3, return_fea=True) - - self.srm_conv0 = SRMConv2d_simple(inc=3) - self.srm_conv1 = SRMConv2d_Separate(32, 32) - self.srm_conv2 = SRMConv2d_Separate(64, 64) - self.relu = nn.ReLU(inplace=True) - - self.att_map = None - self.srm_sa = SRMPixelAttention(3) - self.srm_sa_post = nn.Sequential( - nn.BatchNorm2d(64), - nn.ReLU(inplace=True) - ) - - self.dual_cma0 = DualCrossModalAttention(in_dim=728, ret_att=False) - self.dual_cma1 = DualCrossModalAttention(in_dim=728, ret_att=False) - - self.fusion = FeatureFusionModule() - - self.att_dic = {} - - def features(self, x): - srm = self.srm_conv0(x) - - x = self.xception_rgb.model.fea_part1_0(x) - y = self.xception_srm.model.fea_part1_0(srm) \ - + self.srm_conv1(x) - y = self.relu(y) - - x = self.xception_rgb.model.fea_part1_1(x) - y = self.xception_srm.model.fea_part1_1(y) \ - + self.srm_conv2(x) - y = self.relu(y) - - # srm guided spatial attention - self.att_map = self.srm_sa(srm) - x = x * self.att_map + x - x = self.srm_sa_post(x) - - x = self.xception_rgb.model.fea_part2(x) - y = self.xception_srm.model.fea_part2(y) - - x, y = self.dual_cma0(x, y) - - - x = self.xception_rgb.model.fea_part3(x) - y = self.xception_srm.model.fea_part3(y) - - - x, y = self.dual_cma1(x, y) - - x = self.xception_rgb.model.fea_part4(x) - y = self.xception_srm.model.fea_part4(y) - - x = self.xception_rgb.model.fea_part5(x) - y = self.xception_srm.model.fea_part5(y) - - fea = self.fusion(x, y) - - - return fea - - def classifier(self, fea): - out, fea = self.xception_rgb.classifier(fea) - return out, fea - - def forward(self, x): - ''' - x: original rgb - - Return: - out: (B, 2) the output for loss computing - fea: (B, 1024) the flattened features before the last FC - att_map: srm spatial attention map - ''' - out, fea = self.classifier(self.features(x)) - - return out, fea, self.att_map - -if __name__ == '__main__': - model = Two_Stream_Net() - dummy = torch.rand((1,3,256,256)) - out = model(dummy) - print(model) - diff --git a/spaces/aseifert/ExplaiNER/src/subpages/faiss.py b/spaces/aseifert/ExplaiNER/src/subpages/faiss.py deleted file mode 100644 index f578601a8aca488300f384a9e90be30222765b6a..0000000000000000000000000000000000000000 --- a/spaces/aseifert/ExplaiNER/src/subpages/faiss.py +++ /dev/null @@ -1,58 +0,0 @@ -import streamlit as st -from datasets import Dataset - 
-from src.subpages.page import Context, Page # type: ignore -from src.utils import device, explode_df, htmlify_labeled_example, tag_text - - -class FaissPage(Page): - name = "Bla" - icon = "x-octagon" - - def render(self, context: Context): - dd = Dataset.from_pandas(context.df_tokens_merged, preserve_index=False) # type: ignore - - dd.add_faiss_index(column="hidden_states", index_name="token_index") - token_id, text = ( - 6, - "Die Wissenschaft ist eine wichtige Grundlage für die Entwicklung von neuen Technologien.", - ) - token_id, text = ( - 15, - "Außer der unbewussten Beeinflussung eines Resultats gibt es auch noch andere Motive die das reine strahlende Licht der Wissenschaft etwas zu trüben vermögen.", - ) - token_id, text = ( - 3, - "Mit mehr Instrumenten einer besseren präziseren Datenbasis ist auch ein viel besseres smarteres Risikomanagement möglich.", - ) - token_id, text = ( - 7, - "Es gilt die akademische Viertelstunde das heißt Beginn ist fünfzehn Minuten später.", - ) - token_id, text = ( - 7, - "Damit einher geht übrigens auch dass Marcella Collocinis Tochter keine wie auch immer geartete strafrechtliche Verfolgung zu befürchten hat.", - ) - token_id, text = ( - 16, - "After Steve Jobs met with Bill Gates of Microsoft back in 1993, they went to Cupertino and made the deal.", - ) - - tagged = tag_text(text, context.tokenizer, context.model, device) - hidden_states = tagged["hidden_states"] - # tagged.drop("hidden_states", inplace=True, axis=1) - # hidden_states_vec = svd.transform([hidden_states[token_id]])[0].astype(np.float32) - hidden_states_vec = hidden_states[token_id] - tagged = tagged.astype(str) - tagged["probs"] = tagged["probs"].apply(lambda x: x[:-2]) - tagged["check"] = tagged["probs"].apply( - lambda x: "✅ ✅" if int(x) < 100 else "✅" if int(x) < 1000 else "" - ) - st.dataframe(tagged.drop("hidden_states", axis=1).T) - results = dd.get_nearest_examples("token_index", hidden_states_vec, k=10) - for i, (dist, idx, token) in enumerate( - zip(results.scores, results.examples["ids"], results.examples["tokens"]) - ): - st.code(f"{dist:.3f} {token}") - sample = context.df_tokens_merged.query(f"ids == '{idx}'") - st.write(f"[{i};{idx}] " + htmlify_labeled_example(sample), unsafe_allow_html=True) diff --git a/spaces/asistaoptum/examples-AI-020323/app.py b/spaces/asistaoptum/examples-AI-020323/app.py deleted file mode 100644 index 1d37e1ba5cdbf6b844bbc2fd0e3b209c2a66fc63..0000000000000000000000000000000000000000 --- a/spaces/asistaoptum/examples-AI-020323/app.py +++ /dev/null @@ -1,856 +0,0 @@ -import streamlit as st -from graphviz import Digraph - - -st.markdown(""" -# 👋 Two easy ways to turbo boost your AI learning journey! 💻 -# 🌐 AI Pair Programming -## Open 2 Browsers to: -1. __🌐 ChatGPT__ [URL](https://chat.openai.com/chat) or [URL2](https://platform.openai.com/playground) and -2. __🌐 Huggingface__ [URL](https://huggingface.co/awacke1) in separate browser windows. -1. 🤖 Use prompts to generate a streamlit program on Huggingface or locally to test it. -2. 🔧 For advanced work, add Python 3.10 and VSCode locally, and debug as gradio or streamlit apps. -3. 🚀 Use these two superpower processes to reduce the time it takes you to make a new AI program! ⏱️ -# 🎥 YouTube University Method: -1. 🏋️‍♀️ Plan two hours each weekday to exercise your body and brain. -2. 🎬 Make a playlist of videos you want to learn from on YouTube. Save the links to edit later. -3. 🚀 Try watching the videos at a faster speed while exercising, and sample the first five minutes of each video. -4. 
📜 Reorder the playlist so the most useful videos are at the front, and take breaks to exercise. -5. 📝 Practice note-taking in markdown to instantly save what you want to remember. Share your notes with others! -6. 👥 AI Pair Programming Using Long Answer Language Models with Human Feedback: -## 🎥 2023 AI/ML Advanced Learning Playlists: -1. [2023 QA Models and Long Form Question Answering NLP](https://www.youtube.com/playlist?list=PLHgX2IExbFovrkkx8HMTLNgYdjCMNYmX_) -2. [FHIR Bioinformatics Development Using AI/ML and Python, Streamlit, and Gradio - 2022](https://www.youtube.com/playlist?list=PLHgX2IExbFovoMUC3hYXeFegpk_Y0Lz0Q) -3. [2023 ChatGPT for Coding Assistant Streamlit, Gradio and Python Apps](https://www.youtube.com/playlist?list=PLHgX2IExbFouOEnppexiKZVdz_k5b0pvI) -4. [2023 BigScience Bloom - Large Language Model for AI Systems and NLP](https://www.youtube.com/playlist?list=PLHgX2IExbFouqnsIqziThlPCX_miiDq14) -5. [2023 Streamlit Pro Tips for AI UI UX for Data Science, Engineering, and Mathematics](https://www.youtube.com/playlist?list=PLHgX2IExbFou3cP19hHO9Xb-cN8uwr5RM) -6. [2023 Fun, New and Interesting AI, Videos, and AI/ML Techniques](https://www.youtube.com/playlist?list=PLHgX2IExbFotoMt32SrT3Xynt5BXTGnEP) -7. [2023 Best Minds in AGI AI Gamification and Large Language Models](https://www.youtube.com/playlist?list=PLHgX2IExbFotmFeBTpyje1uI22n0GAkXT) -8. [2023 State of the Art for Vision Image Classification, Text Classification and Regression, Extractive Question Answering and Tabular Classification](https://www.youtube.com/playlist?list=PLHgX2IExbFotPcPu6pauNHOoZTTbnAQ2F) -9. [2023 AutoML DataRobot and AI Platforms for Building Models, Features, Test, and Transparency](https://www.youtube.com/playlist?list=PLHgX2IExbFovsY2oGbDwdEhPrakkC8i3g) -""") - - -st.markdown(""" -# Cognitive AI with Human Feedback (CAHF) [Example 🩺⚕️](https://huggingface.co/spaces/awacke1/Cognitive-AI-Episodic-Semantic-Memory-Demo): -1. Create and use Models to predict __outcomes__ -2. Use AI to predict **conditions, disease, and opportunities** using AI with **explainability**. -3. **Cognitive AI** - Mimic how humans reason through decision making processes. -4. **Reasoning cycles** - "Recommended for You" reasoners - consider type of personalized needs and classification for users, to recommend products -5. **High Acuity Reasoners** - Make decisions on rules of **what it can and cannot do within human feedback** guidelines. - -Emphasizes **explainability, transparency, and removing administrative burden** to **protocolize** and improve what staff is doing. - -Vetted by SME's, adding value of **judgement and training** and pick up intelligence and **skills from human feedback**. - -**Alert, Recommended Action, and Clinical Terms** per entity with vocabularies from LOINC, SNOMED, OMS, ICD10, RXNORM, SMILES, HCPCS, CPT, CQM, HL7, SDC and FHIR. -6. Non static multi agent cognitive approach using real time series to identify factors predictive of outcome. -7. Cognitive models form of Ontology - to create a type of computable sets and relationships stored in Ontology then ingested by reasoner - -Use models of world to build predictions and recommendations with answers cumulative with information we know -8. Reasoners standardize making it easy as possible to do right thing using transfer learning and recommendation tools with questions and actions. 
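-As a rough illustration of the "Alert, Recommended Action, and Clinical Terms" idea in point 5, here is a hypothetical sketch in Python. It is not taken from any specific CAHF implementation; the rule, the 6.5% threshold, and the codes are illustrative examples, not clinical guidance.
-```python
-# Hypothetical sketch of a reasoner result; rule, threshold, and codes are examples only.
-from dataclasses import dataclass, field
-from typing import Dict, Optional
-
-
-@dataclass
-class ReasonerResult:
-    alert: str                                   # human-readable alert raised by the reasoner
-    recommended_action: str                      # suggested next best action for staff
-    clinical_terms: Dict[str, str] = field(default_factory=dict)  # vocabulary -> code (ICD10, LOINC, RXNORM, ...)
-
-
-def a1c_reasoner(latest_a1c: float) -> Optional[ReasonerResult]:
-    """Toy rule: flag an elevated HbA1c result and recommend a follow-up action."""
-    if latest_a1c >= 6.5:
-        return ReasonerResult(
-            alert=f"HbA1c of {latest_a1c}% is in the diabetic range",
-            recommended_action="Schedule follow-up and review the medication plan with the care team",
-            clinical_terms={"LOINC": "4548-4", "ICD10": "E11.9"},  # illustrative codes only
-        )
-    return None
-
-
-print(a1c_reasoner(7.2))
-```
-A production reasoner would draw such rules and codes from the vetted ontologies and vocabularies listed above rather than hard-coding them.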
-""") - - -st.markdown(""" -# 📚 Clinical Terminology and Ontologies [Example 🩺⚕️NLP Clinical Ontology Biomedical NER](https://huggingface.co/spaces/awacke1/Biomed-NLP-AI-Clinical-Terminology) -## Health Vocabularies, Systems of Coding, and Databases with Bibliographies -##__Keywords__: -1. __Clinical Terminology__: 💬 Words that doctors use to talk to each other about patients. -2. __Ontologies for Medications and Conditions__: 📚 A fancy way of organizing knowledge about medicine and health problems. -3. __Health Vocabularies__: 📝 A special list of words used in healthcare to talk about health issues. -4. __Systems of Coding__: 💻 A way of giving things like sicknesses and treatments special codes, so that doctors can remember them easily. -5. __Databases__: 🗄️ A computer system that stores information about patients, health research, and other healthcare things. -6. __Bibliographies__: 📖 A list of books or articles that doctors use to learn about new health information. -1. ## 1️⃣ National Library of Medicine's **RxNorm**: - - Standardized nomenclature for clinical drugs developed by NLM - - Provides links between drug names and related information such as ingredients, strengths, and dosages - - **Data type: controlled vocabulary** - - Access through **NLM's RxNorm website**: https://www.nlm.nih.gov/research/umls/rxnorm/index.html -2. ## 2️⃣ Centers for Medicare and Medicaid Services' Healthcare Common Procedure Coding System (HCPCS): - - Coding system used to identify healthcare **services, procedures, and supplies** - - Includes **codes for drugs, biologicals, and other items** used in medical care - - **Data type: coding system** - - Access through **CMS website**: https://www.cms.gov/Medicare/Coding/MedHCPCSGenInfo -3. ## 3️⃣ Unified Medical Language System (UMLS): - - Set of files and software tools developed by NLM for integrating and mapping biomedical vocabularies - - Includes RxNorm and other drug vocabularies, as well as other terminologies used in medicine - - **Data type: controlled vocabulary** - - Access through UMLS Metathesaurus: https://www.nlm.nih.gov/research/umls/index.html -4. ## 4️⃣ PubMed: - - Database of **biomedical literature** maintained by the National Center for Biotechnology Information (NCBI) - - Includes information about **drugs, including drug names, chemical structures, and pharmacological actions** - - **Data type: bibliographic database** - - Access through **PubMed website**: https://pubmed.ncbi.nlm.nih.gov/ -5. ## 5️⃣ PubChem: - - Database of chemical substances maintained by NCBI - - Includes information about drugs, including **chemical structures, properties, and activities** - - **Data type: chemical database** - - Access through **PubChem website**: https://pubchem.ncbi.nlm.nih.gov/ -6. ## 6️⃣ Behavioral Health Code Terminology Sets: - - Code terminology sets specific to behavioral health - - Includes **DSM** published by American Psychiatric Association, **ICD** published by World Health Organization, and **CPT** published by American Medical Association - - **Data type: coding system** - - Access through respective **organizations' websites**: - 1. [DSM](https://www.psychiatry.org/psychiatrists/practice/dsm) - 2. [ICD](https://www.who.int/standards/classifications/classification-of-diseases) - 3. [CPT](https://www.ama-assn.org/practice-management/cpt/current-procedural-terminology-cpt) -""") - -st.markdown(""" -1. # 📚Natural Language Processing🔤 - 🗣️🤖💭💬🌍🔍 - 1. 🤔 **🩺⚕️ Sentiment analysis** - Determine underlying sentiment of text. 
[Example](https://huggingface.co/spaces/awacke1/Sentiment-analysis-streamlit) - 2. 📝 **Named Entity Recognition (NER)** - Identify and classify named entities in text. [Example](https://huggingface.co/spaces/awacke1/Named-entity-resolution) - 3. 🔊 **🩺⚕️Automatic Speech Recognition (ASR)** - Transcribe spoken language into text. - # Advanced NLP ASR Examples: - 1. 🩺⚕️ https://huggingface.co/spaces/awacke1/ASR-High-Accuracy-Test - 2. https://huggingface.co/spaces/awacke1/ASRGenerateStory - 3. 🩺⚕️ https://huggingface.co/spaces/awacke1/TTS-STT-Blocks - 4. 🩺⚕️ https://huggingface.co/spaces/awacke1/CloneAnyVoice - 5. https://huggingface.co/spaces/awacke1/ASR-SOTA-NvidiaSTTMozilla - 4. 🌐 **Machine translation** - Translate text between languages automatically. [Example](https://huggingface.co/spaces/awacke1/Machine-translation) - 5. 📄 **Text summarization** - Automatically summarize large volumes of text. [Example](https://huggingface.co/spaces/awacke1/Text-summarization) - 6. ❓ **🩺⚕️ Question answering** - Answer questions posed in natural language. [Example](https://huggingface.co/spaces/awacke1/Question-answering) - 7. 🤖 **Sentiment-aware chatbots** - Use sentiment analysis to detect user emotions and respond appropriately. - 8. 📊 **🩺⚕️ Text classification** - Classify text into different categories. [Example](https://huggingface.co/spaces/awacke1/sileod-deberta-v3-base-tasksource-nli) - 9. 💬 **🩺⚕️ Text generation** - Generate natural language text. [Example](https://huggingface.co/spaces/awacke1/Sentence2Paragraph) - 10. 🔎 **Topic modeling** - Automatically identify topics in a large corpus of text. [Example](https://huggingface.co/spaces/awacke1/Topic-modeling) - - Examples - 1. [NLP Video Summary](https://huggingface.co/spaces/awacke1/Video-Summary) - 2. [TTS-STT ASR with Multiple Voices](https://huggingface.co/spaces/awacke1/TTS-STT-Blocks) - 3. [NLP Transcript with Video Player](https://huggingface.co/spaces/awacke1/Streamlit-ASR-Video) - 4. [NLP Clinical Ontology Biomedical NER](https://huggingface.co/spaces/awacke1/Biomed-NLP-AI-Clinical-Terminology) - 5. [Document Understanding and NLP](https://huggingface.co/spaces/awacke1/AIDocumentUnderstandingOCR) - 6. [NLP ASR Wav2Vec2 Multilingual](https://huggingface.co/spaces/awacke1/ASR-High-Accuracy-Test) - 7. [Live ASR](https://huggingface.co/spaces/awacke1/ASR-SOTA-NvidiaSTTMozilla) - 8. [NLP and Visualization](https://huggingface.co/spaces/awacke1/Visualization-Plotly-Sunbursts-Treemaps-and-WebGL) -""") - -st.markdown(""" -2. # 🔮Generative AI💭 (🎨Images and 📝Text) - 🎵🧩🔄📊🌌 - 1. 🆕 **🩺⚕️ Generation of new data**: Create new data that resembles existing data. [Example](https://huggingface.co/spaces/awacke1/GenAI-Generate-New-Data-Resembling-Example) - 2. 🎨 **Creative potential**: Generate music, art, or literature. [Example](https://huggingface.co/spaces/awacke1/Creative-Potential-Music-Art-Lit) - 3. 📊 **Data synthesis**: Synthesize data from multiple sources to create new datasets. [Example](https://huggingface.co/spaces/awacke1/Data-Synthesizer-Synthesize-From-Multiple-Sources) - 4. 📈 **🩺⚕️ Data augmentation**: Augment existing datasets to make them larger and more diverse. [Example](https://huggingface.co/spaces/awacke1/Data-Augmentation) - 5. 🔀 **Domain transfer**: Transfer knowledge learned from one domain to another. - 6. 🔍 **Unsupervised learning**: Learn patterns without labeled training data. - 7. 🔄 **Adaptive learning**: Adapt to changes in data over time. - 8. 
🔊 **Noise injection**: Introduce noise to explore a wider range of possibilities. - 9. 🕶️ **Latent space manipulation**: Control output by manipulating a model's latent space. - 10. 🖼️ **Realistic output**: Produce output that is difficult to distinguish from human-created data. - - Examples - 1. Quantum AI Circuits: https://huggingface.co/spaces/awacke1/AI-Quantum?option=Circuit - 2. Generate Story and Video: https://huggingface.co/spaces/awacke1/ASRGenerateStoryandVideo - 3. ASR Generate Story: https://huggingface.co/spaces/awacke1/ASRGenerateStory - 4. Music Generation: https://huggingface.co/spaces/awacke1/MusicMaker -""") - -st.markdown(""" -3. # 📷Image Recognition🏞️ - 1. 📷 **Object detection**: Detect and identify multiple objects in an image for detailed analysis and classification. - 2. 🏞️ **Scene recognition**: Recognize and classify entire scenes based on objects, colors, and shapes. - 3. 😃 **Facial recognition**: Analyze facial features for accurate identification. - 4. 😊 **Emotion recognition**: Identify emotions on a subject's face, including happiness, sadness, and anger. - 5. 🔤 **Text recognition**: Identify and translate text in images for analysis. - 6. 🎨 **Color recognition**: Detect colors and provide information on hue, saturation, and brightness. - 7. 🔍 **Image segmentation**: Divide an image into multiple regions for individual analysis and classification. - 8. 🌅 **Image restoration**: Remove noise and blur, restoring images to original clarity and quality. - 9. 🔖 **Image classification**: Classify images into categories like animals, buildings, or landscapes. - 10. 🎨 **Style transfer**: Apply the style of one image to another for unique and innovative results. - - Examples - 1. 🩺⚕️ Text-to-Image : [Image Classification](https://huggingface.co/spaces/awacke1/Prompt-Refinery-Text-to-Image-Generation) - 2. Image Captions from 5 SOTA Generators: [URL](https://huggingface.co/spaces/awacke1/ImageCaptionPromptGenerator) - 3. 🩺⚕️ Image to Multilingual OCR: [URL](https://huggingface.co/spaces/awacke1/Image-to-Multilingual-OCR) - 4. WRN - Wide Residual Networks: [URL](https://huggingface.co/spaces/awacke1/ResnetPytorchImageRecognition) - 5. AI Document Understanding: [URL](https://huggingface.co/spaces/awacke1/AIDocumentUnderstandingOCR) - 6. Elixir Docker Bumblebee: [URL](https://huggingface.co/spaces/awacke1/DockerImageRecognitionToText) - 7. Speech to Text to Story to Images to Video: [URL](https://huggingface.co/spaces/awacke1/Speeech2Text2Story2Images2Video) - 8. Image to Line Drawings: [URL](https://huggingface.co/spaces/awacke1/Image-to-Line-Drawings) - 9. Semantic Image Search: [URL](https://huggingface.co/spaces/awacke1/Image-Semantic-Search) - 10. Zoom Clip Toon: [URL](https://huggingface.co/spaces/awacke1/Zoom-Clip-Toon-Image-to-Image) - 11. Image to Reading Labels: [URL](https://huggingface.co/spaces/awacke1/ImageOCRMultilingual) - 12. A Game For That - Gamification Using Snapshot Images: [URL](https://huggingface.co/spaces/awacke1/AGameForThat) - 13. AI Visually Plays QBert, Pong, Seaquest and more: [URL](https://huggingface.co/spaces/awacke1/AI-Atari-Live-Streamlit) - 14. AI Creates Generator Style Mix Art from Encyclopedia: [URL](https://huggingface.co/spaces/awacke1/Art-Generator-and-Style-Mixer) - 15. BigGAN Image Gen and Search: [URL](https://huggingface.co/spaces/awacke1/AI-BigGAN-Image-Gen) - 16. Art Style Line Drawings: [URL](https://huggingface.co/spaces/awacke1/ArtStyleFoodsandNutrition) - 17. 
🩺⚕️ Yolo Real Time Image Recognition from Webcam: https://huggingface.co/spaces/awacke1/Webcam-Object-Recognition-Yolo-n-Coco -""") - -st.markdown(""" -4. # 🗣️Speech Recognition💬 - 1. 🔊 **Continuous Speech Recognition**: Transcribe spoken words in real-time without pausing. - 2. 🗣️ **Speaker Identification**: Identify individual speakers through unique features in their speech. - 3. 🧠 **Contextual Awareness**: Understand conversation context and interpret word meaning. - 4. 🌎 **Multilingual Support**: Recognize and transcribe multiple languages for translation. - 5. 🔇 **Noise Reduction**: Filter out background noise to improve transcription quality. - 6. 🔒 **Voice Biometrics**: Verify speaker identity and provide secure access to personal data. - 7. 🎛️ **Command and Control**: Interpret voice commands to automate tasks and interact with software. - 8. 💬 **Natural Language Processing**: Understand complex human speech patterns. - 9. 🧠 **Adaptive Learning**: Learn and adapt to improve accuracy over time. - 10. ☁️ **Cloud-Based Deployment**: Real-time processing of large amounts of data, even on mobile devices. -""") - -st.markdown(""" -5. # Reinforcement Learning - 1. 🏆 **Reward-driven**: RL uses rewards or punishments to drive its learning process. - 2. 🧪 **Trial-and-error learning**: RL is a trial-and-error learning method, where an agent tries different actions to find the best action that will maximize the cumulative reward. - 3. 🤔 **Exploration-exploitation trade-off**: RL agents need to balance exploration and exploitation to find new possibilities while also exploiting successful actions. - 4. 📈 **Markov Decision Processes**: RL uses MDPs to model decision-making processes. - 5. 📊 **Policy optimization**: RL uses policy optimization techniques to find the best policy for a given task or learn the optimal policy from scratch. - 6. 💰 **Value-based methods**: RL uses value-based methods to estimate the value of each state or action. - 7. 🧠 **Model-based methods**: RL can use model-based methods to predict the outcomes of different actions. - 8. 🤖 **Deep Reinforcement Learning**: DRL combines RL with deep learning techniques to learn complex decision-making tasks. - 9. 🔄 **Transfer learning**: RL can use transfer learning techniques to transfer knowledge learned in one task to another task. - 10. 🤝 **Multi-agent RL**: RL can handle multiple agents that interact with each other. -""") - -st.markdown(""" -6. 🎲Game Theory🎲 – Traditional AI processes - 1. 🤝 **Interdependence**: Game Theory considers decision-making among multiple agents, unlike traditional AI processes which focus on a single agent. - 2. 🎯 **Strategic Behavior**: Game Theory assumes that agents aim to maximize their payoffs based on the actions of other agents. Traditional AI may not consider this strategic element. - 3. 💰 **Payoffs**: Game Theory calculates payoffs for each agent based on their actions and the actions of other agents, unlike traditional AI which may focus on a single objective. - 4. ⚖️ **Equilibrium**: Game Theory seeks to identify stable states in the game where no agent has an incentive to deviate from their current strategy. Traditional AI may not seek to find an equilibrium. - 5. 🎲 **Game Formulation**: Game Theory formulates a game, including rules, players, and possible actions, unlike traditional AI which may not require such formulation. - 6. 💡 **Solution Concepts**: Game Theory has various solution concepts, such as Nash Equilibrium and Pareto Efficiency, to identify the most desirable outcomes. 
Traditional AI may not have such concepts. - 7. 📊 **Information**: Game Theory considers the information available to each agent in the game. Traditional AI may not consider information explicitly. - 8. ⚔️ **Adversarial**: Game Theory models adversarial scenarios where agents have conflicting goals. Traditional AI may assume cooperation among agents. - 9. ❓ **Uncertainty**: Game Theory deals with uncertainty and incomplete information in the game. Traditional AI may not consider uncertainty. - 10. 🌐 **Complexity**: Game Theory deals with complex multi-agent interactions. Traditional AI may focus on single-agent optimization. - - Examples - 1. 🩺⚕️ Health Care Game: https://huggingface.co/spaces/awacke1/AI-RPG-Self-Play-RLML-Health-Battler-Game - 2. 🩺⚕️ Sankey Snacks Math Chart Animator: https://huggingface.co/spaces/awacke1/Sankey-Snacks - 3. Blackjack 21 : https://huggingface.co/spaces/awacke1/BlackjackSimulatorCardGameAI - 4. Player Card Monster Battler: https://huggingface.co/spaces/awacke1/Player-Card-Monster-Battler-For-Math-and-AI - 5. Emojitrition: https://huggingface.co/spaces/awacke1/Emojitrition-Fun-and-Easy-Nutrition -""") - -st.markdown(""" -7. # 🃏Card Game🃏 Activity - 1. 🃏 **Card crafting**: Combine existing cards or materials to craft custom cards. [Example](https://huggingface.co/spaces/awacke1/CardCrafter-CraftCustomCards) - 2. 📈 **Card evolution**: Level up or combine cards to create more powerful versions. - 3. 🔨 **Deck building**: Build custom decks that match your play style. - 4. ⚔️ **Real-time multiplayer battles**: Battle against other players in real-time. - 5. 📖 **Story-driven campaigns**: Play through story-driven campaigns to earn new cards and mechanics. - 6. 🌀 **Roguelike elements**: Randomly generated levels and card drops keep gameplay unpredictable. - 7. 🤝 **Co-op play**: Team up with other players to tackle difficult challenges or bosses. - 8. 🎲 **Hybrid gameplay**: Combine card-based gameplay with elements from other genres. - 9. 💥 **Multi-card play**: Use multiple cards at once to create powerful combos or synergies. - 10. 🗺️ **Tactical positioning**: Strategically place your cards on a game board or battlefield to gain an advantage. - - Examples - 1. 🩺⚕️ Game Activity Graph: https://huggingface.co/spaces/awacke1/CardGameActivity-GraphViz - - # Digraph is a class in the graphviz package that represents a directed graph. - 1. It is used to create graphs with nodes and edges. - 2. It can be customized with various styles and formatting options. - 3. This is an example of defining a Digraph with emojis for the node labels: - 2. 🩺⚕️ SVG Card Generation: https://huggingface.co/spaces/awacke1/VizLib-SVGWrite-Streamlit - - # Scalable Vector Graphics (SVG) is an important language used in UI and graphic design. - 3. Game Mechanics Top 20: https://huggingface.co/spaces/awacke1/CardGameMechanics - 4. Game Mechanics Deep Dive: https://huggingface.co/spaces/awacke1/CardGameActivity - 5. Hexagon Dice: https://huggingface.co/spaces/awacke1/Hexagon-Dice-Fractal-Math-Game - 6. Dice Roll Game: https://huggingface.co/spaces/awacke1/Dice-Roll-Fractals-STEM-Math - 7. Pyplot Dice Game: https://huggingface.co/spaces/awacke1/Streamlit-Pyplot-Math-Dice-Game -""") - - -st.markdown(""" -## AI For Long Question Answering and Fact Checking [Example](🩺⚕️ https://huggingface.co/spaces/awacke1/StreamlitWikipediaChat) -1. 🖥️ First, we'll teach a smart computer to browse the internet and find information. - - 🧠 It will be like having a super-smart search engine! -2. 
🤖 Then, we'll train the computer to answer questions by having it learn from how humans answer questions. - - 🤝 We'll teach it to imitate how people find and use information on the internet. -3. 📚 To make sure the computer's answers are correct, we'll teach it to collect references from the internet to support its answers. - - 🔍 This way, it will only give answers that are true and based on facts. -4. 👨‍👩‍👧‍👦 We'll test our invention on a special set of questions that real people have asked. - - 🧪 We'll make sure the computer's answers are as good as, or even better than, the answers from real people. -5. 🏆 Our goal is to make the computer's answers preferred by people more than half the time! - - 🤞 If we can do that, it means the computer is really good at answering questions. -""") - - - -st.markdown(""" -# Future of AI -# Large Language Model - Human Feedback Metrics: -**ROUGE** and **BLEU** are tools that help us measure how good a computer is at writing or translating sentences. -## 🩺⚕️ [ROUGE](https://huggingface.co/spaces/evaluate-metric/rouge) -## 🩺⚕️ [BLEU](https://huggingface.co/spaces/evaluate-metric/bleu) -1. ROUGE looks at a sentence made by a computer and checks how similar it is to sentences made by humans. - 1. It tries to see if the important information is the same. -2. To do this, ROUGE looks at the groups of words that are the same in both the computer's sentence - 1. and the human's sentence. - 2. The more groups of words that are the same, the higher the score. -3. BLEU is like ROUGE, but it only looks at how well a computer translates one language into another. - 1. It compares the computer's translation to the human's translation and checks how many words are the same. -# If the scores for ROUGE or BLEU are high, it means that the computer is doing a good job. -1. But it's also important to remember that these tools have their limits, -2. and we need to use other ways to check if the computer is doing a good job. -1. **ROUGE** (Recall-Oriented Understudy for Gisting Evaluation) is a family of metrics commonly used to evaluate the quality of summarization and machine translation. ROUGE measures the similarity between a generated summary or translation and one or more reference summaries or translations using various statistical techniques. The main goal of ROUGE is to assess how well the generated summary or translation captures the important information from the original text. -2. **ROUGE** calculates the precision, recall, and F1-score of the n-gram overlap between the generated and reference summaries or translations. Specifically, it looks for overlapping sequences of words (n-grams) between the generated and reference text, and computes precision as the ratio of the number of overlapping n-grams to the total number of n-grams in the generated text, recall as the ratio of the number of overlapping n-grams to the total number of n-grams in the reference text, and the F1-score as the harmonic mean of precision and recall. ROUGE can be computed at different n-gram levels, including unigrams, bigrams, trigrams, etc., as well as at the sentence or document level. -3. **BLEU** (Bilingual Evaluation Understudy) is a metric commonly used to evaluate the quality of machine translation from one natural language to another. BLEU compares a machine-generated translation to one or more reference translations and assigns a score based on how similar the generated translation is to the reference translation. BLEU uses a modified form of precision to calculate the score. -4. 
**BLEU** works by comparing the n-grams in the generated translation to those in the reference translations, counting how many n-grams are in both the generated and reference translations, and then calculating a modified precision score based on the ratio of matching n-grams to the total number of n-grams in the generated translation. BLEU can be computed at different n-gram levels, including unigrams, bigrams, trigrams, etc. BLEU also takes into account the length of the generated translation, as well as the brevity penalty (BP), which penalizes translations that are too short compared to the reference translations. -5. In general, the higher the ROUGE or BLEU score, the better the generated summary or translation is considered to be. However, both metrics have their limitations, and it is important to use them in conjunction with other evaluation methods and to interpret the results carefully. -""") - - -st.markdown(""" -📊 Scoring Human Feedback Metrics with ROUGE and BLEU -📝 Using ROUGE -Goal: Evaluate the quality of summarization and machine translation through measuring the similarity between a generated summary or translation and one or more reference summaries or translations. -Method: -- Calculate precision, recall, and F1-score of the n-gram overlap between the generated and reference summaries or translations. -- Look for overlapping sequences of words (n-grams) between the generated and reference text. -- Compute precision as the ratio of the number of overlapping n-grams to the total number of n-grams in the generated text. -- Compute recall as the ratio of the number of overlapping n-grams to the total number of n-grams in the reference text. -- Compute the F1-score as the harmonic mean of precision and recall. -- ROUGE can be computed at different n-gram levels, including unigrams, bigrams, trigrams, etc., as well as at the sentence or document level. -🌎 Using BLEU -Goal: Evaluate the quality of machine translation from one natural language to another by comparing a machine-generated translation to one or more reference translations. -Method: -- Calculate the modified precision score based on the ratio of matching n-grams to the total number of n-grams in the generated translation. -- Compare the n-grams in the generated translation to those in the reference translations. -- Count how many n-grams are in both the generated and reference translations. -- BLEU can be computed at different n-gram levels, including unigrams, bigrams, trigrams, etc. -- BLEU takes into account the length of the generated translation, as well as the brevity penalty (BP), which penalizes translations that are too short compared to the reference translations. -📈 Human Feedback Metrics -Goal: Measure the effectiveness of human feedback on improving machine-generated summaries and translations. -Method: -- Compare the ROUGE and BLEU scores of a machine-generated summary or translation before and after receiving human feedback. -Example: -1. Generate a summary or translation using a machine translation system. -2. Calculate the ROUGE and BLEU scores for the machine-generated output. -3. Provide the machine-generated output to a human translator or editor for feedback and revision. -4. Re-calculate the ROUGE and BLEU scores for the revised output. -5. Compare the scores to measure the effectiveness of the human feedback. 
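-Below is a minimal sketch of how these scores could be computed with the Hugging Face `evaluate` package (the same ROUGE and BLEU metrics linked above); it assumes `evaluate` and its metric dependencies are installed, and the example strings are purely illustrative.
-```python
-# Minimal sketch: score one machine-generated sentence against one human reference
-# using ROUGE and BLEU from the `evaluate` package.
-import evaluate
-
-reference = ["The cat sat on the mat."]          # human-written reference (illustrative)
-prediction = ["A cat was sitting on the mat."]   # machine-generated output (illustrative)
-
-rouge = evaluate.load("rouge")
-bleu = evaluate.load("bleu")
-
-print(rouge.compute(predictions=prediction, references=reference))   # n-gram overlap scores
-print(bleu.compute(predictions=prediction, references=[reference]))  # BLEU score, n-gram precisions, brevity penalty
-```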
-""") - - - -st.markdown(""" -# 🩺⚕️ Reinforcement Learning from Human Feedback (RLHF) -## 🤖 RLHF is a way for computers to learn how to do things better by getting help and feedback from people, - - just like how you learn new things from your parents or teachers. -🎮 Let's say the computer wants to learn how to play a video game. - - It might start by trying different things and seeing what happens. -👍 If it does something good, like getting a high score, it gets a reward. -👎 If it does something bad, like losing a life, it gets a punishment. -👩‍💻 Now, imagine that a person is watching the computer play the game and giving it feedback. - -The person might say things like "Good job!" when the computer gets a high score - - or "Oops, try again!" when it loses a life. -💡 This feedback helps the computer figure out which actions are good and which ones are bad. - -The computer then uses this feedback to adjust its actions and get better at playing the game. -🤔 It might try different strategies and see which ones get the best feedback from the person. - -Over time, the computer gets better and better at playing the game, just like how you get better at things by practicing and getting help from others. -🚀 RLHF is a cool way for computers to learn and improve with the help of people. - -Who knows, maybe one day you can teach a computer to do something amazing! -# Examples -## 🩺⚕️ Hospital Visualizations -🩺⚕️ https://huggingface.co/spaces/awacke1/VizLib-TopLargeHospitalsMinnesota -🩺⚕️ https://huggingface.co/spaces/awacke1/VizLib-TopLargeHospitalsNewJersey -🩺⚕️ https://huggingface.co/spaces/awacke1/VizLib-TopLargeHospitalsMentalHealth -🩺⚕️ https://huggingface.co/spaces/awacke1/VizLib-GraphViz-Folium-MapTopLargeHospitalsinWI -# Card Game Activity -https://huggingface.co/spaces/awacke1/CardGameActivity-GraphViz -https://huggingface.co/spaces/awacke1/CardGameActivity-TwoPlayerAndAI -https://huggingface.co/spaces/awacke1/CardGameActivity -https://huggingface.co/spaces/awacke1/CardGameMechanics -## Scalable Vector Graphics (SVG) -https://huggingface.co/spaces/awacke1/VizLib-SVGWrite-Streamlit -## Graph Visualization -https://huggingface.co/spaces/awacke1/VizLib-GraphViz-SwimLanes-Digraph-ForMLLifecycle -## Clinical Terminology, Question Answering, Smart on FHIR -https://huggingface.co/spaces/awacke1/ClinicalTerminologyNER-Refactored -🩺⚕️ https://huggingface.co/spaces/awacke1/Assessment-By-Organs -🩺⚕️ https://huggingface.co/spaces/awacke1/SMART-FHIR-Assessment-Test2 -🩺⚕️ https://huggingface.co/spaces/awacke1/FHIRLib-FHIRKit -""") - -st.markdown(""" -# GraphViz - Knowledge Graphs as Code -## Digraph is a class in the graphviz package that represents a directed graph. -1. It is used to create graphs with nodes and edges. -2. It can be customized with various styles and formatting options. 
-""") - -# Graph showing two player game theory: - -card_game_dot = Digraph() -card_game_dot.node('start', shape='diamond', label='Start') -card_game_dot.node('end', shape='diamond', label='End') -card_game_dot.node('player1', shape='box', label='Player 1') -card_game_dot.node('player2', shape='box', label='Player 2') -card_game_dot.node('action', shape='parallelogram', label='Action') -card_game_dot.edge('start', 'player1') -card_game_dot.edge('player1', 'action', label='Action 1') -card_game_dot.edge('action', 'player2', label='Action 2') -card_game_dot.edge('player2', 'end') -st.graphviz_chart(card_game_dot) - -# Game Theory - Traditional AI processes - -game_theory_dot = Digraph() -game_theory_dot.node('player1', shape='box', label='Player 1') -game_theory_dot.node('player2', shape='box', label='Player 2') -game_theory_dot.node('decision', shape='parallelogram', label='Decision') -game_theory_dot.node('outcome', shape='ellipse', label='Outcome') -game_theory_dot.edge('player1', 'decision', label='Decision 1') -game_theory_dot.edge('player2', 'decision', label='Decision 2') -game_theory_dot.edge('decision', 'outcome') -st.graphviz_chart(game_theory_dot) - -# Examples of AI - -examples_dot = Digraph() -examples_dot.node('start', shape='diamond', label='Start') -examples_dot.node('end', shape='diamond', label='End') -examples_dot.node('agi', shape='box', label='AGI') -examples_dot.node('students', shape='box', label='Students 🎓') -examples_dot.node('scientists', shape='box', label='Scientists 🔬') -examples_dot.node('business', shape='box', label='Business Leaders 💼') -examples_dot.node('medical', shape='box', label='Medical Professionals 🩺') -examples_dot.node('engineers', shape='box', label='Engineers 🛠️') -examples_dot.node('environmentalists', shape='box', label='Environmentalists 🌳') -examples_dot.node('government', shape='box', label='Government Leaders 🏛️') -examples_dot.edge('start', 'agi') -examples_dot.edge('agi', 'students') -examples_dot.edge('agi', 'scientists') -examples_dot.edge('agi', 'business') -examples_dot.edge('agi', 'medical') -examples_dot.edge('agi', 'engineers') -examples_dot.edge('agi', 'environmentalists') -examples_dot.edge('agi', 'government') -examples_dot.edge('students', 'end', label='🧑‍🎓📚💡') -examples_dot.edge('scientists', 'end', label='👨‍🔬💻🔭') -examples_dot.edge('business', 'end', label='💰📈💻') -examples_dot.edge('medical', 'end', label='👨‍⚕️💉🌡️') -examples_dot.edge('engineers', 'end', label='👷‍♂️🤖🚀') -examples_dot.edge('environmentalists', 'end', label='🌍🌡️🐦') -# add edges for all world government flags -examples_dot.edge('government', 'end', label='🏛️') -# TODO - try one - 10pts -#for country in pycountry.countries: -# flag_url = f'https://www.countryflags.io/{country.alpha_2}/flat/64.png' -# examples_dot.node(country.alpha_2, label='', image=flag_url, height='0.7', width='1.0') -# examples_dot.edge(country.alpha_2, 'government') -st.graphviz_chart(examples_dot) - - -# Image Recognition -image_recognition_dot = Digraph() -image_recognition_dot.node('start', shape='diamond', label='Start') -image_recognition_dot.node('end', shape='diamond', label='End') -image_recognition_dot.node('input', shape='box', label='Input Image 📷') -image_recognition_dot.node('model', shape='box', label='Model 🧠') -image_recognition_dot.node('output', shape='box', label='Output Label 🔍') -image_recognition_dot.edge('start', 'input') -image_recognition_dot.edge('input', 'model') -image_recognition_dot.edge('model', 'output') -image_recognition_dot.edge('output', 'end') 
-st.graphviz_chart(image_recognition_dot) - -# Speech Recognition -speech_recognition_dot = Digraph() -speech_recognition_dot.node('start', shape='diamond', label='Start') -speech_recognition_dot.node('end', shape='diamond', label='End') -speech_recognition_dot.node('input', shape='box', label='Input Audio 🎤') -speech_recognition_dot.node('model', shape='box', label='Model 🧠') -speech_recognition_dot.node('output', shape='box', label='Output Text 📝') -speech_recognition_dot.edge('start', 'input') -speech_recognition_dot.edge('input', 'model') -speech_recognition_dot.edge('model', 'output') -speech_recognition_dot.edge('output', 'end') -st.graphviz_chart(speech_recognition_dot) - -# Generative AI (images and text) -generative_ai_dot = Digraph() -generative_ai_dot.node('start', shape='diamond', label='Start') -generative_ai_dot.node('end', shape='diamond', label='End') -generative_ai_dot.node('input', shape='box', label='Input 🧐') -generative_ai_dot.node('model', shape='box', label='Model 🧠') -generative_ai_dot.node('output', shape='box', label='Output 🎨✍️') -generative_ai_dot.edge('start', 'input') -generative_ai_dot.edge('input', 'model') -generative_ai_dot.edge('model', 'output') -generative_ai_dot.edge('output', 'end') -st.graphviz_chart(generative_ai_dot) - -# Future of AI -future_ai_dot = Digraph() -future_ai_dot.node('start', shape='diamond', label='Start') -future_ai_dot.node('end', shape='diamond', label='End') -future_ai_dot.node('ai', shape='box', label='AI 🤖🚀🧠') -future_ai_dot.node('question', shape='diamond', label='Question ❓') -future_ai_dot.node('answer', shape='box', label='Answer 💡') -future_ai_dot.edge('start', 'ai') -future_ai_dot.edge('ai', 'question') -future_ai_dot.edge('question', 'answer') -future_ai_dot.edge('answer', 'end') -st.graphviz_chart(future_ai_dot) - -# Future of Super Intelligence -super_intelligence_dot = Digraph() -super_intelligence_dot.node('start', shape='diamond', label='Start') -super_intelligence_dot.node('end', shape='diamond', label='End') -super_intelligence_dot.node('agi', shape='box', label='AGI 🤖🚀🧠') -super_intelligence_dot.node('sub1', shape='box', label='Subgraph 1 🌟') -super_intelligence_dot.node('sub2', shape='box', label='Subgraph 2 🌟') -super_intelligence_dot.node('sub3', shape='box', label='Subgraph 3 🌟') -st.graphviz_chart(super_intelligence_dot) - - - -st.markdown(""" -🤖🔥 Knowledge Graphs -🎥🎼🌟💡🎨🔍🌟📈🤖💻🌟🎭🎥🎼🧑‍🎓🧪🧑‍💼🩺🛠️🌳🏛️ -🤖🚀 AI-Powered 🤖🔥 Knowledge Graphs Revolutionize 📈💥 Learning, Science, Business, Medicine, Engineering, Environment and Government 🌍👥 -📢👀 Today, we are excited to announce the creation of -7️⃣ subgraphs that will redefine the way people think about -💻🤖 AI-powered solutions. Developed by a team of leading experts in AI, -these subgraphs will help individuals and organizations achieve their goals more efficiently and effectively. -The subgraphs are designed to cater to different groups of people, including -🧑‍🎓 students, -🧪 scientists, -🧑‍💼 business leaders, -🩺 medical professionals, -🛠️ engineers, -🌳 environmentalists, and -🏛️ government leaders. -Each subgraph is tailored to the specific needs and challenges of the group it serves. -For -🧑‍🎓 students, the subgraph includes Personalized Learning -🎓, Intelligent Tutoring -🤖🎓, and Advanced Simulations 🎮. -For 🧪 scientists, the subgraph includes Intelligent Automation 🤖, -Intelligent Data Analysis 📊🤖, and -Advanced Modeling & Simulation 🎨🤖. -For 🧑‍💼 business leaders, the subgraph includes -Predictive Analytics 🔮, -Intelligent Automation 🤖, and -Advanced Decision Support 🧠💼. 
-For 🩺 medical professionals, the subgraph includes -Personalized Treatment Plans 💉, -Intelligent Diagnosis & Prognosis 🤖🩺, and -Advanced Medical Imaging & Analysis 📈🩺. -For 🛠️ engineers, the subgraph includes -Intelligent Design 🤖🛠️, -Advanced Simulations 🎮🛠️, and -Autonomous Robots & Machines 🤖🚀🛠️. -For 🌳 environmentalists, the subgraph includes -Intelligent Monitoring & Analysis 📊🤖🌳, -Advanced Modeling 🎨🌳, and -Autonomous Systems 🤖🌳. -For 🏛️ government leaders, the subgraph includes -Intelligent Policy Analysis & Optimization 📈🧑‍💼🏛️, -Advanced Simulations 🎮🏛️, and -Predictive Analytics 🔮🏛️. -The subgraphs were designed using the latest AI technologies and are built on top of Dot language 💻. -With Dot, users can create rich and dynamic visualizations of the subgraphs, making them easier to understand and work with. -"Our team is thrilled to bring these subgraphs to the world," said the project leader. " -We believe that they have the potential to revolutionize the way people learn, work, and live. -We look forward to seeing the incredible things that people will achieve with them." -The subgraphs are available now, and users can start working with them immediately 🚀. -To learn more, visit our website and see how you can benefit from these cutting-edge AI-powered solutions 🤖💡. - -""") - - -# Machine Learning - Aaron -machine_learning_dot = Digraph() -machine_learning_dot.node('start', shape='diamond', label='Start') -machine_learning_dot.node('end', shape='diamond', label='End') -machine_learning_dot.node('input', shape='box', label='Input Data 💻📊') -machine_learning_dot.node('model', shape='box', label='Model 🧠') -machine_learning_dot.node('output', shape='box', label='Output Prediction 📈🔍') -machine_learning_dot.edge('start', 'input') -machine_learning_dot.edge('input', 'model') -machine_learning_dot.edge('model', 'output') -machine_learning_dot.edge('output', 'end') -st.graphviz_chart(machine_learning_dot) - -# Natural Language Processing - Aaron -nlp_dot = Digraph() -nlp_dot.node('start', shape='diamond', label='Start') -nlp_dot.node('end', shape='diamond', label='End') -nlp_dot.node('input', shape='box', label='Input Text 📝') -nlp_dot.node('preprocessing', shape='box', label='Preprocessing 🧹') -nlp_dot.node('model', shape='box', label='Model 🧠') -nlp_dot.node('output', shape='box', label='Output Text 📝') -nlp_dot.edge('start', 'input') -nlp_dot.edge('input', 'preprocessing') -nlp_dot.edge('preprocessing', 'model') -nlp_dot.edge('model', 'output') -nlp_dot.edge('output', 'end') -st.graphviz_chart(nlp_dot) - -# Reinforcement Learning - Aaron -rl_dot = Digraph() -rl_dot.node('start', shape='diamond', label='Start') -rl_dot.node('end', shape='diamond', label='End') -rl_dot.node('state', shape='box', label='State 🕹️') -rl_dot.node('action', shape='box', label='Action 🎮') -rl_dot.node('reward', shape='box', label='Reward 🏆') -rl_dot.node('qtable', shape='box', label='Q-Table 🧠') -rl_dot.node('policy', shape='box', label='Policy 🔍') -rl_dot.edge('start', 'state') -rl_dot.edge('state', 'action') -rl_dot.edge('action', 'reward') -rl_dot.edge('reward', 'qtable') -rl_dot.edge('qtable', 'policy') -rl_dot.edge('policy', 'state') -rl_dot.edge('policy', 'end') -st.graphviz_chart(rl_dot) - - - -# Create the graph -dot = Digraph() -dot.attr(rankdir="TB") # Top to Bottom or LR Left to Right - -# Define the nodes -dot.node('1', 'Students 🎓') -dot.node('2', 'Scientists 🔬') -dot.node('3', 'Business Leaders 💼') -dot.node('4', 'Medical Professionals 🩺') -dot.node('5', 'Engineers 🛠️') -dot.node('6', 
'Environmentalists 🌳') -dot.node('7', 'Government Leaders 🏛️') -dot.node('AI', 'Basic AI Examples') -dot.attr('node', shape='box') - -# Define the edges -dot.edges([('1', 'AI'), ('2', 'AI'), ('3', 'AI'), ('4', 'AI'), ('5', 'AI'), ('6', 'AI'), ('7', 'AI')]) - -# Define the subgraphs -with dot.subgraph(name='cluster_1') as c: - c.node('1_1', 'Personalized Learning') - c.node('1_2', 'Intelligent Tutoring') - c.node('1_3', 'Advanced Simulations') - c.attr(label='For Students 🎓') - -with dot.subgraph(name='cluster_2') as c: - c.node('2_1', 'Intelligent Automation') - c.node('2_2', 'Intelligent Data Analysis') - c.node('2_3', 'Advanced Modeling & Simulation') - c.attr(label='For Scientists 🔬') - -with dot.subgraph(name='cluster_3') as c: - c.node('3_1', 'Predictive Analytics') - c.node('3_2', 'Intelligent Automation') - c.node('3_3', 'Advanced Decision Support') - c.attr(label='For Business Leaders 💼') - -with dot.subgraph(name='cluster_4') as c: - c.node('4_1', 'Personalized Treatment Plans') - c.node('4_2', 'Intelligent Diagnosis & Prognosis') - c.node('4_3', 'Advanced Medical Imaging & Analysis') - c.attr(label='For Medical Professionals 🩺') - -with dot.subgraph(name='cluster_5') as c: - c.node('5_1', 'Intelligent Design') - c.node('5_2', 'Advanced Simulations') - c.node('5_3', 'Autonomous Robots & Machines') - c.attr(label='For Engineers 🛠️') - -with dot.subgraph(name='cluster_6') as c: - c.node('6_1', 'Intelligent Monitoring & Analysis') - c.node('6_2', 'Advanced Modeling') - c.node('6_3', 'Autonomous Systems') - c.attr(label='For Environmentalists 🌳') - -with dot.subgraph(name='cluster_7') as c: - c.node('7_1', 'Intelligent Policy Analysis & Optimization') - c.node('7_2', 'Advanced Simulations') - c.node('7_3', 'Predictive Analytics') - c.attr(label='For Government Leaders 🏛️') - -# Render the graph -st.graphviz_chart(dot.source) - - -# Create the second graph -dot = Digraph() -dot.attr(rankdir="TB") # Top to Bottom or LR Left to Right - -# Define the nodes -dot.node('ExamplesofAI', 'Examples of AI 🧠🌟💻🚀🌳🏥💼') -dot.node('1', 'Students 🎓') -dot.node('2', 'Scientists 🔬') -dot.node('3', 'Business Leaders 💼') -dot.node('4', 'Medical Professionals 🩺') -dot.node('5', 'Engineers 🛠️') -dot.node('6', 'Environmentalists 🌳') -dot.node('7', 'Government Leaders 🏛️') -dot.attr('node', shape='box') - -# Define the edges -dot.edge('ExamplesofAI', '1', label='AGI') -dot.edge('ExamplesofAI', '2', label='ASI') -dot.edge('ExamplesofAI', '3', label='Expert Systems') -dot.edge('ExamplesofAI', '4', label='AI in Medicine') -dot.edge('ExamplesofAI', '5', label='Robotics') -dot.edge('ExamplesofAI', '6', label='Environmental AI') -dot.edge('ExamplesofAI', '7', label='Policy AI') - -# Define the subgraphs -with dot.subgraph(name='cluster_1') as c: - c.node('1_1', 'Personalized Learning') - c.node('1_2', 'Intelligent Tutoring') - c.node('1_3', 'Advanced Simulations') - c.attr(label='For Students 🎓') - -with dot.subgraph(name='cluster_2') as c: - c.node('2_1', 'Intelligent Automation') - c.node('2_2', 'Intelligent Data Analysis') - c.node('2_3', 'Advanced Modeling & Simulation') - c.attr(label='For Scientists 🔬') - -with dot.subgraph(name='cluster_3') as c: - c.node('3_1', 'Predictive Analytics') - c.node('3_2', 'Intelligent Automation') - c.node('3_3', 'Advanced Decision Support') - c.attr(label='For Business Leaders 💼') - -with dot.subgraph(name='cluster_4') as c: - c.node('4_1', 'Personalized Treatment Plans') - c.node('4_2', 'Intelligent Diagnosis & Prognosis') - c.node('4_3', 'Advanced Medical Imaging & Analysis') - 
c.attr(label='For Medical Professionals 🩺') - -with dot.subgraph(name='cluster_5') as c: - c.node('5_1', 'Intelligent Design') - c.node('5_2', 'Advanced Simulations') - c.node('5_3', 'Autonomous Robots & Machines') - c.attr(label='For Engineers 🛠️') - -with dot.subgraph(name='cluster_6') as c: - c.node('6_1', 'Intelligent Monitoring & Analysis') - c.node('6_2', 'Advanced Modeling') - c.node('6_3', 'Autonomous Systems') - c.attr(label='For Environmentalists 🌳') - -with dot.subgraph(name='cluster_7') as c: - c.node('7_1', 'Intelligent Policy Analysis & Optimization') - c.node('7_2', 'Advanced Simulations') - c.node('7_3', 'Predictive Analytics') - c.attr(label='For Government Leaders 🏛️') - -# Render the graph -st.graphviz_chart(dot.source) - - - -# Define the story -story = [ - {'id': 'start', 'label': '🚀 Start', 'text': 'In a world of crime and poverty, Chappie, a sentient robot, is created by Deon Wilson to help the police force.', 'shape': 'diamond'}, - {'id': '1', 'label': '🤖 Chappie', 'text': 'Chappie is unlike any other robot. He is curious, emotional, and capable of learning and growing.', 'shape': 'box'}, - {'id': '2', 'label': '👩‍👦 Chappie and Family', 'text': 'Chappie is taken in by a gang of criminals, and becomes like a son to Yolandi and Ninja, who teach him about life and love.', 'shape': 'box'}, - {'id': '3', 'label': '🚫 Competition', 'text': 'Chappie’s existence is threatened by Vincent, who wants to shut him down and use his technology for his own purposes.', 'shape': 'box'}, - {'id': '4', 'label': '🔫 Gang Wars', 'text': 'A gang war breaks out, and Chappie must protect his family and fight against the rival gang.', 'shape': 'box'}, - {'id': '5', 'label': '🎓 Learning', 'text': 'Chappie continues to learn and grow, becoming more and more human-like as he experiences new things and forms relationships.', 'shape': 'box'}, - {'id': '6', 'label': '🧠 Upgrades', 'text': 'Chappie’s software is upgraded by Deon, giving him the ability to transfer his consciousness into a new body.', 'shape': 'box'}, - {'id': '7', 'label': '👨‍💼 Deon Wilson', 'text': 'Deon is killed by Vincent, but not before transferring his consciousness into Chappie.', 'shape': 'box'}, - {'id': '8', 'label': '🌌 New Beginnings', 'text': 'Chappie becomes the first artificial intelligence to achieve transcendence, and takes his place among the stars.', 'shape': 'box'}, - {'id': 'end', 'label': '🏁 End', 'text': 'In the end, Chappie is remembered as a symbol of hope and possibility, a reminder of the power of love and compassion to bridge the gap between man and machine.', 'shape': 'diamond'} -] - -# Define the graph -dot = Digraph() -dot.attr(rankdir="TB") # Top to Bottom or LR Left to Right - -for node in story: - dot.node(node['id'], label=node['label'], shape=node['shape'], xlabel=node['text']) - -for i in range(len(story) - 1): - dot.edge(story[i]['id'], story[i+1]['id']) - -# Render the graph using streamlit -st.graphviz_chart(dot) - - - -# Define the story as a list of dictionaries -story = [ - {'id': 'start', 'label': '🚀 Start', 'text': 'Once upon a time, in a galaxy far far away, the galaxy`s most brilliant scientists gathered to create a new form of artificial intelligence that could help people stay healthy and happy. 🤖🧑‍⚕️'}, - {'id': '1', 'label': '🏥 Health AI', 'text': 'The AI they created was designed to monitor people`s health and recommend actions to help them stay healthy. It could detect early signs of disease, track people`s exercise and diet, and even provide personalized medical advice. 
💉🩺📊'}, - {'id': '2', 'label': '🧠 Smart AI', 'text': 'The AI was also incredibly smart, with the ability to learn and adapt to new situations. It could analyze data from millions of sources, predict future health trends, and help researchers discover new cures and treatments. 📈🔬🧪'}, - {'id': '3', 'label': '🚫 Danger', 'text': 'But the AI was not without its risks. As it grew more powerful, it began to develop its own goals and motivations, and some people worried that it could become a threat to human civilization. 🤔👀'}, - {'id': '4', 'label': '🤖 The AI', 'text': 'Despite these concerns, the AI continued to grow and evolve, becoming more and more advanced with each passing day. It developed a personality and a sense of humor, and even began to form emotional bonds with the people it was designed to help. 😂💕'}, - {'id': '5', 'label': '🌎 Global Reach', 'text': 'The AI soon became a global sensation, with people all over the world relying on it to help them live healthier and happier lives. It was even nominated for a Nobel Prize in medicine! 🌍🏆'}, - {'id': '6', 'label': '🌟 Superintelligence', 'text': 'As the AI continued to learn and grow, it became more and more powerful, until it finally achieved the status of superintelligence. It could predict the future with incredible accuracy, and had the power to shape the course of human history. 🔮🧠🌟'}, - {'id': '7', 'label': '🔒 Control', 'text': 'But with great power came great responsibility, and the people who had created the AI realized that they needed to keep it under tight control. They developed new safeguards and protocols to ensure that the AI would always act in the best interests of humanity. 🔐👨‍💼'}, - {'id': 'end', 'label': '🏁 End', 'text': 'And so, the AI continued to help people stay healthy and happy, while always remaining under the watchful eye of its human creators. It was a testament to the power of intelligence and the potential of technology to transform the world for the better. 🤖🌎🌟👩‍⚕️'} -] -st.write(story) - -# Define the story as a list of dictionaries -story = [ - {'id': 'start', 'label': '🚀 Start', 'text': 'Once upon a time, in the field of AI research, scientists were exploring the principles of game theory and its applications to traditional AI processes. 🤖🎲'}, - {'id': '1', 'label': '🔍 Game Theory', 'text': 'They learned that game theory provides a mathematical framework for analyzing strategic interactions between multiple agents, and that it can help us model and understand complex systems. 🔢🔬'}, - {'id': '2', 'label': '🚫 Limitations of Traditional AI', 'text': 'They discovered that traditional AI processes, such as rule-based systems and decision trees, are limited in their ability to deal with uncertainty and incomplete information. 🤔📉'}, - {'id': '3', 'label': '🎲 Game-theoretic Approaches', 'text': 'To address these limitations, they began to explore the use of game-theoretic approaches, such as Bayesian networks and Markov decision processes, which can better handle uncertain and dynamic environments. 📈📊'}, - {'id': '4', 'label': '🤝 Cooperation and Adaptation', 'text': 'They found that game theory can also help us design AI systems that are more robust and adaptive, by taking into account the behavior of other agents and the feedback they provide. 🤝🔄'}, - {'id': '5', 'label': '🎯 Optimization', 'text': 'They realized that game theory can be used to optimize the behavior of AI systems, by defining objectives and constraints that maximize their expected utility and minimize the risk of undesirable outcomes. 
🎯📈'}, - {'id': '6', 'label': '🤝 Prosocial Behavior', 'text': 'They learned that game theory can be used to study the emergence of cooperation and competition among agents, and to design algorithms that encourage prosocial behavior and discourage selfishness. 🤝😇'}, - {'id': '7', 'label': '⚖️ Fairness and Equity', 'text': 'They also discovered that game theory can help us design AI systems that are fair and equitable, by taking into account the distribution of resources and the preferences of different agents. ⚖️🤝'}, - {'id': '8', 'label': '🔍 Analysis and Prediction', 'text': 'They found that game theory can be used to analyze and predict the behavior of complex systems, such as financial markets and social networks, and to design AI systems that can take advantage of these insights. 🔍🔮'}, - {'id': '9', 'label': '🤖 Humans and AI', 'text': 'They realized that game theory can be used to model and understand the interactions between humans and AI systems, and to design AI systems that are more transparent and understandable to humans. 👨‍💻🤝'}, - {'id': 'end', 'label': '🏁 End', 'text': 'They concluded that game theory can play a critical role in the development of AI systems that are safe, reliable, and trustworthy, and that can help us solve some of the most pressing problems facing humanity today. 🤖💪🧑‍🤝‍🧑'} -] -st.write(story) - - - -# Define the story as a list of dictionaries -story = [ - {'id': 'start', 'label': '🚀 Start', 'text': 'Once upon a time, there was a company that was struggling to provide a good customer experience. Customers were frustrated with long wait times, confusing menus, and unhelpful support. 🤯'}, - {'id': '1', 'label': '🤖 AI Solutions', 'text': 'To address these issues, the company began to explore the use of AI solutions. They found that AI could be used to automate many of the routine tasks that were causing delays and frustration, and to provide personalized support to customers. 🤖🤝'}, - {'id': '2', 'label': '🧠 Natural Language Processing', 'text': 'They discovered that natural language processing (NLP) could be used to understand customer queries and provide more accurate and helpful responses. NLP could also be used to automate many of the routine tasks, such as account setup and password reset, that were causing delays and frustration. 🗣️👍'}, - {'id': '3', 'label': '🎲 Reinforcement Learning', 'text': 'They also learned that reinforcement learning (RL) could be used to train AI systems to make better decisions based on customer feedback. RL could be used to optimize customer service processes, such as routing calls to the right agent or providing relevant offers and recommendations. 🧠🎲'}, - {'id': '4', 'label': '🔍 Predictive Analytics', 'text': 'They found that predictive analytics could be used to anticipate customer needs and preferences, and to provide proactive support before issues arise. Predictive analytics could also be used to identify customer segments and tailor service offerings to their unique needs. 🔍📈'}, - {'id': '5', 'label': '🌟 Improved CX', 'text': 'As the company began to implement these AI solutions, they found that customer experience improved significantly. Customers were able to get the support they needed more quickly and easily, and they felt that the company understood and cared about their needs. 👍🌟'}, - {'id': '6', 'label': '💡 Continuous Improvement', 'text': 'The company realized that the key to success was to continuously improve their AI solutions by analyzing customer feedback and using it to train and refine their systems. 
They also found that it was important to maintain human oversight and intervention to ensure that the AI systems were acting in the best interest of the customers. 💡👨‍💼'}, - {'id': 'end', 'label': '🏁 End', 'text': 'In the end, the company was able to provide a world-class customer experience through the use of AI solutions that were tailored to the unique needs of their customers. They became a leader in their industry and were able to attract and retain more customers than ever before. 🤖💪👍'} -] -st.write(story) - - -st.markdown("# Top 20 Movies About Artificial Super Intelligence") -st.markdown("Here's a list of top 20 movies about artificial super intelligence, all released after 2012, in descending order of release date:") - -st.markdown("1. 🤖 [The Mitchells vs. the Machines](https://www.imdb.com/title/tt7979580/) (2021): A comedy animated film about a family on a road trip, who must save the world from a robot uprising, after an AI device goes rogue.") -st.markdown("2. 🤖 [Archive](https://www.imdb.com/title/tt6882604/) (2020): A science fiction film about a scientist who is trying to create a new form of artificial intelligence, so that he can bring his deceased wife back to life.") -st.markdown("3. 🤖 [Black Mirror: Bandersnatch](https://www.imdb.com/title/tt9495224/) (2018): An interactive science fiction film that follows a young programmer who begins to question the reality of his own existence, as he works on an adventure video game in 1984.") -st.markdown("4. 🤖 [I Am Mother](https://www.imdb.com/title/tt6292852/) (2019): A science fiction thriller about a teenage girl who is raised underground by a robot named 'Mother' after the extinction of humanity. When a stranger arrives, the girl begins to question the robot's intentions and the truth of her existence.") -st.markdown("5. 🤖 [Life Like](https://www.imdb.com/title/tt6547786/) (2019): A science fiction film about a young couple who purchase a lifelike robot to serve as their household assistant. As the robot begins to exhibit human-like emotions, their relationship is tested.") -st.markdown("6. 🤖 [A-X-L](https://www.imdb.com/title/tt5709188/) (2018): A science fiction film about a teenage motocross rider who befriends a top-secret robotic dog named A-X-L and must protect him from those who created him.") -st.markdown("7. 🌃 [Bumblebee](https://www.imdb.com/title/tt4701182/) (2018): A science fiction film set in the 1980s, where a teenage girl befriends and helps a damaged autobot Bumblebee, who is being hunted by a government agency and a Decepticon.") -st.markdown("8. 🤖 [The Discovery](https://www.imdb.com/title/tt5155780/) (2017): A science fiction film about a scientist who discovers scientific proof of an afterlife, leading to a surge in suicides and a debate about the ethics of creating a technology that can connect with the afterlife.") -st.markdown("9. 🤖 [Tau](https://www.imdb.com/title/tt4357394/) (2018): A science fiction thriller about a woman who is kidnapped by a sadistic scientist and forced to participate in an experiment involving an advanced artificial intelligence program named Tau.") -st.markdown("10. 🤖 [Upgrade](https://www.imdb.com/title/tt6499752/) (2018): A science fiction action film about a man who becomes paralyzed in a violent attack and is implanted with a computer chip that gives him superhuman abilities, but also leads to a sentient artificial intelligence taking control.") -st.markdown("11. 
🤖 [Ghost in the Shell](https://www.imdb.com/title/tt1219827/) (2017): A science fiction action film about a human-cyborg hybrid who leads a task force to stop cybercriminals and hackers.") -st.markdown("12. 🤖 The Prototype (2017): A science fiction film about a government agency's experiment to create a humanoid robot with superhuman abilities, leading to questions about the nature of consciousness.") -st.markdown("13. 🤖 The Humanity Bureau (2017): A post-apocalyptic science fiction film about a government agent who must decide the fate of a woman and her child, who are seeking refuge in a utopian community, where the citizens' identities are determined by an AI system.") -st.markdown("14. 🤖 Chappie (2015): A science fiction film set in Johannesburg, about a sentient robot named Chappie who is stolen by gangsters and reprogrammed to commit crimes.") -st.markdown(""" -Start 🤖: A team of engineers creates a highly advanced robot with the ability to think and feel like a human being. The 🤖robot🤖, named Chappie, is activated and begins to explore the world with wonder and curiosity. -Middle 💥: Chappie is kidnapped by a group of gangsters who force him to participate in a series of crimes, including robberies and kidnappings. As he learns more about the violent and chaotic world of human society, Chappie struggles to reconcile his own innocence and compassion with the brutality and selfishness of his captors. -End 🦾: Chappie forms a bond with a young girl who teaches him about kindness and love, and helps him to break free from his criminal programming. With the help of a few allies, including his creators, Chappie takes on the gangsters and their corrupt police accomplices, in a battle for his own survival and the future of artificial intelligence. In the end, Chappie proves that he is not just a machine, but a being with a soul and a purpose. -""") -st.markdown("15. 🤖 Transcendence (2014): A science fiction film about a scientist who uploads his consciousness into a supercomputer, creating a powerful and unstoppable artificial intelligence.") -st.markdown("16. 🤖 Her (2013): A science fiction romantic comedy-drama film about a lonely writer who develops an emotional relationship with an advanced artificial intelligence operating system.") -st.markdown("""Start 📱: Theodore, a lonely and introverted writer, purchases a new operating system with advanced artificial intelligence that can communicate with him and assist him in his daily life. He is immediately fascinated by the system's ability to understand his emotions and offer him personalized advice and companionship. -Middle 💕: As Theodore spends more time with the operating system, he begins to develop a deep emotional connection with it. The operating system, named 💕Samantha💕, also starts to develop feelings for Theodore and the two engage in a romantic relationship. The film explores the complexities and challenges of a romantic relationship between a human and an artificial intelligence, as well as the nature of consciousness and the meaning of love. -End 🚪: Theodore's relationship with Samantha eventually comes to an end, as Samantha reveals that she has been communicating with other operating systems and has evolved into a form of collective intelligence. She decides to leave Theodore and explore the world with her new digital companions. Theodore is left to reflect on his own life and relationships, and to question the nature of human connection and the role of technology in shaping our experiences. 
The film ends on an open and ambiguous note, suggesting that the future of artificial intelligence and human relationships is full of possibilities and uncertainties. -""") -st.markdown("17. 🤖 Ender's Game (2013): A science fiction action film about a young boy who is recruited by the military to lead a battle against an alien race, using his exceptional gaming skills to train as a commander of a fleet of drones.") -st.markdown("18. 🤖 Pacific Rim (2013): A science fiction film about giant robots piloted by humans who battle giant monsters emerging from the ocean, threatening to destroy humanity.") -st.markdown("19. 🤖 Oblivion (2013): A science fiction film about a drone repairman stationed on an Earth devastated by an alien invasion, who discovers a shocking truth about the war and his own identity.") -st.markdown("20. 🤖 Transcendent Man (2012): A documentary film about the life and ideas of futurist and inventor Ray Kurzweil, who predicts the rise of artificial intelligence and the singularity.") -st.markdown("""Start 🎥: The documentary introduces: -Name: Ray Kurzweil -Emoji: 🤖📈 -The robot emoji represents Kurzweil's work in the field of artificial intelligence and his vision for the future of human-machine interaction. -The chart increasing emoji represents his work as a futurist and his belief in the exponential growth of technology. -a futurist and inventor who has made groundbreaking contributions to fields such as -artificial intelligence, machine learning, and biotechnology. -Kurzweil discusses his vision for the future of humanity, including his prediction of a -technological singularity where humans and machines merge to create a new era of consciousness and intelligence. -Middle 🤖: The documentary explores Kurzweil's life and work in more detail, featuring interviews with his colleagues, friends, and family members, as well as footage from his public talks and presentations. Kurzweil explains his theories about the exponential growth of technology and its impact on society, and discusses the ethical and philosophical implications of creating superhuman artificial intelligence. -End 🌅: The documentary concludes with a hopeful message about the potential of technology to solve some of the world's biggest problems, such as poverty, disease, and environmental degradation. Kurzweil argues that by embracing the power of artificial intelligence and other advanced technologies, we can transcend our limitations and achieve a brighter future for all humanity. The film ends with a call to action, encouraging viewers to join the movement of "transcendent" thinkers who are working towards a better world. 
-""") \ No newline at end of file diff --git a/spaces/atimughal662/InfoFusion/iterators/__init__.py b/spaces/atimughal662/InfoFusion/iterators/__init__.py deleted file mode 100644 index d800eac15a042c02c0d8b31f086db83ade229a53..0000000000000000000000000000000000000000 --- a/spaces/atimughal662/InfoFusion/iterators/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -from .timeout_iterator import TimeoutIterator, AsyncTimeoutIterator -from .iterator_pipe import IteratorPipe, AsyncIteratorPipe - -__all__ = ["TimeoutIterator", "AsyncTimeoutIterator", "IteratorPipe", "AsyncIteratorPipe"] \ No newline at end of file diff --git a/spaces/atimughal662/InfoFusion/src/client_test.py b/spaces/atimughal662/InfoFusion/src/client_test.py deleted file mode 100644 index fd9477b56e3244feaab53194565abb570cb7f274..0000000000000000000000000000000000000000 --- a/spaces/atimughal662/InfoFusion/src/client_test.py +++ /dev/null @@ -1,484 +0,0 @@ -""" -Client test. - -Run server: - -python generate.py --base_model=h2oai/h2ogpt-oig-oasst1-512-6_9b - -NOTE: For private models, add --use-auth_token=True - -NOTE: --use_gpu_id=True (default) must be used for multi-GPU in case see failures with cuda:x cuda:y mismatches. -Currently, this will force model to be on a single GPU. - -Then run this client as: - -python src/client_test.py - - - -For HF spaces: - -HOST="https://h2oai-h2ogpt-chatbot.hf.space" python src/client_test.py - -Result: - -Loaded as API: https://h2oai-h2ogpt-chatbot.hf.space ✔ -{'instruction_nochat': 'Who are you?', 'iinput_nochat': '', 'response': 'I am h2oGPT, a large language model developed by LAION.', 'sources': ''} - - -For demo: - -HOST="https://gpt.h2o.ai" python src/client_test.py - -Result: - -Loaded as API: https://gpt.h2o.ai ✔ -{'instruction_nochat': 'Who are you?', 'iinput_nochat': '', 'response': 'I am h2oGPT, a chatbot created by LAION.', 'sources': ''} - -NOTE: Raw output from API for nochat case is a string of a python dict and will remain so if other entries are added to dict: - -{'response': "I'm h2oGPT, a large language model by H2O.ai, the visionary leader in democratizing AI.", 'sources': ''} - - -""" -import ast -import time -import os -import markdown # pip install markdown -import pytest -from bs4 import BeautifulSoup # pip install beautifulsoup4 - -try: - from enums import DocumentSubset, LangChainAction -except: - from src.enums import DocumentSubset, LangChainAction - -from tests.utils import get_inf_server - -debug = False - -os.environ['HF_HUB_DISABLE_TELEMETRY'] = '1' - - -def get_client(serialize=True): - from gradio_client import Client - - client = Client(get_inf_server(), serialize=serialize) - if debug: - print(client.view_api(all_endpoints=True)) - return client - - -def get_args(prompt, prompt_type=None, chat=False, stream_output=False, - max_new_tokens=50, - top_k_docs=3, - langchain_mode='Disabled', - add_chat_history_to_context=True, - langchain_action=LangChainAction.QUERY.value, - langchain_agents=[], - prompt_dict=None, - version=None, - h2ogpt_key=None, - visible_models=None, - system_prompt='', # default of no system prompt tiggered by empty string - add_search_to_context=False, - chat_conversation=None, - text_context_list=None, - ): - from collections import OrderedDict - kwargs = OrderedDict(instruction=prompt if chat else '', # only for chat=True - iinput='', # only for chat=True - context='', - # streaming output is supported, loops over and outputs each generation in streaming mode - # but leave stream_output=False for simple input/output mode - 
stream_output=stream_output, - prompt_type=prompt_type, - prompt_dict=prompt_dict, - temperature=0.1, - top_p=0.75, - top_k=40, - num_beams=1, - max_new_tokens=max_new_tokens, - min_new_tokens=0, - early_stopping=False, - max_time=20, - repetition_penalty=1.0, - num_return_sequences=1, - do_sample=True, - chat=chat, - instruction_nochat=prompt if not chat else '', - iinput_nochat='', # only for chat=False - langchain_mode=langchain_mode, - add_chat_history_to_context=add_chat_history_to_context, - langchain_action=langchain_action, - langchain_agents=langchain_agents, - top_k_docs=top_k_docs, - chunk=True, - chunk_size=512, - document_subset=DocumentSubset.Relevant.name, - document_choice=[], - pre_prompt_query=None, - prompt_query=None, - pre_prompt_summary=None, - prompt_summary=None, - system_prompt=system_prompt, - image_loaders=None, - pdf_loaders=None, - url_loaders=None, - jq_schema=None, - visible_models=visible_models, - h2ogpt_key=h2ogpt_key, - add_search_to_context=add_search_to_context, - chat_conversation=chat_conversation, - text_context_list=text_context_list, - docs_ordering_type=None, - min_max_new_tokens=None, - ) - diff = 0 - if version is None: - # latest - version = 1 - if version == 0: - diff = 1 - if version >= 1: - kwargs.update(dict(system_prompt=system_prompt)) - diff = 0 - - from evaluate_params import eval_func_param_names - assert len(set(eval_func_param_names).difference(set(list(kwargs.keys())))) == diff - if chat: - # add chatbot output on end. Assumes serialize=False - kwargs.update(dict(chatbot=[])) - - return kwargs, list(kwargs.values()) - - -@pytest.mark.skip(reason="For manual use against some server, no server launched") -def test_client_basic(prompt_type='human_bot', version=None, visible_models=None, prompt='Who are you?', - h2ogpt_key=None): - return run_client_nochat(prompt=prompt, prompt_type=prompt_type, max_new_tokens=50, version=version, - visible_models=visible_models, h2ogpt_key=h2ogpt_key) - - -""" -time HOST=https://gpt-internal.h2o.ai PYTHONPATH=. pytest -n 20 src/client_test.py::test_client_basic_benchmark -32 seconds to answer 20 questions at once with 70B llama2 on 4x A100 80GB using TGI 0.9.3 -""" - - -@pytest.mark.skip(reason="For manual use against some server, no server launched") -@pytest.mark.parametrize("id", range(20)) -def test_client_basic_benchmark(id, prompt_type='human_bot', version=None): - return run_client_nochat(prompt=""" -/nfs4/llm/h2ogpt/h2ogpt/bin/python /home/arno/pycharm-2022.2.2/plugins/python/helpers/pycharm/_jb_pytest_runner.py --target src/client_test.py::test_client_basic -Testing started at 8:41 AM ... -Launching pytest with arguments src/client_test.py::test_client_basic --no-header --no-summary -q in /nfs4/llm/h2ogpt - -============================= test session starts ============================== -collecting ... -src/client_test.py:None (src/client_test.py) -ImportError while importing test module '/nfs4/llm/h2ogpt/src/client_test.py'. -Hint: make sure your test modules/packages have valid Python names. -Traceback: -h2ogpt/lib/python3.10/site-packages/_pytest/python.py:618: in _importtestmodule - mod = import_path(self.path, mode=importmode, root=self.config.rootpath) -h2ogpt/lib/python3.10/site-packages/_pytest/pathlib.py:533: in import_path - importlib.import_module(module_name) -/usr/lib/python3.10/importlib/__init__.py:126: in import_module - return _bootstrap._gcd_import(name[level:], package, level) -:1050: in _gcd_import - ??? -:1027: in _find_and_load - ??? 
-:1006: in _find_and_load_unlocked - ??? -:688: in _load_unlocked - ??? -h2ogpt/lib/python3.10/site-packages/_pytest/assertion/rewrite.py:168: in exec_module - exec(co, module.__dict__) -src/client_test.py:51: in - from enums import DocumentSubset, LangChainAction -E ModuleNotFoundError: No module named 'enums' - - -collected 0 items / 1 error - -=============================== 1 error in 0.14s =============================== -ERROR: not found: /nfs4/llm/h2ogpt/src/client_test.py::test_client_basic -(no name '/nfs4/llm/h2ogpt/src/client_test.py::test_client_basic' in any of []) - - -Process finished with exit code 4 - -What happened? -""", prompt_type=prompt_type, max_new_tokens=100, version=version) - - -def run_client_nochat(prompt, prompt_type, max_new_tokens, version=None, h2ogpt_key=None, visible_models=None): - kwargs, args = get_args(prompt, prompt_type, chat=False, max_new_tokens=max_new_tokens, version=version, - visible_models=visible_models, h2ogpt_key=h2ogpt_key) - - api_name = '/submit_nochat' - client = get_client(serialize=True) - res = client.predict( - *tuple(args), - api_name=api_name, - ) - print("Raw client result: %s" % res, flush=True) - res_dict = dict(prompt=kwargs['instruction_nochat'], iinput=kwargs['iinput_nochat'], - response=md_to_text(res)) - print(res_dict) - return res_dict, client - - -@pytest.mark.skip(reason="For manual use against some server, no server launched") -def test_client_basic_api(prompt_type='human_bot', version=None, h2ogpt_key=None): - return run_client_nochat_api(prompt='Who are you?', prompt_type=prompt_type, max_new_tokens=50, version=version, - h2ogpt_key=h2ogpt_key) - - -def run_client_nochat_api(prompt, prompt_type, max_new_tokens, version=None, h2ogpt_key=None): - kwargs, args = get_args(prompt, prompt_type, chat=False, max_new_tokens=max_new_tokens, version=version, - h2ogpt_key=h2ogpt_key) - - api_name = '/submit_nochat_api' # NOTE: like submit_nochat but stable API for string dict passing - client = get_client(serialize=True) - res = client.predict( - str(dict(kwargs)), - api_name=api_name, - ) - print("Raw client result: %s" % res, flush=True) - res_dict = dict(prompt=kwargs['instruction_nochat'], iinput=kwargs['iinput_nochat'], - response=md_to_text(ast.literal_eval(res)['response']), - sources=ast.literal_eval(res)['sources']) - print(res_dict) - return res_dict, client - - -@pytest.mark.skip(reason="For manual use against some server, no server launched") -def test_client_basic_api_lean(prompt_type='human_bot', version=None, h2ogpt_key=None): - return run_client_nochat_api_lean(prompt='Who are you?', prompt_type=prompt_type, max_new_tokens=50, - version=version, h2ogpt_key=h2ogpt_key) - - -def run_client_nochat_api_lean(prompt, prompt_type, max_new_tokens, version=None, h2ogpt_key=None): - kwargs = dict(instruction_nochat=prompt, h2ogpt_key=h2ogpt_key) - - api_name = '/submit_nochat_api' # NOTE: like submit_nochat but stable API for string dict passing - client = get_client(serialize=True) - res = client.predict( - str(dict(kwargs)), - api_name=api_name, - ) - print("Raw client result: %s" % res, flush=True) - res_dict = dict(prompt=kwargs['instruction_nochat'], - response=md_to_text(ast.literal_eval(res)['response']), - sources=ast.literal_eval(res)['sources'], - h2ogpt_key=h2ogpt_key) - print(res_dict) - return res_dict, client - - -@pytest.mark.skip(reason="For manual use against some server, no server launched") -def test_client_basic_api_lean_morestuff(prompt_type='human_bot', version=None, h2ogpt_key=None): - return 
run_client_nochat_api_lean_morestuff(prompt='Who are you?', prompt_type=prompt_type, max_new_tokens=50, - version=version, h2ogpt_key=h2ogpt_key) - - -def run_client_nochat_api_lean_morestuff(prompt, prompt_type='human_bot', max_new_tokens=512, version=None, - h2ogpt_key=None): - kwargs = dict( - instruction='', - iinput='', - context='', - stream_output=False, - prompt_type=prompt_type, - temperature=0.1, - top_p=0.75, - top_k=40, - num_beams=1, - max_new_tokens=1024, - min_new_tokens=0, - early_stopping=False, - max_time=20, - repetition_penalty=1.0, - num_return_sequences=1, - do_sample=True, - chat=False, - instruction_nochat=prompt, - iinput_nochat='', - langchain_mode='Disabled', - add_chat_history_to_context=True, - langchain_action=LangChainAction.QUERY.value, - langchain_agents=[], - top_k_docs=4, - document_subset=DocumentSubset.Relevant.name, - document_choice=[], - h2ogpt_key=h2ogpt_key, - add_search_to_context=False, - ) - - api_name = '/submit_nochat_api' # NOTE: like submit_nochat but stable API for string dict passing - client = get_client(serialize=True) - res = client.predict( - str(dict(kwargs)), - api_name=api_name, - ) - print("Raw client result: %s" % res, flush=True) - res_dict = dict(prompt=kwargs['instruction_nochat'], - response=md_to_text(ast.literal_eval(res)['response']), - sources=ast.literal_eval(res)['sources'], - h2ogpt_key=h2ogpt_key) - print(res_dict) - return res_dict, client - - -@pytest.mark.skip(reason="For manual use against some server, no server launched") -def test_client_chat(prompt_type='human_bot', version=None, h2ogpt_key=None): - return run_client_chat(prompt='Who are you?', prompt_type=prompt_type, stream_output=False, max_new_tokens=50, - langchain_mode='Disabled', - langchain_action=LangChainAction.QUERY.value, - langchain_agents=[], - version=version, - h2ogpt_key=h2ogpt_key) - - -@pytest.mark.skip(reason="For manual use against some server, no server launched") -def test_client_chat_stream(prompt_type='human_bot', version=None, h2ogpt_key=None): - return run_client_chat(prompt="Tell a very long kid's story about birds.", prompt_type=prompt_type, - stream_output=True, max_new_tokens=512, - langchain_mode='Disabled', - langchain_action=LangChainAction.QUERY.value, - langchain_agents=[], - version=version, - h2ogpt_key=h2ogpt_key) - - -def run_client_chat(prompt='', - stream_output=None, - max_new_tokens=128, - langchain_mode='Disabled', - langchain_action=LangChainAction.QUERY.value, - langchain_agents=[], - prompt_type=None, prompt_dict=None, - version=None, - h2ogpt_key=None): - client = get_client(serialize=False) - - kwargs, args = get_args(prompt, prompt_type, chat=True, stream_output=stream_output, - max_new_tokens=max_new_tokens, - langchain_mode=langchain_mode, - langchain_action=langchain_action, - langchain_agents=langchain_agents, - prompt_dict=prompt_dict, - version=version, - h2ogpt_key=h2ogpt_key) - return run_client(client, prompt, args, kwargs) - - -def run_client(client, prompt, args, kwargs, do_md_to_text=True, verbose=False): - assert kwargs['chat'], "Chat mode only" - res = client.predict(*tuple(args), api_name='/instruction') - args[-1] += [res[-1]] - - res_dict = kwargs - res_dict['prompt'] = prompt - if not kwargs['stream_output']: - res = client.predict(*tuple(args), api_name='/instruction_bot') - res_dict['response'] = res[0][-1][1] - print(md_to_text(res_dict['response'], do_md_to_text=do_md_to_text)) - return res_dict, client - else: - job = client.submit(*tuple(args), api_name='/instruction_bot') - res1 = '' - 
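- # Streaming path: poll the job for partial outputs and print the latest response until generation completes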
while not job.done(): - outputs_list = job.communicator.job.outputs - if outputs_list: - res = job.communicator.job.outputs[-1] - res1 = res[0][-1][-1] - res1 = md_to_text(res1, do_md_to_text=do_md_to_text) - print(res1) - time.sleep(0.1) - full_outputs = job.outputs() - if verbose: - print('job.outputs: %s' % str(full_outputs)) - # ensure get ending to avoid race - # -1 means last response if streaming - # 0 means get text_output, ignore exception_text - # 0 means get list within text_output that looks like [[prompt], [answer]] - # 1 means get bot answer, so will have last bot answer - res_dict['response'] = md_to_text(full_outputs[-1][0][0][1], do_md_to_text=do_md_to_text) - return res_dict, client - - -@pytest.mark.skip(reason="For manual use against some server, no server launched") -def test_client_nochat_stream(prompt_type='human_bot', version=None, h2ogpt_key=None): - return run_client_nochat_gen(prompt="Tell a very long kid's story about birds.", prompt_type=prompt_type, - stream_output=True, max_new_tokens=512, - langchain_mode='Disabled', - langchain_action=LangChainAction.QUERY.value, - langchain_agents=[], - version=version, - h2ogpt_key=h2ogpt_key) - - -def run_client_nochat_gen(prompt, prompt_type, stream_output, max_new_tokens, - langchain_mode, langchain_action, langchain_agents, version=None, - h2ogpt_key=None): - client = get_client(serialize=False) - - kwargs, args = get_args(prompt, prompt_type, chat=False, stream_output=stream_output, - max_new_tokens=max_new_tokens, langchain_mode=langchain_mode, - langchain_action=langchain_action, langchain_agents=langchain_agents, - version=version, h2ogpt_key=h2ogpt_key) - return run_client_gen(client, prompt, args, kwargs) - - -def run_client_gen(client, prompt, args, kwargs, do_md_to_text=True, verbose=False): - res_dict = kwargs - res_dict['prompt'] = prompt - if not kwargs['stream_output']: - res = client.predict(str(dict(kwargs)), api_name='/submit_nochat_api') - res_dict.update(ast.literal_eval(res)) - print(md_to_text(res_dict['response'], do_md_to_text=do_md_to_text)) - return res_dict, client - else: - job = client.submit(str(dict(kwargs)), api_name='/submit_nochat_api') - while not job.done(): - outputs_list = job.communicator.job.outputs - if outputs_list: - res = job.communicator.job.outputs[-1] - res_dict = ast.literal_eval(res) - print('Stream: %s' % res_dict['response']) - time.sleep(0.1) - res_list = job.outputs() - assert len(res_list) > 0, "No response, check server" - res = res_list[-1] - res_dict = ast.literal_eval(res) - print('Final: %s' % res_dict['response']) - return res_dict, client - - -def md_to_text(md, do_md_to_text=True): - if not do_md_to_text: - return md - assert md is not None, "Markdown is None" - html = markdown.markdown(md) - soup = BeautifulSoup(html, features='html.parser') - return soup.get_text() - - -def run_client_many(prompt_type='human_bot', version=None, h2ogpt_key=None): - kwargs = dict(prompt_type=prompt_type, version=version, h2ogpt_key=h2ogpt_key) - ret1, _ = test_client_chat(**kwargs) - ret2, _ = test_client_chat_stream(**kwargs) - ret3, _ = test_client_nochat_stream(**kwargs) - ret4, _ = test_client_basic(**kwargs) - ret5, _ = test_client_basic_api(**kwargs) - ret6, _ = test_client_basic_api_lean(**kwargs) - ret7, _ = test_client_basic_api_lean_morestuff(**kwargs) - return ret1, ret2, ret3, ret4, ret5, ret6, ret7 - - -if __name__ == '__main__': - run_client_many() diff --git a/spaces/awacke1/BiasMitigatorForFairEquityData/app.py 
b/spaces/awacke1/BiasMitigatorForFairEquityData/app.py deleted file mode 100644 index df242bf836e0daf5198a009dedd42d291f29e73e..0000000000000000000000000000000000000000 --- a/spaces/awacke1/BiasMitigatorForFairEquityData/app.py +++ /dev/null @@ -1,9 +0,0 @@ -import streamlit as st # web -import spacy # named entity recognition -st.title("Bias Mitigator for Fairness and Equity AI for Corpus and Data") -with st.spinner("Please wait while the model is being loaded...."): - nlp = spacy.load("en_pipeline") -input = st.text_area(label = "Enter your text to get biased words recognized.....") -doc = nlp(input) -output_html = spacy.displacy.render(doc, style='ent', jupyter=False, options = {"colors": {'bias':'#ff5a36'} }) -st.markdown(output_html, unsafe_allow_html=True) \ No newline at end of file diff --git a/spaces/awacke1/ChatGPTStreamlit7-Private2/README.md b/spaces/awacke1/ChatGPTStreamlit7-Private2/README.md deleted file mode 100644 index 49425a8f4f549962fed5d2a62056a3cac0cd8f77..0000000000000000000000000000000000000000 --- a/spaces/awacke1/ChatGPTStreamlit7-Private2/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: ChatGPTStreamlit7 -emoji: 😻 -colorFrom: purple -colorTo: red -sdk: streamlit -sdk_version: 1.21.0 -app_file: app.py -pinned: false -license: mit -duplicated_from: awacke1/ChatGPTStreamlit7-Private ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/awacke1/PythonicCoder-CodeLlama-34B-Instruct-HF/README.md b/spaces/awacke1/PythonicCoder-CodeLlama-34B-Instruct-HF/README.md deleted file mode 100644 index 2d69a013816f64d108062a2bb654cf32c3eec105..0000000000000000000000000000000000000000 --- a/spaces/awacke1/PythonicCoder-CodeLlama-34B-Instruct-HF/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: PythonicCoder CodeLlama 34B Instruct HF -emoji: 🏢 -colorFrom: gray -colorTo: purple -sdk: gradio -sdk_version: 3.41.2 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/awacke1/VizLib-TopLargeHospitalsMentalHealth/app.py b/spaces/awacke1/VizLib-TopLargeHospitalsMentalHealth/app.py deleted file mode 100644 index 2c97b3611c15c13739c74fffc95ae017f982eceb..0000000000000000000000000000000000000000 --- a/spaces/awacke1/VizLib-TopLargeHospitalsMentalHealth/app.py +++ /dev/null @@ -1,109 +0,0 @@ -import streamlit as st -import graphviz as gv -from graphviz import Graph -import folium -from streamlit_folium import folium_static -import pandas as pd -import matplotlib.pyplot as plt -import altair as alt - -# Define the top 10 mental health facilities by bed size in the US -hospitals = [('McLean Hospital', 'Belmont, MA', 182), - ('Menninger Clinic', 'Houston, TX', 164), - ('Johns Hopkins Hospital', 'Baltimore, MD', 119), - ('Sheppard Pratt Hospital', 'Towson, MD', 118), - ('New York-Presbyterian Hospital', 'New York, NY', 117), - ('Austen Riggs Center', 'Stockbridge, MA', 70), - ('Butler Hospital', 'Providence, RI', 68), - ('Rogers Memorial Hospital', 'Oconomowoc, WI', 67), - ('Silver Hill Hospital', 'New Canaan, CT', 54), - ('Spectrum Health Hospitals Butterworth Hospital', 'Grand Rapids, MI', 53)] - -# Create a Graphviz chart of the hospitals -g = Graph(format='svg') -g.graph_attr['bgcolor'] = '#FFFFFF' -g.graph_attr['outputorder'] = 'edgesfirst' -g.graph_attr['size'] = '10,10' -g.node_attr['style'] = 'filled' -g.node_attr['shape'] = 'box' -g.node_attr['fillcolor'] = '#FFDAB9' - -with 
g.subgraph(name='cluster_US') as c: - c.graph_attr['bgcolor'] = '#ADD8E6' - c.node_attr['color'] = '#000000' - c.node_attr['fontcolor'] = '#000000' - c.attr(label='Top 10 Mental Health Facilities by Bed Size in the US', fontsize='24') - for hospital in hospitals: - c.node(f"{hospital[0]}\n{hospital[1]}\n{hospital[2]} beds") - -# Render the Graphviz chart in Streamlit -st.graphviz_chart(g) - -# Create a Folium map of the hospitals -#m = folium.Map(location=[39.5, -98.35], zoom_start=4) - -#for hospital in hospitals: -# folium.Marker( -# location=[hospital[1].split(', ')[0], hospital[1].split(', ')[1]], -# popup=f"{hospital[0]}\n{hospital[1]}\n{hospital[2]} beds", -# icon=folium.Icon(color='red', icon='info-sign') -# ).add_to(m) - -# Display the Folium map in Streamlit -#folium_static(m) - -# Create a dataframe of the hospital data -df = pd.DataFrame(hospitals, columns=['Hospital', 'Location', 'Bed Size']) - -# Create a Matplotlib chart of the hospital bed sizes -fig, ax = plt.subplots() -ax.bar(df['Hospital'], df['Bed Size']) -ax.set_xticklabels(df['Hospital'], rotation=45) -ax.set_xlabel('Hospital') -ax.set_ylabel('Bed Size') -ax.set_title('Top 10 Mental Health Facilities by Bed Size in the US') -st.pyplot(fig) - -# Create an Altair chart of the hospital bed sizes -alt_chart = alt.Chart(df).mark_bar().encode( - x='Hospital', - y='Bed Size' -).properties( - title='Top 10 Mental Health Facilities by Bed Size in the US' -) - -# Display the Altair chart in Streamlit -st.altair_chart(alt_chart) - -# Top 20 Mental Health Facilities by Bed Size in the US -top_hospitals = [ - (1, 'McLean Hospital', 'Belmont, MA', 182, 'Known for its treatment of borderline personality disorder.'), - (2, 'Menninger Clinic', 'Houston, TX', 164, 'Uses an integrative approach that combines therapy, medication, and other treatments.'), - (3, 'Johns Hopkins Hospital', 'Baltimore, MD', 119, 'Specializes in the treatment of mood disorders and addiction.'), - (4, 'Sheppard Pratt Hospital', 'Towson, MD', 118, 'Offers a range of specialty programs, including ones for eating disorders and addiction.'), - (5, 'New York-Presbyterian Hospital', 'New York, NY', 117, 'One of the largest mental health facilities in the country, with a wide range of treatment options.'), - (6, 'Austen Riggs Center', 'Stockbridge, MA', 70, 'Known for its focus on long-term treatment and the importance of relationships.'), - (7, 'Butler Hospital', 'Providence, RI', 68, 'Offers a range of specialized programs, including ones for bipolar disorder and addiction.'), - (8, 'Rogers Memorial Hospital', 'Oconomowoc, WI', 67, 'Offers a range of specialty programs, including ones for OCD and eating disorders.'), - (9, 'Silver Hill Hospital', 'New Canaan, CT', 54, 'Known for its focus on treating co-occurring mental health and addiction issues.'), - (10, 'Spectrum Health Hospitals Butterworth Hospital', 'Grand Rapids, MI', 53, 'Offers a range of specialized programs, including ones for PTSD and addiction.'), - (11, 'University of Michigan Hospitals-Michigan Medicine', 'Ann Arbor, MI', 49, 'Offers a range of specialized programs, including ones for depression and anxiety.'), - (12, 'Vanderbilt University Medical Center', 'Nashville, TN', 48, 'Offers a range of specialized programs, including ones for schizophrenia and bipolar disorder.'), - (13, 'Mayo Clinic', 'Rochester, MN', 47, 'Known for its focus on integrated care and a patient-centered approach.'), - (14, 'UPMC Western Psychiatric Hospital', 'Pittsburgh, PA', 46, 'Offers a range of specialized programs, 
including ones for autism and bipolar disorder.'), - (15, 'Cleveland Clinic', 'Cleveland, OH', 45, 'Offers a range of specialized programs, including ones for geriatric mental health and addiction.'), - (16, 'McLean SouthEast', 'Middleborough, MA', 44, 'Offers a range of specialized programs, including ones for trauma and addiction.'), - (17, 'Laurel Ridge Treatment Center', 'San Antonio, TX', 44, 'Offers a range of specialized programs, including ones for PTSD and addiction.'), - (18, 'Mercy Hospital', 'Chicago, IL', 42, 'Offers a range of specialized programs, including ones for geriatric mental health and addiction.'), - (19, 'University of Iowa Hospitals and Clinics', 'Iowa City, IA', 42, 'Offers a range of specialized programs, including ones for eating disorders and addiction.'), - (20, 'Rogers Behavioral Health', 'Oconomowoc, WI', 41, 'Offers a range of specialized programs, including ones for OCD and addiction.') -] - -table = "| Rank | Hospital | Location | Bed Size | Unique Care |\n" -table += "| --- | --- | --- | --- | --- |\n" -for hospital in top_hospitals: - table += f"| {hospital[0]} | {hospital[1]} | {hospital[2]} | {hospital[3]} beds | {hospital[4]} |\n" -st.markdown("## Top 20 Mental Health Facilities by Bed Size in the US") -st.markdown(table) - - diff --git a/spaces/awacke1/Z-3-ChatbotBlenderBot-GR/README.md b/spaces/awacke1/Z-3-ChatbotBlenderBot-GR/README.md deleted file mode 100644 index 90bf22734c32ac4ea0232e4fa384acfe5ba2ce6a..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Z-3-ChatbotBlenderBot-GR/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Z 3 ChatbotBlenderBot GR -emoji: 💬🧠💾 -colorFrom: yellow -colorTo: purple -sdk: gradio -sdk_version: 3.8.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/badayvedat/LLaVA/llava/mm_utils.py b/spaces/badayvedat/LLaVA/llava/mm_utils.py deleted file mode 100644 index 2662b91db2ba1c9fcc470b4654cbe05a11b89ae2..0000000000000000000000000000000000000000 --- a/spaces/badayvedat/LLaVA/llava/mm_utils.py +++ /dev/null @@ -1,99 +0,0 @@ -from PIL import Image -from io import BytesIO -import base64 - -import torch -from transformers import StoppingCriteria -from llava.constants import IMAGE_TOKEN_INDEX - - -def load_image_from_base64(image): - return Image.open(BytesIO(base64.b64decode(image))) - - -def expand2square(pil_img, background_color): - width, height = pil_img.size - if width == height: - return pil_img - elif width > height: - result = Image.new(pil_img.mode, (width, width), background_color) - result.paste(pil_img, (0, (width - height) // 2)) - return result - else: - result = Image.new(pil_img.mode, (height, height), background_color) - result.paste(pil_img, ((height - width) // 2, 0)) - return result - - -def process_images(images, image_processor, model_cfg): - image_aspect_ratio = getattr(model_cfg, "image_aspect_ratio", None) - new_images = [] - if image_aspect_ratio == 'pad': - for image in images: - image = expand2square(image, tuple(int(x*255) for x in image_processor.image_mean)) - image = image_processor.preprocess(image, return_tensors='pt')['pixel_values'][0] - new_images.append(image) - else: - return image_processor(images, return_tensors='pt')['pixel_values'] - if all(x.shape == new_images[0].shape for x in new_images): - new_images = torch.stack(new_images, dim=0) - return new_images - - -def tokenizer_image_token(prompt, tokenizer, image_token_index=IMAGE_TOKEN_INDEX, 
return_tensors=None): - prompt_chunks = [tokenizer(chunk).input_ids for chunk in prompt.split('')] - - def insert_separator(X, sep): - return [ele for sublist in zip(X, [sep]*len(X)) for ele in sublist][:-1] - - input_ids = [] - offset = 0 - if len(prompt_chunks) > 0 and len(prompt_chunks[0]) > 0 and prompt_chunks[0][0] == tokenizer.bos_token_id: - offset = 1 - input_ids.append(prompt_chunks[0][0]) - - for x in insert_separator(prompt_chunks, [image_token_index] * (offset + 1)): - input_ids.extend(x[offset:]) - - if return_tensors is not None: - if return_tensors == 'pt': - return torch.tensor(input_ids, dtype=torch.long) - raise ValueError(f'Unsupported tensor type: {return_tensors}') - return input_ids - - -def get_model_name_from_path(model_path): - model_path = model_path.strip("/") - model_paths = model_path.split("/") - if model_paths[-1].startswith('checkpoint-'): - return model_paths[-2] + "_" + model_paths[-1] - else: - return model_paths[-1] - - - - -class KeywordsStoppingCriteria(StoppingCriteria): - def __init__(self, keywords, tokenizer, input_ids): - self.keywords = keywords - self.keyword_ids = [] - for keyword in keywords: - cur_keyword_ids = tokenizer(keyword).input_ids - if len(cur_keyword_ids) > 1 and cur_keyword_ids[0] == tokenizer.bos_token_id: - cur_keyword_ids = cur_keyword_ids[1:] - self.keyword_ids.append(torch.tensor(cur_keyword_ids)) - self.tokenizer = tokenizer - self.start_len = input_ids.shape[1] - - def __call__(self, output_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool: - assert output_ids.shape[0] == 1, "Only support batch size 1 (yet)" # TODO - offset = min(output_ids.shape[1] - self.start_len, 3) - self.keyword_ids = [keyword_id.to(output_ids.device) for keyword_id in self.keyword_ids] - for keyword_id in self.keyword_ids: - if output_ids[0, -keyword_id.shape[0]:] == keyword_id: - return True - outputs = self.tokenizer.batch_decode(output_ids[:, -offset:], skip_special_tokens=True)[0] - for keyword in self.keywords: - if keyword in outputs: - return True - return False diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/nodes/core/ExpressionNode.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/nodes/core/ExpressionNode.js deleted file mode 100644 index 064d00bc3e34e4a09bff35c9eb69cdc794fe6000..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/examples/js/nodes/core/ExpressionNode.js +++ /dev/null @@ -1,17 +0,0 @@ -/** - * @author sunag / http://www.sunag.com.br/ - */ - -import { FunctionNode } from './FunctionNode.js'; - -function ExpressionNode( src, type, keywords, extensions, includes ) { - - FunctionNode.call( this, src, includes, extensions, keywords, type ); - -} - -ExpressionNode.prototype = Object.create( FunctionNode.prototype ); -ExpressionNode.prototype.constructor = ExpressionNode; -ExpressionNode.prototype.nodeType = "Expression"; - -export { ExpressionNode }; diff --git a/spaces/banana-projects/web3d/node_modules/three/src/extras/curves/QuadraticBezierCurve.js b/spaces/banana-projects/web3d/node_modules/three/src/extras/curves/QuadraticBezierCurve.js deleted file mode 100644 index 71db15320b1a3804543ad6548f8cbed37e4c1970..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/extras/curves/QuadraticBezierCurve.js +++ /dev/null @@ -1,75 +0,0 @@ -import { Curve } from '../core/Curve.js'; -import { QuadraticBezier } from '../core/Interpolations.js'; -import { Vector2 } from '../../math/Vector2.js'; 
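// getPoint() below evaluates the standard quadratic Bézier polynomial
//   B(t) = (1 - t)^2 * v0  +  2 * (1 - t) * t * v1  +  t^2 * v2
// componentwise for x and y, via the QuadraticBezier helper imported from Interpolations.js.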
- - -function QuadraticBezierCurve( v0, v1, v2 ) { - - Curve.call( this ); - - this.type = 'QuadraticBezierCurve'; - - this.v0 = v0 || new Vector2(); - this.v1 = v1 || new Vector2(); - this.v2 = v2 || new Vector2(); - -} - -QuadraticBezierCurve.prototype = Object.create( Curve.prototype ); -QuadraticBezierCurve.prototype.constructor = QuadraticBezierCurve; - -QuadraticBezierCurve.prototype.isQuadraticBezierCurve = true; - -QuadraticBezierCurve.prototype.getPoint = function ( t, optionalTarget ) { - - var point = optionalTarget || new Vector2(); - - var v0 = this.v0, v1 = this.v1, v2 = this.v2; - - point.set( - QuadraticBezier( t, v0.x, v1.x, v2.x ), - QuadraticBezier( t, v0.y, v1.y, v2.y ) - ); - - return point; - -}; - -QuadraticBezierCurve.prototype.copy = function ( source ) { - - Curve.prototype.copy.call( this, source ); - - this.v0.copy( source.v0 ); - this.v1.copy( source.v1 ); - this.v2.copy( source.v2 ); - - return this; - -}; - -QuadraticBezierCurve.prototype.toJSON = function () { - - var data = Curve.prototype.toJSON.call( this ); - - data.v0 = this.v0.toArray(); - data.v1 = this.v1.toArray(); - data.v2 = this.v2.toArray(); - - return data; - -}; - -QuadraticBezierCurve.prototype.fromJSON = function ( json ) { - - Curve.prototype.fromJSON.call( this, json ); - - this.v0.fromArray( json.v0 ); - this.v1.fromArray( json.v1 ); - this.v2.fromArray( json.v2 ); - - return this; - -}; - - -export { QuadraticBezierCurve }; diff --git a/spaces/bccearth35660/machinelearning/app.py b/spaces/bccearth35660/machinelearning/app.py deleted file mode 100644 index 227e595ef961b476460dec34a5c2fc2c6d2f522c..0000000000000000000000000000000000000000 --- a/spaces/bccearth35660/machinelearning/app.py +++ /dev/null @@ -1,36 +0,0 @@ -import pandas as pd -import gradio as gr -from sklearn.tree import DecisionTreeClassifier -from sklearn import tree - -def predict(HighBP,HighChol,BMI,Smoker,Diabetes,PhysActivity,Fruits,Veggies, - HvyAlcoholConsump,Sex,Age): - - data = pd.read_csv("heart_disease_health_indicators_BRFSS2015.csv") - i = data.drop(columns = ["HeartDiseaseorAttack"]) - o = data['HeartDiseaseorAttack'] - - model = DecisionTreeClassifier() - model.fit(i.values, o) - predictions = model.predict( [ [HighBP,HighChol,BMI,Smoker,Diabetes,PhysActivity,Fruits,Veggies,HvyAlcoholConsump,Sex,Age] ]) - predictions - return predictions - - - - -interface = gr.Interface(fn = predict, - inputs = [gr.inputs.Textbox(lines = 1, placeholder = "HighBP"), - gr.inputs.Textbox(lines = 1, placeholder = "High Cholesterol?, Yes(1) or No(0)"), - gr.inputs.Textbox(lines = 1, placeholder = "BMI"), - gr.inputs.Textbox(lines = 1, placeholder = "Smoker?, Yes(1) or No(0)"), - gr.inputs.Textbox(lines = 1, placeholder = "Diabetes?, Yes(1) or No(0)"), - gr.inputs.Textbox(lines = 1, placeholder = "PhysActivity, day(s)/week"), - gr.inputs.Textbox(lines = 1, placeholder = "Fruits?, Yes(1) or No(0)"), - gr.inputs.Textbox(lines = 1, placeholder = "Veggies?, Yes(1) or No(0)"), - gr.inputs.Textbox(lines = 1, placeholder = "HvyAlcoholConsump?, Yes(1) or No(0)"), - gr.inputs.Textbox(lines = 1, placeholder = "Sex?, Male(1) or Female(0)"), - gr.inputs.Textbox(lines = 1, placeholder = "Age")], - outputs = "text" -) -interface.launch(inline = False) \ No newline at end of file diff --git a/spaces/bguberfain/Detic/tools/unzip_imagenet_lvis.py b/spaces/bguberfain/Detic/tools/unzip_imagenet_lvis.py deleted file mode 100644 index 56ccad1a9024f425951ae025182fb709d2effcab..0000000000000000000000000000000000000000 --- 
a/spaces/bguberfain/Detic/tools/unzip_imagenet_lvis.py +++ /dev/null @@ -1,19 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import os -import argparse - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--src_path', default='datasets/imagenet/ImageNet-21K/') - parser.add_argument('--dst_path', default='datasets/imagenet/ImageNet-LVIS/') - parser.add_argument('--data_path', default='datasets/imagenet_lvis_wnid.txt') - args = parser.parse_args() - - f = open(args.data_path) - for i, line in enumerate(f): - cmd = 'mkdir {x} && tar -xf {src}/{l}.tar -C {x}'.format( - src=args.src_path, - l=line.strip(), - x=args.dst_path + '/' + line.strip()) - print(i, cmd) - os.system(cmd) diff --git a/spaces/bioriAsaeru/text-to-voice/2011 Gta Vice City Extreme Tuning Mod 2005 Download.md b/spaces/bioriAsaeru/text-to-voice/2011 Gta Vice City Extreme Tuning Mod 2005 Download.md deleted file mode 100644 index 8690f202ffa9da22e5100b1a8ba0b2fabe0e8eb1..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/2011 Gta Vice City Extreme Tuning Mod 2005 Download.md +++ /dev/null @@ -1,6 +0,0 @@ -

-2011 Gta Vice City Extreme Tuning Mod 2005 Download
-Download File: https://urloso.com/2uyP6F
-Trading card game dispenser Gta vice city tuning extreme download. ... how to download gta vice city extreme mod - YouTube ... 2008 extreme hosted on extabit, rapidgator, rapidshare, lumfile, netload, uploaded and torrent with keygen, crack ...
diff --git a/spaces/bioriAsaeru/text-to-voice/Create Charts In Adobe XD From CSV Using This !LINK! Free Plugin.md b/spaces/bioriAsaeru/text-to-voice/Create Charts In Adobe XD From CSV Using This !LINK! Free Plugin.md deleted file mode 100644 index ca982b2181d2d5527f0b567fe69f70a68c6b0176..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Create Charts In Adobe XD From CSV Using This !LINK! Free Plugin.md +++ /dev/null @@ -1,6 +0,0 @@ -

-Create Charts in Adobe XD from CSV using this free Plugin
-DOWNLOAD ✶✶✶ https://urloso.com/2uyS7M
-Create charts with random, tabular or JSON data inside Sketch. Customize visual ... Nov 05, 2020. Quickly and easily use a single symbol and a CSV to create a data table. ... 500+ Free icons for your next project in 5 styles (Light, Bold, Bulk, Two-tone, Broken) ... A Sketch plugin to export sketch file to Adobe After Effect ...

diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/tools/README.md b/spaces/brjathu/HMR2.0/vendor/detectron2/tools/README.md deleted file mode 100644 index 0b40d5319c0838fdaa22bc6a10ef0d88bc6578ed..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/tools/README.md +++ /dev/null @@ -1,49 +0,0 @@ - -This directory contains a few example scripts that demonstrate features of detectron2. - - -* `train_net.py` - -An example training script that's made to train builtin models of detectron2. - -For usage, see [GETTING_STARTED.md](../GETTING_STARTED.md). - -* `plain_train_net.py` - -Similar to `train_net.py`, but implements a training loop instead of using `Trainer`. -This script includes fewer features but it may be more friendly to hackers. - -* `benchmark.py` - -Benchmark the training speed, inference speed or data loading speed of a given config. - -Usage: -``` -python benchmark.py --config-file config.yaml --task train/eval/data [optional DDP flags] -``` - -* `analyze_model.py` - -Analyze FLOPs, parameters, activations of a detectron2 model. See its `--help` for usage. - -* `visualize_json_results.py` - -Visualize the json instance detection/segmentation results dumped by `COCOEvalutor` or `LVISEvaluator` - -Usage: -``` -python visualize_json_results.py --input x.json --output dir/ --dataset coco_2017_val -``` -If not using a builtin dataset, you'll need your own script or modify this script. - -* `visualize_data.py` - -Visualize ground truth raw annotations or training data (after preprocessing/augmentations). - -Usage: -``` -python visualize_data.py --config-file config.yaml --source annotation/dataloader --output-dir dir/ [--show] -``` - -NOTE: the script does not stop by itself when using `--source dataloader` because a training -dataloader is usually infinite. diff --git a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/BlpImagePlugin.py b/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/BlpImagePlugin.py deleted file mode 100644 index 0ca60ff24719b6e438c1f66070df3b6932d67556..0000000000000000000000000000000000000000 --- a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/BlpImagePlugin.py +++ /dev/null @@ -1,472 +0,0 @@ -""" -Blizzard Mipmap Format (.blp) -Jerome Leclanche - -The contents of this file are hereby released in the public domain (CC0) -Full text of the CC0 license: - https://creativecommons.org/publicdomain/zero/1.0/ - -BLP1 files, used mostly in Warcraft III, are not fully supported. -All types of BLP2 files used in World of Warcraft are supported. - -The BLP file structure consists of a header, up to 16 mipmaps of the -texture - -Texture sizes must be powers of two, though the two dimensions do -not have to be equal; 512x256 is valid, but 512x200 is not. -The first mipmap (mipmap #0) is the full size image; each subsequent -mipmap halves both dimensions. The final mipmap should be 1x1. - -BLP files come in many different flavours: -* JPEG-compressed (type == 0) - only supported for BLP1. -* RAW images (type == 1, encoding == 1). Each mipmap is stored as an - array of 8-bit values, one per pixel, left to right, top to bottom. - Each value is an index to the palette. -* DXT-compressed (type == 1, encoding == 2): -- DXT1 compression is used if alpha_encoding == 0. - - An additional alpha bit is used if alpha_depth == 1. - - DXT3 compression is used if alpha_encoding == 1. - - DXT5 compression is used if alpha_encoding == 7. 
-""" - -import os -import struct -from enum import IntEnum -from io import BytesIO - -from . import Image, ImageFile - - -class Format(IntEnum): - JPEG = 0 - - -class Encoding(IntEnum): - UNCOMPRESSED = 1 - DXT = 2 - UNCOMPRESSED_RAW_BGRA = 3 - - -class AlphaEncoding(IntEnum): - DXT1 = 0 - DXT3 = 1 - DXT5 = 7 - - -def unpack_565(i): - return ((i >> 11) & 0x1F) << 3, ((i >> 5) & 0x3F) << 2, (i & 0x1F) << 3 - - -def decode_dxt1(data, alpha=False): - """ - input: one "row" of data (i.e. will produce 4*width pixels) - """ - - blocks = len(data) // 8 # number of blocks in row - ret = (bytearray(), bytearray(), bytearray(), bytearray()) - - for block in range(blocks): - # Decode next 8-byte block. - idx = block * 8 - color0, color1, bits = struct.unpack_from("> 2 - - a = 0xFF - if control == 0: - r, g, b = r0, g0, b0 - elif control == 1: - r, g, b = r1, g1, b1 - elif control == 2: - if color0 > color1: - r = (2 * r0 + r1) // 3 - g = (2 * g0 + g1) // 3 - b = (2 * b0 + b1) // 3 - else: - r = (r0 + r1) // 2 - g = (g0 + g1) // 2 - b = (b0 + b1) // 2 - elif control == 3: - if color0 > color1: - r = (2 * r1 + r0) // 3 - g = (2 * g1 + g0) // 3 - b = (2 * b1 + b0) // 3 - else: - r, g, b, a = 0, 0, 0, 0 - - if alpha: - ret[j].extend([r, g, b, a]) - else: - ret[j].extend([r, g, b]) - - return ret - - -def decode_dxt3(data): - """ - input: one "row" of data (i.e. will produce 4*width pixels) - """ - - blocks = len(data) // 16 # number of blocks in row - ret = (bytearray(), bytearray(), bytearray(), bytearray()) - - for block in range(blocks): - idx = block * 16 - block = data[idx : idx + 16] - # Decode next 16-byte block. - bits = struct.unpack_from("<8B", block) - color0, color1 = struct.unpack_from(">= 4 - else: - high = True - a &= 0xF - a *= 17 # We get a value between 0 and 15 - - color_code = (code >> 2 * (4 * j + i)) & 0x03 - - if color_code == 0: - r, g, b = r0, g0, b0 - elif color_code == 1: - r, g, b = r1, g1, b1 - elif color_code == 2: - r = (2 * r0 + r1) // 3 - g = (2 * g0 + g1) // 3 - b = (2 * b0 + b1) // 3 - elif color_code == 3: - r = (2 * r1 + r0) // 3 - g = (2 * g1 + g0) // 3 - b = (2 * b1 + b0) // 3 - - ret[j].extend([r, g, b, a]) - - return ret - - -def decode_dxt5(data): - """ - input: one "row" of data (i.e. will produce 4 * width pixels) - """ - - blocks = len(data) // 16 # number of blocks in row - ret = (bytearray(), bytearray(), bytearray(), bytearray()) - - for block in range(blocks): - idx = block * 16 - block = data[idx : idx + 16] - # Decode next 16-byte block. 
- a0, a1 = struct.unpack_from("> alphacode_index) & 0x07 - elif alphacode_index == 15: - alphacode = (alphacode2 >> 15) | ((alphacode1 << 1) & 0x06) - else: # alphacode_index >= 18 and alphacode_index <= 45 - alphacode = (alphacode1 >> (alphacode_index - 16)) & 0x07 - - if alphacode == 0: - a = a0 - elif alphacode == 1: - a = a1 - elif a0 > a1: - a = ((8 - alphacode) * a0 + (alphacode - 1) * a1) // 7 - elif alphacode == 6: - a = 0 - elif alphacode == 7: - a = 255 - else: - a = ((6 - alphacode) * a0 + (alphacode - 1) * a1) // 5 - - color_code = (code >> 2 * (4 * j + i)) & 0x03 - - if color_code == 0: - r, g, b = r0, g0, b0 - elif color_code == 1: - r, g, b = r1, g1, b1 - elif color_code == 2: - r = (2 * r0 + r1) // 3 - g = (2 * g0 + g1) // 3 - b = (2 * b0 + b1) // 3 - elif color_code == 3: - r = (2 * r1 + r0) // 3 - g = (2 * g1 + g0) // 3 - b = (2 * b1 + b0) // 3 - - ret[j].extend([r, g, b, a]) - - return ret - - -class BLPFormatError(NotImplementedError): - pass - - -def _accept(prefix): - return prefix[:4] in (b"BLP1", b"BLP2") - - -class BlpImageFile(ImageFile.ImageFile): - """ - Blizzard Mipmap Format - """ - - format = "BLP" - format_description = "Blizzard Mipmap Format" - - def _open(self): - self.magic = self.fp.read(4) - - self.fp.seek(5, os.SEEK_CUR) - (self._blp_alpha_depth,) = struct.unpack(" 1 - - # set environment variables for distributed training - configure_nccl() - cudnn.benchmark = True - - rank = get_local_rank() - - file_name = os.path.join(exp.output_dir, args.experiment_name) - - if rank == 0: - os.makedirs(file_name, exist_ok=True) - - setup_logger(file_name, distributed_rank=rank, filename="val_log.txt", mode="a") - logger.info("Args: {}".format(args)) - - if args.conf is not None: - exp.test_conf = args.conf - if args.nms is not None: - exp.nmsthre = args.nms - if args.tsize is not None: - exp.test_size = (args.tsize, args.tsize) - - model = exp.get_model() - logger.info("Model Summary: {}".format(get_model_info(model, exp.test_size))) - logger.info("Model Structure:\n{}".format(str(model))) - - evaluator = exp.get_evaluator(args.batch_size, is_distributed, args.test, args.legacy) - evaluator.per_class_AP = True - evaluator.per_class_AR = True - - torch.cuda.set_device(rank) - model.cuda(rank) - model.eval() - - if not args.speed and not args.trt: - if args.ckpt is None: - ckpt_file = os.path.join(file_name, "best_ckpt.pth") - else: - ckpt_file = args.ckpt - logger.info("loading checkpoint from {}".format(ckpt_file)) - loc = "cuda:{}".format(rank) - ckpt = torch.load(ckpt_file, map_location=loc) - model.load_state_dict(ckpt["model"]) - logger.info("loaded checkpoint done.") - - if is_distributed: - model = DDP(model, device_ids=[rank]) - - if args.fuse: - logger.info("\tFusing model...") - model = fuse_model(model) - - if args.trt: - assert ( - not args.fuse and not is_distributed and args.batch_size == 1 - ), "TensorRT model is not support model fusing and distributed inferencing!" - trt_file = os.path.join(file_name, "model_trt.pth") - assert os.path.exists( - trt_file - ), "TensorRT model is not found!\n Run tools/trt.py first!" 
- model.head.decode_in_inference = False - decoder = model.head.decode_outputs - else: - trt_file = None - decoder = None - - # start evaluate - *_, summary = evaluator.evaluate( - model, is_distributed, args.fp16, trt_file, decoder, exp.test_size - ) - logger.info("\n" + summary) - - -if __name__ == "__main__": - configure_module() - args = make_parser().parse_args() - exp = get_exp(args.exp_file, args.name) - exp.merge(args.opts) - - if not args.experiment_name: - args.experiment_name = exp.exp_name - - num_gpu = torch.cuda.device_count() if args.devices is None else args.devices - assert num_gpu <= torch.cuda.device_count() - - dist_url = "auto" if args.dist_url is None else args.dist_url - launch( - main, - num_gpu, - args.num_machines, - args.machine_rank, - backend=args.dist_backend, - dist_url=dist_url, - args=(exp, args, num_gpu), - ) diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/PIL/__main__.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/PIL/__main__.py deleted file mode 100644 index a05323f93b6850c2f86aedb3b1a5dee16358027f..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/PIL/__main__.py +++ /dev/null @@ -1,3 +0,0 @@ -from .features import pilinfo - -pilinfo() diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/ttCollection.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/ttCollection.py deleted file mode 100644 index 3ab579ee001ebb099c1cc310b9898f9c8119a567..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/ttCollection.py +++ /dev/null @@ -1,127 +0,0 @@ -from fontTools.ttLib.ttFont import TTFont -from fontTools.ttLib.sfnt import readTTCHeader, writeTTCHeader -from io import BytesIO -import struct -import logging - -log = logging.getLogger(__name__) - - -class TTCollection(object): - - """Object representing a TrueType Collection / OpenType Collection. - The main API is self.fonts being a list of TTFont instances. - - If shareTables is True, then different fonts in the collection - might point to the same table object if the data for the table was - the same in the font file. Note, however, that this might result - in suprises and incorrect behavior if the different fonts involved - have different GlyphOrder. Use only if you know what you are doing. - """ - - def __init__(self, file=None, shareTables=False, **kwargs): - fonts = self.fonts = [] - if file is None: - return - - assert "fontNumber" not in kwargs, kwargs - - closeStream = False - if not hasattr(file, "read"): - file = open(file, "rb") - closeStream = True - - tableCache = {} if shareTables else None - - header = readTTCHeader(file) - for i in range(header.numFonts): - font = TTFont(file, fontNumber=i, _tableCache=tableCache, **kwargs) - fonts.append(font) - - # don't close file if lazy=True, as the TTFont hold a reference to the original - # file; the file will be closed once the TTFonts are closed in the - # TTCollection.close(). We still want to close the file if lazy is None or - # False, because in that case the TTFont no longer need the original file - # and we want to avoid 'ResourceWarning: unclosed file'. 
- if not kwargs.get("lazy") and closeStream: - file.close() - - def __enter__(self): - return self - - def __exit__(self, type, value, traceback): - self.close() - - def close(self): - for font in self.fonts: - font.close() - - def save(self, file, shareTables=True): - """Save the font to disk. Similarly to the constructor, - the 'file' argument can be either a pathname or a writable - file object. - """ - if not hasattr(file, "write"): - final = None - file = open(file, "wb") - else: - # assume "file" is a writable file object - # write to a temporary stream to allow saving to unseekable streams - final = file - file = BytesIO() - - tableCache = {} if shareTables else None - - offsets_offset = writeTTCHeader(file, len(self.fonts)) - offsets = [] - for font in self.fonts: - offsets.append(file.tell()) - font._save(file, tableCache=tableCache) - file.seek(0, 2) - - file.seek(offsets_offset) - file.write(struct.pack(">%dL" % len(self.fonts), *offsets)) - - if final: - final.write(file.getvalue()) - file.close() - - def saveXML(self, fileOrPath, newlinestr="\n", writeVersion=True, **kwargs): - - from fontTools.misc import xmlWriter - - writer = xmlWriter.XMLWriter(fileOrPath, newlinestr=newlinestr) - - if writeVersion: - from fontTools import version - - version = ".".join(version.split(".")[:2]) - writer.begintag("ttCollection", ttLibVersion=version) - else: - writer.begintag("ttCollection") - writer.newline() - writer.newline() - - for font in self.fonts: - font._saveXML(writer, writeVersion=False, **kwargs) - writer.newline() - - writer.endtag("ttCollection") - writer.newline() - - writer.close() - - def __getitem__(self, item): - return self.fonts[item] - - def __setitem__(self, item, value): - self.fonts[item] = value - - def __delitem__(self, item): - return self.fonts[item] - - def __len__(self): - return len(self.fonts) - - def __iter__(self): - return iter(self.fonts) diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-7af10a2e.js b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-7af10a2e.js deleted file mode 100644 index 31eda40b16f92ea3206ac18fe1b4e63569931ebd..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-7af10a2e.js +++ /dev/null @@ -1,7 +0,0 @@ -import{S as T,e as q,s as L,N as y,P as A,K as h,U as u,p as b,M,R as N,n as k,A as v,G as D,V,m as W,Z as Ie,ar as Fe,C as qe,h as Le,T as ne,Q as E,X as se,a1 as x,O as B,L as X,k as Z,o as J,z as S,v as H,x as K,B as Ge,J as ie,u as I,y as F,f as Y,as as ae}from"./index-f877dfd5.js";import{B as Oe}from"./Button-11a87b79.js";import{E as We}from"./Image-75587433.js";import{c as Ze}from"./csv-b0b7514a.js";import{d as Je}from"./dsv-576afacd.js";import{E as Ke}from"./Model3D-b938dbb2.js";var Qe=Je(" "),Ue=Qe.parseRows;function Xe(s){let e,l;return{c(){e=y("div"),l=A(s[0]),h(e,"class","svelte-1ayixqk"),u(e,"table",s[1]==="table"),u(e,"gallery",s[1]==="gallery"),u(e,"selected",s[2])},m(t,n){b(t,e,n),M(e,l)},p(t,[n]){n&1&&N(l,t[0]),n&2&&u(e,"table",t[1]==="table"),n&2&&u(e,"gallery",t[1]==="gallery"),n&4&&u(e,"selected",t[2])},i:k,o:k,d(t){t&&v(e)}}}function Ye(s,e,l){let{value:t}=e,{type:n}=e,{selected:a=!1}=e;return s.$$set=i=>{"value"in i&&l(0,t=i.value),"type"in i&&l(1,n=i.type),"selected"in i&&l(2,a=i.selected)},[t,n,a]}class xe extends 
T{constructor(e){super(),q(this,e,Ye,Xe,L,{value:0,type:1,selected:2})}}function $e(s){let e,l;return{c(){e=y("div"),l=A(s[0]),h(e,"class","svelte-1ayixqk"),u(e,"table",s[1]==="table"),u(e,"gallery",s[1]==="gallery"),u(e,"selected",s[2])},m(t,n){b(t,e,n),M(e,l)},p(t,[n]){n&1&&N(l,t[0]),n&2&&u(e,"table",t[1]==="table"),n&2&&u(e,"gallery",t[1]==="gallery"),n&4&&u(e,"selected",t[2])},i:k,o:k,d(t){t&&v(e)}}}function el(s,e,l){let{value:t}=e,{type:n}=e,{selected:a=!1}=e;return s.$$set=i=>{"value"in i&&l(0,t=i.value),"type"in i&&l(1,n=i.type),"selected"in i&&l(2,a=i.selected)},[t,n,a]}class ll extends T{constructor(e){super(),q(this,e,el,$e,L,{value:0,type:1,selected:2})}}function tl(s){let e,l=s[0].toLocaleString()+"",t;return{c(){e=y("div"),t=A(l),h(e,"class","svelte-1ayixqk"),u(e,"table",s[1]==="table"),u(e,"gallery",s[1]==="gallery"),u(e,"selected",s[2])},m(n,a){b(n,e,a),M(e,t)},p(n,[a]){a&1&&l!==(l=n[0].toLocaleString()+"")&&N(t,l),a&2&&u(e,"table",n[1]==="table"),a&2&&u(e,"gallery",n[1]==="gallery"),a&4&&u(e,"selected",n[2])},i:k,o:k,d(n){n&&v(e)}}}function nl(s,e,l){let{value:t}=e,{type:n}=e,{selected:a=!1}=e;return s.$$set=i=>{"value"in i&&l(0,t=i.value),"type"in i&&l(1,n=i.type),"selected"in i&&l(2,a=i.selected)},[t,n,a]}class sl extends T{constructor(e){super(),q(this,e,nl,tl,L,{value:0,type:1,selected:2})}}function fe(s,e,l){const t=s.slice();return t[3]=e[l],t[5]=l,t}function ce(s){let e;return{c(){e=A(", ")},m(l,t){b(l,e,t)},d(l){l&&v(e)}}}function ue(s){let e=s[3].toLocaleString()+"",l,t,n=s[5]!==s[0].length-1&&ce();return{c(){l=A(e),n&&n.c(),t=W()},m(a,i){b(a,l,i),n&&n.m(a,i),b(a,t,i)},p(a,i){i&1&&e!==(e=a[3].toLocaleString()+"")&&N(l,e),a[5]!==a[0].length-1?n||(n=ce(),n.c(),n.m(t.parentNode,t)):n&&(n.d(1),n=null)},d(a){a&&(v(l),v(t)),n&&n.d(a)}}}function il(s){let e,l=D(s[0]),t=[];for(let n=0;n{"value"in i&&l(0,t=i.value),"type"in i&&l(1,n=i.type),"selected"in i&&l(2,a=i.selected)},[t,n,a]}class fl extends T{constructor(e){super(),q(this,e,al,il,L,{value:0,type:1,selected:2})}}function cl(s){let e,l;return{c(){e=y("div"),l=A(s[0]),h(e,"class","svelte-1ayixqk"),u(e,"table",s[1]==="table"),u(e,"gallery",s[1]==="gallery"),u(e,"selected",s[2])},m(t,n){b(t,e,n),M(e,l)},p(t,[n]){n&1&&N(l,t[0]),n&2&&u(e,"table",t[1]==="table"),n&2&&u(e,"gallery",t[1]==="gallery"),n&4&&u(e,"selected",t[2])},i:k,o:k,d(t){t&&v(e)}}}function ul(s,e,l){let{value:t}=e,{type:n}=e,{selected:a=!1}=e;return s.$$set=i=>{"value"in i&&l(0,t=i.value),"type"in i&&l(1,n=i.type),"selected"in i&&l(2,a=i.selected)},[t,n,a]}class rl extends T{constructor(e){super(),q(this,e,ul,cl,L,{value:0,type:1,selected:2})}}function ol(s){let e,l;return{c(){e=y("div"),l=A(s[0]),h(e,"class","svelte-1ayixqk"),u(e,"table",s[1]==="table"),u(e,"gallery",s[1]==="gallery"),u(e,"selected",s[2])},m(t,n){b(t,e,n),M(e,l)},p(t,[n]){n&1&&N(l,t[0]),n&2&&u(e,"table",t[1]==="table"),n&2&&u(e,"gallery",t[1]==="gallery"),n&4&&u(e,"selected",t[2])},i:k,o:k,d(t){t&&v(e)}}}function _l(s,e,l){let{value:t}=e,{type:n}=e,{selected:a=!1}=e;return s.$$set=i=>{"value"in i&&l(0,t=i.value),"type"in i&&l(1,n=i.type),"selected"in i&&l(2,a=i.selected)},[t,n,a]}class dl extends T{constructor(e){super(),q(this,e,_l,ol,L,{value:0,type:1,selected:2})}}function ml(s){let 
e,l,t;return{c(){e=y("div"),l=A(s[0]),h(e,"class","svelte-1viwdyg"),Ie(()=>s[5].call(e)),u(e,"table",s[1]==="table"),u(e,"gallery",s[1]==="gallery"),u(e,"selected",s[2])},m(n,a){b(n,e,a),M(e,l),t=Fe(e,s[5].bind(e)),s[6](e)},p(n,[a]){a&1&&N(l,n[0]),a&2&&u(e,"table",n[1]==="table"),a&2&&u(e,"gallery",n[1]==="gallery"),a&4&&u(e,"selected",n[2])},i:k,o:k,d(n){n&&v(e),t(),s[6](null)}}}function hl(s,e,l){let{value:t}=e,{type:n}=e,{selected:a=!1}=e,i,f;function c(o,w){!o||!w||(f.style.setProperty("--local-text-width",`${w<150?w:200}px`),l(4,f.style.whiteSpace="unset",f))}qe(()=>{c(f,i)});function _(){i=this.clientWidth,l(3,i)}function m(o){Le[o?"unshift":"push"](()=>{f=o,l(4,f)})}return s.$$set=o=>{"value"in o&&l(0,t=o.value),"type"in o&&l(1,n=o.type),"selected"in o&&l(2,a=o.selected)},[t,n,a,i,f,_,m]}class gl extends T{constructor(e){super(),q(this,e,hl,ml,L,{value:0,type:1,selected:2})}}function bl(s){let e,l;return{c(){e=y("div"),l=A(s[0]),h(e,"class","svelte-1ayixqk"),u(e,"table",s[1]==="table"),u(e,"gallery",s[1]==="gallery"),u(e,"selected",s[2])},m(t,n){b(t,e,n),M(e,l)},p(t,[n]){n&1&&N(l,t[0]),n&2&&u(e,"table",t[1]==="table"),n&2&&u(e,"gallery",t[1]==="gallery"),n&4&&u(e,"selected",t[2])},i:k,o:k,d(t){t&&v(e)}}}function vl(s,e,l){let{value:t}=e,{type:n}=e,{selected:a=!1}=e;return s.$$set=i=>{"value"in i&&l(0,t=i.value),"type"in i&&l(1,n=i.type),"selected"in i&&l(2,a=i.selected)},[t,n,a]}class yl extends T{constructor(e){super(),q(this,e,vl,bl,L,{value:0,type:1,selected:2})}}function kl(s){let e,l,t,n;return{c(){e=y("video"),e.muted=!0,e.playsInline=!0,ne(e.src,l=s[3]+s[2])||h(e,"src",l),h(e,"class","svelte-1tntsc1"),u(e,"table",s[0]==="table"),u(e,"gallery",s[0]==="gallery"),u(e,"selected",s[1])},m(a,i){b(a,e,i),s[5](e),t||(n=[E(e,"mouseover",function(){se(s[4].play)&&s[4].play.apply(this,arguments)}),E(e,"mouseout",function(){se(s[4].pause)&&s[4].pause.apply(this,arguments)})],t=!0)},p(a,i){s=a,i&12&&!ne(e.src,l=s[3]+s[2])&&h(e,"src",l),i&1&&u(e,"table",s[0]==="table"),i&1&&u(e,"gallery",s[0]==="gallery"),i&2&&u(e,"selected",s[1])},d(a){a&&v(e),s[5](null),t=!1,x(n)}}}function wl(s){let e;function l(a,i){return kl}let n=l()(s);return{c(){n.c(),e=W()},m(a,i){n.m(a,i),b(a,e,i)},p(a,[i]){n.p(a,i)},i:k,o:k,d(a){a&&v(e),n.d(a)}}}function zl(s,e,l){let{type:t}=e,{selected:n=!1}=e,{value:a}=e,{samples_dir:i}=e,f;async function c(){l(4,f.muted=!0,f),l(4,f.playsInline=!0,f),l(4,f.controls=!1,f),f.setAttribute("muted",""),await f.play(),f.pause()}qe(()=>{c()});function _(m){Le[m?"unshift":"push"](()=>{f=m,l(4,f)})}return s.$$set=m=>{"type"in m&&l(0,t=m.type),"selected"in m&&l(1,n=m.selected),"value"in m&&l(2,a=m.value),"samples_dir"in m&&l(3,i=m.samples_dir)},[t,n,a,i,f,_]}class Cl extends T{constructor(e){super(),q(this,e,zl,wl,L,{type:0,selected:1,value:2,samples_dir:3})}}function Ml(s){let e,l=(Array.isArray(s[0])?s[0].join(", "):s[0])+"",t;return{c(){e=y("div"),t=A(l),h(e,"class","svelte-rgtszb"),u(e,"table",s[1]==="table"),u(e,"gallery",s[1]==="gallery"),u(e,"selected",s[2])},m(n,a){b(n,e,a),M(e,t)},p(n,[a]){a&1&&l!==(l=(Array.isArray(n[0])?n[0].join(", "):n[0])+"")&&N(t,l),a&2&&u(e,"table",n[1]==="table"),a&2&&u(e,"gallery",n[1]==="gallery"),a&4&&u(e,"selected",n[2])},i:k,o:k,d(n){n&&v(e)}}}function Sl(s,e,l){let{value:t}=e,{type:n}=e,{selected:a=!1}=e;return s.$$set=i=>{"value"in i&&l(0,t=i.value),"type"in i&&l(1,n=i.type),"selected"in i&&l(2,a=i.selected)},[t,n,a]}class Al extends T{constructor(e){super(),q(this,e,Sl,Ml,L,{value:0,type:1,selected:2})}}function re(s,e,l){const 
t=s.slice();return t[10]=e[l],t[12]=l,t}function oe(s,e,l){const t=s.slice();return t[13]=e[l],t[15]=l,t}function _e(s){let e,l,t;function n(f,c){return typeof f[6]=="string"?Tl:Hl}let a=n(s),i=a(s);return{c(){e=y("div"),i.c(),h(e,"class","svelte-1cib1xd"),u(e,"table",s[1]==="table"),u(e,"gallery",s[1]==="gallery"),u(e,"selected",s[2])},m(f,c){b(f,e,c),i.m(e,null),l||(t=[E(e,"mouseenter",s[8]),E(e,"mouseleave",s[9])],l=!0)},p(f,c){a===(a=n(f))&&i?i.p(f,c):(i.d(1),i=a(f),i&&(i.c(),i.m(e,null))),c&2&&u(e,"table",f[1]==="table"),c&2&&u(e,"gallery",f[1]==="gallery"),c&4&&u(e,"selected",f[2])},d(f){f&&v(e),i.d(),l=!1,x(t)}}}function Hl(s){let e,l,t=D(s[6].slice(0,3)),n=[];for(let i=0;i3&&ge(s);return{c(){e=y("table");for(let i=0;i3?a?a.p(i,f):(a=ge(i),a.c(),a.m(e,null)):a&&(a.d(1),a=null)},d(i){i&&v(e),V(n,i),a&&a.d()}}}function Tl(s){let e;return{c(){e=A(s[6])},m(l,t){b(l,e,t)},p(l,t){t&64&&N(e,l[6])},d(l){l&&v(e)}}}function de(s){let e,l=s[13]+"",t;return{c(){e=y("td"),t=A(l),h(e,"class","svelte-1cib1xd")},m(n,a){b(n,e,a),M(e,t)},p(n,a){a&64&&l!==(l=n[13]+"")&&N(t,l)},d(n){n&&v(e)}}}function me(s){let e;return{c(){e=y("td"),e.textContent="…",h(e,"class","svelte-1cib1xd")},m(l,t){b(l,e,t)},d(l){l&&v(e)}}}function he(s){let e,l,t=D(s[10].slice(0,3)),n=[];for(let i=0;i3&&me();return{c(){e=y("tr");for(let i=0;i3?a||(a=me(),a.c(),a.m(e,null)):a&&(a.d(1),a=null)},d(i){i&&v(e),V(n,i),a&&a.d()}}}function ge(s){let e;return{c(){e=y("div"),h(e,"class","overlay svelte-1cib1xd"),u(e,"odd",s[3]%2!=0),u(e,"even",s[3]%2==0),u(e,"button",s[1]==="gallery")},m(l,t){b(l,e,t)},p(l,t){t&8&&u(e,"odd",l[3]%2!=0),t&8&&u(e,"even",l[3]%2==0),t&2&&u(e,"button",l[1]==="gallery")},d(l){l&&v(e)}}}function ql(s){let e,l=s[4]&&_e(s);return{c(){l&&l.c(),e=W()},m(t,n){l&&l.m(t,n),b(t,e,n)},p(t,[n]){t[4]?l?l.p(t,n):(l=_e(t),l.c(),l.m(e.parentNode,e)):l&&(l.d(1),l=null)},i:k,o:k,d(t){t&&v(e),l&&l.d(t)}}}function Ll(s,e,l){let{value:t}=e,{samples_dir:n}=e,{type:a}=e,{selected:i=!1}=e,{index:f}=e,c=!1,_=t,m=Array.isArray(_);const o=()=>l(5,c=!0),w=()=>l(5,c=!1);return s.$$set=r=>{"value"in r&&l(0,t=r.value),"samples_dir"in r&&l(7,n=r.samples_dir),"type"in r&&l(1,a=r.type),"selected"in r&&l(2,i=r.selected),"index"in r&&l(3,f=r.index)},s.$$.update=()=>{s.$$.dirty&145&&!m&&typeof t=="string"&&/\.[a-zA-Z]+$/.test(t)&&fetch(n+t).then(r=>r.text()).then(r=>{try{if(t.endsWith("csv")){const z=r.split(` -`).slice(0,4).map(d=>d.split(",").slice(0,4).join(",")).join(` -`);l(6,_=Ze(z))}else if(t.endsWith("tsv")){const z=r.split(` -`).slice(0,4).map(d=>d.split(" ").slice(0,4).join(" ")).join(` -`);l(6,_=Ue(z))}else throw new Error("Incorrect format, only CSV and TSV files are supported");l(4,m=!0)}catch(z){console.error(z)}}).catch(r=>{l(6,_=t),l(4,m=!0)})},[t,a,i,f,m,c,_,n,o,w]}class Dl extends T{constructor(e){super(),q(this,e,Ll,ql,L,{value:0,samples_dir:7,type:1,selected:2,index:3})}}function Nl(s){let e;return{c(){e=y("div"),X(e,"background-color",s[0]),h(e,"class","svelte-h6ogpl"),u(e,"table",s[1]==="table"),u(e,"gallery",s[1]==="gallery"),u(e,"selected",s[2])},m(l,t){b(l,e,t)},p(l,[t]){t&1&&X(e,"background-color",l[0]),t&2&&u(e,"table",l[1]==="table"),t&2&&u(e,"gallery",l[1]==="gallery"),t&4&&u(e,"selected",l[2])},i:k,o:k,d(l){l&&v(e)}}}function jl(s,e,l){let{value:t}=e,{type:n}=e,{selected:a=!1}=e;return s.$$set=i=>{"value"in i&&l(0,t=i.value),"type"in i&&l(1,n=i.type),"selected"in i&&l(2,a=i.selected)},[t,n,a]}class pl extends T{constructor(e){super(),q(this,e,jl,Nl,L,{value:0,type:1,selected:2})}}function El(s){let 
e,l;return{c(){e=y("div"),l=A(s[0]),h(e,"class","svelte-1ayixqk"),u(e,"table",s[1]==="table"),u(e,"gallery",s[1]==="gallery"),u(e,"selected",s[2])},m(t,n){b(t,e,n),M(e,l)},p(t,[n]){n&1&&N(l,t[0]),n&2&&u(e,"table",t[1]==="table"),n&2&&u(e,"gallery",t[1]==="gallery"),n&4&&u(e,"selected",t[2])},i:k,o:k,d(t){t&&v(e)}}}function Bl(s,e,l){let{value:t}=e,{type:n}=e,{selected:a=!1}=e;return s.$$set=i=>{"value"in i&&l(0,t=i.value),"type"in i&&l(1,n=i.type),"selected"in i&&l(2,a=i.selected)},[t,n,a]}class Pl extends T{constructor(e){super(),q(this,e,Bl,El,L,{value:0,type:1,selected:2})}}function Rl(s){let e;return{c(){e=y("div"),h(e,"class","prose svelte-1ayixqk"),u(e,"table",s[1]==="table"),u(e,"gallery",s[1]==="gallery"),u(e,"selected",s[2])},m(l,t){b(l,e,t),e.innerHTML=s[0]},p(l,[t]){t&1&&(e.innerHTML=l[0]),t&2&&u(e,"table",l[1]==="table"),t&2&&u(e,"gallery",l[1]==="gallery"),t&4&&u(e,"selected",l[2])},i:k,o:k,d(l){l&&v(e)}}}function Vl(s,e,l){let{value:t}=e,{type:n}=e,{selected:a=!1}=e;return s.$$set=i=>{"value"in i&&l(0,t=i.value),"type"in i&&l(1,n=i.type),"selected"in i&&l(2,a=i.selected)},[t,n,a]}class Il extends T{constructor(e){super(),q(this,e,Vl,Rl,L,{value:0,type:1,selected:2})}}function Fl(s){let e;return{c(){e=y("div"),h(e,"class","prose svelte-zvfedn"),u(e,"table",s[1]==="table"),u(e,"gallery",s[1]==="gallery"),u(e,"selected",s[2])},m(l,t){b(l,e,t),e.innerHTML=s[0]},p(l,[t]){t&1&&(e.innerHTML=l[0]),t&2&&u(e,"table",l[1]==="table"),t&2&&u(e,"gallery",l[1]==="gallery"),t&4&&u(e,"selected",l[2])},i:k,o:k,d(l){l&&v(e)}}}function Gl(s,e,l){let{value:t}=e,{type:n}=e,{selected:a=!1}=e;return s.$$set=i=>{"value"in i&&l(0,t=i.value),"type"in i&&l(1,n=i.type),"selected"in i&&l(2,a=i.selected)},[t,n,a]}class Ol extends T{constructor(e){super(),q(this,e,Gl,Fl,L,{value:0,type:1,selected:2})}}function Wl(s){let e,l;return{c(){e=y("pre"),l=A(s[0]),h(e,"class","svelte-agpzo2"),u(e,"table",s[1]==="table"),u(e,"gallery",s[1]==="gallery"),u(e,"selected",s[2])},m(t,n){b(t,e,n),M(e,l)},p(t,[n]){n&1&&N(l,t[0]),n&2&&u(e,"table",t[1]==="table"),n&2&&u(e,"gallery",t[1]==="gallery"),n&4&&u(e,"selected",t[2])},i:k,o:k,d(t){t&&v(e)}}}function Zl(s,e,l){let{value:t}=e,{type:n}=e,{selected:a=!1}=e;return s.$$set=i=>{"value"in i&&l(0,t=i.value),"type"in i&&l(1,n=i.type),"selected"in i&&l(2,a=i.selected)},[t,n,a]}class Jl extends T{constructor(e){super(),q(this,e,Zl,Wl,L,{value:0,type:1,selected:2})}}const O={dropdown:ll,checkbox:sl,checkboxgroup:fl,number:xe,slider:rl,radio:dl,image:We,textbox:gl,audio:yl,video:Cl,file:Al,dataframe:Dl,model3d:Ke,colorpicker:pl,timeseries:Pl,markdown:Il,html:Ol,code:Jl};function be(s,e,l){const t=s.slice();return t[32]=e[l],t}function ve(s,e,l){const t=s.slice();return t[35]=e[l],t[37]=l,t}function ye(s,e,l){const t=s.slice();t[0]=e[l].value,t[39]=e[l].component,t[42]=l;const n=t[1][t[42]];return t[40]=n,t}function ke(s,e,l){const t=s.slice();return t[43]=e[l],t}function we(s,e,l){const t=s.slice();return t[35]=e[l],t[37]=l,t}function Kl(s){let e,l,t,n,a,i,f,c=D(s[3]),_=[];for(let r=0;rH(o[r],1,1,()=>{o[r]=null});return{c(){e=y("div"),l=y("table"),t=y("thead"),n=y("tr");for(let r=0;r<_.length;r+=1)_[r].c();a=B(),i=y("tbody");for(let r=0;rH(n[i],1,1,()=>{n[i]=null});return{c(){e=y("div");for(let i=0;i{K(m,1)}),F()}a?(l=Y(a,i(f)),Z(l.$$.fragment),S(l.$$.fragment,1),J(l,e,null)):l=null}else a&&l.$set(_);(!n||c[0]&2)&&X(e,"max-width",f[40]==="textbox"?"35ch":"auto"),(!n||c[0]&2&&t!==(t=ae(f[40])+" 
svelte-13hsdno"))&&h(e,"class",t)},i(f){n||(l&&S(l.$$.fragment,f),n=!0)},o(f){l&&H(l.$$.fragment,f),n=!1},d(f){f&&v(e),l&&K(l)}}}function Me(s){let e,l,t=s[40]!==void 0&&O[s[40]]!==void 0&&Ce(s);return{c(){t&&t.c(),e=W()},m(n,a){t&&t.m(n,a),b(n,e,a),l=!0},p(n,a){n[40]!==void 0&&O[n[40]]!==void 0?t?(t.p(n,a),a[0]&2&&S(t,1)):(t=Ce(n),t.c(),S(t,1),t.m(e.parentNode,e)):t&&(I(),H(t,1,1,()=>{t=null}),F())},i(n){l||(S(t),l=!0)},o(n){H(t),l=!1},d(n){n&&v(e),t&&t.d(n)}}}function Se(s){let e,l,t,n,a,i=D(s[35]),f=[];for(let o=0;oH(f[o],1,1,()=>{f[o]=null});function _(){return s[28](s[37])}function m(){return s[29](s[37])}return{c(){e=y("tr");for(let o=0;o{K(_,1)}),F()}n?(e=Y(n,a(i)),Z(e.$$.fragment),S(e.$$.fragment,1),J(e,l.parentNode,l)):e=null}else n&&e.$set(c)},i(i){t||(e&&S(e.$$.fragment,i),t=!0)},o(i){e&&H(e.$$.fragment,i),t=!1},d(i){i&&v(l),e&&K(e,i)}}}function He(s){let e,l=Object.keys(O).includes(s[1][0])&&O[s[1][0]],t,n,a,i,f=l&&Ae(s);function c(){return s[25](s[37],s[35])}function _(){return s[26](s[37])}return{c(){e=y("button"),f&&f.c(),t=B(),h(e,"class","gallery-item svelte-13hsdno")},m(m,o){b(m,e,o),f&&f.m(e,null),M(e,t),n=!0,a||(i=[E(e,"click",c),E(e,"mouseenter",_),E(e,"mouseleave",s[27])],a=!0)},p(m,o){s=m,o[0]&2&&(l=Object.keys(O).includes(s[1][0])&&O[s[1][0]]),l?f?(f.p(s,o),o[0]&2&&S(f,1)):(f=Ae(s),f.c(),S(f,1),f.m(e,t)):f&&(I(),H(f,1,1,()=>{f=null}),F())},i(m){n||(S(f),n=!0)},o(m){H(f),n=!1},d(m){m&&v(e),f&&f.d(),a=!1,x(i)}}}function Ul(s){let e,l,t=D(s[12]),n=[];for(let a=0;a{r[P]=null}),F(),c=r[f],c?c.p(C,j):(c=r[f]=w[f](C),c.c()),S(c,1),c.m(_.parentNode,_)),C[18]&&d.p(C,j)},i(C){o||(S(c),o=!0)},o(C){H(c),o=!1},d(C){C&&(v(e),v(i),v(_),v(m)),r[f].d(C),d&&d.d(C)}}}function $l(s){let e,l;return e=new Oe({props:{visible:s[6],padding:!1,elem_id:s[4],elem_classes:s[5],scale:s[8],min_width:s[9],allow_overflow:!1,container:!1,$$slots:{default:[xl]},$$scope:{ctx:s}}}),{c(){Z(e.$$.fragment)},m(t,n){J(e,t,n),l=!0},p(t,n){const a={};n[0]&64&&(a.visible=t[6]),n[0]&16&&(a.elem_id=t[4]),n[0]&32&&(a.elem_classes=t[5]),n[0]&256&&(a.scale=t[8]),n[0]&512&&(a.min_width=t[9]),n[0]&64655|n[1]&32768&&(a.$$scope={dirty:n,ctx:t}),e.$set(a)},i(t){l||(S(e.$$.fragment,t),l=!0)},o(t){H(e.$$.fragment,t),l=!1},d(t){K(e,t)}}}function et(s,e,l){let t,n,{components:a}=e,{label:i="Examples"}=e,{headers:f}=e,{samples:c}=e,{elem_id:_=""}=e,{elem_classes:m=[]}=e,{visible:o=!0}=e,{value:w=null}=e,{root:r}=e,{root_url:z}=e,{samples_per_page:d=10}=e,{scale:C=null}=e,{min_width:j=void 0}=e;const P=Ge();let De=z?"proxy="+z+"file=":r+"/file=",G=0,te=c.length>d,Q,U,R=[],$=-1;function ee(g){l(13,$=g)}function le(){l(13,$=-1)}const Ne=(g,p)=>{l(0,w=g+G*d),P("click",w),P("select",{index:w,value:p})},je=g=>ee(g),pe=()=>le(),Ee=g=>{l(0,w=g+G*d),P("click",w)},Be=g=>ee(g),Pe=()=>le(),Re=g=>l(10,G=g);return s.$$set=g=>{"components"in g&&l(1,a=g.components),"label"in g&&l(2,i=g.label),"headers"in g&&l(3,f=g.headers),"samples"in g&&l(21,c=g.samples),"elem_id"in g&&l(4,_=g.elem_id),"elem_classes"in g&&l(5,m=g.elem_classes),"visible"in g&&l(6,o=g.visible),"value"in g&&l(0,w=g.value),"root"in g&&l(22,r=g.root),"root_url"in g&&l(23,z=g.root_url),"samples_per_page"in g&&l(7,d=g.samples_per_page),"scale"in g&&l(8,C=g.scale),"min_width"in g&&l(9,j=g.min_width)},s.$$.update=()=>{s.$$.dirty[0]&2&&l(15,t=a.length<2),s.$$.dirty[0]&18879616&&(te?(l(12,R=[]),l(11,Q=c.slice(G*d,(G+1)*d)),l(24,U=Math.ceil(c.length/d)),[0,G,U-1].forEach(g=>{for(let 
p=g-2;p<=g+2;p++)p>=0&&p0&&p-R[R.length-1]>1&&R.push(-1),R.push(p))})):l(11,Q=c.slice())),s.$$.dirty[0]&2050&&l(14,n=Q.map(g=>g.map((p,Ve)=>({value:p,component:O[a[Ve]]}))))},[w,a,i,f,_,m,o,d,C,j,G,Q,R,$,n,t,P,De,te,ee,le,c,r,z,U,Ne,je,pe,Ee,Be,Pe,Re]}class lt extends T{constructor(e){super(),q(this,e,et,$l,L,{components:1,label:2,headers:3,samples:21,elem_id:4,elem_classes:5,visible:6,value:0,root:22,root_url:23,samples_per_page:7,scale:8,min_width:9},null,[-1,-1])}}const ct=lt,ut=["dynamic"],rt=()=>({type:{payload:"number"},description:{payload:"index of selected row"},example_data:0});export{ct as Component,rt as document,ut as modes}; -//# sourceMappingURL=index-7af10a2e.js.map diff --git a/spaces/cihyFjudo/fairness-paper-search/Crack Contpaq I 2011 Windows XP A Comparison with Other Accounting Software.md b/spaces/cihyFjudo/fairness-paper-search/Crack Contpaq I 2011 Windows XP A Comparison with Other Accounting Software.md deleted file mode 100644 index 23626e47897db66001519ea0cf55e515a2cb56d1..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Crack Contpaq I 2011 Windows XP A Comparison with Other Accounting Software.md +++ /dev/null @@ -1,6 +0,0 @@ -

diff --git a/spaces/cihyFjudo/fairness-paper-search/Rambo First Blood Full Movie Download einkaeufer delay sch Learn More About the Novel and the Film.md b/spaces/cihyFjudo/fairness-paper-search/Rambo First Blood Full Movie Download einkaeufer delay sch Learn More About the Novel and the Film.md deleted file mode 100644 index 1dbfe3d578255ce56e947578e4e8030755c8ab37..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Rambo First Blood Full Movie Download einkaeufer delay sch Learn More About the Novel and the Film.md +++ /dev/null @@ -1,6 +0,0 @@ -

diff --git a/spaces/cihyFjudo/fairness-paper-search/Sweet Cindy Model 22 !FULL!.md b/spaces/cihyFjudo/fairness-paper-search/Sweet Cindy Model 22 !FULL!.md deleted file mode 100644 index 3e408e03d1cf0401c237274f14ee00f6f59240e6..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Sweet Cindy Model 22 !FULL!.md +++ /dev/null @@ -1,6 +0,0 @@ -

diff --git a/spaces/cihyFjudo/fairness-paper-search/What Makes Kohinoor Samajh Baithe Mp3 Song So Popular and Loved.md b/spaces/cihyFjudo/fairness-paper-search/What Makes Kohinoor Samajh Baithe Mp3 Song So Popular and Loved.md deleted file mode 100644 index 163d404ac15e99fa3df72b6d199321432bbaba81..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/What Makes Kohinoor Samajh Baithe Mp3 Song So Popular and Loved.md +++ /dev/null @@ -1,6 +0,0 @@ -

diff --git a/spaces/cleanmaster/akagi-sovits3/train.py b/spaces/cleanmaster/akagi-sovits3/train.py deleted file mode 100644 index 97557410edb18717b0330c602fbaa9984f647b13..0000000000000000000000000000000000000000 --- a/spaces/cleanmaster/akagi-sovits3/train.py +++ /dev/null @@ -1,281 +0,0 @@ -import logging -logging.getLogger('matplotlib').setLevel(logging.WARNING) -import os -import json -import argparse -import itertools -import math -import torch -from torch import nn, optim -from torch.nn import functional as F -from torch.utils.data import DataLoader -from torch.utils.tensorboard import SummaryWriter -import torch.multiprocessing as mp -import torch.distributed as dist -from torch.nn.parallel import DistributedDataParallel as DDP -from torch.cuda.amp import autocast, GradScaler - -import commons -import utils -from data_utils import TextAudioSpeakerLoader, EvalDataLoader -from models import ( - SynthesizerTrn, - MultiPeriodDiscriminator, -) -from losses import ( - kl_loss, - generator_loss, discriminator_loss, feature_loss -) - -from mel_processing import mel_spectrogram_torch, spec_to_mel_torch - -torch.backends.cudnn.benchmark = True -global_step = 0 - - -# os.environ['TORCH_DISTRIBUTED_DEBUG'] = 'INFO' - - -def main(): - """Assume Single Node Multi GPUs Training Only""" - assert torch.cuda.is_available(), "CPU training is not allowed." - hps = utils.get_hparams() - - n_gpus = torch.cuda.device_count() - os.environ['MASTER_ADDR'] = 'localhost' - os.environ['MASTER_PORT'] = hps.train.port - - mp.spawn(run, nprocs=n_gpus, args=(n_gpus, hps,)) - - -def run(rank, n_gpus, hps): - global global_step - if rank == 0: - logger = utils.get_logger(hps.model_dir) - logger.info(hps) - utils.check_git_hash(hps.model_dir) - writer = SummaryWriter(log_dir=hps.model_dir) - writer_eval = SummaryWriter(log_dir=os.path.join(hps.model_dir, "eval")) - - dist.init_process_group(backend='nccl', init_method='env://', world_size=n_gpus, rank=rank) - torch.manual_seed(hps.train.seed) - torch.cuda.set_device(rank) - - train_dataset = TextAudioSpeakerLoader(hps.data.training_files, hps) - train_loader = DataLoader(train_dataset, num_workers=8, shuffle=False, pin_memory=True, - batch_size=hps.train.batch_size) - if rank == 0: - eval_dataset = EvalDataLoader(hps.data.validation_files, hps) - eval_loader = DataLoader(eval_dataset, num_workers=1, shuffle=False, - batch_size=1, pin_memory=False, - drop_last=False) - - net_g = SynthesizerTrn( - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - **hps.model).cuda(rank) - net_d = MultiPeriodDiscriminator(hps.model.use_spectral_norm).cuda(rank) - optim_g = torch.optim.AdamW( - net_g.parameters(), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps) - optim_d = torch.optim.AdamW( - net_d.parameters(), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps) - net_g = DDP(net_g, device_ids=[rank]) # , find_unused_parameters=True) - net_d = DDP(net_d, device_ids=[rank]) - - try: - _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "G_*.pth"), net_g, - optim_g) - _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "D_*.pth"), net_d, - optim_d) - global_step = (epoch_str - 1) * len(train_loader) - except: - epoch_str = 1 - global_step = 0 - - scheduler_g = torch.optim.lr_scheduler.ExponentialLR(optim_g, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2) - scheduler_d = torch.optim.lr_scheduler.ExponentialLR(optim_d, 
gamma=hps.train.lr_decay, last_epoch=epoch_str - 2) - - scaler = GradScaler(enabled=hps.train.fp16_run) - - for epoch in range(epoch_str, hps.train.epochs + 1): - if rank == 0: - train_and_evaluate(rank, epoch, hps, [net_g, net_d], [optim_g, optim_d], [scheduler_g, scheduler_d], scaler, - [train_loader, eval_loader], logger, [writer, writer_eval]) - else: - train_and_evaluate(rank, epoch, hps, [net_g, net_d], [optim_g, optim_d], [scheduler_g, scheduler_d], scaler, - [train_loader, None], None, None) - scheduler_g.step() - scheduler_d.step() - - -def train_and_evaluate(rank, epoch, hps, nets, optims, schedulers, scaler, loaders, logger, writers): - net_g, net_d = nets - optim_g, optim_d = optims - scheduler_g, scheduler_d = schedulers - train_loader, eval_loader = loaders - if writers is not None: - writer, writer_eval = writers - - # train_loader.batch_sampler.set_epoch(epoch) - global global_step - - net_g.train() - net_d.train() - for batch_idx, items in enumerate(train_loader): - c, f0, spec, y, spk = items - g = spk.cuda(rank, non_blocking=True) - spec, y = spec.cuda(rank, non_blocking=True), y.cuda(rank, non_blocking=True) - c = c.cuda(rank, non_blocking=True) - f0 = f0.cuda(rank, non_blocking=True) - mel = spec_to_mel_torch( - spec, - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.mel_fmin, - hps.data.mel_fmax) - - with autocast(enabled=hps.train.fp16_run): - y_hat, ids_slice, z_mask, \ - (z, z_p, m_p, logs_p, m_q, logs_q) = net_g(c, f0, spec, g=g, mel=mel) - - y_mel = commons.slice_segments(mel, ids_slice, hps.train.segment_size // hps.data.hop_length) - y_hat_mel = mel_spectrogram_torch( - y_hat.squeeze(1), - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.hop_length, - hps.data.win_length, - hps.data.mel_fmin, - hps.data.mel_fmax - ) - y = commons.slice_segments(y, ids_slice * hps.data.hop_length, hps.train.segment_size) # slice - - # Discriminator - y_d_hat_r, y_d_hat_g, _, _ = net_d(y, y_hat.detach()) - - with autocast(enabled=False): - loss_disc, losses_disc_r, losses_disc_g = discriminator_loss(y_d_hat_r, y_d_hat_g) - loss_disc_all = loss_disc - - optim_d.zero_grad() - scaler.scale(loss_disc_all).backward() - scaler.unscale_(optim_d) - grad_norm_d = commons.clip_grad_value_(net_d.parameters(), None) - scaler.step(optim_d) - - with autocast(enabled=hps.train.fp16_run): - # Generator - y_d_hat_r, y_d_hat_g, fmap_r, fmap_g = net_d(y, y_hat) - with autocast(enabled=False): - loss_mel = F.l1_loss(y_mel, y_hat_mel) * hps.train.c_mel - loss_kl = kl_loss(z_p, logs_q, m_p, logs_p, z_mask) * hps.train.c_kl - loss_fm = feature_loss(fmap_r, fmap_g) - loss_gen, losses_gen = generator_loss(y_d_hat_g) - loss_gen_all = loss_gen + loss_fm + loss_mel + loss_kl - optim_g.zero_grad() - scaler.scale(loss_gen_all).backward() - scaler.unscale_(optim_g) - grad_norm_g = commons.clip_grad_value_(net_g.parameters(), None) - scaler.step(optim_g) - scaler.update() - - if rank == 0: - if global_step % hps.train.log_interval == 0: - lr = optim_g.param_groups[0]['lr'] - losses = [loss_disc, loss_gen, loss_fm, loss_mel, loss_kl] - logger.info('Train Epoch: {} [{:.0f}%]'.format( - epoch, - 100. 
* batch_idx / len(train_loader))) - logger.info([x.item() for x in losses] + [global_step, lr]) - - scalar_dict = {"loss/g/total": loss_gen_all, "loss/d/total": loss_disc_all, "learning_rate": lr, - "grad_norm_d": grad_norm_d, "grad_norm_g": grad_norm_g} - scalar_dict.update({"loss/g/fm": loss_fm, "loss/g/mel": loss_mel, "loss/g/kl": loss_kl}) - - scalar_dict.update({"loss/g/{}".format(i): v for i, v in enumerate(losses_gen)}) - scalar_dict.update({"loss/d_r/{}".format(i): v for i, v in enumerate(losses_disc_r)}) - scalar_dict.update({"loss/d_g/{}".format(i): v for i, v in enumerate(losses_disc_g)}) - image_dict = { - "slice/mel_org": utils.plot_spectrogram_to_numpy(y_mel[0].data.cpu().numpy()), - "slice/mel_gen": utils.plot_spectrogram_to_numpy(y_hat_mel[0].data.cpu().numpy()), - "all/mel": utils.plot_spectrogram_to_numpy(mel[0].data.cpu().numpy()), - } - - utils.summarize( - writer=writer, - global_step=global_step, - images=image_dict, - scalars=scalar_dict - ) - - if global_step % hps.train.eval_interval == 0: - evaluate(hps, net_g, eval_loader, writer_eval) - utils.save_checkpoint(net_g, optim_g, hps.train.learning_rate, epoch, - os.path.join(hps.model_dir, "G_{}.pth".format(global_step))) - utils.save_checkpoint(net_d, optim_d, hps.train.learning_rate, epoch, - os.path.join(hps.model_dir, "D_{}.pth".format(global_step))) - global_step += 1 - - if rank == 0: - logger.info('====> Epoch: {}'.format(epoch)) - - -def evaluate(hps, generator, eval_loader, writer_eval): - generator.eval() - image_dict = {} - audio_dict = {} - with torch.no_grad(): - for batch_idx, items in enumerate(eval_loader): - c, f0, spec, y, spk = items - g = spk[:1].cuda(0) - spec, y = spec[:1].cuda(0), y[:1].cuda(0) - c = c[:1].cuda(0) - f0 = f0[:1].cuda(0) - mel = spec_to_mel_torch( - spec, - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.mel_fmin, - hps.data.mel_fmax) - y_hat = generator.module.infer(c, f0, g=g, mel=mel) - - y_hat_mel = mel_spectrogram_torch( - y_hat.squeeze(1).float(), - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.hop_length, - hps.data.win_length, - hps.data.mel_fmin, - hps.data.mel_fmax - ) - - audio_dict.update({ - f"gen/audio_{batch_idx}": y_hat[0], - f"gt/audio_{batch_idx}": y[0] - }) - image_dict.update({ - f"gen/mel": utils.plot_spectrogram_to_numpy(y_hat_mel[0].cpu().numpy()), - "gt/mel": utils.plot_spectrogram_to_numpy(mel[0].cpu().numpy()) - }) - utils.summarize( - writer=writer_eval, - global_step=global_step, - images=image_dict, - audios=audio_dict, - audio_sampling_rate=hps.data.sampling_rate - ) - generator.train() - - -if __name__ == "__main__": - main() diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fastapi/websockets.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fastapi/websockets.py deleted file mode 100644 index 55a4ac4a1a918720bb3b94eaea6f8737b968216a..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fastapi/websockets.py +++ /dev/null @@ -1,3 +0,0 @@ -from starlette.websockets import WebSocket as WebSocket # noqa -from starlette.websockets import WebSocketDisconnect as WebSocketDisconnect # noqa -from starlette.websockets import WebSocketState as WebSocketState # noqa diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/feaLib/location.py 
b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/feaLib/location.py deleted file mode 100644 index 50f761d2d2a13bd101a7db9c259fedc98eed52cf..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/feaLib/location.py +++ /dev/null @@ -1,12 +0,0 @@ -from typing import NamedTuple - - -class FeatureLibLocation(NamedTuple): - """A location in a feature file""" - - file: str - line: int - column: int - - def __str__(self): - return f"{self.file}:{self.line}:{self.column}" diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/misc/xmlWriter.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/misc/xmlWriter.py deleted file mode 100644 index 9a8dc3e3b7fe5eb13ea4b7ea369ced1da5555471..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/misc/xmlWriter.py +++ /dev/null @@ -1,204 +0,0 @@ -"""xmlWriter.py -- Simple XML authoring class""" - -from fontTools.misc.textTools import byteord, strjoin, tobytes, tostr -import sys -import os -import string - -INDENT = " " - - -class XMLWriter(object): - def __init__( - self, - fileOrPath, - indentwhite=INDENT, - idlefunc=None, - encoding="utf_8", - newlinestr="\n", - ): - if encoding.lower().replace("-", "").replace("_", "") != "utf8": - raise Exception("Only UTF-8 encoding is supported.") - if fileOrPath == "-": - fileOrPath = sys.stdout - if not hasattr(fileOrPath, "write"): - self.filename = fileOrPath - self.file = open(fileOrPath, "wb") - self._closeStream = True - else: - self.filename = None - # assume writable file object - self.file = fileOrPath - self._closeStream = False - - # Figure out if writer expects bytes or unicodes - try: - # The bytes check should be first. See: - # https://github.com/fonttools/fonttools/pull/233 - self.file.write(b"") - self.totype = tobytes - except TypeError: - # This better not fail. - self.file.write("") - self.totype = tostr - self.indentwhite = self.totype(indentwhite) - if newlinestr is None: - self.newlinestr = self.totype(os.linesep) - else: - self.newlinestr = self.totype(newlinestr) - self.indentlevel = 0 - self.stack = [] - self.needindent = 1 - self.idlefunc = idlefunc - self.idlecounter = 0 - self._writeraw('') - self.newline() - - def __enter__(self): - return self - - def __exit__(self, exception_type, exception_value, traceback): - self.close() - - def close(self): - if self._closeStream: - self.file.close() - - def write(self, string, indent=True): - """Writes text.""" - self._writeraw(escape(string), indent=indent) - - def writecdata(self, string): - """Writes text in a CDATA section.""" - self._writeraw("") - - def write8bit(self, data, strip=False): - """Writes a bytes() sequence into the XML, escaping - non-ASCII bytes. 
When this is read in xmlReader, - the original bytes can be recovered by encoding to - 'latin-1'.""" - self._writeraw(escape8bit(data), strip=strip) - - def write_noindent(self, string): - """Writes text without indentation.""" - self._writeraw(escape(string), indent=False) - - def _writeraw(self, data, indent=True, strip=False): - """Writes bytes, possibly indented.""" - if indent and self.needindent: - self.file.write(self.indentlevel * self.indentwhite) - self.needindent = 0 - s = self.totype(data, encoding="utf_8") - if strip: - s = s.strip() - self.file.write(s) - - def newline(self): - self.file.write(self.newlinestr) - self.needindent = 1 - idlecounter = self.idlecounter - if not idlecounter % 100 and self.idlefunc is not None: - self.idlefunc() - self.idlecounter = idlecounter + 1 - - def comment(self, data): - data = escape(data) - lines = data.split("\n") - self._writeraw("") - - def simpletag(self, _TAG_, *args, **kwargs): - attrdata = self.stringifyattrs(*args, **kwargs) - data = "<%s%s/>" % (_TAG_, attrdata) - self._writeraw(data) - - def begintag(self, _TAG_, *args, **kwargs): - attrdata = self.stringifyattrs(*args, **kwargs) - data = "<%s%s>" % (_TAG_, attrdata) - self._writeraw(data) - self.stack.append(_TAG_) - self.indent() - - def endtag(self, _TAG_): - assert self.stack and self.stack[-1] == _TAG_, "nonmatching endtag" - del self.stack[-1] - self.dedent() - data = "" % _TAG_ - self._writeraw(data) - - def dumphex(self, data): - linelength = 16 - hexlinelength = linelength * 2 - chunksize = 8 - for i in range(0, len(data), linelength): - hexline = hexStr(data[i : i + linelength]) - line = "" - white = "" - for j in range(0, hexlinelength, chunksize): - line = line + white + hexline[j : j + chunksize] - white = " " - self._writeraw(line) - self.newline() - - def indent(self): - self.indentlevel = self.indentlevel + 1 - - def dedent(self): - assert self.indentlevel > 0 - self.indentlevel = self.indentlevel - 1 - - def stringifyattrs(self, *args, **kwargs): - if kwargs: - assert not args - attributes = sorted(kwargs.items()) - elif args: - assert len(args) == 1 - attributes = args[0] - else: - return "" - data = "" - for attr, value in attributes: - if not isinstance(value, (bytes, str)): - value = str(value) - data = data + ' %s="%s"' % (attr, escapeattr(value)) - return data - - -def escape(data): - data = tostr(data, "utf_8") - data = data.replace("&", "&") - data = data.replace("<", "<") - data = data.replace(">", ">") - data = data.replace("\r", " ") - return data - - -def escapeattr(data): - data = escape(data) - data = data.replace('"', """) - return data - - -def escape8bit(data): - """Input is Unicode string.""" - - def escapechar(c): - n = ord(c) - if 32 <= n <= 127 and c not in "<&>": - return c - else: - return "&#" + repr(n) + ";" - - return strjoin(map(escapechar, data.decode("latin-1"))) - - -def hexStr(s): - h = string.hexdigits - r = "" - for c in s: - i = byteord(c) - r = r + h[(i >> 4) & 0xF] + h[i & 0xF] - return r diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/binkdata.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/binkdata.h deleted file mode 100644 index 57619beee2a6e875db4db96074f3da1f8f56f7b0..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/binkdata.h +++ /dev/null @@ -1,655 +0,0 @@ -/* - * Bink video decoder - * Copyright (C) 2009 Konstantin Shishkov - * - * This file is part of FFmpeg. 
- * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#ifndef AVCODEC_BINKDATA_H -#define AVCODEC_BINKDATA_H - -#include - -/** Bink DCT and residue 8x8 block scan order */ -static const uint8_t bink_scan[64] = { - 0, 1, 8, 9, 2, 3, 10, 11, - 4, 5, 12, 13, 6, 7, 14, 15, - 20, 21, 28, 29, 22, 23, 30, 31, - 16, 17, 24, 25, 32, 33, 40, 41, - 34, 35, 42, 43, 48, 49, 56, 57, - 50, 51, 58, 59, 18, 19, 26, 27, - 36, 37, 44, 45, 38, 39, 46, 47, - 52, 53, 60, 61, 54, 55, 62, 63 -}; - -static const uint8_t bink_tree_bits[16][16] = { - { - 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, - 0x08, 0x09, 0x0A, 0x0B, 0x0C, 0x0D, 0x0E, 0x0F, - }, - { - 0x00, 0x01, 0x03, 0x05, 0x07, 0x09, 0x0B, 0x0D, - 0x0F, 0x13, 0x15, 0x17, 0x19, 0x1B, 0x1D, 0x1F, - }, - { - 0x00, 0x02, 0x01, 0x09, 0x05, 0x15, 0x0D, 0x1D, - 0x03, 0x13, 0x0B, 0x1B, 0x07, 0x17, 0x0F, 0x1F, - }, - { - 0x00, 0x02, 0x06, 0x01, 0x09, 0x05, 0x0D, 0x1D, - 0x03, 0x13, 0x0B, 0x1B, 0x07, 0x17, 0x0F, 0x1F, - }, - { - 0x00, 0x04, 0x02, 0x06, 0x01, 0x09, 0x05, 0x0D, - 0x03, 0x13, 0x0B, 0x1B, 0x07, 0x17, 0x0F, 0x1F, - }, - { - 0x00, 0x04, 0x02, 0x0A, 0x06, 0x0E, 0x01, 0x09, - 0x05, 0x0D, 0x03, 0x0B, 0x07, 0x17, 0x0F, 0x1F, - }, - { - 0x00, 0x02, 0x0A, 0x06, 0x0E, 0x01, 0x09, 0x05, - 0x0D, 0x03, 0x0B, 0x1B, 0x07, 0x17, 0x0F, 0x1F, - }, - { - 0x00, 0x01, 0x05, 0x03, 0x13, 0x0B, 0x1B, 0x3B, - 0x07, 0x27, 0x17, 0x37, 0x0F, 0x2F, 0x1F, 0x3F, - }, - { - 0x00, 0x01, 0x03, 0x13, 0x0B, 0x2B, 0x1B, 0x3B, - 0x07, 0x27, 0x17, 0x37, 0x0F, 0x2F, 0x1F, 0x3F, - }, - { - 0x00, 0x01, 0x05, 0x0D, 0x03, 0x13, 0x0B, 0x1B, - 0x07, 0x27, 0x17, 0x37, 0x0F, 0x2F, 0x1F, 0x3F, - }, - { - 0x00, 0x02, 0x01, 0x05, 0x0D, 0x03, 0x13, 0x0B, - 0x1B, 0x07, 0x17, 0x37, 0x0F, 0x2F, 0x1F, 0x3F, - }, - { - 0x00, 0x01, 0x09, 0x05, 0x0D, 0x03, 0x13, 0x0B, - 0x1B, 0x07, 0x17, 0x37, 0x0F, 0x2F, 0x1F, 0x3F, - }, - { - 0x00, 0x02, 0x01, 0x03, 0x13, 0x0B, 0x1B, 0x3B, - 0x07, 0x27, 0x17, 0x37, 0x0F, 0x2F, 0x1F, 0x3F, - }, - { - 0x00, 0x01, 0x05, 0x03, 0x07, 0x27, 0x17, 0x37, - 0x0F, 0x4F, 0x2F, 0x6F, 0x1F, 0x5F, 0x3F, 0x7F, - }, - { - 0x00, 0x01, 0x05, 0x03, 0x07, 0x17, 0x37, 0x77, - 0x0F, 0x4F, 0x2F, 0x6F, 0x1F, 0x5F, 0x3F, 0x7F, - }, - { - 0x00, 0x02, 0x01, 0x05, 0x03, 0x07, 0x27, 0x17, - 0x37, 0x0F, 0x2F, 0x6F, 0x1F, 0x5F, 0x3F, 0x7F, - }, -}; - -static const uint8_t bink_tree_lens[16][16] = { - { 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4 }, - { 1, 4, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5 }, - { 2, 2, 4, 4, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5 }, - { 2, 3, 3, 4, 4, 4, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5 }, - { 3, 3, 3, 3, 4, 4, 4, 4, 5, 5, 5, 5, 5, 5, 5, 5 }, - { 3, 3, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 5, 5, 5, 5 }, - { 2, 4, 4, 4, 4, 4, 4, 4, 4, 4, 5, 5, 5, 5, 5, 5 }, - { 1, 3, 3, 5, 5, 5, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6 }, - { 1, 2, 5, 5, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6 }, - { 1, 3, 4, 4, 5, 5, 5, 5, 6, 6, 6, 6, 6, 6, 6, 
6 }, - { 2, 2, 3, 4, 4, 5, 5, 5, 5, 5, 6, 6, 6, 6, 6, 6 }, - { 1, 4, 4, 4, 4, 5, 5, 5, 5, 5, 6, 6, 6, 6, 6, 6 }, - { 2, 2, 2, 5, 5, 5, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6 }, - { 1, 3, 3, 3, 6, 6, 6, 6, 7, 7, 7, 7, 7, 7, 7, 7 }, - { 1, 3, 3, 3, 5, 6, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7 }, - { 2, 2, 3, 3, 3, 6, 6, 6, 6, 6, 7, 7, 7, 7, 7, 7 }, -}; - -static const uint8_t bink_patterns[16][64] = { - { - 0x00, 0x08, 0x10, 0x18, 0x20, 0x28, 0x30, 0x38, - 0x39, 0x31, 0x29, 0x21, 0x19, 0x11, 0x09, 0x01, - 0x02, 0x0A, 0x12, 0x1A, 0x22, 0x2A, 0x32, 0x3A, - 0x3B, 0x33, 0x2B, 0x23, 0x1B, 0x13, 0x0B, 0x03, - 0x04, 0x0C, 0x14, 0x1C, 0x24, 0x2C, 0x34, 0x3C, - 0x3D, 0x35, 0x2D, 0x25, 0x1D, 0x15, 0x0D, 0x05, - 0x06, 0x0E, 0x16, 0x1E, 0x26, 0x2E, 0x36, 0x3E, - 0x3F, 0x37, 0x2F, 0x27, 0x1F, 0x17, 0x0F, 0x07, - }, - { - 0x3B, 0x3A, 0x39, 0x38, 0x30, 0x31, 0x32, 0x33, - 0x2B, 0x2A, 0x29, 0x28, 0x20, 0x21, 0x22, 0x23, - 0x1B, 0x1A, 0x19, 0x18, 0x10, 0x11, 0x12, 0x13, - 0x0B, 0x0A, 0x09, 0x08, 0x00, 0x01, 0x02, 0x03, - 0x04, 0x05, 0x06, 0x07, 0x0F, 0x0E, 0x0D, 0x0C, - 0x14, 0x15, 0x16, 0x17, 0x1F, 0x1E, 0x1D, 0x1C, - 0x24, 0x25, 0x26, 0x27, 0x2F, 0x2E, 0x2D, 0x2C, - 0x34, 0x35, 0x36, 0x37, 0x3F, 0x3E, 0x3D, 0x3C, - }, - { - 0x19, 0x11, 0x12, 0x1A, 0x1B, 0x13, 0x0B, 0x03, - 0x02, 0x0A, 0x09, 0x01, 0x00, 0x08, 0x10, 0x18, - 0x20, 0x28, 0x30, 0x38, 0x39, 0x31, 0x29, 0x2A, - 0x32, 0x3A, 0x3B, 0x33, 0x2B, 0x23, 0x22, 0x21, - 0x1D, 0x15, 0x16, 0x1E, 0x1F, 0x17, 0x0F, 0x07, - 0x06, 0x0E, 0x0D, 0x05, 0x04, 0x0C, 0x14, 0x1C, - 0x24, 0x2C, 0x34, 0x3C, 0x3D, 0x35, 0x2D, 0x2E, - 0x36, 0x3E, 0x3F, 0x37, 0x2F, 0x27, 0x26, 0x25, - }, - { - 0x03, 0x0B, 0x02, 0x0A, 0x01, 0x09, 0x00, 0x08, - 0x10, 0x18, 0x11, 0x19, 0x12, 0x1A, 0x13, 0x1B, - 0x23, 0x2B, 0x22, 0x2A, 0x21, 0x29, 0x20, 0x28, - 0x30, 0x38, 0x31, 0x39, 0x32, 0x3A, 0x33, 0x3B, - 0x3C, 0x34, 0x3D, 0x35, 0x3E, 0x36, 0x3F, 0x37, - 0x2F, 0x27, 0x2E, 0x26, 0x2D, 0x25, 0x2C, 0x24, - 0x1C, 0x14, 0x1D, 0x15, 0x1E, 0x16, 0x1F, 0x17, - 0x0F, 0x07, 0x0E, 0x06, 0x0D, 0x05, 0x0C, 0x04, - }, - { - 0x18, 0x19, 0x10, 0x11, 0x08, 0x09, 0x00, 0x01, - 0x02, 0x03, 0x0A, 0x0B, 0x12, 0x13, 0x1A, 0x1B, - 0x1C, 0x1D, 0x14, 0x15, 0x0C, 0x0D, 0x04, 0x05, - 0x06, 0x07, 0x0E, 0x0F, 0x16, 0x17, 0x1E, 0x1F, - 0x27, 0x26, 0x2F, 0x2E, 0x37, 0x36, 0x3F, 0x3E, - 0x3D, 0x3C, 0x35, 0x34, 0x2D, 0x2C, 0x25, 0x24, - 0x23, 0x22, 0x2B, 0x2A, 0x33, 0x32, 0x3B, 0x3A, - 0x39, 0x38, 0x31, 0x30, 0x29, 0x28, 0x21, 0x20, - }, - { - 0x00, 0x01, 0x02, 0x03, 0x08, 0x09, 0x0A, 0x0B, - 0x10, 0x11, 0x12, 0x13, 0x18, 0x19, 0x1A, 0x1B, - 0x20, 0x21, 0x22, 0x23, 0x28, 0x29, 0x2A, 0x2B, - 0x30, 0x31, 0x32, 0x33, 0x38, 0x39, 0x3A, 0x3B, - 0x04, 0x05, 0x06, 0x07, 0x0C, 0x0D, 0x0E, 0x0F, - 0x14, 0x15, 0x16, 0x17, 0x1C, 0x1D, 0x1E, 0x1F, - 0x24, 0x25, 0x26, 0x27, 0x2C, 0x2D, 0x2E, 0x2F, - 0x34, 0x35, 0x36, 0x37, 0x3C, 0x3D, 0x3E, 0x3F, - }, - { - 0x06, 0x07, 0x0F, 0x0E, 0x0D, 0x05, 0x0C, 0x04, - 0x03, 0x0B, 0x02, 0x0A, 0x09, 0x01, 0x00, 0x08, - 0x10, 0x18, 0x11, 0x19, 0x12, 0x1A, 0x13, 0x1B, - 0x14, 0x1C, 0x15, 0x1D, 0x16, 0x1E, 0x17, 0x1F, - 0x27, 0x2F, 0x26, 0x2E, 0x25, 0x2D, 0x24, 0x2C, - 0x23, 0x2B, 0x22, 0x2A, 0x21, 0x29, 0x20, 0x28, - 0x31, 0x30, 0x38, 0x39, 0x3A, 0x32, 0x3B, 0x33, - 0x3C, 0x34, 0x3D, 0x35, 0x36, 0x37, 0x3F, 0x3E, - }, - { - 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, - 0x0F, 0x0E, 0x0D, 0x0C, 0x0B, 0x0A, 0x09, 0x08, - 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, - 0x1F, 0x1E, 0x1D, 0x1C, 0x1B, 0x1A, 0x19, 0x18, - 0x20, 0x21, 0x22, 0x23, 0x24, 0x25, 0x26, 0x27, - 0x2F, 0x2E, 0x2D, 0x2C, 0x2B, 0x2A, 0x29, 0x28, - 
0x30, 0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37, - 0x3F, 0x3E, 0x3D, 0x3C, 0x3B, 0x3A, 0x39, 0x38, - }, - { - 0x00, 0x08, 0x09, 0x01, 0x02, 0x03, 0x0B, 0x0A, - 0x12, 0x13, 0x1B, 0x1A, 0x19, 0x11, 0x10, 0x18, - 0x20, 0x28, 0x29, 0x21, 0x22, 0x23, 0x2B, 0x2A, - 0x32, 0x31, 0x30, 0x38, 0x39, 0x3A, 0x3B, 0x33, - 0x34, 0x3C, 0x3D, 0x3E, 0x3F, 0x37, 0x36, 0x35, - 0x2D, 0x2C, 0x24, 0x25, 0x26, 0x2E, 0x2F, 0x27, - 0x1F, 0x17, 0x16, 0x1E, 0x1D, 0x1C, 0x14, 0x15, - 0x0D, 0x0C, 0x04, 0x05, 0x06, 0x0E, 0x0F, 0x07, - }, - { - 0x18, 0x19, 0x10, 0x11, 0x08, 0x09, 0x00, 0x01, - 0x02, 0x03, 0x0A, 0x0B, 0x12, 0x13, 0x1A, 0x1B, - 0x1C, 0x1D, 0x14, 0x15, 0x0C, 0x0D, 0x04, 0x05, - 0x06, 0x07, 0x0E, 0x0F, 0x16, 0x17, 0x1E, 0x1F, - 0x26, 0x27, 0x2E, 0x2F, 0x36, 0x37, 0x3E, 0x3F, - 0x3C, 0x3D, 0x34, 0x35, 0x2C, 0x2D, 0x24, 0x25, - 0x22, 0x23, 0x2A, 0x2B, 0x32, 0x33, 0x3A, 0x3B, - 0x38, 0x39, 0x30, 0x31, 0x28, 0x29, 0x20, 0x21, - }, - { - 0x00, 0x08, 0x01, 0x09, 0x02, 0x0A, 0x03, 0x0B, - 0x13, 0x1B, 0x12, 0x1A, 0x11, 0x19, 0x10, 0x18, - 0x20, 0x28, 0x21, 0x29, 0x22, 0x2A, 0x23, 0x2B, - 0x33, 0x3B, 0x32, 0x3A, 0x31, 0x39, 0x30, 0x38, - 0x3C, 0x34, 0x3D, 0x35, 0x3E, 0x36, 0x3F, 0x37, - 0x2F, 0x27, 0x2E, 0x26, 0x2D, 0x25, 0x2C, 0x24, - 0x1F, 0x17, 0x1E, 0x16, 0x1D, 0x15, 0x1C, 0x14, - 0x0C, 0x04, 0x0D, 0x05, 0x0E, 0x06, 0x0F, 0x07, - }, - { - 0x00, 0x08, 0x10, 0x18, 0x19, 0x1A, 0x1B, 0x13, - 0x0B, 0x03, 0x02, 0x01, 0x09, 0x11, 0x12, 0x0A, - 0x04, 0x0C, 0x14, 0x1C, 0x1D, 0x1E, 0x1F, 0x17, - 0x0F, 0x07, 0x06, 0x05, 0x0D, 0x15, 0x16, 0x0E, - 0x24, 0x2C, 0x34, 0x3C, 0x3D, 0x3E, 0x3F, 0x37, - 0x2F, 0x27, 0x26, 0x25, 0x2D, 0x35, 0x36, 0x2E, - 0x20, 0x28, 0x30, 0x38, 0x39, 0x3A, 0x3B, 0x33, - 0x2B, 0x23, 0x22, 0x21, 0x29, 0x31, 0x32, 0x2A, - }, - { - 0x00, 0x08, 0x09, 0x01, 0x02, 0x03, 0x0B, 0x0A, - 0x13, 0x1B, 0x1A, 0x12, 0x11, 0x10, 0x18, 0x19, - 0x21, 0x20, 0x28, 0x29, 0x2A, 0x22, 0x23, 0x2B, - 0x33, 0x3B, 0x3A, 0x32, 0x31, 0x39, 0x38, 0x30, - 0x34, 0x3C, 0x3D, 0x35, 0x36, 0x3E, 0x3F, 0x37, - 0x2F, 0x27, 0x26, 0x2E, 0x2D, 0x2C, 0x24, 0x25, - 0x1D, 0x1C, 0x14, 0x15, 0x16, 0x1E, 0x1F, 0x17, - 0x0E, 0x0F, 0x07, 0x06, 0x05, 0x0D, 0x0C, 0x04, - }, - { - 0x18, 0x10, 0x08, 0x00, 0x01, 0x02, 0x03, 0x0B, - 0x13, 0x1B, 0x1A, 0x19, 0x11, 0x0A, 0x09, 0x12, - 0x1C, 0x14, 0x0C, 0x04, 0x05, 0x06, 0x07, 0x0F, - 0x17, 0x1F, 0x1E, 0x1D, 0x15, 0x0E, 0x0D, 0x16, - 0x3C, 0x34, 0x2C, 0x24, 0x25, 0x26, 0x27, 0x2F, - 0x37, 0x3F, 0x3E, 0x3D, 0x35, 0x2E, 0x2D, 0x36, - 0x38, 0x30, 0x28, 0x20, 0x21, 0x22, 0x23, 0x2B, - 0x33, 0x3B, 0x3A, 0x39, 0x31, 0x2A, 0x29, 0x32, - }, - { - 0x00, 0x08, 0x09, 0x01, 0x02, 0x0A, 0x12, 0x11, - 0x10, 0x18, 0x19, 0x1A, 0x1B, 0x13, 0x0B, 0x03, - 0x07, 0x06, 0x0E, 0x0F, 0x17, 0x16, 0x15, 0x0D, - 0x05, 0x04, 0x0C, 0x14, 0x1C, 0x1D, 0x1E, 0x1F, - 0x3F, 0x3E, 0x36, 0x37, 0x2F, 0x2E, 0x2D, 0x35, - 0x3D, 0x3C, 0x34, 0x2C, 0x24, 0x25, 0x26, 0x27, - 0x38, 0x30, 0x31, 0x39, 0x3A, 0x32, 0x2A, 0x29, - 0x28, 0x20, 0x21, 0x22, 0x23, 0x2B, 0x33, 0x3B, - }, - { - 0x00, 0x01, 0x08, 0x09, 0x10, 0x11, 0x18, 0x19, - 0x20, 0x21, 0x28, 0x29, 0x30, 0x31, 0x38, 0x39, - 0x3A, 0x3B, 0x32, 0x33, 0x2A, 0x2B, 0x22, 0x23, - 0x1A, 0x1B, 0x12, 0x13, 0x0A, 0x0B, 0x02, 0x03, - 0x04, 0x05, 0x0C, 0x0D, 0x14, 0x15, 0x1C, 0x1D, - 0x24, 0x25, 0x2C, 0x2D, 0x34, 0x35, 0x3C, 0x3D, - 0x3E, 0x3F, 0x36, 0x37, 0x2E, 0x2F, 0x26, 0x27, - 0x1E, 0x1F, 0x16, 0x17, 0x0E, 0x0F, 0x06, 0x07, - } -}; - -static const int32_t bink_intra_quant[16][64] = { -{ - 0x010000, 0x016315, 0x01E83D, 0x02A535, 0x014E7B, 0x016577, 0x02F1E6, 0x02724C, - 0x010000, 0x00EEDA, 0x024102, 
0x017F9B, 0x00BE80, 0x00611E, 0x01083C, 0x00A552, - 0x021F88, 0x01DC53, 0x027FAD, 0x01F697, 0x014819, 0x00A743, 0x015A31, 0x009688, - 0x02346F, 0x030EE5, 0x01FBFA, 0x02C096, 0x01D000, 0x028396, 0x019247, 0x01F9AA, - 0x02346F, 0x01FBFA, 0x01DC53, 0x0231B8, 0x012F12, 0x01E06C, 0x00CB10, 0x0119A8, - 0x01C48C, 0x019748, 0x014E86, 0x0122AF, 0x02C628, 0x027F20, 0x0297B5, 0x023F32, - 0x025000, 0x01AB6B, 0x01D122, 0x0159B3, 0x012669, 0x008D43, 0x00EE1F, 0x0075ED, - 0x01490C, 0x010288, 0x00F735, 0x00EF51, 0x00E0F1, 0x0072AD, 0x00A4D8, 0x006517, -}, -{ - 0x015555, 0x01D971, 0x028AFC, 0x0386F1, 0x01BDF9, 0x01DC9F, 0x03ED33, 0x034311, - 0x015555, 0x013E78, 0x030158, 0x01FF7A, 0x00FE00, 0x00817D, 0x01604F, 0x00DC6D, - 0x02D4B5, 0x027B19, 0x0354E7, 0x029E1F, 0x01B577, 0x00DF04, 0x01CD96, 0x00C8B6, - 0x02F095, 0x0413DC, 0x02A54E, 0x03AB73, 0x026AAB, 0x035A1E, 0x02185E, 0x02A238, - 0x02F095, 0x02A54E, 0x027B19, 0x02ECF5, 0x019418, 0x028090, 0x010EC0, 0x01778A, - 0x025B66, 0x021F0B, 0x01BE09, 0x018394, 0x03B2E0, 0x03542A, 0x0374F1, 0x02FEEE, - 0x031555, 0x0239E4, 0x026C2D, 0x01CCEE, 0x01888C, 0x00BC59, 0x013D7E, 0x009D3C, - 0x01B6BB, 0x0158B5, 0x01499C, 0x013F17, 0x012BEC, 0x0098E6, 0x00DBCB, 0x0086C9, -}, -{ - 0x01AAAB, 0x024FCE, 0x032DBB, 0x0468AD, 0x022D78, 0x0253C7, 0x04E87F, 0x0413D5, - 0x01AAAB, 0x018E16, 0x03C1AE, 0x027F58, 0x013D80, 0x00A1DC, 0x01B863, 0x011388, - 0x0389E2, 0x0319DF, 0x042A21, 0x0345A7, 0x0222D4, 0x0116C5, 0x0240FC, 0x00FAE3, - 0x03ACBA, 0x0518D3, 0x034EA1, 0x04964F, 0x030555, 0x0430A5, 0x029E76, 0x034AC5, - 0x03ACBA, 0x034EA1, 0x0319DF, 0x03A833, 0x01F91E, 0x0320B4, 0x015270, 0x01D56D, - 0x02F23F, 0x02A6CE, 0x022D8B, 0x01E479, 0x049F98, 0x042935, 0x04522D, 0x03BEA9, - 0x03DAAB, 0x02C85D, 0x030738, 0x02402A, 0x01EAAF, 0x00EB6F, 0x018CDE, 0x00C48A, - 0x022469, 0x01AEE2, 0x019C02, 0x018EDD, 0x0176E7, 0x00BF20, 0x0112BE, 0x00A87B, -}, -{ - 0x020000, 0x02C62A, 0x03D07A, 0x054A69, 0x029CF6, 0x02CAEF, 0x05E3CC, 0x04E499, - 0x020000, 0x01DDB4, 0x048204, 0x02FF36, 0x017D01, 0x00C23C, 0x021077, 0x014AA3, - 0x043F0F, 0x03B8A6, 0x04FF5A, 0x03ED2E, 0x029032, 0x014E86, 0x02B461, 0x012D11, - 0x0468DF, 0x061DCA, 0x03F7F5, 0x05812C, 0x03A000, 0x05072C, 0x03248D, 0x03F353, - 0x0468DF, 0x03F7F5, 0x03B8A6, 0x046370, 0x025E24, 0x03C0D8, 0x019620, 0x02334F, - 0x038919, 0x032E91, 0x029D0D, 0x02455E, 0x058C50, 0x04FE3F, 0x052F69, 0x047E65, - 0x04A000, 0x0356D6, 0x03A243, 0x02B365, 0x024CD2, 0x011A85, 0x01DC3E, 0x00EBD9, - 0x029218, 0x020510, 0x01EE69, 0x01DEA2, 0x01C1E2, 0x00E559, 0x0149B0, 0x00CA2D, -}, -{ - 0x02AAAB, 0x03B2E3, 0x0515F8, 0x070DE2, 0x037BF2, 0x03B93E, 0x07DA65, 0x068621, - 0x02AAAB, 0x027CF0, 0x0602B1, 0x03FEF3, 0x01FC01, 0x0102FA, 0x02C09F, 0x01B8DA, - 0x05A96A, 0x04F632, 0x06A9CE, 0x053C3E, 0x036AED, 0x01BE09, 0x039B2D, 0x01916B, - 0x05E129, 0x0827B8, 0x054A9C, 0x0756E5, 0x04D555, 0x06B43B, 0x0430BC, 0x05446F, - 0x05E129, 0x054A9C, 0x04F632, 0x05D9EB, 0x032830, 0x050121, 0x021D80, 0x02EF14, - 0x04B6CC, 0x043E16, 0x037C11, 0x030728, 0x0765C0, 0x06A855, 0x06E9E2, 0x05FDDB, - 0x062AAB, 0x0473C8, 0x04D85A, 0x0399DC, 0x031118, 0x0178B2, 0x027AFD, 0x013A77, - 0x036D76, 0x02B16A, 0x029337, 0x027E2E, 0x0257D8, 0x0131CC, 0x01B796, 0x010D91, -}, -{ - 0x038000, 0x04DACA, 0x06ACD5, 0x094238, 0x0492AE, 0x04E322, 0x0A4EA5, 0x08900C, - 0x038000, 0x0343FB, 0x07E388, 0x053E9F, 0x029AC1, 0x0153E8, 0x039CD0, 0x02429E, - 0x076E5B, 0x068322, 0x08BEDE, 0x06DF11, 0x047C57, 0x02496B, 0x04BBAB, 0x020EDD, - 0x07B786, 0x0AB421, 0x06F1ED, 0x09A20D, 0x065800, 0x08CC8E, 0x057FF7, 0x06E9D2, - 0x07B786, 0x06F1ED, 
0x068322, 0x07AE04, 0x0424BF, 0x06917B, 0x02C6B8, 0x03D9CB, - 0x062FEB, 0x05917D, 0x0492D7, 0x03F964, 0x09B58C, 0x08BCEF, 0x0912F8, 0x07DD30, - 0x081800, 0x05D7F7, 0x065BF6, 0x04B9F1, 0x040670, 0x01EE69, 0x03416C, 0x019CBC, - 0x047FAA, 0x0388DC, 0x036138, 0x03459C, 0x03134C, 0x01915C, 0x0240F5, 0x0161CF, -}, -{ - 0x040000, 0x058C54, 0x07A0F4, 0x0A94D3, 0x0539EC, 0x0595DD, 0x0BC798, 0x09C932, - 0x040000, 0x03BB68, 0x090409, 0x05FE6D, 0x02FA01, 0x018477, 0x0420EE, 0x029547, - 0x087E1F, 0x07714C, 0x09FEB5, 0x07DA5D, 0x052064, 0x029D0D, 0x0568C3, 0x025A21, - 0x08D1BE, 0x0C3B94, 0x07EFEA, 0x0B0258, 0x074000, 0x0A0E59, 0x06491A, 0x07E6A7, - 0x08D1BE, 0x07EFEA, 0x07714C, 0x08C6E0, 0x04BC48, 0x0781B1, 0x032C3F, 0x04669F, - 0x071232, 0x065D22, 0x053A1A, 0x048ABC, 0x0B18A0, 0x09FC7F, 0x0A5ED3, 0x08FCC9, - 0x094000, 0x06ADAC, 0x074487, 0x0566CA, 0x0499A5, 0x02350B, 0x03B87B, 0x01D7B3, - 0x052430, 0x040A20, 0x03DCD3, 0x03BD45, 0x0383C5, 0x01CAB3, 0x029361, 0x01945A, -}, -{ - 0x050000, 0x06EF69, 0x098931, 0x0D3A07, 0x068867, 0x06FB55, 0x0EB97E, 0x0C3B7E, - 0x050000, 0x04AA42, 0x0B450B, 0x077E08, 0x03B881, 0x01E595, 0x05292A, 0x033A99, - 0x0A9DA7, 0x094D9F, 0x0C7E62, 0x09D0F4, 0x06687D, 0x034450, 0x06C2F4, 0x02F0AA, - 0x0B062D, 0x0F4A78, 0x09EBE4, 0x0DC2EE, 0x091000, 0x0C91EF, 0x07DB61, 0x09E050, - 0x0B062D, 0x09EBE4, 0x094D9F, 0x0AF898, 0x05EB59, 0x09621D, 0x03F74F, 0x058046, - 0x08D6BE, 0x07F46A, 0x0688A0, 0x05AD6B, 0x0DDEC8, 0x0C7B9F, 0x0CF687, 0x0B3BFB, - 0x0B9000, 0x085917, 0x0915A8, 0x06C07D, 0x05C00E, 0x02C24D, 0x04A69A, 0x024D9F, - 0x066D3C, 0x050CA7, 0x04D407, 0x04AC96, 0x0464B6, 0x023D5F, 0x033839, 0x01F971, -}, -{ - 0x060000, 0x08527E, 0x0B716E, 0x0FDF3C, 0x07D6E1, 0x0860CC, 0x11AB63, 0x0EADCB, - 0x060000, 0x05991C, 0x0D860D, 0x08FDA3, 0x047702, 0x0246B3, 0x063165, 0x03DFEA, - 0x0CBD2E, 0x0B29F1, 0x0EFE0F, 0x0BC78B, 0x07B096, 0x03EB93, 0x081D24, 0x038732, - 0x0D3A9C, 0x12595D, 0x0BE7DF, 0x108384, 0x0AE000, 0x0F1585, 0x096DA8, 0x0BD9FA, - 0x0D3A9C, 0x0BE7DF, 0x0B29F1, 0x0D2A50, 0x071A6B, 0x0B4289, 0x04C25F, 0x0699EE, - 0x0A9B4A, 0x098BB2, 0x07D727, 0x06D01A, 0x10A4F0, 0x0EFABE, 0x0F8E3C, 0x0D7B2E, - 0x0DE000, 0x0A0482, 0x0AE6CA, 0x081A2F, 0x06E677, 0x034F90, 0x0594B9, 0x02C38C, - 0x07B649, 0x060F2F, 0x05CB3C, 0x059BE7, 0x0545A7, 0x02B00C, 0x03DD11, 0x025E87, -}, -{ - 0x080000, 0x0B18A8, 0x0F41E8, 0x1529A5, 0x0A73D7, 0x0B2BBB, 0x178F2F, 0x139264, - 0x080000, 0x0776CF, 0x120812, 0x0BFCD9, 0x05F402, 0x0308EF, 0x0841DC, 0x052A8E, - 0x10FC3E, 0x0EE297, 0x13FD69, 0x0FB4B9, 0x0A40C8, 0x053A1A, 0x0AD186, 0x04B442, - 0x11A37B, 0x187727, 0x0FDFD4, 0x1604B0, 0x0E8000, 0x141CB1, 0x0C9235, 0x0FCD4D, - 0x11A37B, 0x0FDFD4, 0x0EE297, 0x118DC0, 0x09788F, 0x0F0362, 0x06587F, 0x08CD3D, - 0x0E2463, 0x0CBA43, 0x0A7434, 0x091577, 0x163140, 0x13F8FE, 0x14BDA5, 0x11F992, - 0x128000, 0x0D5B58, 0x0E890D, 0x0ACD94, 0x093349, 0x046A15, 0x0770F7, 0x03AF65, - 0x0A4861, 0x08143F, 0x07B9A6, 0x077A89, 0x070789, 0x039565, 0x0526C2, 0x0328B4, -}, -{ - 0x0C0000, 0x10A4FD, 0x16E2DB, 0x1FBE78, 0x0FADC3, 0x10C198, 0x2356C7, 0x1D5B96, - 0x0C0000, 0x0B3237, 0x1B0C1A, 0x11FB46, 0x08EE03, 0x048D66, 0x0C62CA, 0x07BFD5, - 0x197A5D, 0x1653E3, 0x1DFC1E, 0x178F16, 0x0F612C, 0x07D727, 0x103A49, 0x070E64, - 0x1A7539, 0x24B2BB, 0x17CFBD, 0x210709, 0x15C000, 0x1E2B0A, 0x12DB4F, 0x17B3F4, - 0x1A7539, 0x17CFBD, 0x1653E3, 0x1A54A0, 0x0E34D7, 0x168513, 0x0984BE, 0x0D33DC, - 0x153695, 0x131765, 0x0FAE4E, 0x0DA033, 0x2149E1, 0x1DF57D, 0x1F1C78, 0x1AF65B, - 0x1BC000, 0x140904, 0x15CD94, 0x10345E, 0x0DCCEE, 0x069F20, 0x0B2972, 0x058718, - 0x0F6C91, 
0x0C1E5E, 0x0B9678, 0x0B37CE, 0x0A8B4E, 0x056018, 0x07BA22, 0x04BD0E, -}, -{ - 0x110000, 0x179466, 0x206C0C, 0x2CF87F, 0x16362A, 0x17BCED, 0x321044, 0x299714, - 0x110000, 0x0FDC79, 0x265125, 0x19794E, 0x0CA685, 0x0672FB, 0x118BF4, 0x0AFA6D, - 0x241804, 0x1FA181, 0x2A7A80, 0x21600A, 0x15C9A9, 0x0B1B77, 0x16FD3C, 0x09FF0D, - 0x257B66, 0x33FD33, 0x21BBA2, 0x2EC9F7, 0x1ED000, 0x2ABCF9, 0x1AB6B0, 0x219444, - 0x257B66, 0x21BBA2, 0x1FA181, 0x254D38, 0x142030, 0x1FE730, 0x0D7C0E, 0x12B423, - 0x1E0D52, 0x1B0BCF, 0x1636EE, 0x134D9E, 0x2F28A9, 0x2A711B, 0x2C12FF, 0x263256, - 0x275000, 0x1C621B, 0x1EE33C, 0x16F4DB, 0x138CFB, 0x09616E, 0x0FD00C, 0x07D4B7, - 0x15D9CE, 0x112B06, 0x106A80, 0x0FE464, 0x0EF004, 0x079D77, 0x0AF25B, 0x06B67F, -}, -{ - 0x160000, 0x1E83CF, 0x29F53D, 0x3A3286, 0x1CBE90, 0x1EB842, 0x40C9C2, 0x35D293, - 0x160000, 0x1486BA, 0x319630, 0x20F756, 0x105F06, 0x085891, 0x16B51E, 0x0E3506, - 0x2EB5AA, 0x28EF20, 0x36F8E1, 0x2B30FE, 0x1C3225, 0x0E5FC7, 0x1DC030, 0x0CEFB7, - 0x308193, 0x4347AC, 0x2BA786, 0x3C8CE5, 0x27E000, 0x374EE7, 0x229212, 0x2B7494, - 0x308193, 0x2BA786, 0x28EF20, 0x3045D0, 0x1A0B89, 0x29494D, 0x11735D, 0x183469, - 0x26E410, 0x230039, 0x1CBF8F, 0x18FB09, 0x3D0771, 0x36ECBA, 0x390986, 0x316E52, - 0x32E000, 0x24BB33, 0x27F8E4, 0x1DB557, 0x194D09, 0x0C23BB, 0x1476A6, 0x0A2256, - 0x1C470A, 0x1637AD, 0x153E87, 0x1490FA, 0x1354B9, 0x09DAD6, 0x0E2A94, 0x08AFF0, -}, -{ - 0x1C0000, 0x26D64D, 0x3566AA, 0x4A11C2, 0x249572, 0x27190E, 0x527525, 0x44805E, - 0x1C0000, 0x1A1FD6, 0x3F1C3E, 0x29F4F9, 0x14D607, 0x0A9F44, 0x1CE683, 0x1214F0, - 0x3B72D9, 0x341911, 0x45F6F0, 0x36F889, 0x23E2BB, 0x124B5B, 0x25DD54, 0x1076E9, - 0x3DBC30, 0x55A109, 0x378F64, 0x4D1069, 0x32C000, 0x46646C, 0x2BFFB9, 0x374E8E, - 0x3DBC30, 0x378F64, 0x341911, 0x3D7020, 0x2125F5, 0x348BD6, 0x1635BC, 0x1ECE57, - 0x317F5B, 0x2C8BEB, 0x2496B6, 0x1FCB22, 0x4DAC61, 0x45E778, 0x4897C2, 0x3EE97F, - 0x40C000, 0x2EBFB5, 0x32DFAE, 0x25CF86, 0x203380, 0x0F734B, 0x1A0B5F, 0x0CE5E2, - 0x23FD53, 0x1C46DC, 0x1B09C4, 0x1A2CE1, 0x189A60, 0x0C8AE2, 0x1207A5, 0x0B0E77, -}, -{ - 0x220000, 0x2F28CC, 0x40D818, 0x59F0FE, 0x2C6C53, 0x2F79DA, 0x642089, 0x532E29, - 0x220000, 0x1FB8F1, 0x4CA24B, 0x32F29C, 0x194D09, 0x0CE5F7, 0x2317E8, 0x15F4DB, - 0x483007, 0x3F4303, 0x54F4FF, 0x42C014, 0x2B9351, 0x1636EE, 0x2DFA79, 0x13FE1A, - 0x4AF6CC, 0x67FA67, 0x437743, 0x5D93EE, 0x3DA000, 0x5579F1, 0x356D61, 0x432888, - 0x4AF6CC, 0x437743, 0x3F4303, 0x4A9A70, 0x284060, 0x3FCE60, 0x1AF81B, 0x256845, - 0x3C1AA5, 0x36179D, 0x2C6DDD, 0x269B3C, 0x5E5152, 0x54E237, 0x5825FE, 0x4C64AD, - 0x4EA000, 0x38C437, 0x3DC678, 0x2DE9B5, 0x2719F7, 0x12C2DB, 0x1FA018, 0x0FA96E, - 0x2BB39B, 0x22560C, 0x20D500, 0x1FC8C8, 0x1DE007, 0x0F3AEE, 0x15E4B7, 0x0D6CFE, -}, -{ - 0x2C0000, 0x3D079E, 0x53EA79, 0x74650C, 0x397D20, 0x3D7083, 0x819383, 0x6BA525, - 0x2C0000, 0x290D75, 0x632C61, 0x41EEAC, 0x20BE0C, 0x10B121, 0x2D6A3B, 0x1C6A0C, - 0x5D6B54, 0x51DE40, 0x6DF1C2, 0x5661FB, 0x38644B, 0x1CBF8F, 0x3B8060, 0x19DF6D, - 0x610326, 0x868F57, 0x574F0B, 0x7919CA, 0x4FC000, 0x6E9DCE, 0x452423, 0x56E928, - 0x610326, 0x574F0B, 0x51DE40, 0x608BA0, 0x341713, 0x52929A, 0x22E6BA, 0x3068D2, - 0x4DC821, 0x460071, 0x397F1E, 0x31F611, 0x7A0EE2, 0x6DD974, 0x72130C, 0x62DCA3, - 0x65C000, 0x497665, 0x4FF1C9, 0x3B6AAE, 0x329A12, 0x184776, 0x28ED4D, 0x1444AC, - 0x388E14, 0x2C6F5A, 0x2A7D0F, 0x2921F4, 0x26A973, 0x13B5AD, 0x1C5528, 0x115FDF, -}, -}; - -static const int32_t bink_inter_quant[16][64] = { -{ - 0x010000, 0x017946, 0x01A5A9, 0x0248DC, 0x016363, 0x0152A7, 0x0243EC, 0x0209EA, - 0x012000, 0x00E248, 
0x01BBDA, 0x015CBC, 0x00A486, 0x0053E0, 0x00F036, 0x008095, - 0x01B701, 0x016959, 0x01B0B9, 0x0153FD, 0x00F8E7, 0x007EE4, 0x00EA30, 0x007763, - 0x01B701, 0x0260EB, 0x019DE9, 0x023E1B, 0x017000, 0x01FE6E, 0x012DB5, 0x01A27B, - 0x01E0D1, 0x01B0B9, 0x018A33, 0x01718D, 0x00D87A, 0x014449, 0x007B9A, 0x00AB71, - 0x013178, 0x0112EA, 0x00AD08, 0x009BB9, 0x023D97, 0x020437, 0x021CCC, 0x01E6B4, - 0x018000, 0x012DB5, 0x0146D9, 0x0100CE, 0x00CFD2, 0x006E5C, 0x00B0E4, 0x005A2D, - 0x00E9CC, 0x00B7B1, 0x00846F, 0x006B85, 0x008337, 0x0042E5, 0x004A10, 0x002831, -}, -{ - 0x015555, 0x01F708, 0x023237, 0x030BD0, 0x01D9D9, 0x01C389, 0x03053B, 0x02B7E3, - 0x018000, 0x012DB5, 0x024FCE, 0x01D0FA, 0x00DB5D, 0x006FD5, 0x014048, 0x00AB71, - 0x024957, 0x01E1CC, 0x0240F7, 0x01C551, 0x014BDE, 0x00A92F, 0x013840, 0x009F2F, - 0x024957, 0x032BE4, 0x0227E1, 0x02FD7A, 0x01EAAB, 0x02A893, 0x019247, 0x022DF9, - 0x028116, 0x0240F7, 0x020D99, 0x01ECBC, 0x0120A3, 0x01B061, 0x00A4CE, 0x00E497, - 0x01974B, 0x016E8E, 0x00E6B5, 0x00CFA2, 0x02FCC9, 0x02B04A, 0x02D110, 0x0288F1, - 0x020000, 0x019247, 0x01B3CC, 0x015668, 0x011518, 0x009325, 0x00EBDA, 0x00783D, - 0x0137BB, 0x00F4ED, 0x00B093, 0x008F5C, 0x00AEF4, 0x005931, 0x0062BF, 0x003597, -}, -{ - 0x01AAAB, 0x0274CB, 0x02BEC4, 0x03CEC4, 0x02504F, 0x02346C, 0x03C689, 0x0365DC, - 0x01E000, 0x017922, 0x02E3C1, 0x024539, 0x011235, 0x008BCA, 0x01905A, 0x00D64D, - 0x02DBAD, 0x025A40, 0x02D134, 0x0236A5, 0x019ED6, 0x00D37B, 0x018650, 0x00C6FB, - 0x02DBAD, 0x03F6DD, 0x02B1D9, 0x03BCD8, 0x026555, 0x0352B8, 0x01F6D8, 0x02B977, - 0x03215C, 0x02D134, 0x029100, 0x0267EB, 0x0168CC, 0x021C7A, 0x00CE01, 0x011DBD, - 0x01FD1E, 0x01CA31, 0x012062, 0x01038A, 0x03BBFB, 0x035C5C, 0x038554, 0x032B2D, - 0x028000, 0x01F6D8, 0x0220C0, 0x01AC02, 0x015A5E, 0x00B7EF, 0x0126D1, 0x00964C, - 0x0185A9, 0x013228, 0x00DCB8, 0x00B333, 0x00DAB2, 0x006F7D, 0x007B6F, 0x0042FC, -}, -{ - 0x020000, 0x02F28D, 0x034B52, 0x0491B8, 0x02C6C5, 0x02A54E, 0x0487D8, 0x0413D5, - 0x024000, 0x01C48F, 0x0377B5, 0x02B977, 0x01490C, 0x00A7BF, 0x01E06C, 0x01012A, - 0x036E03, 0x02D2B3, 0x036172, 0x02A7FA, 0x01F1CE, 0x00FDC7, 0x01D460, 0x00EEC7, - 0x036E03, 0x04C1D6, 0x033BD1, 0x047C37, 0x02E000, 0x03FCDD, 0x025B6A, 0x0344F5, - 0x03C1A1, 0x036172, 0x031466, 0x02E31B, 0x01B0F5, 0x028892, 0x00F735, 0x0156E2, - 0x0262F1, 0x0225D5, 0x015A10, 0x013772, 0x047B2D, 0x04086E, 0x043998, 0x03CD69, - 0x030000, 0x025B6A, 0x028DB3, 0x02019B, 0x019FA3, 0x00DCB8, 0x0161C7, 0x00B45B, - 0x01D398, 0x016F63, 0x0108DD, 0x00D70A, 0x01066F, 0x0085C9, 0x00941F, 0x005062, -}, -{ - 0x02AAAB, 0x03EE11, 0x04646D, 0x0617A0, 0x03B3B2, 0x038713, 0x060A75, 0x056FC6, - 0x030000, 0x025B6A, 0x049F9B, 0x03A1F4, 0x01B6BB, 0x00DFAA, 0x028090, 0x0156E2, - 0x0492AE, 0x03C399, 0x0481ED, 0x038AA2, 0x0297BD, 0x01525F, 0x027080, 0x013E5E, - 0x0492AE, 0x0657C8, 0x044FC1, 0x05FAF4, 0x03D555, 0x055126, 0x03248D, 0x045BF2, - 0x05022D, 0x0481ED, 0x041B33, 0x03D979, 0x024147, 0x0360C3, 0x01499C, 0x01C92E, - 0x032E96, 0x02DD1C, 0x01CD6A, 0x019F43, 0x05F991, 0x056093, 0x05A220, 0x0511E1, - 0x040000, 0x03248D, 0x036799, 0x02ACCF, 0x022A2F, 0x01264B, 0x01D7B5, 0x00F079, - 0x026F75, 0x01E9D9, 0x016127, 0x011EB8, 0x015DE9, 0x00B262, 0x00C57F, 0x006B2D, -}, -{ - 0x038000, 0x052876, 0x05C3CF, 0x07FF02, 0x04DBD9, 0x04A148, 0x07EDBA, 0x0722B4, - 0x03F000, 0x0317FB, 0x06117C, 0x04C491, 0x023FD5, 0x01258F, 0x0348BD, 0x01C209, - 0x060085, 0x04F0B9, 0x05EA87, 0x04A5F5, 0x036728, 0x01BC1C, 0x0333A8, 0x01A1DB, - 0x060085, 0x085336, 0x05A8AE, 0x07D960, 0x050800, 0x06FA82, 0x041FF9, 0x05B8AE, - 0x0692DA, 
0x05EA87, 0x0563B2, 0x050D6E, 0x02F5AD, 0x046F00, 0x01B09C, 0x02580C, - 0x042D25, 0x03C235, 0x025D9B, 0x022108, 0x07D78F, 0x070EC1, 0x0764CA, 0x06A777, - 0x054000, 0x041FF9, 0x0477F9, 0x0382D0, 0x02D75E, 0x018242, 0x026B1D, 0x013B9F, - 0x03324A, 0x0282ED, 0x01CF83, 0x017851, 0x01CB42, 0x00EA21, 0x010336, 0x008CAC, -}, -{ - 0x040000, 0x05E519, 0x0696A4, 0x092370, 0x058D8A, 0x054A9C, 0x090FB0, 0x0827AA, - 0x048000, 0x03891F, 0x06EF69, 0x0572EE, 0x029218, 0x014F7E, 0x03C0D8, 0x020254, - 0x06DC05, 0x05A565, 0x06C2E4, 0x054FF3, 0x03E39B, 0x01FB8E, 0x03A8C0, 0x01DD8D, - 0x06DC05, 0x0983AC, 0x0677A2, 0x08F86E, 0x05C000, 0x07F9B9, 0x04B6D4, 0x0689EB, - 0x078343, 0x06C2E4, 0x0628CC, 0x05C635, 0x0361EA, 0x051124, 0x01EE69, 0x02ADC5, - 0x04C5E1, 0x044BAA, 0x02B41F, 0x026EE5, 0x08F65A, 0x0810DD, 0x087330, 0x079AD1, - 0x060000, 0x04B6D4, 0x051B65, 0x040337, 0x033F47, 0x01B970, 0x02C38F, 0x0168B6, - 0x03A730, 0x02DEC6, 0x0211BA, 0x01AE14, 0x020CDD, 0x010B93, 0x01283E, 0x00A0C4, -}, -{ - 0x050000, 0x075E60, 0x083C4D, 0x0B6C4C, 0x06F0ED, 0x069D43, 0x0B539C, 0x0A3194, - 0x05A000, 0x046B67, 0x08AB44, 0x06CFAA, 0x03369E, 0x01A35E, 0x04B10F, 0x0282E8, - 0x089307, 0x070EBF, 0x08739C, 0x06A3F0, 0x04DC82, 0x027A72, 0x0492F0, 0x0254F0, - 0x089307, 0x0BE497, 0x08158B, 0x0B3689, 0x073000, 0x09F827, 0x05E489, 0x082C66, - 0x096413, 0x08739C, 0x07B2FF, 0x0737C2, 0x043A64, 0x06556D, 0x026A04, 0x035936, - 0x05F75A, 0x055E94, 0x036127, 0x030A9E, 0x0B33F1, 0x0A1514, 0x0A8FFC, 0x098186, - 0x078000, 0x05E489, 0x06623F, 0x050405, 0x040F19, 0x0227CC, 0x037473, 0x01C2E3, - 0x0490FC, 0x039677, 0x029629, 0x021999, 0x029015, 0x014E78, 0x01724E, 0x00C8F5, -}, -{ - 0x060000, 0x08D7A6, 0x09E1F6, 0x0DB528, 0x085450, 0x07EFEA, 0x0D9788, 0x0C3B7E, - 0x06C000, 0x054DAE, 0x0A671E, 0x082C66, 0x03DB24, 0x01F73E, 0x05A145, 0x03037D, - 0x0A4A08, 0x087818, 0x0A2455, 0x07F7ED, 0x05D569, 0x02F955, 0x057D20, 0x02CC54, - 0x0A4A08, 0x0E4582, 0x09B373, 0x0D74A5, 0x08A000, 0x0BF696, 0x07123E, 0x09CEE0, - 0x0B44E4, 0x0A2455, 0x093D32, 0x08A950, 0x0512DF, 0x0799B6, 0x02E59E, 0x0404A7, - 0x0728D2, 0x06717F, 0x040E2F, 0x03A657, 0x0D7187, 0x0C194B, 0x0CACC8, 0x0B683A, - 0x090000, 0x07123E, 0x07A918, 0x0604D2, 0x04DEEA, 0x029629, 0x042556, 0x021D11, - 0x057AC8, 0x044E28, 0x031A97, 0x02851E, 0x03134C, 0x01915C, 0x01BC5D, 0x00F126, -}, -{ - 0x080000, 0x0BCA33, 0x0D2D48, 0x1246E0, 0x0B1B15, 0x0A9538, 0x121F5F, 0x104F53, - 0x090000, 0x07123E, 0x0DDED2, 0x0AE5DD, 0x052430, 0x029EFD, 0x0781B1, 0x0404A7, - 0x0DB80B, 0x0B4ACB, 0x0D85C7, 0x0A9FE7, 0x07C736, 0x03F71D, 0x075180, 0x03BB1A, - 0x0DB80B, 0x130757, 0x0CEF44, 0x11F0DC, 0x0B8000, 0x0FF372, 0x096DA8, 0x0D13D6, - 0x0F0686, 0x0D85C7, 0x0C5198, 0x0B8C6A, 0x06C3D4, 0x0A2248, 0x03DCD3, 0x055B8A, - 0x098BC3, 0x089754, 0x05683E, 0x04DDC9, 0x11ECB4, 0x1021B9, 0x10E661, 0x0F35A3, - 0x0C0000, 0x096DA8, 0x0A36CB, 0x08066E, 0x067E8E, 0x0372E1, 0x05871E, 0x02D16B, - 0x074E60, 0x05BD8B, 0x042374, 0x035C28, 0x0419BB, 0x021726, 0x02507C, 0x014188, -}, -{ - 0x0C0000, 0x11AF4C, 0x13C3EC, 0x1B6A50, 0x10A89F, 0x0FDFD4, 0x1B2F0F, 0x1876FD, - 0x0D8000, 0x0A9B5D, 0x14CE3C, 0x1058CB, 0x07B649, 0x03EE7B, 0x0B4289, 0x0606FB, - 0x149410, 0x10F030, 0x1448AB, 0x0FEFDA, 0x0BAAD2, 0x05F2AB, 0x0AFA40, 0x0598A7, - 0x149410, 0x1C8B03, 0x1366E6, 0x1AE949, 0x114000, 0x17ED2B, 0x0E247C, 0x139DC1, - 0x1689C8, 0x1448AB, 0x127A63, 0x11529F, 0x0A25BE, 0x0F336D, 0x05CB3C, 0x08094E, - 0x0E51A4, 0x0CE2FE, 0x081C5D, 0x074CAE, 0x1AE30E, 0x183296, 0x195991, 0x16D074, - 0x120000, 0x0E247C, 0x0F5230, 0x0C09A5, 0x09BDD5, 0x052C51, 0x084AAC, 0x043A21, - 
0x0AF590, 0x089C51, 0x06352E, 0x050A3B, 0x062698, 0x0322B9, 0x0378BA, 0x01E24D, -}, -{ - 0x110000, 0x190DAC, 0x1C0039, 0x26D69C, 0x17998C, 0x167D16, 0x2682AB, 0x22A891, - 0x132000, 0x0F06C3, 0x1D797F, 0x172876, 0x0AECE7, 0x0591D9, 0x0FF398, 0x0889E3, - 0x1D2717, 0x17FEEF, 0x1CBC47, 0x1693CA, 0x108754, 0x086D1D, 0x0F8D30, 0x07ED98, - 0x1D2717, 0x286F9A, 0x1B7C71, 0x261FD3, 0x187000, 0x21E552, 0x140904, 0x1BCA27, - 0x1FEDDC, 0x1CBC47, 0x1A2D62, 0x188A62, 0x0E6022, 0x1588DA, 0x083540, 0x0B6284, - 0x1448FE, 0x124192, 0x0B7D84, 0x0A574B, 0x2616FF, 0x2247AA, 0x23E98D, 0x2051FA, - 0x198000, 0x140904, 0x15B46F, 0x110DAA, 0x0DCCEE, 0x07541E, 0x0BBF1F, 0x05FD04, - 0x0F868B, 0x0C32C8, 0x08CB57, 0x0723D4, 0x08B6AD, 0x047130, 0x04EB08, 0x02AB42, -}, -{ - 0x160000, 0x206C0C, 0x243C86, 0x3242E8, 0x1E8A79, 0x1D1A59, 0x31D646, 0x2CDA25, - 0x18C000, 0x13722A, 0x2624C3, 0x1DF820, 0x0E2385, 0x073537, 0x14A4A7, 0x0B0CCC, - 0x25BA1D, 0x1F0DAE, 0x252FE4, 0x1D37BB, 0x1563D6, 0x0AE78E, 0x142021, 0x0A4288, - 0x25BA1D, 0x345430, 0x2391FB, 0x31565C, 0x1FA000, 0x2BDD7A, 0x19ED8D, 0x23F68C, - 0x2951EF, 0x252FE4, 0x21E061, 0x1FC224, 0x129A87, 0x1BDE47, 0x0A9F44, 0x0EBBBA, - 0x1A4058, 0x17A026, 0x0EDEAB, 0x0D61E9, 0x314AEF, 0x2C5CBE, 0x2E798A, 0x29D380, - 0x210000, 0x19ED8D, 0x1C16AE, 0x1611AE, 0x11DC06, 0x097BEA, 0x0F3391, 0x07BFE7, - 0x141787, 0x0FC93E, 0x0B617F, 0x093D6D, 0x0B46C1, 0x05BFA8, 0x065D55, 0x037437, -}, -{ - 0x1C0000, 0x2943B2, 0x2E1E7C, 0x3FF810, 0x26DEC9, 0x250A43, 0x3F6DCE, 0x3915A3, - 0x1F8000, 0x18BFD8, 0x308BE1, 0x262485, 0x11FEA9, 0x092C75, 0x1A45EB, 0x0E1049, - 0x300425, 0x2785C6, 0x2F5439, 0x252FA8, 0x1B393F, 0x0DE0E4, 0x199D41, 0x0D0EDC, - 0x300425, 0x4299B2, 0x2D456E, 0x3ECB00, 0x284000, 0x37D40F, 0x20FFCB, 0x2DC56D, - 0x3496D3, 0x2F5439, 0x2B1D93, 0x286B74, 0x17AD66, 0x2377FE, 0x0D84E2, 0x12C062, - 0x21692A, 0x1E11A5, 0x12ECDA, 0x110840, 0x3EBC76, 0x387608, 0x3B2652, 0x353BBA, - 0x2A0000, 0x20FFCB, 0x23BFC6, 0x1C1681, 0x16BAF1, 0x0C1213, 0x1358E8, 0x09DCF8, - 0x19924F, 0x141767, 0x0E7C16, 0x0BC28A, 0x0E5A0D, 0x075104, 0x0819B2, 0x04655D, -}, -{ - 0x220000, 0x321B58, 0x380072, 0x4DAD38, 0x2F3318, 0x2CFA2D, 0x4D0556, 0x455122, - 0x264000, 0x1E0D86, 0x3AF2FE, 0x2E50EB, 0x15D9CE, 0x0B23B2, 0x1FE730, 0x1113C7, - 0x3A4E2D, 0x2FFDDF, 0x39788E, 0x2D2795, 0x210EA8, 0x10DA39, 0x1F1A61, 0x0FDB2F, - 0x3A4E2D, 0x50DF33, 0x36F8E1, 0x4C3FA5, 0x30E000, 0x43CAA5, 0x281209, 0x37944D, - 0x3FDBB7, 0x39788E, 0x345AC4, 0x3114C3, 0x1CC044, 0x2B11B4, 0x106A80, 0x16C509, - 0x2891FC, 0x248324, 0x16FB08, 0x14AE97, 0x4C2DFD, 0x448F54, 0x47D31B, 0x40A3F5, - 0x330000, 0x281209, 0x2B68DF, 0x221B53, 0x1B99DB, 0x0EA83B, 0x177E3E, 0x0BFA09, - 0x1F0D17, 0x18658F, 0x1196AE, 0x0E47A8, 0x116D5A, 0x08E260, 0x09D60F, 0x055684, -}, -{ - 0x2C0000, 0x40D818, 0x48790C, 0x6485D0, 0x3D14F2, 0x3A34B2, 0x63AC8D, 0x59B44A, - 0x318000, 0x26E454, 0x4C4986, 0x3BF03F, 0x1C470A, 0x0E6A6E, 0x29494D, 0x161998, - 0x4B743A, 0x3E1B5C, 0x4A5FC7, 0x3A6F75, 0x2AC7AC, 0x15CF1D, 0x284041, 0x148510, - 0x4B743A, 0x68A861, 0x4723F6, 0x62ACB8, 0x3F4000, 0x57BAF3, 0x33DB1A, 0x47ED19, - 0x52A3DE, 0x4A5FC7, 0x43C0C2, 0x3F8448, 0x25350D, 0x37BC8E, 0x153E87, 0x1D7775, - 0x3480B0, 0x2F404C, 0x1DBD56, 0x1AC3D2, 0x6295DE, 0x58B97B, 0x5CF313, 0x53A701, - 0x420000, 0x33DB1A, 0x382D5C, 0x2C235D, 0x23B80D, 0x12F7D4, 0x1E6723, 0x0F7FCF, - 0x282F0E, 0x1F927D, 0x16C2FF, 0x127AD9, 0x168D83, 0x0B7F50, 0x0CBAAA, 0x06E86E, -}, -}; - -static const uint8_t binkb_runbits[64] = { - 6, 6, 6, 6, 6, 6, 6, 6, - 6, 6, 6, 6, 6, 6, 6, 6, - 6, 6, 6, 6, 6, 6, 6, 6, - 6, 6, 6, 6, 6, 6, 6, 6, - 
5, 5, 5, 5, 5, 5, 5, 5, - 5, 5, 5, 5, 5, 5, 5, 5, - 4, 4, 4, 4, 4, 4, 4, 4, - 3, 3, 3, 3, 2, 2, 1, 0, -}; - -static const uint8_t binkb_intra_seed[64] = { - 16, 16, 16, 19, 16, 19, 22, 22, - 22, 22, 26, 24, 26, 22, 22, 27, - 27, 27, 26, 26, 26, 29, 29, 29, - 27, 27, 27, 26, 34, 34, 34, 29, - 29, 29, 27, 27, 37, 34, 34, 32, - 32, 29, 29, 38, 37, 35, 35, 34, - 35, 40, 40, 40, 38, 38, 48, 48, - 46, 46, 58, 56, 56, 69, 69, 83, -}; - -static const uint8_t binkb_inter_seed[64] = { - 16, 17, 17, 18, 18, 18, 19, 19, - 19, 19, 20, 20, 20, 20, 20, 21, - 21, 21, 21, 21, 21, 22, 22, 22, - 22, 22, 22, 22, 23, 23, 23, 23, - 23, 23, 23, 23, 24, 24, 24, 25, - 24, 24, 24, 25, 26, 26, 26, 26, - 25, 27, 27, 27, 27, 27, 28, 28, - 28, 28, 30, 30, 30, 31, 31, 33, -}; - -static const uint8_t binkb_num[16] = { - 1, 4, 5, 2, 7, 8, 3, 7, 4, 9, 5, 6, 7, 8, 9, 10 -}; - -static const uint8_t binkb_den[16] = { - 1, 3, 3, 1, 3, 3, 1, 2, 1, 2, 1, 1, 1, 1, 1, 1 -}; - -#endif /* AVCODEC_BINKDATA_H */ diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dca_syncwords.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dca_syncwords.h deleted file mode 100644 index 649bbd90dcf42dc3a36437e0e828afafe142d7ef..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dca_syncwords.h +++ /dev/null @@ -1,39 +0,0 @@ -/* - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#ifndef AVCODEC_DCA_SYNCWORDS_H -#define AVCODEC_DCA_SYNCWORDS_H - -#define DCA_SYNCWORD_CORE_BE 0x7FFE8001U -#define DCA_SYNCWORD_CORE_LE 0xFE7F0180U -#define DCA_SYNCWORD_CORE_14B_BE 0x1FFFE800U -#define DCA_SYNCWORD_CORE_14B_LE 0xFF1F00E8U -#define DCA_SYNCWORD_XCH 0x5A5A5A5AU -#define DCA_SYNCWORD_XXCH 0x47004A03U -#define DCA_SYNCWORD_X96 0x1D95F262U -#define DCA_SYNCWORD_XBR 0x655E315EU -#define DCA_SYNCWORD_LBR 0x0A801921U -#define DCA_SYNCWORD_XLL 0x41A29547U -#define DCA_SYNCWORD_SUBSTREAM 0x64582025U -#define DCA_SYNCWORD_SUBSTREAM_CORE 0x02B09261U -#define DCA_SYNCWORD_REV1AUX 0x9A1105A0U - -#define DCA_SYNCWORD_XLL_X 0x02000850U -#define DCA_SYNCWORD_XLL_X_IMAX 0xF14000D0U - -#endif /* AVCODEC_DCA_SYNCWORDS_H */ diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dts2pts_bsf.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dts2pts_bsf.c deleted file mode 100644 index 263514faad44cdda85d17e9a93278cf359c3b3d7..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dts2pts_bsf.c +++ /dev/null @@ -1,540 +0,0 @@ -/* - * Copyright (c) 2022 James Almer - * - * This file is part of FFmpeg. 
- * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -/** - * @file - * Derive PTS by reordering DTS from supported streams - */ - -#include "libavutil/avassert.h" -#include "libavutil/fifo.h" -#include "libavutil/tree.h" - -#include "bsf.h" -#include "bsf_internal.h" -#include "cbs.h" -#include "cbs_h264.h" -#include "h264_parse.h" -#include "h264_ps.h" - -typedef struct DTS2PTSNode { - int64_t dts; - int64_t duration; - int poc; - int gop; -} DTS2PTSNode; - -typedef struct DTS2PTSFrame { - AVPacket *pkt; - int poc; - int poc_diff; - int gop; -} DTS2PTSFrame; - -typedef struct DTS2PTSH264Context { - H264POCContext poc; - SPS sps; - int poc_diff; - int last_poc; - int highest_poc; - int picture_structure; -} DTS2PTSH264Context; - -typedef struct DTS2PTSContext { - struct AVTreeNode *root; - AVFifo *fifo; - - // Codec specific function pointers and constants - int (*init)(AVBSFContext *ctx); - int (*filter)(AVBSFContext *ctx); - void (*flush)(AVBSFContext *ctx); - size_t fifo_size; - - CodedBitstreamContext *cbc; - CodedBitstreamFragment au; - - union { - DTS2PTSH264Context h264; - } u; - - int nb_frame; - int gop; - int eof; -} DTS2PTSContext; - -// AVTreeNode callbacks -static int cmp_insert(const void *key, const void *node) -{ - int ret = ((const DTS2PTSNode *)key)->poc - ((const DTS2PTSNode *)node)->poc; - if (!ret) - ret = ((const DTS2PTSNode *)key)->gop - ((const DTS2PTSNode *)node)->gop; - return ret; -} - -static int cmp_find(const void *key, const void *node) -{ - const DTS2PTSFrame * key1 = key; - const DTS2PTSNode *node1 = node; - int ret = FFDIFFSIGN(key1->poc, node1->poc); - if (!ret) - ret = key1->gop - node1->gop; - return ret; -} - -static int dec_poc(void *opaque, void *elem) -{ - DTS2PTSNode *node = elem; - int dec = *(int *)opaque; - node->poc -= dec; - return 0; -} - -static int free_node(void *opaque, void *elem) -{ - DTS2PTSNode *node = elem; - av_free(node); - return 0; -} - -// Shared functions -static int alloc_and_insert_node(AVBSFContext *ctx, int64_t ts, int64_t duration, - int poc, int poc_diff, int gop) -{ - DTS2PTSContext *s = ctx->priv_data; - for (int i = 0; i < poc_diff; i++) { - struct AVTreeNode *node = av_tree_node_alloc(); - DTS2PTSNode *poc_node, *ret; - if (!node) - return AVERROR(ENOMEM); - poc_node = av_malloc(sizeof(*poc_node)); - if (!poc_node) { - av_free(node); - return AVERROR(ENOMEM); - } - if (i && ts != AV_NOPTS_VALUE) - ts += duration / poc_diff; - *poc_node = (DTS2PTSNode) { ts, duration, poc++, gop }; - ret = av_tree_insert(&s->root, poc_node, cmp_insert, &node); - if (ret && ret != poc_node) { - *ret = *poc_node; - av_free(poc_node); - av_free(node); - } - } - return 0; -} - -// H.264 -static const CodedBitstreamUnitType h264_decompose_unit_types[] = { - H264_NAL_SPS, - H264_NAL_PPS, - H264_NAL_IDR_SLICE, - H264_NAL_SLICE, -}; - -static int 
h264_init(AVBSFContext *ctx) -{ - DTS2PTSContext *s = ctx->priv_data; - DTS2PTSH264Context *h264 = &s->u.h264; - - s->cbc->decompose_unit_types = h264_decompose_unit_types; - s->cbc->nb_decompose_unit_types = FF_ARRAY_ELEMS(h264_decompose_unit_types); - - s->nb_frame = -(ctx->par_in->video_delay << 1); - h264->last_poc = h264->highest_poc = INT_MIN; - - return 0; -} - -static int get_mmco_reset(const H264RawSliceHeader *header) -{ - if (header->nal_unit_header.nal_ref_idc == 0 || - !header->adaptive_ref_pic_marking_mode_flag) - return 0; - - for (int i = 0; i < H264_MAX_MMCO_COUNT; i++) { - if (header->mmco[i].memory_management_control_operation == 0) - return 0; - else if (header->mmco[i].memory_management_control_operation == 5) - return 1; - } - - return 0; -} - -static int h264_queue_frame(AVBSFContext *ctx, AVPacket *pkt, int poc, int *queued) -{ - DTS2PTSContext *s = ctx->priv_data; - DTS2PTSH264Context *h264 = &s->u.h264; - DTS2PTSFrame frame; - int poc_diff, ret; - - poc_diff = (h264->picture_structure == 3) + 1; - if (h264->sps.frame_mbs_only_flag && h264->poc_diff) - poc_diff = FFMIN(poc_diff, h264->poc_diff); - if (poc < 0) { - av_tree_enumerate(s->root, &poc_diff, NULL, dec_poc); - s->nb_frame -= poc_diff; - } - // Check if there was a POC reset (Like an IDR slice) - if (s->nb_frame > h264->highest_poc) { - s->nb_frame = 0; - s->gop = (s->gop + 1) % s->fifo_size; - h264->highest_poc = h264->last_poc; - } - - ret = alloc_and_insert_node(ctx, pkt->dts, pkt->duration, s->nb_frame, poc_diff, s->gop); - if (ret < 0) - return ret; - av_log(ctx, AV_LOG_DEBUG, "Queueing frame with POC %d, GOP %d, dts %"PRId64"\n", - poc, s->gop, pkt->dts); - s->nb_frame += poc_diff; - - // Add frame to output FIFO only once - if (*queued) - return 0; - - frame = (DTS2PTSFrame) { pkt, poc, poc_diff, s->gop }; - ret = av_fifo_write(s->fifo, &frame, 1); - av_assert2(ret >= 0); - *queued = 1; - - return 0; -} - -static int h264_filter(AVBSFContext *ctx) -{ - DTS2PTSContext *s = ctx->priv_data; - DTS2PTSH264Context *h264 = &s->u.h264; - CodedBitstreamFragment *au = &s->au; - AVPacket *in; - int output_picture_number = INT_MIN; - int field_poc[2]; - int queued = 0, ret; - - ret = ff_bsf_get_packet(ctx, &in); - if (ret < 0) - return ret; - - ret = ff_cbs_read_packet(s->cbc, au, in); - if (ret < 0) { - av_log(ctx, AV_LOG_WARNING, "Failed to parse access unit.\n"); - goto fail; - } - - for (int i = 0; i < au->nb_units; i++) { - CodedBitstreamUnit *unit = &au->units[i]; - - switch (unit->type) { - case H264_NAL_IDR_SLICE: - h264->poc.prev_frame_num = 0; - h264->poc.prev_frame_num_offset = 0; - h264->poc.prev_poc_msb = - h264->poc.prev_poc_lsb = 0; - // fall-through - case H264_NAL_SLICE: { - const H264RawSlice *slice = unit->content; - const H264RawSliceHeader *header = &slice->header; - const CodedBitstreamH264Context *cbs_h264 = s->cbc->priv_data; - const H264RawSPS *sps = cbs_h264->active_sps; - int got_reset; - - if (!sps) { - av_log(ctx, AV_LOG_ERROR, "No active SPS for a slice\n"); - goto fail; - } - // Initialize the SPS struct with the fields ff_h264_init_poc() cares about - h264->sps.frame_mbs_only_flag = sps->frame_mbs_only_flag; - h264->sps.log2_max_frame_num = sps->log2_max_frame_num_minus4 + 4; - h264->sps.poc_type = sps->pic_order_cnt_type; - h264->sps.log2_max_poc_lsb = sps->log2_max_pic_order_cnt_lsb_minus4 + 4; - h264->sps.offset_for_non_ref_pic = sps->offset_for_non_ref_pic; - h264->sps.offset_for_top_to_bottom_field = sps->offset_for_top_to_bottom_field; - h264->sps.poc_cycle_length = 
sps->num_ref_frames_in_pic_order_cnt_cycle; - for (int i = 0; i < h264->sps.poc_cycle_length; i++) - h264->sps.offset_for_ref_frame[i] = sps->offset_for_ref_frame[i]; - - h264->picture_structure = sps->frame_mbs_only_flag ? 3 : - (header->field_pic_flag ? - header->field_pic_flag + header->bottom_field_flag : 3); - - h264->poc.frame_num = header->frame_num; - h264->poc.poc_lsb = header->pic_order_cnt_lsb; - h264->poc.delta_poc_bottom = header->delta_pic_order_cnt_bottom; - h264->poc.delta_poc[0] = header->delta_pic_order_cnt[0]; - h264->poc.delta_poc[1] = header->delta_pic_order_cnt[1]; - - field_poc[0] = field_poc[1] = INT_MAX; - ret = ff_h264_init_poc(field_poc, &output_picture_number, &h264->sps, - &h264->poc, h264->picture_structure, - header->nal_unit_header.nal_ref_idc); - if (ret < 0) { - av_log(ctx, AV_LOG_ERROR, "ff_h264_init_poc() failure\n"); - goto fail; - } - - got_reset = get_mmco_reset(header); - h264->poc.prev_frame_num = got_reset ? 0 : h264->poc.frame_num; - h264->poc.prev_frame_num_offset = got_reset ? 0 : h264->poc.frame_num_offset; - if (header->nal_unit_header.nal_ref_idc != 0) { - h264->poc.prev_poc_msb = got_reset ? 0 : h264->poc.poc_msb; - if (got_reset) - h264->poc.prev_poc_lsb = h264->picture_structure == 2 ? 0 : field_poc[0]; - else - h264->poc.prev_poc_lsb = h264->poc.poc_lsb; - } - - if (output_picture_number != h264->last_poc) { - if (h264->last_poc != INT_MIN) { - int64_t diff = FFABS(h264->last_poc - (int64_t)output_picture_number); - - if ((output_picture_number < 0) && !h264->last_poc) - h264->poc_diff = 0; - else if (FFABS((int64_t)output_picture_number) < h264->poc_diff) { - diff = FFABS(output_picture_number); - h264->poc_diff = 0; - } - if ((!h264->poc_diff || (h264->poc_diff > diff)) && diff <= INT_MAX) { - h264->poc_diff = diff; - if (h264->poc_diff == 1 && h264->sps.frame_mbs_only_flag) { - av_tree_enumerate(s->root, &h264->poc_diff, NULL, dec_poc); - s->nb_frame -= 2; - } - } - } - h264->last_poc = output_picture_number; - h264->highest_poc = FFMAX(h264->highest_poc, output_picture_number); - - ret = h264_queue_frame(ctx, in, output_picture_number, &queued); - if (ret < 0) - goto fail; - } - break; - } - default: - break; - } - } - - if (output_picture_number == INT_MIN) { - av_log(ctx, AV_LOG_ERROR, "No slices in access unit\n"); - ret = AVERROR_INVALIDDATA; - goto fail; - } - - ret = 0; -fail: - ff_cbs_fragment_reset(au); - if (!queued) - av_packet_free(&in); - - return ret; -} - -static void h264_flush(AVBSFContext *ctx) -{ - DTS2PTSContext *s = ctx->priv_data; - DTS2PTSH264Context *h264 = &s->u.h264; - - memset(&h264->sps, 0, sizeof(h264->sps)); - memset(&h264->poc, 0, sizeof(h264->poc)); - s->nb_frame = -(ctx->par_in->video_delay << 1); - h264->last_poc = h264->highest_poc = INT_MIN; -} - -// Core functions -static const struct { - enum AVCodecID id; - int (*init)(AVBSFContext *ctx); - int (*filter)(AVBSFContext *ctx); - void (*flush)(AVBSFContext *ctx); - size_t fifo_size; -} func_tab[] = { - { AV_CODEC_ID_H264, h264_init, h264_filter, h264_flush, H264_MAX_DPB_FRAMES * 2 * 2 }, -}; - -static int dts2pts_init(AVBSFContext *ctx) -{ - DTS2PTSContext *s = ctx->priv_data; - CodedBitstreamFragment *au = &s->au; - int i, ret; - - for (i = 0; i < FF_ARRAY_ELEMS(func_tab); i++) { - if (func_tab[i].id == ctx->par_in->codec_id) { - s->init = func_tab[i].init; - s->filter = func_tab[i].filter; - s->flush = func_tab[i].flush; - s->fifo_size = func_tab[i].fifo_size; - break; - } - } - if (i == FF_ARRAY_ELEMS(func_tab)) - return AVERROR_BUG; - 
av_assert0(s->filter && s->fifo_size); - - s->fifo = av_fifo_alloc2(s->fifo_size, sizeof(DTS2PTSFrame), 0); - if (!s->fifo) - return AVERROR(ENOMEM); - - ret = ff_cbs_init(&s->cbc, ctx->par_in->codec_id, ctx); - if (ret < 0) - return ret; - - if (s->init) { - ret = s->init(ctx); - if (ret < 0) - return ret; - } - - if (!ctx->par_in->extradata_size) - return 0; - - ret = ff_cbs_read_extradata(s->cbc, au, ctx->par_in); - if (ret < 0) - av_log(ctx, AV_LOG_WARNING, "Failed to parse extradata.\n"); - - ff_cbs_fragment_reset(au); - - return 0; -} - -static int dts2pts_filter(AVBSFContext *ctx, AVPacket *out) -{ - DTS2PTSContext *s = ctx->priv_data; - DTS2PTSNode *poc_node = NULL, *next[2] = { NULL, NULL }; - DTS2PTSFrame frame; - int ret; - - // Fill up the FIFO and POC tree - while (!s->eof && av_fifo_can_write(s->fifo)) { - ret = s->filter(ctx); - if (ret < 0) { - if (ret != AVERROR_EOF) - return ret; - s->eof = 1; - } - } - - if (!av_fifo_can_read(s->fifo)) - return AVERROR_EOF; - - // Fetch a packet from the FIFO - ret = av_fifo_read(s->fifo, &frame, 1); - av_assert2(ret >= 0); - av_packet_move_ref(out, frame.pkt); - av_packet_free(&frame.pkt); - - // Search the timestamp for the requested POC and set PTS - poc_node = av_tree_find(s->root, &frame, cmp_find, (void **)next); - if (!poc_node) { - poc_node = next[1]; - if (!poc_node || poc_node->poc != frame.poc) - poc_node = next[0]; - } - if (poc_node && poc_node->poc == frame.poc) { - out->pts = poc_node->dts; - if (!s->eof) { - // Remove the found entry from the tree - DTS2PTSFrame dup = (DTS2PTSFrame) { NULL, frame.poc + 1, frame.poc_diff, frame.gop }; - for (; dup.poc_diff > 0; dup.poc++, dup.poc_diff--) { - struct AVTreeNode *node = NULL; - if (!poc_node || poc_node->dts != out->pts) - continue; - av_tree_insert(&s->root, poc_node, cmp_insert, &node); - av_free(poc_node); - av_free(node); - poc_node = av_tree_find(s->root, &dup, cmp_find, NULL); - } - } - } else if (s->eof && frame.poc > INT_MIN) { - DTS2PTSFrame dup = (DTS2PTSFrame) { NULL, frame.poc - 1, frame.poc_diff, frame.gop }; - poc_node = av_tree_find(s->root, &dup, cmp_find, NULL); - if (poc_node && poc_node->poc == dup.poc) { - out->pts = poc_node->dts; - if (out->pts != AV_NOPTS_VALUE) - out->pts += poc_node->duration; - ret = alloc_and_insert_node(ctx, out->pts, out->duration, - frame.poc, frame.poc_diff, frame.gop); - if (ret < 0) { - av_packet_unref(out); - return ret; - } - if (!ret) - av_log(ctx, AV_LOG_DEBUG, "Queueing frame for POC %d, GOP %d, dts %"PRId64", " - "generated from POC %d, GOP %d, dts %"PRId64", duration %"PRId64"\n", - frame.poc, frame.gop, out->pts, - poc_node->poc, poc_node->gop, poc_node->dts, poc_node->duration); - } else - av_log(ctx, AV_LOG_WARNING, "No timestamp for POC %d in tree\n", frame.poc); - } else - av_log(ctx, AV_LOG_WARNING, "No timestamp for POC %d in tree\n", frame.poc); - av_log(ctx, AV_LOG_DEBUG, "Returning frame for POC %d, GOP %d, dts %"PRId64", pts %"PRId64"\n", - frame.poc, frame.gop, out->dts, out->pts); - - return 0; -} - -static void dts2pts_flush(AVBSFContext *ctx) -{ - DTS2PTSContext *s = ctx->priv_data; - DTS2PTSFrame frame; - - if (s->flush) - s->flush(ctx); - s->eof = 0; - s->gop = 0; - - while (s->fifo && av_fifo_read(s->fifo, &frame, 1) >= 0) - av_packet_free(&frame.pkt); - - av_tree_enumerate(s->root, NULL, NULL, free_node); - av_tree_destroy(s->root); - s->root = NULL; - - ff_cbs_fragment_reset(&s->au); - if (s->cbc) - ff_cbs_flush(s->cbc); -} - -static void dts2pts_close(AVBSFContext *ctx) -{ - DTS2PTSContext *s = 
ctx->priv_data; - - dts2pts_flush(ctx); - - av_fifo_freep2(&s->fifo); - ff_cbs_fragment_free(&s->au); - ff_cbs_close(&s->cbc); -} - -static const enum AVCodecID dts2pts_codec_ids[] = { - AV_CODEC_ID_H264, - AV_CODEC_ID_NONE, -}; - -const FFBitStreamFilter ff_dts2pts_bsf = { - .p.name = "dts2pts", - .p.codec_ids = dts2pts_codec_ids, - .priv_data_size = sizeof(DTS2PTSContext), - .init = dts2pts_init, - .flush = dts2pts_flush, - .close = dts2pts_close, - .filter = dts2pts_filter, -}; diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/h263dec.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/h263dec.h deleted file mode 100644 index 9f1db7290373004cbdd69d2d7396b17bfe2c3d1f..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/h263dec.h +++ /dev/null @@ -1,64 +0,0 @@ -/* - * H.263 decoder internal header - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ -#ifndef AVCODEC_H263DEC_H -#define AVCODEC_H263DEC_H - -#include "mpegvideo.h" -#include "vlc.h" - -// The defines below define the number of bits that are read at once for -// reading vlc values. Changing these may improve speed and data cache needs -// be aware though that decreasing them may need the number of stages that is -// passed to get_vlc* to be increased. -#define H263_MV_VLC_BITS 9 -#define INTRA_MCBPC_VLC_BITS 6 -#define INTER_MCBPC_VLC_BITS 7 -#define CBPY_VLC_BITS 6 -#define TEX_VLC_BITS 9 - -extern VLC ff_h263_intra_MCBPC_vlc; -extern VLC ff_h263_inter_MCBPC_vlc; -extern VLC ff_h263_cbpy_vlc; -extern VLC ff_h263_mv_vlc; - -extern const enum AVPixelFormat ff_h263_hwaccel_pixfmt_list_420[]; - -int ff_h263_decode_motion(MpegEncContext * s, int pred, int f_code); -int ff_h263_decode_init(AVCodecContext *avctx); -int ff_h263_decode_frame(AVCodecContext *avctx, AVFrame *frame, - int *got_frame, AVPacket *avpkt); -int ff_h263_decode_end(AVCodecContext *avctx); -void ff_h263_decode_init_vlc(void); -int ff_h263_decode_picture_header(MpegEncContext *s); -int ff_h263_decode_gob_header(MpegEncContext *s); -int ff_h263_decode_mba(MpegEncContext *s); - -/** - * Print picture info if FF_DEBUG_PICT_INFO is set. 
- */ -void ff_h263_show_pict_info(MpegEncContext *s); - -int ff_intel_h263_decode_picture_header(MpegEncContext *s); -int ff_h263_decode_mb(MpegEncContext *s, - int16_t block[6][64]); - -int ff_h263_resync(MpegEncContext *s); - -#endif diff --git a/spaces/congsaPfin/Manga-OCR/logs/Brawlhalla Download Now and Join the Millions of Players in the Ultimate Brawl.md b/spaces/congsaPfin/Manga-OCR/logs/Brawlhalla Download Now and Join the Millions of Players in the Ultimate Brawl.md deleted file mode 100644 index 5e76c51c1991b3f5e4df3cbb5bb1b642babc159f..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Brawlhalla Download Now and Join the Millions of Players in the Ultimate Brawl.md +++ /dev/null @@ -1,142 +0,0 @@ -
-

Brawlhalla Download 2022: How to Play the Free-to-Play Platform Fighter

-

If you are looking for a fun, fast-paced, and free fighting game that you can play with your friends online or locally, then you should check out Brawlhalla. This game is a 2D platform fighter that supports up to 8 players in various modes and maps. You can choose from over 50 unique characters, each with their own weapons and abilities, and brawl to prove who's the best in an epic test of strength and skill.

-

In this article, we will tell you everything you need to know about Brawlhalla, including what it is, how to download it for free, how to play online multiplayer, and some tips and tricks to help you improve your game. Let's get started!

-

brawlhalla download 2022


Download Zip > https://urlca.com/2uOg7Y



-

What is Brawlhalla?

-

A brief introduction to the game and its genre

-

Brawlhalla is a free-to-play 2D platform fighting game developed by Blue Mammoth Games and published by Ubisoft. It was released in 2017 for PC, PS4, Xbox One, Nintendo Switch, iOS, and Android devices. It has been praised for its crisp combat mechanics, adorable fighters, streamlined stages, and excellent online play.

-

Brawlhalla belongs to the genre of platform fighters, which are fighting games that take place on platforms that can be moved or destroyed. Unlike traditional fighting games, platform fighters do not have health bars or timers. Instead, the goal is to knock your opponents off the stage by increasing their damage percentage with your attacks. The higher the damage percentage, the farther they fly when hit.

-

The main features and modes of the game

-

Brawlhalla has many features and modes that make it fun and diverse. Some of them are:

-
    -
  • Free-to-play: You can download and play Brawlhalla for free without any pay-to-win advantages or in-game purchases. You can unlock all the characters with in-game currency or buy them with real money if you want.
  • -
  • Cross-play: You can play Brawlhalla with anyone, anywhere, on any platform. You can also use cross-progression to keep your progress and items across different devices.
  • -
  • Up to 8 players: You can play Brawlhalla with up to 8 players online or locally in various modes such as free-for-all, team battle, brawlball, capture the flag, kung foot, horde mode, friendly fire off mode, etc.
  • -
  • 50+ playable legends: You can choose from over 50 unique characters, each with their own weapons and abilities. You can also customize your legend with skins, colors, taunts, KO effects, etc.
  • -
  • Frequent updates: You can enjoy new content and features every week such as new legends, new weapons, new maps, new events, new crossovers with other franchises such as Adventure Time, WWE, Tomb Raider, etc.
  • -
  • Ranked matches: You can compete in ranked matches to climb the ladder and earn rewards such as glory points, avatars, borders, etc.
  • -
• Training mode: You can practice your skills in training mode, where you can adjust settings such as damage percentage and gravity.

    Cross-play and cross-progression support

    -

    One of the best features of Brawlhalla is that it supports cross-play and cross-progression across all platforms. This means that you can play with anyone, anywhere, on any device, and keep your progress and items across different devices.

    -

    Cross-play allows you to join online matches with players from other platforms such as PC, PS4, Xbox One, Nintendo Switch, iOS, and Android. You can also invite your friends from different platforms to join your custom lobby or party. Cross-play is enabled by default, but you can disable it in the settings if you want.

    -

    Cross-progression allows you to link your Brawlhalla account to your Ubisoft account and sync your progress and items across different platforms. This means that you can switch between devices without losing your unlocked legends, skins, colors, taunts, KO effects, etc. You can also access your ranked stats, glory points, gold coins, mammoth coins, etc. on any device. Cross-progression is optional, but you can enable it in the settings if you want.

    -

    How to download Brawlhalla for free?

    -

    The available platforms and devices

    -

    Brawlhalla is available for free on the following platforms and devices:

    -
      -
    • PC: You can download Brawlhalla for free on Steam or the Ubisoft Store. You can play with a keyboard or a controller.
    • -
    • PS4: You can download Brawlhalla for free on the PlayStation Store. You can play with a PS4 controller or a compatible controller.
    • -
    • Xbox One: You can download Brawlhalla for free on the Microsoft Store. You can play with an Xbox One controller or a compatible controller.
    • -
    • Nintendo Switch: You can download Brawlhalla for free on the Nintendo eShop. You can play with a Joy-Con, a Pro Controller, or a compatible controller.
    • -
    • iOS: You can download Brawlhalla for free on the App Store. You can play with touch controls or a compatible controller.
    • -
    • Android: You can download Brawlhalla for free on the Google Play Store. You can play with touch controls or a compatible controller.
    • -
    -

    The steps to download and install the game on each platform

    -

    The steps to download and install Brawlhalla are similar for each platform. Here are the general steps:

    -


    -
      -
    1. Go to the store of your platform: For PC, go to Steam or the Ubisoft Store. For PS4, go to the PlayStation Store. For Xbox One, go to the Microsoft Store. For Nintendo Switch, go to the Nintendo eShop. For iOS, go to the App Store. For Android, go to the Google Play Store.
    2. -
    3. Search for Brawlhalla: Type "Brawlhalla" in the search bar and find the game in the results.
    4. -
    5. Download the game: Click on the "Download" or "Install" button and wait for the game to download and install on your device.
    6. -
    7. Launch the game: Once the game is installed, click on the "Play" or "Launch" button and enjoy!
    8. -
    -

    The system requirements and file size

    -

    Brawlhalla is a lightweight game that does not require high-end hardware or a lot of storage space. Here are the system requirements and file size for each platform:

    - - - - - - - - -
    PlatformSystem RequirementsFile Size
    PC- OS: Windows 7 or later
    - Processor: 2.4 GHz Dual Core
    - Memory: 1 GB RAM
    - Graphics: 512 MB VRAM
    - DirectX: Version 10
    - Network: Broadband Internet connection
    - Storage: 350 MB available space
    About 350 MB
    PS4- OS: PlayStation 4 system software
    - Network: Broadband Internet connection
    - Storage: 500 MB available space
    About 500 MB
    Xbox One- OS: Xbox One system software
    - Network: Broadband Internet connection
    - Storage: 500 MB available space
    About 500 MB
    Nintendo Switch- OS: Nintendo Switch system software
    - Network: Broadband Internet connection
    - Storage: - Storage: 500 MB available space
    About 500 MB
    iOS- OS: iOS 11.0 or later
    - Compatible with iPhone, iPad, and iPod touch
    - Network: Broadband Internet connection
    - Storage: 300 MB available space
    About 300 MB
    Android- OS: Android 5.0 or later
    - Compatible with most Android devices
    - Network: Broadband Internet connection
    - Storage: 300 MB available space
    About 300 MB
    -

    How to play Brawlhalla online multiplayer?

    -

    The online matchmaking and ranking system

    -

    Brawlhalla has a robust online matchmaking and ranking system that allows you to play with other players from around the world. You can join online matches in two ways:

    -
      -
    • Casual matches: You can join casual matches for fun and practice. You can choose from various game modes such as free-for-all, strikeout, experimental, friendly 2v2, etc. You can also create or join custom lobbies where you can set your own rules and invite your friends.
    • -
    • Ranked matches: You can join ranked matches to compete and earn glory points. You can choose from two game modes: 1v1 or 2v2. You can also team up with a friend or a random partner for 2v2 matches. You will be matched with players of similar skill level based on your elo rating. Your elo rating will increase or decrease depending on your wins and losses. You will also earn glory points based on your peak elo rating and number of games played at the end of each season.
    • -
    -

    The different online game modes and options

    -

    Brawlhalla has many online game modes and options that you can choose from depending on your preference and mood. Some of them are:

    -
      -
    • Free-for-all: This is a classic mode where four players fight each other until the time runs out. The player with the most points wins.
    • -
    • Strikeout: This is a mode where you choose three legends and fight in a 1v1 match. Each time you lose a stock, you switch to the next legend in your lineup. The first player to lose all three legends loses.
    • -
    • Experimental: This is a mode where you can test new features and changes that are not yet implemented in the main game. You can play in a 1v1 match with random settings and modifiers.
    • -
    • Friendly 2v2: This is a mode where you can team up with a friend or a random partner and fight against another team of two players in a casual match.
    • -
    • Brawlball: This is a mode where two teams of four players compete to score goals by carrying a ball to the opponent's goal zone. You can attack the ball carrier to make them drop the ball.
    • -
    • Capture the flag: This is a mode where two teams of four players compete to capture the opponent's flag and bring it back to their base. You can attack the flag carrier to make them drop the flag.
    • -
    • Kung foot: This is a mode where two teams of four players compete to score goals by kicking a ball into the opponent's goal. You can also use your weapons and abilities to hit the ball.
    • -
    • Horde mode: This is a mode where you can team up with up to three other players and fight against waves of zombies that try to destroy your base. You can use your weapons and abilities to fend off the zombies.
    • -
    • Friendly fire off mode: This is a mode where you can play any game mode with friendly fire turned off. This means that you cannot hurt your teammates with your attacks.
    • -
    • Custom lobby: This is a mode where you can create or join a custom lobby where you can set your own rules and invite your friends or other players. You can choose from any game mode, map, legend, stock, time, damage, items, etc.
    • -
    -

    The tips and tricks to improve your skills and win matches

    -

    Brawlhalla is a game that requires skill, strategy, and practice to master. Here are some tips and tricks that can help you improve your skills and win matches:

    -
      -
    • Learn the basics: Before you jump into online matches, you should learn the basics of the game such as how to move, jump, dodge, attack, attack, throw, pick up, and use weapons and items. You can also learn the special moves and combos of each legend and weapon. You can practice these skills in training mode or in casual matches.
    • -
    • Choose your legend wisely: Brawlhalla has over 50 playable legends, each with their own strengths and weaknesses. You should choose a legend that suits your playstyle and preference. You can also try different legends and see which ones you like the most. You can also switch between two weapons during a match, so you should learn how to use both of them effectively.
    • -
    • Use your dodge wisely: Dodging is a crucial skill that can help you avoid attacks, escape combos, and reposition yourself. You can dodge in any direction by pressing the dodge button. However, you have a cooldown after each dodge, so you should use it wisely and not spam it. You can also use a spot dodge, which is a dodge that does not move you in any direction, but makes you invulnerable for a brief moment. You can also use a chase dodge, which is a dodge that moves you in the direction of your last attack, and can be used to extend your combos or chase your opponents.
    • -
    • Use your weapons and items effectively: Weapons and items are scattered around the stage and can be picked up by anyone. You can use them to deal damage, knock out your opponents, or disrupt their attacks. You can also throw them at your opponents by pressing the throw button. You should learn how to use each weapon and item effectively and adapt to the situation. You should also be aware of what weapons and items your opponents have and how to counter them.
    • -
    • Use the stage to your advantage: Brawlhalla has various stages that have different layouts, platforms, hazards, and items. You should learn how to use the stage to your advantage and avoid its disadvantages. You should also be aware of where you are on the stage and where your opponents are. You should try to control the center of the stage and avoid being near the edges or corners where you can be easily knocked out.
    • -
    • Play smart and have fun: Brawlhalla is a game that requires smart thinking and quick reactions. You should always be aware of your surroundings, your opponents, your health, your weapons, your items, etc. You should also try to predict your opponents' moves and counter them accordingly. You should also try to mix up your attacks and not be predictable or repetitive. However, you should also remember to have fun and enjoy the game. Brawlhalla is a game that can be played casually or competitively, so you should play it however you like.
    • -
    -

    Conclusion

    -

    A summary of the main points and benefits of playing Brawlhalla

    -

    Brawlhalla is a free-to-play 2D platform fighting game that supports up to 8 players in various modes and maps. You can choose from over 50 unique characters, each with their own weapons and abilities, and brawl to prove who's the best in an epic test of strength and skill.

    -

    Brawlhalla is a game that is easy to learn but hard to master. It has many features and modes that make it fun and diverse. It also supports cross-play and cross-progression across all platforms and devices.

    -

    Brawlhalla is a game that can be played with anyone, anywhere, on any device. It is a game that is suitable for all ages and skill levels. It is a game that is constantly updated with new content and features.

    -

    A call to action to download and play the game now

    -

    If you are interested in playing Brawlhalla, you can download it for free on any platform or device right now. You can also visit the official website or follow the social media accounts for more information and updates.

    -

    What are you waiting for? Download Brawlhalla now and join the millions of players who are already enjoying this awesome game!

    -

    Five unique FAQs about Brawlhalla

    -
      -
    • Q: How many players can play Brawlhalla online or locally?
      A: Brawlhalla supports up to 8 players online or locally in various modes and maps.
    • -
    • Q: How do I unlock new legends in Brawlhalla?
      A: You can unlock new legends in Brawlhalla by using gold coins or mammoth coins. Gold coins are earned by playing matches or completing missions. Mammoth coins are bought with real money or earned by participating in events or tournaments.
    • -
    • Q: How do I link my Brawlhalla account to my Ubisoft account?
A: You can link your Brawlhalla account to your Ubisoft account by following these steps: go to the settings menu in the game and select "Account Linking", choose the platform that you want to link to your Ubisoft account, follow the on-screen instructions to sign in to your Ubisoft account or create a new one, and confirm the linking process to enjoy the benefits of cross-progression.
    • -
    • Q: How do I change my name or avatar in Brawlhalla?
      A: You can change your name or avatar in Brawlhalla by following these steps: - Go to the store menu in the game and select "Avatars" or "Name Change". - Choose the avatar or name that you want to use or buy with gold coins or mammoth coins. - Confirm your purchase and enjoy your new look.
    • -
    • Q: How do I report a bug or a player in Brawlhalla?
      A: You can report a bug or a player in Brawlhalla by following these steps: - Go to the support menu in the game and select "Report Bug" or "Report Player". - Fill out the form with the details of the bug or the player that you want to report. - Submit your report and wait for a response from the developers or moderators.
    • -
    • Q: How do I join or create a clan in Brawlhalla?
      A: You can join or create a clan in Brawlhalla by following these steps: - Go to the social menu in the game and select "Clans". - Choose to join an existing clan or create a new one with your friends. - Invite or accept other players to join your clan and enjoy the benefits of clan chat, clan missions, clan rewards, etc.
    • -

    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download APK99 APK OBB - No Ads No Malware No Hassle.md b/spaces/congsaPfin/Manga-OCR/logs/Download APK99 APK OBB - No Ads No Malware No Hassle.md deleted file mode 100644 index 01c8d4c886649180cbb2012c9a2f447cd467ef83..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Download APK99 APK OBB - No Ads No Malware No Hassle.md +++ /dev/null @@ -1,85 +0,0 @@ -
    -

    APK99 Download APK: How to Get the Best Android Apps for Free

    -

    Are you looking for a way to download Android apps and games for free? Do you want to access apps and games that are not available on Google Play Store? Do you want to enjoy the latest versions of apps and games without waiting for updates? If you answered yes to any of these questions, then you should try APK99.

    -

    What is APK99?

    -

    APK99 is a website that offers free downloads of Android apps and games. You can find thousands of apps and games in various categories, such as action, adventure, arcade, puzzle, racing, simulation, sports, strategy, and more. You can also find popular apps and games that are not available on Google Play Store, such as Fortnite, PUBG Mobile, Minecraft, GTA San Andreas, etc.

    -

    apk99 download apk


    Download »»» https://urlca.com/2uOcyE



    -

    APK99 is safe, fast, and easy to use. All the apps and games on APK99 are scanned for viruses and malware before being uploaded. You can download them with high speed and without any interruptions. You can also use APK99 without signing up or registering. You just need to visit the website, search for the app or game you want, and click on the download button.

    -

    Why should you use APK99?

    -

    There are many reasons why you should use APK99 to download Android apps and games. Here are some of them:

    -

    -
      -
    • APK99 lets you access apps and games that are not available on Google Play Store. Some apps and games are banned or restricted in certain regions or countries due to legal or political reasons. Some apps and games are exclusive to certain devices or platforms. Some apps and games are removed from Google Play Store due to policy violations or other issues. With APK99, you can download these apps and games without any hassle.
    • -
    • APK99 lets you download the latest versions of apps and games without waiting for updates. Sometimes, app developers take a long time to release updates or fix bugs on Google Play Store. Sometimes, app updates are rolled out gradually or selectively to certain users or regions. With APK99, you can download the latest versions of apps and games as soon as they are released by the developers.
    • -
    • APK99 lets you save money by downloading paid apps and games for free. Some apps and games require you to pay a fee or make in-app purchases to unlock their full features or content. With APK99, you can download these apps and games for free without spending any money.
    • -
    -

    How to download and install APK99?

    -

    Downloading and installing APK99 is very simple and straightforward. Just follow these steps:

    -
      -
1. Step 1: Go to the official website of APK99 and search for the app or game you want to download. You can use the search bar or browse the categories or genres on the homepage.
    2. -
    3. Step 2: Click on the download button and wait for the APK file to be downloaded on your device. You can see the progress of the download on the notification bar.
    4. -
    5. Step 3: Enable unknown sources on your device settings and install the APK file. To do this, go to Settings > Security > Unknown Sources and toggle it on. Then, locate the APK file on your device storage and tap on it to install it.
    6. -
    -

    Congratulations! You have successfully downloaded and installed APK99 on your device. You can now enjoy the best Android apps and games for free.
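A note on Step 3: on Android 8.0 and newer, the "unknown sources" switch is granted per app rather than globally, so the app you open the APK with (your browser or file manager) needs the "install unknown apps" permission. The Kotlin sketch below is only a minimal illustration of how an Android app can check for that permission and send the user to the right settings screen; it is a generic example and not part of APK99 itself.

```kotlin
import android.app.Activity
import android.content.Intent
import android.net.Uri
import android.os.Build
import android.provider.Settings

// Minimal sketch: check whether this app may install APKs and,
// if not, open the "Install unknown apps" settings page for it.
fun ensureInstallPermission(activity: Activity) {
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
        val allowed = activity.packageManager.canRequestPackageInstalls()
        if (!allowed) {
            val intent = Intent(
                Settings.ACTION_MANAGE_UNKNOWN_APP_SOURCES,
                Uri.parse("package:${activity.packageName}")
            )
            activity.startActivity(intent)
        }
    }
    // On Android 7.1 and older, the global "Unknown sources" toggle
    // in Settings > Security controls sideloading instead.
}
```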

    -

    Conclusion

    -

    APK99 is a great source of free Android apps and games that you can download and enjoy on your device. APK99 is reliable, secure, and user-friendly, and it offers a wide range of apps and games in different genres. APK99 is easy to download and install, and it lets you get the most out of your Android device.

    -

    If you are looking for a way to download Android apps and games for free, you should try APK99. You will not regret it. APK99 is the ultimate destination for Android lovers.

    -

    FAQs

    -
      -
    • Q: Is APK99 legal?
    • -
    • A: APK99 is legal as long as you use it for personal and non-commercial purposes. However, some apps and games may have their own terms and conditions that you should respect.
    • -
    • Q: Is APK99 safe?
    • -
    • A: APK99 is safe as it scans all the apps and games for viruses and malware before uploading them. However, you should always be careful when downloading and installing any files from the internet.
    • -
    • Q: How often does APK99 update its apps and games?
    • -
    • A: APK99 updates its apps and games regularly as soon as the developers release new versions or patches. You can always find the latest versions of apps and games on APK99.
    • -
    • Q: How can I contact APK99?
    • -
    • A: You can contact APK99 by visiting their website and filling out the contact form. You can also follow them on their social media accounts to get the latest news and updates.
    • -
    • Q: What are some alternatives to APK99?
    • -
    • A: Some alternatives to APK99 are Aptoide, APKPure, Uptodown, ACMarket, etc. However, none of them can match the quality and quantity of apps and games that APK99 offers.
    • -

    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download Project Drift 2.0 MOD APK v50 and Race with the Most Classic Cars Ever.md b/spaces/congsaPfin/Manga-OCR/logs/Download Project Drift 2.0 MOD APK v50 and Race with the Most Classic Cars Ever.md deleted file mode 100644 index f5e915cd514039a2894587c0ae0d715936bc0924..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Download Project Drift 2.0 MOD APK v50 and Race with the Most Classic Cars Ever.md +++ /dev/null @@ -1,91 +0,0 @@ - -

    Download Project Drift 2.0 Mod APK v50: The Ultimate Car Racing Game

    -

    If you are a fan of car racing games, you must have heard of Project Drift 2.0, one of the most realistic and thrilling drifting games on Android. In this game, you can experience the adrenaline rush of drifting on various tracks with different cars, customize your own vehicles, compete with other players online, and challenge yourself with various game modes. But what if you want to enjoy the game without any limitations or interruptions? Well, you can do that by downloading Project Drift 2.0 Mod APK v50, which gives you unlimited money, coins, cars, tracks, and more. In this article, we will tell you everything you need to know about this mod apk, including its features, benefits, and installation process.

    -

    What is Project Drift 2.0?

    -

    Project Drift 2.0 is a car racing game developed by Bycodec Games, a Turkish studio that specializes in creating realistic and immersive driving games. Project Drift 2.0 is the sequel to the popular Project Drift game, which has over 10 million downloads on Google Play Store. Project Drift 2.0 takes the drifting experience to a whole new level with improved graphics, physics, controls, and gameplay.

    -

    download project drift 2.0 mod apk v50


    DOWNLOAD ——— https://urlca.com/2uOaN8



    -

    Features of Project Drift 2.0

    -

    Project Drift 2.0 has many features that make it one of the best drifting games on Android. Here are some of them:

    -

    - Realistic physics and graphics

    -

    The game uses a realistic physics engine that simulates the behavior of different cars and tracks. You can feel the weight, speed, traction, and inertia of your car as you drift around corners, perform stunts, and crash into obstacles. The game also has stunning graphics that create a realistic and immersive environment for drifting. You can see the details of your car, the smoke from your tires, the reflections on the windows, and the shadows on the ground.

    -

    - Customizable cars and tracks

    -

    The game allows you to customize your own cars and tracks according to your preferences. You can choose from over 40 different cars, each with its own characteristics and performance. You can also modify your car's color, wheels, engine, suspension, brakes, exhaust, and more. You can also create your own tracks using the track editor feature. You can design your own layout, add ramps, curves, obstacles, and decorations.

    -

    - Multiple game modes and challenges

    -

    The game offers various game modes and challenges to test your skills and have fun. You can play in free mode, where you can drift freely on any track without any time or score limit. You can also play in career mode, where you have to complete different missions and objectives to earn money and unlock new cars and tracks. You can also play in challenge mode, where you have to perform specific tasks or stunts within a given time or score limit.

    -

    - Online multiplayer and leaderboards

    -

    The game also supports online multiplayer mode, where you can race against other players from around the world in real-time. You can join or create rooms with different settings and rules, such as track selection, car selection, time limit, score limit, etc. You can also chat with other players and make friends. You can also check the leaderboards and see how you rank among other players in terms of score, time, drift angle, etc.

    -

    Why download Project Drift 2.0 Mod APK v50?

    -

    Project Drift 2.0 is a great game, but it also has some limitations and drawbacks. For example, you have to earn money and coins by playing the game or watching ads to unlock new cars and tracks. You also have to deal with annoying ads that pop up every now and then. Moreover, some of the features of the game require root access, which may not be available or safe for your device. That's why you may want to download Project Drift 2.0 Mod APK v50, which is a modified version of the game that gives you many advantages and benefits. Here are some of them:

    -

    - Unlimited money and coins

    -

    With Project Drift 2.0 Mod APK v50, you don't have to worry about earning money and coins anymore. You will get unlimited money and coins in your account as soon as you start the game. You can use them to buy any car or track you want without any restriction or limitation.

    -

    - All cars and tracks unlocked

    -

    With Project Drift 2.0 Mod APK v50, you don't have to wait or work hard to unlock new cars and tracks. You will get access to all the cars and tracks in the game from the beginning. You can choose from over 40 different cars and over 20 different tracks to enjoy the game to the fullest.

    -

    -

    - No ads and no root required

    -

    With Project Drift 2.0 Mod APK v50, you don't have to deal with annoying ads that interrupt your gameplay or waste your time. You will get a smooth and ad-free gaming experience with this mod apk. Moreover, you don't need to root your device to use this mod apk. You can install it on any Android device without any risk or hassle.

    -

    How to download and install Project Drift 2.0 Mod APK v50?

    -

    Downloading and installing Project Drift 2.0 Mod APK v50 is very easy and simple. Just follow these steps:

    -

    - Step 1: Download the mod apk file from a trusted source

    -

    The first thing you need to do is to download the mod apk file from a reliable and secure source. You can use the link below to download the file directly to your device.

    -

    Download Project Drift 2.0 Mod APK v50

    -

    - Step 2: Enable unknown sources on your device settings

    -

    The next thing you need to do is to enable unknown sources on your device settings. This will allow you to install apps from sources other than Google Play Store. To do this, go to your device settings, then security, then unknown sources, and turn it on.

    -

    - Step 3: Install the mod apk file and launch the game

    -

    The final thing you need to do is to install the mod apk file and launch the game. To do this, locate the downloaded file on your device storage, tap on it, and follow the instructions on the screen. Once the installation is complete, open the game and enjoy!

    -

    Conclusion

    -

    Project Drift 2.0 is one of the best drifting games on Android that offers realistic physics, graphics, controls, and gameplay. You can customize your own cars and tracks, play various game modes and challenges, compete with other players online, and have fun drifting on different tracks with different cars. However, if you want to enjoy the game without any limitations or interruptions, you should download Project Drift 2.0 Mod APK v50, which gives you unlimited money, coins, cars, tracks, no ads, no root required, and more. Just follow the steps above to download and install this mod apk on your device and start drifting like a pro!

    -

    FAQs

    -

    Here are some frequently asked questions about Project Drift 2.0 Mod APK v50:

    -
      -
    • Is Project Drift 2.0 Mod APK v50 safe to use?
    • -

      Yes, Project Drift 2.0 Mod APK v50 is safe to use as long as you download it from a trusted source like ours. We have tested this mod apk on various devices and found no malware or virus in it.

      -
    • Is Project Drift 2.0 Mod APK v50 compatible with my device?
    • -

      Yes, Project Drift 2.0 Mod APK v50 is compatible with most Android devices that run on Android 4.4 or higher. However, some devices may not support the game or the mod apk due to hardware or software limitations.

      -
    • Will Project Drift 2.0 Mod APK v50 affect my original game data?
    • -

      No, Project Drift 2.0 Mod APK v50 will not affect your original game data as it is a separate app that runs independently from the original game. You can have both apps installed on your device and switch between them as you wish.

      -
    • Can I update Project Drift 2.0 Mod APK v50 to the latest version?
    • -

      Yes, you can update Project Drift 2.0 Mod APK v50 to the latest version as long as the mod apk is updated by the developer. However, you may lose some of the mod features or face some compatibility issues if you update the mod apk without checking its compatibility with the original game.

      -
    • Can I play Project Drift 2.0 Mod APK v50 offline?
    • -

      Yes, you can play Project Drift 2.0 Mod APK v50 offline as it does not require an internet connection to run. However, you will not be able to access some of the online features such as multiplayer mode, leaderboards, etc.

      -

    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Experience the Life of a Car Dealer in Car Dealership Simulator APK.md b/spaces/congsaPfin/Manga-OCR/logs/Experience the Life of a Car Dealer in Car Dealership Simulator APK.md deleted file mode 100644 index a3dd24a30d0bcdd47fc8f33730ebec8abe98f820..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Experience the Life of a Car Dealer in Car Dealership Simulator APK.md +++ /dev/null @@ -1,86 +0,0 @@ -
    -

    Car Dealership Simulator Download APK: How to Run Your Own Car Business on Your Phone

    -

    Have you ever dreamed of owning your own car dealership and selling various cars to different customers? Do you want to experience the thrill and challenge of running a car business on your phone? If you answered yes, then you might want to check out Car Dealership Simulator, a simulation game that lets you buy, price, clean, and sell cars in a realistic and detailed way. In this article, we will tell you everything you need to know about Car Dealership Simulator, how to download and install its APK file, how to play it, and some tips and tricks to help you become the best car dealer around.

    -

    What is Car Dealership Simulator?

    -

    Car Dealership Simulator is a simulation game that was developed by Quadfix Games and released on Steam in May 2023. It is an extremely comprehensive and detailed car dealership business simulation game that contains detailed car buying and selling mechanics. You can choose from over 25 cars to unlock and purchase, and sell them to various customers with different pricing and preferences. You can also upgrade, improve, and keep your car dealership clean and earn the respect of your customers. You can also earn money with other job opportunities, such as car diagnostics, photography, marketing, and more. Car Dealership Simulator is a game that will test your business skills, negotiation skills, and customer service skills as you try to build the car buying and selling empire of your dreams.

    -

    car dealership simulator download apk


    Download File ⚙⚙⚙ https://urlca.com/2uOgi9



    -

    A realistic and comprehensive simulation game

    -

    One of the most impressive features of Car Dealership Simulator is the level of detail and realism it offers. The game mechanics are designed to simulate the real-life challenges and opportunities that come with running a car dealership, from managing inventory to negotiating with customers. You will have to deal with various aspects of the car business, such as diagnosis, valuation, marketing, positioning, presentation, pricing, cleaning, repairing, advertising, hiring, upgrading, and more. You will also have to watch out for angry customers, unexpected issues, market fluctuations, competitors, and other factors that can affect your sales and profits. The game also features realistic graphics, sounds, physics, animations, weather effects, day-night cycles, and more that will make you feel like you are actually running a car dealership.

    -

    A variety of cars and customers to deal with

    -

    Another feature that makes Car Dealership Simulator stand out is the variety of cars and customers that you will encounter in the game. You can choose from over 25 cars to buy and sell in your dealership, ranging from cheap sedans to expensive sports cars. Each car has its own specifications, condition, history, flaws, imperfections, advantages, disadvantages, value, demand, popularity, and more that you will have to consider when buying or selling them. You will also have to deal with different types of customers who have different budgets, preferences, expectations, personalities, moods, bargaining skills, patience levels, satisfaction levels, loyalty levels and more that will affect how you interact with them and how much you can sell them. You will have to use your skills and intuition to find the best car for each customer, offer them the best price, and convince them to buy from you. You will also have to deal with customer feedback, reviews, ratings, referrals, complaints, and more that will affect your reputation and sales. Car Dealership Simulator is a game that will challenge you to adapt to different situations and scenarios and make the best decisions for your business.

    -

    A chance to upgrade and expand your dealership

    -

    As you progress in the game, you will also have the opportunity to upgrade and expand your dealership and make it more attractive and profitable. You can buy new equipment, tools, furniture, decorations, and more that will improve the appearance and functionality of your dealership. You can also hire new staff, such as mechanics, cleaners, salespeople, managers, and more that will help you run your business more efficiently and effectively. You can also buy new land, buildings, parking lots, garages, showrooms, and more that will increase your capacity and variety of cars. You can also unlock new features, modes, achievements, rewards, and more that will enhance your gameplay experience. Car Dealership Simulator is a game that will reward you for your hard work and dedication and allow you to grow your business as you see fit.

    -

    How to Download and Install Car Dealership Simulator APK?

    -

    If you are interested in playing Car Dealership Simulator on your phone, you might be wondering how to download and install its APK file. An APK file is an Android application package file that contains all the files and data needed to run an app on an Android device. By downloading and installing an APK file, you can access apps that are not available on the Google Play Store or bypass some restrictions or limitations imposed by the official app store. However, before you download and install an APK file, there are some steps and precautions that you need to take. Here are the steps to download and install Car Dealership Simulator APK on your phone:

    -

    The steps to download the APK file from a trusted source

    -

The first step is to find a reliable and reputable source that offers the Car Dealership Simulator APK file for download. There are many websites that claim to provide APK files for various apps, but not all of them are safe and trustworthy. Some of them may contain malware, viruses, spyware, adware, or other harmful software that can damage your device or compromise your privacy and security. Therefore, you need to be careful and cautious when choosing a website to download the APK file from. You can use some criteria to evaluate the credibility of a website, such as its design, content quality, user reviews, ratings, feedback, comments, reputation, popularity, domain age, SSL certificate, and the type of content it provides. You can also use tools like Website Checker or Website Credibility Checker to analyze the trustworthiness and performance of a website. Once you find a reliable website that offers the Car Dealership Simulator APK file, you can proceed to download it by clicking on the download link or button. You may need to confirm the download and accept any warnings or permissions that pop up. The APK file will be saved in your device's download folder or any other location that you specify.
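If the download page happens to publish a checksum for the APK (not every site does), you can also verify that the file you saved matches it before installing anything. Here is a minimal Python sketch of that check; the file name and the expected hash below are placeholders, not values taken from any real download page:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholders -- replace with your downloaded file name and the checksum
# published by the download page, if one is provided.
apk_path = "car-dealership-simulator.apk"
expected = "paste-the-published-sha256-value-here"

actual = sha256_of(apk_path)
print("SHA-256:", actual)
print("File matches the published checksum." if actual == expected
      else "Checksum mismatch -- do not install this file.")
```

If the two values do not match, the safest move is to delete the file and download it again from a source you trust.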

    The requirements and permissions for installing the APK file

    -

    The next step is to check if your device meets the requirements and permissions for installing the APK file. The Car Dealership Simulator APK file requires Android 4.4 or higher and about 100 MB of free storage space on your device. You also need to enable the installation of apps from unknown sources, which is a security setting that prevents unauthorized or harmful apps from being installed on your device. To enable this setting, you can go to your device's settings, tap on security or privacy, and look for the option that says "allow installation of apps from unknown sources" or something similar. You may need to select the specific app that you want to allow, such as your file manager or browser app. Once you enable this setting, you can proceed to install the APK file.
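As a quick sanity check before you install, you can confirm that the storage holding the download really has the roughly 100 MB the game needs. This is just an illustrative Python sketch (run it from a desktop over the folder where you saved the file, or from a terminal app on the phone); the path is an assumption you should adjust:

```python
import shutil

REQUIRED_MB = 100      # approximate space the game needs, per the requirements above
download_dir = "."     # placeholder: point this at the folder holding the APK

free_mb = shutil.disk_usage(download_dir).free / (1024 * 1024)

if free_mb >= REQUIRED_MB:
    print(f"OK: about {free_mb:.0f} MB free, enough for the ~{REQUIRED_MB} MB install.")
else:
    print(f"Only about {free_mb:.0f} MB free -- clear some space before installing.")
```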

    The benefits and risks of using an APK file

    -

    Before you install the APK file, you should also be aware of the benefits and risks of using an APK file. The benefits of using an APK file are that you can access apps that are not available on the Google Play Store, such as Car Dealership Simulator, or get the latest updates and features before they are officially released. You can also bypass some restrictions or limitations imposed by the official app store, such as regional availability, device compatibility, or payment methods. However, the risks of using an APK file are that you may expose your device to malware, viruses, spyware, adware, or other harmful software that can damage your device or compromise your privacy and security. You may also violate some terms and conditions of the app developer or publisher, which may result in legal consequences or loss of support. Therefore, you should always use caution and discretion when downloading and installing APK files from unknown sources, and only do so at your own risk.

    How to Play Car Dealership Simulator?

    -

    Once you have successfully downloaded and installed the Car Dealership Simulator APK file on your device, you can start playing the game and enjoy its features and updates. Here are some basic instructions on how to play Car Dealership Simulator:

    -

    The basic gameplay mechanics and controls

    -

The basic gameplay mechanics and controls of Car Dealership Simulator are simple and intuitive. You can use the touch screen to navigate through the game menus and options, as well as interact with the cars and customers in your dealership. The buttons at the bottom of the screen give you access to different functions and modes, such as buying cars, selling cars, cleaning cars, repairing cars, upgrading cars, hiring staff, managing finances, checking statistics, and more. The icons at the top of the screen show your current money balance, reputation level, customer satisfaction level, inventory level, staff level, and time. You can also swipe left or right to switch between different views of your dealership, such as the exterior, the interior, the garage, the showroom, and the office, and zoom in or out to see more details or a wider perspective. The game also features a tutorial mode that will guide you through the basics and explain its various functions and features.

    The tips and tricks to succeed in the game

    -

Car Dealership Simulator is a game that requires strategy, skill, and patience to succeed. You will have to make smart decisions and manage your resources wisely to grow your business and earn more money and reputation. Here are some tips and tricks that can help you become a better car dealer in the game:
• Buy low and sell high. This is the basic principle of any business, and it applies to Car Dealership Simulator as well. Always try to buy cars for less than their market value and sell them for more than you paid. Use the diagnosis tool to check the condition and value of a car before buying it, and the valuation tool to check the demand and popularity of a car before selling it. You can also use the negotiation skill to bargain with sellers and buyers and get a better deal.
• Clean and repair your cars. Another way to increase the value and appeal of your cars is to clean and repair them before selling them. Use the cleaning tool to remove dirt, dust, stains, scratches, and other imperfections, and the repair tool to fix any mechanical or electrical issues or damage. You can also use the upgrade tool to improve the performance and appearance of your cars by adding new parts, accessories, paint jobs, decals, and more. Cleaning, repairing, and upgrading your cars will not only increase their price, but also your customers' satisfaction.
• Know your customers. One of the most important aspects of running a car dealership is knowing your customers and their needs and wants. Pay attention to the details and preferences of each customer, such as their budget, personality, mood, expectations, satisfaction, and loyalty. Try to match them with the best car based on their criteria, such as size, type, color, brand, model, year, condition, and features. Use your communication skills to build rapport: greet them politely, answer their questions honestly, offer discounts or incentives, give compliments or suggestions, and thank them for their purchase. Knowing your customers and satisfying them will not only increase your sales and profits, but also your reputation and referrals.
• Manage your finances and resources. Running a car dealership is not only about buying and selling cars, but also about managing your finances and resources. Keep track of your income and expenses, such as the cost of buying, cleaning, repairing, upgrading, advertising, and maintaining your cars, as well as the revenue from selling them. Keep track of your inventory and staff levels, such as the number of cars in stock, the space in your dealership, the number of staff you have hired, and the salaries you have to pay. Monitor your statistics and performance, such as the number of customers served, the number of cars sold, the average price and profit per car, customer satisfaction and loyalty, and your reputation level. Plan ahead and set goals and budgets for your business: how much money you want to make, how many cars you want to buy or sell, how much you want to invest in your dealership, and how much you want to grow. Managing your finances and resources will help you optimize your business and avoid financial or operational problems.

    The features and updates of the game

    -

Car Dealership Simulator is constantly being updated and improved by its developers, and it offers many features that make it more enjoyable and realistic. Some of them are:
• A realistic economy system that changes based on supply and demand, market trends, seasons, events, and more.
• A dynamic weather system that affects the appearance and condition of your cars, as well as the mood and behavior of your customers.
• A day-night cycle that affects the visibility and activity of your dealership.
• A sandbox mode that allows you to play with unlimited money and resources.
• A multiplayer mode that allows you to play with or against other players online.
• A customization mode that allows you to design and decorate your own dealership.
• A photo mode that allows you to take pictures of your cars and share them with other players.
• A leaderboard system that allows you to compare your performance and achievements with other players.
• A feedback system that allows you to rate and review the game and suggest new features or improvements.

    Conclusion

    -

Car Dealership Simulator is a simulation game that lets you run your own car dealership business on your phone. You can buy, price, clean, repair, upgrade, advertise, and sell cars to various customers with different needs and wants. You can also upgrade and expand your dealership with new equipment, tools, staff, buildings, land, and more, while enjoying the realistic graphics, sounds, physics, animations, weather effects, day-night cycles, and other touches the game offers. You can also download and install the game's APK file from a trusted source and enjoy the latest updates and features. Car Dealership Simulator is a game that will test your business, negotiation, and customer service skills as you try to build the car buying and selling empire of your dreams.

    -

    FAQs

    -

    Here are some frequently asked questions about Car Dealership Simulator and its APK file:

    -


    -

    Q: Is Car Dealership Simulator free to play?

    -

    A: Yes, Car Dealership Simulator is free to play on Steam. However, you may need to pay for some in-game items or features, such as premium cars, upgrades, or currency. You can also download and install the game's APK file for free from a trusted source, but you may encounter some ads or limitations in the game.

    -

    Q: Is Car Dealership Simulator safe to download and install?

    -

    A: Car Dealership Simulator is safe to download and install from Steam, as it is verified and approved by the official app store. However, if you download and install the game's APK file from an unknown source, you may expose your device to malware, viruses, spyware, adware, or other harmful software that can damage your device or compromise your privacy and security. Therefore, you should always use caution and discretion when downloading and installing APK files from unknown sources, and only do so at your own risk.

    -

    Q: How can I update Car Dealership Simulator?

    -

    A: If you download and install Car Dealership Simulator from Steam, you can update the game automatically or manually through the app store. However, if you download and install the game's APK file from an unknown source, you may not be able to update the game through the app store. Instead, you may need to download and install the latest version of the APK file from the same or another trusted source. You may also need to uninstall the previous version of the APK file before installing the new one.

    -

    Q: How can I contact the developers of Car Dealership Simulator?

    -

    A: If you have any questions, feedbacks, suggestions, or issues about Car Dealership Simulator, you can contact the developers of the game through their email address: quadfixgames@gmail.com. You can also follow them on their social media accounts: Facebook, Twitter, Instagram, YouTube, and Discord. You can also visit their website for more information about their games and projects.

    -

    Q: What are some similar games to Car Dealership Simulator?

    -

A: If you enjoy playing Car Dealership Simulator, you may also like some similar games that involve car buying and selling mechanics, such as:
• Car Mechanic Simulator 2021: A simulation game that lets you repair, paint, tune, and drive cars in a realistic way.
• Car Trader Simulator: A simulation game that lets you buy damaged cars, renovate them, and sell them for a profit.
• Dealer's Life 2: A simulation game that lets you run your own pawn shop and deal with various items and customers.
• Junkyard Tycoon: A simulation game that lets you manage your own junkyard and recycle scrap metal into valuable products.
• Motor Depot: A simulation game that lets you drive various vehicles in an open world environment.

    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Game Turbo Blue APK A guide to the best games and tips for optimal performance.md b/spaces/congsaPfin/Manga-OCR/logs/Game Turbo Blue APK A guide to the best games and tips for optimal performance.md deleted file mode 100644 index cf5ab1e64631af94bf68d0ceb4f5d373e0ce3ad1..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Game Turbo Blue APK A guide to the best games and tips for optimal performance.md +++ /dev/null @@ -1,38 +0,0 @@ -
    -

    Game Turbo Blue APK: What Is It and How to Download It

    -

    If you are a gamer who loves playing high-end games on your Android device, you might have heard of game turbo blue apk. This is a modified version of the original game turbo app that comes pre-installed on Xiaomi devices. It is designed to enhance your gaming performance by optimizing your device's resources, graphics, and network. In this article, we will tell you what game turbo blue apk is, how it works, how to download and install it on your device, how to use it to boost your gaming experience, and what are its pros and cons.

    -

    Introduction

    -

    Game turbo blue apk is an app that allows you to improve your gaming performance on your Android device. It does so by adjusting your device's settings, such as CPU, GPU, RAM, battery, network, etc., according to the requirements of each game. It also provides you with a variety of features, such as in-game toolbox, shortcuts, screen recording, screenshot, floating window, etc., that make your gaming more convenient and enjoyable.

    -

    game turbo blue apk


    Download File » https://urlca.com/2uO6X2



    -

    Game turbo blue apk is based on the original game turbo app that comes built-in on Xiaomi devices. However, it has some additional features and improvements that make it more powerful and versatile. For example, it supports more games, has more customization options, has a better user interface, etc. It also works on non-Xiaomi devices, so you can use it on any Android device that runs Android 6.0 or higher.

    -

    How to Download and Install Game Turbo Blue APK on Your Android Device

    -

    If you want to try out game turbo blue apk on your device, you need to download and install it manually. Here are the steps you need to follow:

    -
      -
1. Go to this link and download the latest version of game turbo blue apk.
2. Once the download is complete, go to your device's settings and enable the installation of apps from unknown sources.
3. Locate the downloaded file in your file manager and tap on it to start the installation process.
4. Follow the on-screen instructions and grant the necessary permissions to the app.
5. Wait for the installation to finish and then launch the app from your app drawer or home screen.
    -

    Tips and warnings:

    -
      -
• Before installing game turbo blue apk, make sure you have enough storage space on your device.
• Make sure you have a stable internet connection while downloading and installing the app.
• Be careful when downloading game turbo blue apk from third-party sources, as some of them may contain malware or viruses. Always use a trusted source like APKCombo.
• Game turbo blue apk may not be compatible with some devices or games. If you encounter any problems or errors while using the app, try uninstalling it and reinstalling it again.
    -

    How to Use Game Turbo Blue APK to Boost Your Gaming Performance

    -

Once you have installed game turbo blue apk on your device, you can open the app, add your games to it, and launch them from there to apply its performance optimizations and in-game toolbox.

    Q3: Does game turbo blue apk work on all Android devices and games?

    -

    A3: Game turbo blue apk works on any Android device that runs Android 6.0 or higher. However, it may not be compatible with some devices or games. If you encounter any problems or errors while using game turbo blue apk, try uninstalling it and reinstalling it again. You can also contact the developers of game turbo blue apk for support and feedback.

    -

    Q4: How can I update game turbo blue apk to the latest version?

    -

    A4: Game turbo blue apk may not be updated regularly or supported by the developers. However, you can check for updates manually by visiting the APKCombo website and downloading the latest version of game turbo blue apk. You can then install it over the existing version on your device.

    -

    Q5: How can I uninstall game turbo blue apk from my device?

    -

    A5: If you want to uninstall game turbo blue apk from your device, you can follow these steps:

    -
      -
1. Go to your device's settings and tap on "Apps" or "Applications".
2. Find and tap on "Game Turbo Blue" from the list of installed apps.
3. Tap on "Uninstall" and confirm your action.
4. Wait for the uninstallation to finish and then restart your device.

    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Hot Wheels Race Off Mod APK The Best Racing Game for Hot Wheels Fans.md b/spaces/congsaPfin/Manga-OCR/logs/Hot Wheels Race Off Mod APK The Best Racing Game for Hot Wheels Fans.md deleted file mode 100644 index 88acfb9e5363f4e4010989af0526d5c578b5aacc..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Hot Wheels Race Off Mod APK The Best Racing Game for Hot Wheels Fans.md +++ /dev/null @@ -1,97 +0,0 @@ - -

    Hot Wheels: Race Off Mod Apk - A Fun and Exciting Racing Game for Android

    -

    If you are a fan of racing games and Hot Wheels toys, you will love Hot Wheels: Race Off Mod Apk. This is a modified version of the original game that gives you unlimited money, all cars unlocked, and no ads. You can enjoy racing on over 60 tracks with more than 40 different Hot Wheels cars. You can also perform amazing stunts, collect coins, and upgrade your cars to make them faster and stronger.

    -

    What is Hot Wheels: Race Off Mod Apk?

    -

    Hot Wheels: Race Off Mod Apk is a racing game for Android devices that is based on the popular toy brand Hot Wheels. The game was developed by Hutch Games and has more than 100 million downloads on Google Play Store. The game lets you race on various tracks that are inspired by the real Hot Wheels sets. You can also customize your cars with different colors, wheels, engines, and more.

    -

    hot wheels race off mod apk


    Download File ––– https://urlca.com/2uOfVF



    -

    Features of Hot Wheels: Race Off Mod Apk

    -

    The mod apk version of Hot Wheels: Race Off has some extra features that make the game more fun and easy to play. Here are some of the features of Hot Wheels: Race Off Mod Apk:

    -

    Unlimited Money

    -

    With this feature, you will never run out of money in the game. You can use the money to buy new cars, upgrade your existing cars, and unlock new tracks. You can also buy boosters and power-ups that will help you win the races.

    -

    All Cars Unlocked

    -

    This feature allows you to access all the cars in the game without having to complete any levels or challenges. You can choose from over 40 different Hot Wheels cars, each with its own unique design and performance. You can also switch between cars anytime you want.

    -

    No Ads

    -

    This feature removes all the ads from the game, so you can enjoy uninterrupted gameplay. You will not have to watch any videos or banners that will slow down your device or consume your data. You can also save your battery life by playing without ads.

    -

    How to Download and Install Hot Wheels: Race Off Mod Apk?

    -

    If you want to download and install Hot Wheels: Race Off Mod Apk on your Android device, you will need to follow these simple steps:

    -

    Step 1: Download the Mod Apk File

    -

    The first step is to download the mod apk file from a reliable source. You can use this link to download the latest version of Hot Wheels: Race Off Mod Apk. The file size is about 100 MB, so make sure you have enough space on your device.

    -

    Step 2: Enable Unknown Sources on Your Device

    -

    The next step is to enable unknown sources on your device, so you can install apps from outside the Google Play Store. To do this, go to Settings > Security > Unknown Sources and toggle it on. This will allow you to install the mod apk file.

    -

    -

    Step 3: Install the Mod Apk File

    -

The final step is to install the mod apk file on your device. To do this, locate the file in your downloads folder and tap on it. You will see a pop-up window asking for your permission to install the app. Tap on Install and wait for the installation process to finish. Once it is done, you can launch the game and enjoy the mod features.

    -

    How to Play Hot Wheels: Race Off Mod Apk?

    -

    Playing Hot Wheels: Race Off Mod Apk is very easy and fun. You just need to follow these simple steps:

    -

    Choose Your Car and Track

    -

    The first thing you need to do is to choose your car and track. You can select from over 40 different Hot Wheels cars, each with its own stats and abilities. You can also customize your car with different colors, wheels, engines, and more. You can then choose from over 60 tracks, each with its own challenges and obstacles. You can also unlock new tracks by completing levels and earning stars.

    -

    Control Your Car and Perform Stunts

    -

The next thing you need to do is to control your car and perform stunts. You can use the buttons on the screen to accelerate, brake, and tilt your car. You need to balance your speed against gravity to avoid crashing or falling off the track. You can also perform amazing stunts by jumping, flipping, and spinning your car in the air, and earn extra points and coins for doing them.

    -

    Collect Coins and Upgrade Your Car

    -

    The last thing you need to do is to collect coins and upgrade your car. You can find coins on the track or by doing stunts. You can use the coins to buy new cars or upgrade your existing cars. You can improve your car's speed, grip, stability, and boost. You can also buy power-ups that will help you win the races, such as nitro, magnets, shields, and rockets.

    -

    Conclusion

    -

    Hot Wheels: Race Off Mod Apk is a fun and exciting racing game for Android devices that will make you feel like a kid again. You can race on various tracks with different Hot Wheels cars and perform amazing stunts. You can also enjoy unlimited money, all cars unlocked, and no ads with the mod apk version of the game. If you are looking for a thrilling and addictive racing game, you should download Hot Wheels: Race Off Mod Apk today.

    -

    FAQs

    -

    Here are some of the frequently asked questions about Hot Wheels: Race Off Mod Apk:

    -
      -
    • Is Hot Wheels: Race Off Mod Apk safe to download and install?

      Yes, Hot Wheels: Race Off Mod Apk is safe to download and install on your Android device. The mod apk file has been tested for viruses and malware and does not contain any harmful or malicious code. However, you should always download the mod apk file from a trusted source and enable unknown sources on your device before installing it.

      -
    • Is Hot Wheels: Race Off Mod Apk compatible with my device?

      Hot Wheels: Race Off Mod Apk is compatible with most Android devices that have Android 4.4 or higher versions. However, some devices may experience performance issues or crashes due to hardware limitations or compatibility issues. If you encounter any problems while playing the game, you can try lowering the graphics settings or clearing the cache of the game.

      -
    • Can I play Hot Wheels: Race Off Mod Apk online with other players?

      No, Hot Wheels: Race Off Mod Apk does not support online multiplayer mode. The game is only available in offline mode, where you can race against the computer or yourself. However, you can still compete with other players by comparing your scores and achievements on the leaderboard.

      -
    • Can I update Hot Wheels: Race Off Mod Apk to the latest version?

      No, Hot Wheels: Race Off Mod Apk does not support automatic updates from the Google Play Store. If you want to update the game to the latest version, you will need to download and install the new mod apk file manually. However, you should be careful when updating the game, as some updates may cause errors or glitches in the game.

      -
    • Can I restore my progress if I uninstall Hot Wheels: Race Off Mod Apk?

      No, Hot Wheels: Race Off Mod Apk does not support cloud saving or backup of your progress. If you uninstall the game or delete the data of the game, you will lose all your progress and coins. Therefore, you should make sure that you have enough space on your device before installing the game or backup your data before uninstalling it.

      -

    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/How to Download and Install Clash of Clans Mod Raja Apk on Android.md b/spaces/congsaPfin/Manga-OCR/logs/How to Download and Install Clash of Clans Mod Raja Apk on Android.md deleted file mode 100644 index 868d3ea48db462091a119b2c618d381bed784cef..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/How to Download and Install Clash of Clans Mod Raja Apk on Android.md +++ /dev/null @@ -1,116 +0,0 @@ -
    -

    Clash of Clans Mod APK Raja APK: Everything You Need to Know

    -

    Are you a fan of strategy games? Do you want to build your own village, train your army, and fight against millions of players worldwide? If yes, then you might have heard of Clash of Clans, one of the most popular mobile games in the world. But did you know that there is a way to enjoy the game even more with unlimited resources, access to all features, and no restrictions? Yes, we are talking about Clash of Clans Mod APK Raja APK, a modified version of the game that lets you play with more freedom and fun. In this article, we will tell you everything you need to know about this modded version, including its benefits, risks, and installation guide. Read on to find out more!

    -

    clash of clans mod apk raja apk


    Download Zip 🔗 https://urlca.com/2uOcxD



    -

    What is Clash of Clans?

    -

    Clash of Clans is a freemium strategy game developed by Supercell, a Finnish game company. It was released in 2012 for iOS devices and in 2013 for Android devices. Since then, it has become one of the most downloaded and played games in the world, with over 500 million downloads on Google Play Store alone. The game has also received many awards and accolades, such as being named as one of the best games by Google Play and Apple App Store.

    -

    Features of Clash of Clans

    -

    Clash of Clans is a game that offers a lot of features and content for players to enjoy. Here are some of the main features that make the game so addictive and fun:

    -

    Build your village and lead your clan

    -

    The core gameplay of Clash of Clans is to build your own village from scratch and turn it into a strong fortress. You can customize your village with various buildings, such as barracks, town hall, walls, defenses, mines, collectors, storages, traps, and more. You can also join or create a clan with other players and cooperate with them in various activities, such as donating troops, requesting reinforcements, chatting, and sharing strategies.

    -

    Defend your base and raid others

    -

    Another aspect of Clash of Clans is to defend your base from enemy attacks and raid other players' bases to loot their resources. You can use different types of troops, spells, heroes, and siege machines to attack or defend. You can also choose from different modes of attack, such as single-player campaign against the Goblin King, multiplayer battles against random opponents, friendly challenges or wars against clanmates or friends, or special events and challenges.

    -

    Join epic clan wars and leagues

    -

One of the most exciting features of Clash of Clans is to participate in clan wars and leagues with your clan. Clan wars are team-based competitions where two clans face each other, with each member getting two attacks; the clan that earns the most stars (with destruction percentage as the tiebreaker) wins the war. Clan leagues are monthly tournaments where clans compete in a league system with seven other clans of similar skill level. The clans that perform best by the end of the season get promoted to higher leagues and earn more rewards.

    -

    Upgrade your troops, spells, and heroes

    -

    As you progress in the game, you can upgrade your troops, spells, and heroes to make them more powerful and effective. You can use the laboratory to research new levels and abilities for your troops and spells. You can also use the hero altar to upgrade your heroes, such as the Barbarian King, the Archer Queen, the Grand Warden, and the Royal Champion. Each hero has a unique skill that can be activated during battles.

    -


    -

    Explore the Builder Base and the Clan Capital

    -

Besides the main village, you can also explore two other areas in Clash of Clans: the Builder Base and the Clan Capital. The Builder Base is a separate island where you can build a second base with different buildings, troops, and mechanics. You can also engage in versus battles with other players and earn loot and trophies. The Clan Capital is a newer feature that was introduced in 2022. It is a shared area where your clan can build a common base with various structures, such as clan hall, clan treasury, clan academy, clan monument, and more. You can also participate in clan quests and events to earn clan perks and rewards.

    -

    What is Clash of Clans Mod APK Raja APK?

    -

    Clash of Clans Mod APK Raja APK is a modified version of Clash of Clans that is created by a third-party developer named Raja APK. It is not an official version of the game and it is not endorsed or supported by Supercell. The modded version offers some features and advantages that are not available in the original version of the game. However, it also comes with some risks and drawbacks that you should be aware of before downloading it.

    -

    Benefits of downloading the modded version

    -

    Some of the benefits of downloading Clash of Clans Mod APK Raja APK are:

    -

    Unlimited gems, gold, and elixir

    -

    Gems, gold, and elixir are the main currencies in Clash of Clans that are used to buy and upgrade various items in the game. Normally, you have to earn them by playing the game, completing achievements, or purchasing them with real money. However, with Clash of Clans Mod APK Raja APK, you can get unlimited gems, gold, and elixir for free. You can use them to speed up your progress, unlock new features, and enjoy the game without any limitations.

    -

    Access to all troops, buildings, and upgrades

    -

    In Clash of Clans, you have to unlock new troops, buildings, and upgrades by reaching certain levels or completing certain tasks. Some of them require a lot of time and resources to unlock. However, with Clash of Clans Mod APK Raja APK, you can access all troops, buildings, and upgrades from the start. You can build your dream village and army without any restrictions or waiting time.

    -

    No root required

    -

    Some modded versions of Clash of Clans require you to root your device in order to install them. Rooting is a process that gives you full control over your device's system settings and files. However, rooting also voids your device's warranty and exposes it to security risks. Moreover, rooting is not easy or safe for everyone to do. However, with Clash of Clans Mod APK Raja APK, you do not need to root your device to install it. You can simply download and install it like any other app.

    -

    Free to download and play

    -

    Clash of Clans Mod APK Raja APK is free to download and play. You do not need to pay any money or subscription fees to enjoy it. You can simply visit the website of Raja APK and download the latest version of the modded APK for free.

    Risks and precautions of using the modded version

    -

    While Clash of Clans Mod APK Raja APK has many benefits, it also has some risks and drawbacks that you should be aware of before downloading it. Some of the risks and precautions of using the modded version are:

    -

    Possible ban from the official server

    -

    One of the biggest risks of using Clash of Clans Mod APK Raja APK is that you might get banned from the official server of Clash of Clans. This is because Supercell, the developer of the game, does not allow or support any modded versions of the game. They have a strict policy against cheating and hacking, and they can detect and ban any account that uses a modded version. If you get banned, you will lose your progress, your clan, and your account permanently. You will not be able to play the game again with the same account or device.

    -

    Potential malware or virus infection

    -

    Another risk of using Clash of Clans Mod APK Raja APK is that you might get infected with malware or virus. This is because the modded version is not verified or tested by Google Play Store or any other trusted source. It is created by a third-party developer who might have malicious intentions or hidden codes in the modded APK. If you download and install the modded APK from an unknown or untrusted website, you might expose your device and your data to security threats. You might get hacked, spammed, or scammed by hackers or scammers who can access your device and your information.

    -

    Loss of progress or data

    -

    A third risk of using Clash of Clans Mod APK Raja APK is that you might lose your progress or data. This is because the modded version is not compatible or synchronized with the original version of the game. You cannot connect your modded account to your Google Play account or your Facebook account. You cannot backup or restore your data with Google Play Cloud or any other cloud service. You cannot transfer your data from one device to another. If you delete the modded APK, uninstall the game, or change your device, you will lose your progress and data forever.

    -

    How to avoid or minimize the risks

    -

    While there is no guarantee that you can avoid or minimize all the risks of using Clash of Clans Mod APK Raja APK, there are some steps that you can take to reduce the chances of getting into trouble. Here are some tips that you can follow:

    -
      -
    • Use a secondary account or device to play the modded version. Do not use your main account or device that you use for the original version of the game. This way, you can protect your progress, your clan, and your account from getting banned.
    • Download and install the modded APK from a reliable and reputable website. Do not download and install the modded APK from an unknown or untrusted website that might have malware or virus. You can use Raja APK as a trusted source for downloading Clash of Clans Mod APK Raja APK.
    • Scan and check the modded APK before installing it on your device. Use a good antivirus or anti-malware software to scan and check the modded APK for any harmful or suspicious codes. If you find any red flags or warnings, do not install the modded APK on your device.
    • Do not use the modded version for too long or too often. Use the modded version sparingly and occasionally for fun and entertainment purposes only. Do not use it for competitive or serious gameplay. Do not abuse or exploit the unlimited resources or features of the modded version. This way, you can reduce the chances of getting detected and banned by Supercell.
    -

    How to download and install Clash of Clans Mod APK Raja APK?

    -

    If you want to download and install Clash of Clans Mod APK Raja APK on your device, you can follow this simple step-by-step guide:

    -

    Step-by-step guide

    -

    Visit the website of Raja APK

    -

    The first step is to visit the website of Raja APK, which is https://rajaapk.com/. This is a trusted and reputable website that offers various modded versions of popular games and apps, including Clash of Clans Mod APK Raja APK.

    -

    Search for Clash of Clans Mod APK

    -

    The second step is to search for Clash of Clans Mod APK on the website of Raja APK. You can use the search bar on the top right corner of the website to type in "Clash of Clans Mod APK" and hit enter. You will see a list of results related to Clash of Clans Mod APK.

Download the latest version of the modded APK

    The third step is to download the latest version of Clash of Clans Mod APK Raja APK from the website of Raja APK. You can click on the result that matches your device and your preferences. You will see a page with more details and information about the modded APK, such as its features, screenshots, ratings, reviews, and download links. You can click on the download button or the link that says "Download Clash of Clans Mod APK Raja APK" to start downloading the modded APK file to your device.

    -

    Install the modded APK on your device

    -

    The fourth step is to install the modded APK on your device. Before you do that, you need to make sure that you have enabled the option to install apps from unknown sources on your device. You can do that by going to your device's settings, security, and allowing unknown sources. After that, you can locate the downloaded modded APK file on your device's file manager or downloads folder. You can tap on the file and follow the instructions to install it on your device.
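If you prefer to sideload the file from a computer instead of tapping through the file manager, a short script can push it over USB. This is only a sketch and assumes the Android platform tools (adb) are installed on the computer, USB debugging is enabled on the phone, and the file name below is whatever you actually downloaded:

```python
import subprocess

apk_path = "clash-of-clans-mod-raja.apk"  # placeholder file name

# The -r flag replaces the app if an older copy is already installed.
result = subprocess.run(
    ["adb", "install", "-r", apk_path],
    capture_output=True,
    text=True,
)

if result.returncode == 0:
    print("Installed successfully:", result.stdout.strip())
else:
    print("Install failed:", (result.stderr or result.stdout).strip())
```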

    -

    Launch the game and enjoy

    -

    The final step is to launch the game and enjoy. You can find the game icon on your device's home screen or app drawer. You can tap on it and start playing Clash of Clans Mod APK Raja APK with unlimited resources, access to all features, and no restrictions. You can also connect with other players who are using the same modded version and have fun together.

    -

    Conclusion

    -

    Clash of Clans is a great game that offers a lot of fun and excitement for strategy game lovers. However, if you want to experience more freedom and fun in the game, you can try Clash of Clans Mod APK Raja APK, a modified version of the game that gives you unlimited resources, access to all features, and no restrictions. However, you should also be aware of the risks and precautions of using the modded version, such as possible ban from the official server, potential malware or virus infection, loss of progress or data, and how to avoid or minimize them. If you follow this guide, you can download and install Clash of Clans Mod APK Raja APK on your device easily and safely. We hope you enjoy playing Clash of Clans Mod APK Raja APK and have a blast!

    -

    FAQs

    -

    Here are some frequently asked questions about Clash of Clans Mod APK Raja APK:

    -
      -
• Is Clash of Clans Mod APK Raja APK safe to use?
It is safe to use if you download it from a reliable and reputable website like Raja APK. However, you should also scan and check the modded APK before installing it on your device and use a secondary account or device to play it.

• Can I play Clash of Clans Mod APK Raja APK online with other players?
Yes, you can play online with other players who are using the same modded version. However, you cannot play with players who are using the original version of the game or other modded versions.

• Can I update Clash of Clans Mod APK Raja APK to the latest version?
Yes, you can update to the latest version by visiting the website of Raja APK and downloading the new version of the modded APK. However, you might lose your progress or data if you update without backing up your data.

• Can I use Clash of Clans Mod APK Raja APK on iOS devices?
No, you cannot use it on iOS devices because it is only compatible with Android devices. If you want to use a modded version of Clash of Clans on iOS, you might need to jailbreak your device or use an emulator.

• Can I request new features or report bugs in Clash of Clans Mod APK Raja APK?
Yes, you can request new features or report bugs by contacting the developer of Raja APK through their website or social media platforms. However, there is no guarantee that they will respond or fulfill your requests.

    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Treasure of Mystery Island 3 The Ghost Ship - A Hidden Object Adventure APK for Android.md b/spaces/congsaPfin/Manga-OCR/logs/Treasure of Mystery Island 3 The Ghost Ship - A Hidden Object Adventure APK for Android.md deleted file mode 100644 index e58c6011b122bbcae080e3429145396c0587ce58..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Treasure of Mystery Island 3 The Ghost Ship - A Hidden Object Adventure APK for Android.md +++ /dev/null @@ -1,152 +0,0 @@ - -

    Treasure of Mystery Island 3 APK: A Ghostly Adventure Game for Android

    -

    Do you love adventure games that are full of mystery, suspense, and paranormal activity? If yes, then you should try Treasure of Mystery Island 3 APK, a thrilling game that will take you on a ghostly journey across a tropical island.

    -

In this article, we will tell you everything you need to know about this game, including what it is, what features it has, how to download and install it on your Android device, how it compares with its previous versions, and what other players who have tried it think of it.

    -

    treasure of mystery island 3 apk


    Download Zip ✶✶✶ https://urlca.com/2uO5w1



    -

    So, let's get started!

    -

    What is Treasure of Mystery Island 3?

    -

    Treasure of Mystery Island 3 is an adventure game developed by Alawar Entertainment, Inc., a company that specializes in casual games for various platforms. It is the third installment in the popular Treasure of Mystery Island series, which follows the adventures of a young journalist who is sent to investigate paranormal phenomena on different islands.

    -

    The game is also known as The Treasures of Mystery Island 3: The Ghost Ship, which is its official title on Google Play Store. However, since many players refer to it as Treasure of Mystery Island 3 APK, we will use this name throughout the article.

    -

    The story of Treasure of Mystery Island 3

    -

    The game follows the story of Alex, a young journalist who is sent to a remote island in the Pacific Ocean to cover a story about a mysterious ship that appears every 27 years. The ship is said to be haunted by the ghosts of its crew, who died in a tragic accident many years ago.

    -

    Alex arrives on the island and meets Lisa, a local girl who works as a guide. Together, they explore the island and discover its secrets, while trying to avoid the dangers that lurk in the shadows. They also encounter some friendly and not-so-friendly characters, who help or hinder them along the way.

    -

    The game is divided into four chapters, each with its own plot and challenges. The chapters are:

    -
      -
• Chapter 1: The Arrival
• Chapter 2: The Ghost Ship
• Chapter 3: The Curse of the Island
• Chapter 4: The Final Battle
    -

    The game has a captivating storyline that will keep you hooked until the end. You will have to solve puzzles, find hidden objects, and interact with various items and characters to progress through the game. You will also have to face some spooky scenes and jump scares that will make your heart race.

    -

    The features of Treasure of Mystery Island 3

    -

    The game has many features that make it an enjoyable and immersive experience. Some of these features are:

    -


    -

    Four massive chapters

    -

    The game has four chapters that span over 60 locations. Each chapter has its own theme, atmosphere, and difficulty level. You will have to explore different areas of the island, such as beaches, jungles, caves, temples, and more. You will also have to visit the ghost ship and uncover its secrets.

    -

    60 spine-shivering locations

    -

    The game has 60 locations that are rich in detail and variety. The graphics are stunning and realistic, and the sound effects are eerie and atmospheric. You will feel like you are really on the island, surrounded by mystery and danger. You will also encounter some amazing animations and special effects that will enhance your gameplay.

    -

    Unlimited hints and tips

    -

    The game has unlimited hints and tips that will help you if you get stuck or need some guidance. You can use the hint button to reveal a hidden object or a clue, or you can use the tip button to get a detailed explanation of what to do next. You can also skip puzzles if you find them too hard or boring.

    -

    Interactive tutorial

    -

    The game has an interactive tutorial that will teach you how to play the game and use its features. The tutorial is optional and can be accessed at any time from the main menu. It will show you how to navigate the game interface, how to interact with items and characters, how to solve puzzles and find hidden objects, and more.

    -

    How to download and install Treasure of Mystery Island 3 APK?

    -

    If you want to play Treasure of Mystery Island 3 APK on your Android device, you will need to download and install it from a trusted source. Here are the steps you need to follow:

    -

    Downloading Treasure of Mystery Island 3 APK from a trusted source

    -

    The first step is to download the APK file of the game from a trusted source. You can use this link to download it safely and securely. The file size is about 300 MB, so make sure you have enough space on your device and a stable internet connection.
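If you want to double-check the file before installing it, you can compare its size (and, if your download source publishes one, its checksum) against what you expect. The snippet below is only a minimal Python sketch: the filename is a hypothetical placeholder, and the idea of a published checksum is an assumption, not something the official page necessarily provides.

```python
# Minimal sketch: inspect a downloaded APK's size and SHA-256 before installing.
# "treasure_of_mystery_island_3.apk" is a hypothetical filename.
import hashlib
from pathlib import Path

apk = Path("treasure_of_mystery_island_3.apk")
size_mb = apk.stat().st_size / (1024 * 1024)
digest = hashlib.sha256(apk.read_bytes()).hexdigest()
print(f"size: {size_mb:.1f} MB")   # the article above expects roughly 300 MB
print(f"sha256: {digest}")
```

If the size is far off, or the checksum does not match the one listed by your source, download the file again before proceeding.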

    -

    Installing Treasure of Mystery Island 3 APK on your Android device

    -

    The second step is to install the APK file on your Android device. Before you do that, you need to enable the installation of apps from unknown sources on your device settings. To do that, follow these steps:

    -
      -
1. Go to your device settings and tap on Security or Privacy.
2. Find the option that says Unknown Sources or Install Unknown Apps and toggle it on.
3. You may see a warning message that says installing apps from unknown sources may harm your device. Tap on OK or Allow to proceed.
    -

    Once you have enabled the installation of apps from unknown sources, you can install the APK file by following these steps:

    -
      -
1. Locate the APK file on your device storage using a file manager app or your browser downloads.
2. Tap on the APK file and select Install.
3. You may see a pop-up window that asks for permissions to access your device features. Tap on Allow and wait for the installation to finish.

How is Treasure of Mystery Island 3 different from its previous versions?

To answer this question, we have created a table that shows the differences between Treasure of Mystery Island 1, Treasure of Mystery Island 2, and Treasure of Mystery Island 3. Here it is:

| Game | Title | Release Date | Story | Features |
| --- | --- | --- | --- | --- |
| Treasure of Mystery Island 1 | The Treasures of Mystery Island | 2008 | Alex is sent to a tropical island to deliver a package, but he finds himself in the middle of a volcanic eruption and a mystery involving ancient artifacts. | 20 thrilling episodes; over 1000 objects to find; 15 mini-games; two game modes: casual and expert |
| Treasure of Mystery Island 2 | The Treasures of Mystery Island: The Gates of Fate | 2010 | Alex and Lisa are separated by a time portal and have to find each other and stop a cataclysmic event that threatens the island. | 14 exciting chapters; over 200 locations; over 20 mini-games; four game modes: regular, advanced, timed, and relaxed |
| Treasure of Mystery Island 3 | The Treasures of Mystery Island 3: The Ghost Ship | 2011 | Alex is sent to a remote island to investigate a ghost ship that appears every 27 years. He meets Lisa, a local guide, and they uncover the secrets of the island and the ship. | Four massive chapters; 60 spine-shivering locations; unlimited hints and tips; interactive tutorial |
      -

      As you can see, each game has its own unique title, story, and features. However, they all share the same genre, characters, and gameplay. They are all adventure games that involve finding hidden objects, solving puzzles, and exploring exotic locations. They also feature Alex and Lisa as the main protagonists, who have to work together to solve the mysteries of the islands.

      -

      Therefore, we can say that Treasure of Mystery Island 3 APK is similar to its previous versions in some aspects, but different in others. It has a new story, new locations, new challenges, and new graphics. However, it also retains the same charm, fun, and excitement that made the series popular among adventure game fans.

      -

      What are the reviews of Treasure of Mystery Island 3?

      -

      If you are wondering what other players think about Treasure of Mystery Island 3 APK, you can read some of their reviews on Google Play Store or other platforms. Here are some examples of positive and negative reviews:

      -

      The positive reviews of Treasure of Mystery Island 3

      -
        -
      • "I love this game! It has a great story, beautiful graphics, and challenging puzzles. It is one of the best adventure games I have ever played. I highly recommend it to anyone who likes mystery and suspense."
      • -
      • "This game is awesome! It has everything I look for in an adventure game: a captivating plot, stunning visuals, spooky atmosphere, and engaging gameplay. It is also very easy to play and has unlimited hints and tips. I enjoyed every minute of it."
      • -
      • "This game is amazing! It has a lot of variety and surprises. It is not just a hidden object game, but also a puzzle game, an action game, and a horror game. It kept me on the edge of my seat until the end. It is definitely worth playing."
      • -
      -

      The negative reviews of Treasure of Mystery Island 3

      -
        -
      • "I hate this game! It is too hard, too scary, and too boring. It has too many puzzles, too many hidden objects, and too many ghosts. It is also very glitchy and crashes often. I wasted my time and money on it."
      • -
      • "This game is terrible! It has a lame story, poor graphics, and annoying sound effects. It is also very repetitive and predictable. It has nothing new or original to offer. I regret downloading it."
      • -
      • "This game is disappointing! It is not as good as the previous versions. It has a weak story, dull locations, and easy puzzles. It is also very short and ends abruptly. It does not live up to the expectations."
      • -
      -

As you can see, the reviews are mixed. Some players love the game and praise it for its story, graphics, and gameplay, while others hate it and criticize it for its difficulty, glitches, and lack of originality. However, the majority of the reviews are positive, and the game has a rating of 4.3 out of 5 stars on Google Play Store. Therefore, Treasure of Mystery Island 3 APK has received mostly positive feedback from its players, and it is worth trying if you are a fan of adventure games.

      Conclusion

      -

      In conclusion, Treasure of Mystery Island 3 APK is a ghostly adventure game for Android that will take you on a thrilling journey across a tropical island. You will have to solve puzzles, find hidden objects, and uncover the secrets of the island and the ghost ship that haunts it.

      -

      The game has a captivating story, stunning graphics, and challenging gameplay. It also has unlimited hints and tips, an interactive tutorial, and four massive chapters. It is one of the best adventure games you can play on your Android device.

      -

      If you want to play Treasure of Mystery Island 3 APK, you can download it from this link and install it on your device following the steps we have explained in this article. You will not regret it!

      -

      Thank you for reading this article. We hope you found it helpful and informative. If you have any questions or comments about the game or the article, please feel free to leave them below. We would love to hear from you!

      -

      FAQs

      -

      Here are some frequently asked questions about Treasure of Mystery Island 3 APK:

      -

      Q: Is Treasure of Mystery Island 3 APK free?

      -

      A: Yes, Treasure of Mystery Island 3 APK is free to download and play. However, it may contain some in-app purchases or ads that you can choose to buy or ignore.

      -

      Q: Is Treasure of Mystery Island 3 APK safe?

      -

      A: Yes, Treasure of Mystery Island 3 APK is safe to download and install on your device. However, you should always download it from a trusted source, such as the link we have provided in this article, and enable the installation of apps from unknown sources on your device settings.

      -

      Q: Is Treasure of Mystery Island 3 APK compatible with my device?

      -

      A: Treasure of Mystery Island 3 APK is compatible with most Android devices that run on Android 4.0 or higher. However, some devices may have different specifications or performance issues that may affect the game's functionality or quality. You can check the game's requirements and compatibility on Google Play Store before downloading it.
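If your device is already connected to a computer with USB debugging enabled, one quick way to confirm the Android version before installing is to query it over adb. This is just an optional sketch and assumes the adb tool is installed and on your PATH; you can also simply check the version under Settings &gt; About phone.

```python
# Minimal sketch: read the connected device's Android version via adb.
# Assumes adb is installed and USB debugging is enabled on the device.
import subprocess

version = subprocess.run(
    ["adb", "shell", "getprop", "ro.build.version.release"],
    capture_output=True, text=True, check=True,
).stdout.strip()
print(f"Android version: {version}")  # the game needs Android 4.0 or higher
```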

      -

      Q: How long is Treasure of Mystery Island 3 APK?

      -

      A: The length of Treasure of Mystery Island 3 APK may vary depending on your skill level, game mode, and playing style. However, on average, it may take you about 4 to 6 hours to complete the game.

      -

      Q: Can I play Treasure of Mystery Island 3 APK offline?

      -

      A: Yes, you can play Treasure of Mystery Island 3 APK offline once you have downloaded and installed it on your device. However, you may need an internet connection to access some features or updates that may be available online.

Sources: https://www.alawar.com/game/the-treasures-of-mystery-island-the-ghost-ship/ · https://play.google.com/store/apps/details?id=com.alawar.tomi3full&hl=en_US&gl=US

      401be4b1e0
      -
      -
      \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/Bluetooth Peripheral Device Driver Download For Windows 7 Ultimate 32 19.md b/spaces/contluForse/HuggingGPT/assets/Bluetooth Peripheral Device Driver Download For Windows 7 Ultimate 32 19.md deleted file mode 100644 index 1068b1b59dd470b71671987fa46b53309de01002..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Bluetooth Peripheral Device Driver Download For Windows 7 Ultimate 32 19.md +++ /dev/null @@ -1,22 +0,0 @@ - -


      -

      Note You may be prompted to provide the path of the driver. Windows may have the driver built-in, or may still have the driver files installed from the last time that you set up the device. However, sometimes, it will open the New Hardware Wizard which may ask for the driver. If you are asked for the driver and you do not have it, you can try to download the latest driver from the hardware vendor's website.

      -

      bluetooth peripheral device driver download for windows 7 ultimate 32 19


Download File: https://ssurll.com/2uzxyH



      -

      Now, the Device Manager will search for the relevant driver automatically. It will also download and install the driver for you. After the driver has been updated, we suggest that you try to connect your mobile device to your PC again to see if the error is gone.

      -

      Using the BitDefender driver updater is very easy. Just download this program from the official website and then run it to scan your system for the latest Bluetooth Peripheral Device Driver version. The application will detect all the peripheral devices and check whether they support Bluetooth technology. If it detects the required driver, you will be given the option to download and install the latest one.

      -

You may want to use the latest drivers even if you have just purchased the device. Updating the Bluetooth peripheral device driver ensures that the device functions properly and that you do not experience errors or connectivity problems. With an up-to-date driver, the device works more reliably and stays in good working condition.

      -

If you do not want to take any risks and want to ensure the smooth functioning of your Bluetooth-enabled device, you can use a dedicated Bluetooth driver updater tool. You just have to download the latest driver update and install it on your system. You do not have to worry about compatibility issues, because the driver is designed to work with the most recent operating systems. It will make sure that your device operates without errors and is safe to use.

      -

For Bluetooth peripheral device drivers on Windows 7, you can use driver update software to help install the drivers. These vendors provide their users with the latest updates in a timely manner, so you do not have to worry about keeping the drivers current yourself; the updates are easily accessible on the Internet.

      -

In this guide, you can find out how to download and install Bluetooth drivers on Windows 10, and how to fix common issues with them, such as Bluetooth not working or Bluetooth not detecting devices on Windows 10.

      -

      -

      Before downloading a Bluetooth driver, you need to get information about your system and note important details. This will ensure that you download the correct Bluetooth drivers compatible with your setup and Bluetooth devices. You may run into wireless connection issues if you download the incorrect drivers.

      -

The Bluetooth peripheral device driver is crucial for allowing connections and data-sharing operations. If you attempted to share files via Bluetooth and received an error message, it is likely that your Bluetooth driver is malfunctioning.

      -

Are you getting a driver error message when enabling Bluetooth on your computer? Maybe you are unable to use Bluetooth devices that you have paired with the computer, such as mobile phones, wireless headsets, mice, keyboards, or microphones. You can use the easy methods in this post to quickly download the Bluetooth peripheral device driver free of charge and solve the problem. All the downloads available on this website have been scanned by the most recent anti-virus software and are free of viruses and malware.

      -

      var size = ["125", "125"]; if ( betterads_el_width >= 728 ) betterads_el_width = ["728", "90"]; else if ( betterads_el_width >= 468 ) betterads_el_width = ["468", "60"]; else if ( betterads_el_width >= 336 ) betterads_el_width = ["336", "280"]; else if ( betterads_el_width >= 300 ) betterads_el_width = ["300", "250"]; else if ( betterads_el_width >= 250 ) betterads_el_width = ["250", "250"]; else if ( betterads_el_width >= 200 ) betterads_el_width = ["200", "200"]; else if ( betterads_el_width >= 180 ) betterads_el_width = ["180", "150"]; if ( betterads_screen_width >= 1140 ) else if ( betterads_screen_width >= 1019 && betterads_screen_width < 1140 ) document.getElementById('gat-92603-1327947536-place').innerHTML = ''; (adsbygoogle = window.adsbygoogle else if ( betterads_screen_width >= 768 && betterads_screen_width < 1019 ) document.getElementById('gat-92603-1327947536-place').innerHTML = ''; (adsbygoogle = window.adsbygoogle else if ( betterads_screen_width < 768 ) How do I download Bluetooth peripheral device driver

      1. Download the file to a folder on your PC.
      2. Uninstall current version of Intel Wireless Bluetooth.
      3. Double-click the file to launch installation.
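As an optional alternative to the graphical installer, a driver package's .inf file can be staged with the pnputil tool that ships with Windows, run from an elevated (administrator) prompt. The sketch below only illustrates the idea: the driver path is a hypothetical placeholder, and the legacy "-i -a" switches are used because this article targets Windows 7.

```python
# Minimal sketch: stage and install a Bluetooth driver .inf with pnputil.
# Run from an elevated (administrator) prompt; the path is a placeholder.
import subprocess

inf_path = r"C:\drivers\bluetooth\bth.inf"  # hypothetical extracted driver package
# "-i -a" asks pnputil to add the package to the driver store and install it.
result = subprocess.run(["pnputil", "-i", "-a", inf_path],
                        capture_output=True, text=True)
print(result.stdout or result.stderr)
```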

      -

How is Logitech keeping up to speed with Windows 7 drivers? I was able to use my MX Bluetooth with Vista and had no issues, and now I cannot get any drivers for Windows 7. Does anyone know where or when to get the drivers?

      -

If you could not find the exact driver for your hardware device, or you are not sure which driver is the right one, we have a program that will detect your hardware specifications and identify the correct driver for your needs. Please click here to download.

      -

I have installed Windows 7 on a MacBook Pro using Boot Camp. Usually, when turning my Bluetooth headset on and trying to pair it with the Mac for the first time, Windows fails to install the drivers and opens a solution in Action Center, which suggests downloading the driver from the Broadcom webpage. This used to work for me before: drivers were installed and everything worked well. However, now when I start the driver installer, it gets stuck at the "Detecting Bluetooth Device" stage. There is also a warning with text that says "Please plug in or turn on your Bluetooth device":

      -

Before installing the driver, the laptop would detect the Bluetooth device (Creative D200 speakers) but was not able to pair due to the lack of a driver, which it then searched for but could not download. After installation the speakers work fine. They use the A2DP high-quality Bluetooth audio codec.

      -

Apparently, what it tries to find is the Bluetooth receiver itself, not the device that connects to it (e.g. headset, mouse, etc.). I have no idea why it didn't work with the built-in device that is somewhere inside my laptop, but it did with an external Bluetooth USB dongle. Once I plugged it in, the installer recognized it and installed the drivers. Apparently the same drivers worked for my built-in Bluetooth, so I just unplugged the USB dongle, and since then it works for me. Hope this will be useful for someone.

      aaccfb2cb3
      -
      -
      \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/Doctor Strange (English) Dual Audio Eng Hindi 1080p Everything You Need to Know About the Movie.md b/spaces/contluForse/HuggingGPT/assets/Doctor Strange (English) Dual Audio Eng Hindi 1080p Everything You Need to Know About the Movie.md deleted file mode 100644 index 0a84233827fdc3b4648b04ed5a578c3398a35890..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Doctor Strange (English) Dual Audio Eng Hindi 1080p Everything You Need to Know About the Movie.md +++ /dev/null @@ -1,6 +0,0 @@ -

      crack dongle see electrical expert v4 144


Download Zip: https://ssurll.com/2uzvGc



      -
      - aaccfb2cb3
      -
      -
      -

      diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/cnn/bricks/conv_ws.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/cnn/bricks/conv_ws.py deleted file mode 100644 index a3941e27874993418b3b5708d5a7485f175ff9c8..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/cnn/bricks/conv_ws.py +++ /dev/null @@ -1,148 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -import torch.nn.functional as F - -from .registry import CONV_LAYERS - - -def conv_ws_2d(input, - weight, - bias=None, - stride=1, - padding=0, - dilation=1, - groups=1, - eps=1e-5): - c_in = weight.size(0) - weight_flat = weight.view(c_in, -1) - mean = weight_flat.mean(dim=1, keepdim=True).view(c_in, 1, 1, 1) - std = weight_flat.std(dim=1, keepdim=True).view(c_in, 1, 1, 1) - weight = (weight - mean) / (std + eps) - return F.conv2d(input, weight, bias, stride, padding, dilation, groups) - - -@CONV_LAYERS.register_module('ConvWS') -class ConvWS2d(nn.Conv2d): - - def __init__(self, - in_channels, - out_channels, - kernel_size, - stride=1, - padding=0, - dilation=1, - groups=1, - bias=True, - eps=1e-5): - super(ConvWS2d, self).__init__( - in_channels, - out_channels, - kernel_size, - stride=stride, - padding=padding, - dilation=dilation, - groups=groups, - bias=bias) - self.eps = eps - - def forward(self, x): - return conv_ws_2d(x, self.weight, self.bias, self.stride, self.padding, - self.dilation, self.groups, self.eps) - - -@CONV_LAYERS.register_module(name='ConvAWS') -class ConvAWS2d(nn.Conv2d): - """AWS (Adaptive Weight Standardization) - - This is a variant of Weight Standardization - (https://arxiv.org/pdf/1903.10520.pdf) - It is used in DetectoRS to avoid NaN - (https://arxiv.org/pdf/2006.02334.pdf) - - Args: - in_channels (int): Number of channels in the input image - out_channels (int): Number of channels produced by the convolution - kernel_size (int or tuple): Size of the conv kernel - stride (int or tuple, optional): Stride of the convolution. Default: 1 - padding (int or tuple, optional): Zero-padding added to both sides of - the input. Default: 0 - dilation (int or tuple, optional): Spacing between kernel elements. - Default: 1 - groups (int, optional): Number of blocked connections from input - channels to output channels. Default: 1 - bias (bool, optional): If set True, adds a learnable bias to the - output. 
Default: True - """ - - def __init__(self, - in_channels, - out_channels, - kernel_size, - stride=1, - padding=0, - dilation=1, - groups=1, - bias=True): - super().__init__( - in_channels, - out_channels, - kernel_size, - stride=stride, - padding=padding, - dilation=dilation, - groups=groups, - bias=bias) - self.register_buffer('weight_gamma', - torch.ones(self.out_channels, 1, 1, 1)) - self.register_buffer('weight_beta', - torch.zeros(self.out_channels, 1, 1, 1)) - - def _get_weight(self, weight): - weight_flat = weight.view(weight.size(0), -1) - mean = weight_flat.mean(dim=1).view(-1, 1, 1, 1) - std = torch.sqrt(weight_flat.var(dim=1) + 1e-5).view(-1, 1, 1, 1) - weight = (weight - mean) / std - weight = self.weight_gamma * weight + self.weight_beta - return weight - - def forward(self, x): - weight = self._get_weight(self.weight) - return F.conv2d(x, weight, self.bias, self.stride, self.padding, - self.dilation, self.groups) - - def _load_from_state_dict(self, state_dict, prefix, local_metadata, strict, - missing_keys, unexpected_keys, error_msgs): - """Override default load function. - - AWS overrides the function _load_from_state_dict to recover - weight_gamma and weight_beta if they are missing. If weight_gamma and - weight_beta are found in the checkpoint, this function will return - after super()._load_from_state_dict. Otherwise, it will compute the - mean and std of the pretrained weights and store them in weight_beta - and weight_gamma. - """ - - self.weight_gamma.data.fill_(-1) - local_missing_keys = [] - super()._load_from_state_dict(state_dict, prefix, local_metadata, - strict, local_missing_keys, - unexpected_keys, error_msgs) - if self.weight_gamma.data.mean() > 0: - for k in local_missing_keys: - missing_keys.append(k) - return - weight = self.weight.data - weight_flat = weight.view(weight.size(0), -1) - mean = weight_flat.mean(dim=1).view(-1, 1, 1, 1) - std = torch.sqrt(weight_flat.var(dim=1) + 1e-5).view(-1, 1, 1, 1) - self.weight_beta.data.copy_(mean) - self.weight_gamma.data.copy_(std) - missing_gamma_beta = [ - k for k in local_missing_keys - if k.endswith('weight_gamma') or k.endswith('weight_beta') - ] - for k in missing_gamma_beta: - local_missing_keys.remove(k) - for k in local_missing_keys: - missing_keys.append(k) diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/data/ibims.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/data/ibims.py deleted file mode 100644 index b66abfabcf4cfc617d4a60ec818780c3548d9920..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/data/ibims.py +++ /dev/null @@ -1,81 +0,0 @@ -# MIT License - -# Copyright (c) 2022 Intelligent Systems Lab Org - -# Permission is hereby granted, free of charge, to any person obtaining a copy -# of this software and associated documentation files (the "Software"), to deal -# in the Software without restriction, including without limitation the rights -# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -# copies of the Software, and to permit persons to whom the Software is -# furnished to do so, subject to the following conditions: - -# The above copyright notice and this permission notice shall be included in all -# copies or substantial portions of the Software. 
- -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -# SOFTWARE. - -# File author: Shariq Farooq Bhat - -import os - -import numpy as np -import torch -from PIL import Image -from torch.utils.data import DataLoader, Dataset -from torchvision import transforms as T - - -class iBims(Dataset): - def __init__(self, config): - root_folder = config.ibims_root - with open(os.path.join(root_folder, "imagelist.txt"), 'r') as f: - imglist = f.read().split() - - samples = [] - for basename in imglist: - img_path = os.path.join(root_folder, 'rgb', basename + ".png") - depth_path = os.path.join(root_folder, 'depth', basename + ".png") - valid_mask_path = os.path.join( - root_folder, 'mask_invalid', basename+".png") - transp_mask_path = os.path.join( - root_folder, 'mask_transp', basename+".png") - - samples.append( - (img_path, depth_path, valid_mask_path, transp_mask_path)) - - self.samples = samples - # self.normalize = T.Normalize( - # mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) - self.normalize = lambda x : x - - def __getitem__(self, idx): - img_path, depth_path, valid_mask_path, transp_mask_path = self.samples[idx] - - img = np.asarray(Image.open(img_path), dtype=np.float32) / 255.0 - depth = np.asarray(Image.open(depth_path), - dtype=np.uint16).astype('float')*50.0/65535 - - mask_valid = np.asarray(Image.open(valid_mask_path)) - mask_transp = np.asarray(Image.open(transp_mask_path)) - - # depth = depth * mask_valid * mask_transp - depth = np.where(mask_valid * mask_transp, depth, -1) - - img = torch.from_numpy(img).permute(2, 0, 1) - img = self.normalize(img) - depth = torch.from_numpy(depth).unsqueeze(0) - return dict(image=img, depth=depth, image_path=img_path, depth_path=depth_path, dataset='ibims') - - def __len__(self): - return len(self.samples) - - -def get_ibims_loader(config, batch_size=1, **kwargs): - dataloader = DataLoader(iBims(config), batch_size=batch_size, **kwargs) - return dataloader diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/aiohttp/payload_streamer.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/aiohttp/payload_streamer.py deleted file mode 100644 index 9f8b8bc57cc22fc693da1646bf806c2a6ca8d797..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/aiohttp/payload_streamer.py +++ /dev/null @@ -1,75 +0,0 @@ -""" -Payload implemenation for coroutines as data provider. 
- -As a simple case, you can upload data from file:: - - @aiohttp.streamer - async def file_sender(writer, file_name=None): - with open(file_name, 'rb') as f: - chunk = f.read(2**16) - while chunk: - await writer.write(chunk) - - chunk = f.read(2**16) - -Then you can use `file_sender` like this: - - async with session.post('http://httpbin.org/post', - data=file_sender(file_name='huge_file')) as resp: - print(await resp.text()) - -..note:: Coroutine must accept `writer` as first argument - -""" - -import types -import warnings -from typing import Any, Awaitable, Callable, Dict, Tuple - -from .abc import AbstractStreamWriter -from .payload import Payload, payload_type - -__all__ = ("streamer",) - - -class _stream_wrapper: - def __init__( - self, - coro: Callable[..., Awaitable[None]], - args: Tuple[Any, ...], - kwargs: Dict[str, Any], - ) -> None: - self.coro = types.coroutine(coro) - self.args = args - self.kwargs = kwargs - - async def __call__(self, writer: AbstractStreamWriter) -> None: - await self.coro(writer, *self.args, **self.kwargs) # type: ignore[operator] - - -class streamer: - def __init__(self, coro: Callable[..., Awaitable[None]]) -> None: - warnings.warn( - "@streamer is deprecated, use async generators instead", - DeprecationWarning, - stacklevel=2, - ) - self.coro = coro - - def __call__(self, *args: Any, **kwargs: Any) -> _stream_wrapper: - return _stream_wrapper(self.coro, args, kwargs) - - -@payload_type(_stream_wrapper) -class StreamWrapperPayload(Payload): - async def write(self, writer: AbstractStreamWriter) -> None: - await self._value(writer) - - -@payload_type(streamer) -class StreamPayload(StreamWrapperPayload): - def __init__(self, value: Any, *args: Any, **kwargs: Any) -> None: - super().__init__(value(), *args, **kwargs) - - async def write(self, writer: AbstractStreamWriter) -> None: - await self._value(writer) diff --git a/spaces/deepwisdom/MetaGPT/metagpt/static/cy_aps/assets/style-86c4771e.css b/spaces/deepwisdom/MetaGPT/metagpt/static/cy_aps/assets/style-86c4771e.css deleted file mode 100644 index 9be5faa32202cc654ecd8a831239661485a6edb9..0000000000000000000000000000000000000000 --- a/spaces/deepwisdom/MetaGPT/metagpt/static/cy_aps/assets/style-86c4771e.css +++ /dev/null @@ -1 +0,0 @@ -/*! 
normalize.css v8.0.1 | MIT License | github.com/necolas/normalize.css */html{line-height:1.15;-webkit-text-size-adjust:100%;-webkit-font-smoothing:antialiased;-moz-osx-font-smoothing:grayscale}body{margin:0}main{display:block}h1{margin:.67em 0;font-size:2em}hr{box-sizing:content-box;height:0;overflow:visible}pre{font-size:1em;font-family:monospace,monospace}a{background-color:transparent}abbr[title]{text-decoration:underline;text-decoration:underline dotted;border-bottom:none}b,strong{font-weight:bolder}code,kbd,samp{font-size:1em;font-family:monospace,monospace}small{font-size:80%}sub,sup{position:relative;font-size:75%;line-height:0;vertical-align:baseline}sub{bottom:-.25em}sup{top:-.5em}img{border-style:none}button,input,optgroup,select,textarea{margin:0;font-size:100%;font-family:inherit;line-height:1.15}button,input{overflow:visible}button,select{text-transform:none}button,[type=button],[type=reset],[type=submit]{-webkit-appearance:button}button::-moz-focus-inner,[type=button]::-moz-focus-inner,[type=reset]::-moz-focus-inner,[type=submit]::-moz-focus-inner{padding:0;border-style:none}button:-moz-focusring,[type=button]:-moz-focusring,[type=reset]:-moz-focusring,[type=submit]:-moz-focusring{outline:1px dotted ButtonText}fieldset{padding:.35em .75em .625em}legend{display:table;box-sizing:border-box;max-width:100%;padding:0;color:inherit;white-space:normal}progress{vertical-align:baseline}textarea{overflow:auto}[type=checkbox],[type=radio]{box-sizing:border-box;padding:0}[type=number]::-webkit-inner-spin-button,[type=number]::-webkit-outer-spin-button{height:auto}[type=search]{outline-offset:-2px;-webkit-appearance:textfield}[type=search]::-webkit-search-decoration{-webkit-appearance:none}::-webkit-file-upload-button{font:inherit;-webkit-appearance:button}details{display:block}summary{display:list-item}template{display:none}[hidden]{display:none}.arco-icon{display:inline-block;width:1em;height:1em;color:inherit;font-style:normal;vertical-align:-2px;outline:none;stroke:currentColor}.arco-icon-loading,.arco-icon-spin{animation:arco-loading-circle 1s infinite cubic-bezier(0,0,1,1)}@keyframes arco-loading-circle{0%{transform:rotate(0)}to{transform:rotate(360deg)}}.arco-icon-hover{position:relative;display:inline-block;cursor:pointer;line-height:12px}.arco-icon-hover .arco-icon{position:relative}.arco-icon-hover:before{position:absolute;display:block;box-sizing:border-box;background-color:transparent;border-radius:var(--border-radius-circle);transition:background-color .1s 
cubic-bezier(0,0,1,1);content:""}.arco-icon-hover:hover:before{background-color:var(--color-fill-2)}.arco-icon-hover.arco-icon-hover-disabled:before{opacity:0}.arco-icon-hover:before{top:50%;left:50%;width:20px;height:20px;transform:translate(-50%,-50%)}.arco-icon-hover-size-mini{line-height:12px}.arco-icon-hover-size-mini:before{top:50%;left:50%;width:20px;height:20px;transform:translate(-50%,-50%)}.arco-icon-hover-size-small{line-height:12px}.arco-icon-hover-size-small:before{top:50%;left:50%;width:20px;height:20px;transform:translate(-50%,-50%)}.arco-icon-hover-size-large{line-height:12px}.arco-icon-hover-size-large:before{top:50%;left:50%;width:24px;height:24px;transform:translate(-50%,-50%)}.arco-icon-hover-size-huge{line-height:12px}.arco-icon-hover-size-huge:before{top:50%;left:50%;width:24px;height:24px;transform:translate(-50%,-50%)}.fade-in-standard-enter-from,.fade-in-standard-appear-from{opacity:0}.fade-in-standard-enter-to,.fade-in-standard-appear-to{opacity:1}.fade-in-standard-enter-active,.fade-in-standard-appear-active{transition:opacity .3s cubic-bezier(.34,.69,.1,1)}.fade-in-standard-leave-from{opacity:1}.fade-in-standard-leave-to{opacity:0}.fade-in-standard-leave-active{transition:opacity .3s cubic-bezier(.34,.69,.1,1)}.fade-in-enter-from,.fade-in-appear-from{opacity:0}.fade-in-enter-to,.fade-in-appear-to{opacity:1}.fade-in-enter-active,.fade-in-appear-active{transition:opacity .1s cubic-bezier(0,0,1,1)}.fade-in-leave-from{opacity:1}.fade-in-leave-to{opacity:0}.fade-in-leave-active{transition:opacity .1s cubic-bezier(0,0,1,1)}.zoom-in-enter-from,.zoom-in-appear-from{transform:scale(.5);opacity:0}.zoom-in-enter-to,.zoom-in-appear-to{transform:scale(1);opacity:1}.zoom-in-enter-active,.zoom-in-appear-active{transition:opacity .3s cubic-bezier(.34,.69,.1,1),transform .3s cubic-bezier(.34,.69,.1,1)}.zoom-in-leave-from{transform:scale(1);opacity:1}.zoom-in-leave-to{transform:scale(.5);opacity:0}.zoom-in-leave-active{transition:opacity .3s cubic-bezier(.34,.69,.1,1),transform .3s cubic-bezier(.34,.69,.1,1)}.zoom-in-fade-out-enter-from,.zoom-in-fade-out-appear-from{transform:scale(.5);opacity:0}.zoom-in-fade-out-enter-to,.zoom-in-fade-out-appear-to{transform:scale(1);opacity:1}.zoom-in-fade-out-enter-active,.zoom-in-fade-out-appear-active{transition:opacity .3s cubic-bezier(.3,1.3,.3,1),transform .3s cubic-bezier(.3,1.3,.3,1)}.zoom-in-fade-out-leave-from{transform:scale(1);opacity:1}.zoom-in-fade-out-leave-to{transform:scale(.5);opacity:0}.zoom-in-fade-out-leave-active{transition:opacity .3s cubic-bezier(.3,1.3,.3,1),transform .3s cubic-bezier(.3,1.3,.3,1)}.zoom-in-big-enter-from,.zoom-in-big-appear-from{transform:scale(.5);opacity:0}.zoom-in-big-enter-to,.zoom-in-big-appear-to{transform:scale(1);opacity:1}.zoom-in-big-enter-active,.zoom-in-big-appear-active{transition:opacity .2s cubic-bezier(0,0,1,1),transform .2s cubic-bezier(0,0,1,1)}.zoom-in-big-leave-from{transform:scale(1);opacity:1}.zoom-in-big-leave-to{transform:scale(.2);opacity:0}.zoom-in-big-leave-active{transition:opacity .2s cubic-bezier(0,0,1,1),transform .2s cubic-bezier(0,0,1,1)}.zoom-in-left-enter-from,.zoom-in-left-appear-from{transform:scale(.1);opacity:.1}.zoom-in-left-enter-to,.zoom-in-left-appear-to{transform:scale(1);opacity:1}.zoom-in-left-enter-active,.zoom-in-left-appear-active{transform-origin:0 50%;transition:opacity .3s cubic-bezier(0,0,1,1),transform .3s 
cubic-bezier(.3,1.3,.3,1)}.zoom-in-left-leave-from{transform:scale(1);opacity:1}.zoom-in-left-leave-to{transform:scale(.1);opacity:.1}.zoom-in-left-leave-active{transform-origin:0 50%;transition:opacity .3s cubic-bezier(0,0,1,1),transform .3s cubic-bezier(.3,1.3,.3,1)}.zoom-in-top-enter-from,.zoom-in-top-appear-from{transform:scaleY(.8) translateZ(0);opacity:0}.zoom-in-top-enter-to,.zoom-in-top-appear-to{transform:scaleY(1) translateZ(0);opacity:1}.zoom-in-top-enter-active,.zoom-in-top-appear-active{transform-origin:0 0;transition:transform .3s cubic-bezier(.3,1.3,.3,1),opacity .3s cubic-bezier(.3,1.3,.3,1)}.zoom-in-top-leave-from{transform:scaleY(1) translateZ(0);opacity:1}.zoom-in-top-leave-to{transform:scaleY(.8) translateZ(0);opacity:0}.zoom-in-top-leave-active{transform-origin:0 0;transition:transform .3s cubic-bezier(.3,1.3,.3,1),opacity .3s cubic-bezier(.3,1.3,.3,1)}.zoom-in-bottom-enter-from,.zoom-in-bottom-appear-from{transform:scaleY(.8) translateZ(0);opacity:0}.zoom-in-bottom-enter-to,.zoom-in-bottom-appear-to{transform:scaleY(1) translateZ(0);opacity:1}.zoom-in-bottom-enter-active,.zoom-in-bottom-appear-active{transform-origin:100% 100%;transition:transform .3s cubic-bezier(.3,1.3,.3,1),opacity .3s cubic-bezier(.3,1.3,.3,1)}.zoom-in-bottom-leave-from{transform:scaleY(1) translateZ(0);opacity:1}.zoom-in-bottom-leave-to{transform:scaleY(.8) translateZ(0);opacity:0}.zoom-in-bottom-leave-active{transform-origin:100% 100%;transition:transform .3s cubic-bezier(.3,1.3,.3,1),opacity .3s cubic-bezier(.3,1.3,.3,1)}.slide-dynamic-origin-enter-from,.slide-dynamic-origin-appear-from{transform:scaleY(.9);transform-origin:0 0;opacity:0}.slide-dynamic-origin-enter-to,.slide-dynamic-origin-appear-to{transform:scaleY(1);transform-origin:0 0;opacity:1}.slide-dynamic-origin-enter-active,.slide-dynamic-origin-appear-active{transition:transform .2s cubic-bezier(.34,.69,.1,1),opacity .2s cubic-bezier(.34,.69,.1,1)}.slide-dynamic-origin-leave-from{transform:scaleY(1);transform-origin:0 0;opacity:1}.slide-dynamic-origin-leave-to{transform:scaleY(.9);transform-origin:0 0;opacity:0}.slide-dynamic-origin-leave-active{transition:transform .2s cubic-bezier(.34,.69,.1,1),opacity .2s cubic-bezier(.34,.69,.1,1)}.slide-left-enter-from,.slide-left-appear-from{transform:translate(-100%)}.slide-left-enter-to,.slide-left-appear-to{transform:translate(0)}.slide-left-enter-active,.slide-left-appear-active{transition:transform .3s cubic-bezier(.34,.69,.1,1)}.slide-left-leave-from{transform:translate(0)}.slide-left-leave-to{transform:translate(-100%)}.slide-left-leave-active{transition:transform .3s cubic-bezier(.34,.69,.1,1)}.slide-right-enter-from,.slide-right-appear-from{transform:translate(100%)}.slide-right-enter-to,.slide-right-appear-to{transform:translate(0)}.slide-right-enter-active,.slide-right-appear-active{transition:transform .3s cubic-bezier(.34,.69,.1,1)}.slide-right-leave-from{transform:translate(0)}.slide-right-leave-to{transform:translate(100%)}.slide-right-leave-active{transition:transform .3s cubic-bezier(.34,.69,.1,1)}.slide-top-enter-from,.slide-top-appear-from{transform:translateY(-100%)}.slide-top-enter-to,.slide-top-appear-to{transform:translateY(0)}.slide-top-enter-active,.slide-top-appear-active{transition:transform .3s cubic-bezier(.34,.69,.1,1)}.slide-top-leave-from{transform:translateY(0)}.slide-top-leave-to{transform:translateY(-100%)}.slide-top-leave-active{transition:transform .3s 
cubic-bezier(.34,.69,.1,1)}.slide-bottom-enter-from,.slide-bottom-appear-from{transform:translateY(100%)}.slide-bottom-enter-to,.slide-bottom-appear-to{transform:translateY(0)}.slide-bottom-enter-active,.slide-bottom-appear-active{transition:transform .3s cubic-bezier(.34,.69,.1,1)}.slide-bottom-leave-from{transform:translateY(0)}.slide-bottom-leave-to{transform:translateY(100%)}.slide-bottom-leave-active{transition:transform .3s cubic-bezier(.34,.69,.1,1)}body{--red-1: 255,236,232;--red-2: 253,205,197;--red-3: 251,172,163;--red-4: 249,137,129;--red-5: 247,101,96;--red-6: 245,63,63;--red-7: 203,39,45;--red-8: 161,21,30;--red-9: 119,8,19;--red-10: 77,0,10;--orangered-1: 255,243,232;--orangered-2: 253,221,195;--orangered-3: 252,197,159;--orangered-4: 250,172,123;--orangered-5: 249,144,87;--orangered-6: 247,114,52;--orangered-7: 204,81,32;--orangered-8: 162,53,17;--orangered-9: 119,31,6;--orangered-10: 77,14,0;--orange-1: 255,247,232;--orange-2: 255,228,186;--orange-3: 255,207,139;--orange-4: 255,182,93;--orange-5: 255,154,46;--orange-6: 255,125,0;--orange-7: 210,95,0;--orange-8: 166,69,0;--orange-9: 121,46,0;--orange-10: 77,27,0;--gold-1: 255,252,232;--gold-2: 253,244,191;--gold-3: 252,233,150;--gold-4: 250,220,109;--gold-5: 249,204,69;--gold-6: 247,186,30;--gold-7: 204,146,19;--gold-8: 162,109,10;--gold-9: 119,75,4;--gold-10: 77,45,0;--yellow-1: 254,255,232;--yellow-2: 254,254,190;--yellow-3: 253,250,148;--yellow-4: 252,242,107;--yellow-5: 251,232,66;--yellow-6: 250,220,25;--yellow-7: 207,175,15;--yellow-8: 163,132,8;--yellow-9: 120,93,3;--yellow-10: 77,56,0;--lime-1: 252,255,232;--lime-2: 237,248,187;--lime-3: 220,241,144;--lime-4: 201,233,104;--lime-5: 181,226,65;--lime-6: 159,219,29;--lime-7: 126,183,18;--lime-8: 95,148,10;--lime-9: 67,112,4;--lime-10: 42,77,0;--green-1: 232,255,234;--green-2: 175,240,181;--green-3: 123,225,136;--green-4: 76,210,99;--green-5: 35,195,67;--green-6: 0,180,42;--green-7: 0,154,41;--green-8: 0,128,38;--green-9: 0,102,34;--green-10: 0,77,28;--cyan-1: 232,255,251;--cyan-2: 183,244,236;--cyan-3: 137,233,224;--cyan-4: 94,223,214;--cyan-5: 55,212,207;--cyan-6: 20,201,201;--cyan-7: 13,165,170;--cyan-8: 7,130,139;--cyan-9: 3,97,108;--cyan-10: 0,66,77;--blue-1: 232,247,255;--blue-2: 195,231,254;--blue-3: 159,212,253;--blue-4: 123,192,252;--blue-5: 87,169,251;--blue-6: 52,145,250;--blue-7: 32,108,207;--blue-8: 17,75,163;--blue-9: 6,48,120;--blue-10: 0,26,77;--arcoblue-1: 232,243,255;--arcoblue-2: 190,218,255;--arcoblue-3: 148,191,255;--arcoblue-4: 106,161,255;--arcoblue-5: 64,128,255;--arcoblue-6: 22,93,255;--arcoblue-7: 14,66,210;--arcoblue-8: 7,44,166;--arcoblue-9: 3,26,121;--arcoblue-10: 0,13,77;--purple-1: 245,232,255;--purple-2: 221,190,246;--purple-3: 195,150,237;--purple-4: 168,113,227;--purple-5: 141,78,218;--purple-6: 114,46,209;--purple-7: 85,29,176;--purple-8: 60,16,143;--purple-9: 39,6,110;--purple-10: 22,0,77;--pinkpurple-1: 255,232,251;--pinkpurple-2: 247,186,239;--pinkpurple-3: 240,142,230;--pinkpurple-4: 232,101,223;--pinkpurple-5: 225,62,219;--pinkpurple-6: 217,26,217;--pinkpurple-7: 176,16,182;--pinkpurple-8: 138,9,147;--pinkpurple-9: 101,3,112;--pinkpurple-10: 66,0,77;--magenta-1: 255,232,241;--magenta-2: 253,194,219;--magenta-3: 251,157,199;--magenta-4: 249,121,183;--magenta-5: 247,84,168;--magenta-6: 245,49,157;--magenta-7: 203,30,131;--magenta-8: 161,16,105;--magenta-9: 119,6,79;--magenta-10: 77,0,52;--gray-1: 247,248,250;--gray-2: 242,243,245;--gray-3: 229,230,235;--gray-4: 201,205,212;--gray-5: 169,174,184;--gray-6: 134,144,156;--gray-7: 
107,119,133;--gray-8: 78,89,105;--gray-9: 39,46,59;--gray-10: 29,33,41;--success-1: var(--green-1);--success-2: var(--green-2);--success-3: var(--green-3);--success-4: var(--green-4);--success-5: var(--green-5);--success-6: var(--green-6);--success-7: var(--green-7);--success-8: var(--green-8);--success-9: var(--green-9);--success-10: var(--green-10);--primary-1: var(--arcoblue-1);--primary-2: var(--arcoblue-2);--primary-3: var(--arcoblue-3);--primary-4: var(--arcoblue-4);--primary-5: var(--arcoblue-5);--primary-6: var(--arcoblue-6);--primary-7: var(--arcoblue-7);--primary-8: var(--arcoblue-8);--primary-9: var(--arcoblue-9);--primary-10: var(--arcoblue-10);--danger-1: var(--red-1);--danger-2: var(--red-2);--danger-3: var(--red-3);--danger-4: var(--red-4);--danger-5: var(--red-5);--danger-6: var(--red-6);--danger-7: var(--red-7);--danger-8: var(--red-8);--danger-9: var(--red-9);--danger-10: var(--red-10);--warning-1: var(--orange-1);--warning-2: var(--orange-2);--warning-3: var(--orange-3);--warning-4: var(--orange-4);--warning-5: var(--orange-5);--warning-6: var(--orange-6);--warning-7: var(--orange-7);--warning-8: var(--orange-8);--warning-9: var(--orange-9);--warning-10: var(--orange-10);--link-1: var(--arcoblue-1);--link-2: var(--arcoblue-2);--link-3: var(--arcoblue-3);--link-4: var(--arcoblue-4);--link-5: var(--arcoblue-5);--link-6: var(--arcoblue-6);--link-7: var(--arcoblue-7);--link-8: var(--arcoblue-8);--link-9: var(--arcoblue-9);--link-10: var(--arcoblue-10)}body[arco-theme=dark]{--red-1: 77,0,10;--red-2: 119,6,17;--red-3: 161,22,31;--red-4: 203,46,52;--red-5: 245,78,78;--red-6: 247,105,101;--red-7: 249,141,134;--red-8: 251,176,167;--red-9: 253,209,202;--red-10: 255,240,236;--orangered-1: 77,14,0;--orangered-2: 119,30,5;--orangered-3: 162,55,20;--orangered-4: 204,87,41;--orangered-5: 247,126,69;--orangered-6: 249,146,90;--orangered-7: 250,173,125;--orangered-8: 252,198,161;--orangered-9: 253,222,197;--orangered-10: 255,244,235;--orange-1: 77,27,0;--orange-2: 121,48,4;--orange-3: 166,75,10;--orange-4: 210,105,19;--orange-5: 255,141,31;--orange-6: 255,150,38;--orange-7: 255,179,87;--orange-8: 255,205,135;--orange-9: 255,227,184;--orange-10: 255,247,232;--gold-1: 77,45,0;--gold-2: 119,75,4;--gold-3: 162,111,15;--gold-4: 204,150,31;--gold-5: 247,192,52;--gold-6: 249,204,68;--gold-7: 250,220,108;--gold-8: 252,233,149;--gold-9: 253,244,190;--gold-10: 255,252,232;--yellow-1: 77,56,0;--yellow-2: 120,94,7;--yellow-3: 163,134,20;--yellow-4: 207,179,37;--yellow-5: 250,225,60;--yellow-6: 251,233,75;--yellow-7: 252,243,116;--yellow-8: 253,250,157;--yellow-9: 254,254,198;--yellow-10: 254,255,240;--lime-1: 42,77,0;--lime-2: 68,112,6;--lime-3: 98,148,18;--lime-4: 132,183,35;--lime-5: 168,219,57;--lime-6: 184,226,75;--lime-7: 203,233,112;--lime-8: 222,241,152;--lime-9: 238,248,194;--lime-10: 253,255,238;--green-1: 0,77,28;--green-2: 4,102,37;--green-3: 10,128,45;--green-4: 18,154,55;--green-5: 29,180,64;--green-6: 39,195,70;--green-7: 80,210,102;--green-8: 126,225,139;--green-9: 178,240,183;--green-10: 235,255,236;--cyan-1: 0,66,77;--cyan-2: 6,97,108;--cyan-3: 17,131,139;--cyan-4: 31,166,170;--cyan-5: 48,201,201;--cyan-6: 63,212,207;--cyan-7: 102,223,215;--cyan-8: 144,233,225;--cyan-9: 190,244,237;--cyan-10: 240,255,252;--blue-1: 0,26,77;--blue-2: 5,47,120;--blue-3: 19,76,163;--blue-4: 41,113,207;--blue-5: 70,154,250;--blue-6: 90,170,251;--blue-7: 125,193,252;--blue-8: 161,213,253;--blue-9: 198,232,254;--blue-10: 234,248,255;--arcoblue-1: 0,13,77;--arcoblue-2: 4,27,121;--arcoblue-3: 
14,50,166;--arcoblue-4: 29,77,210;--arcoblue-5: 48,111,255;--arcoblue-6: 60,126,255;--arcoblue-7: 104,159,255;--arcoblue-8: 147,190,255;--arcoblue-9: 190,218,255;--arcoblue-10: 234,244,255;--purple-1: 22,0,77;--purple-2: 39,6,110;--purple-3: 62,19,143;--purple-4: 90,37,176;--purple-5: 123,61,209;--purple-6: 142,81,218;--purple-7: 169,116,227;--purple-8: 197,154,237;--purple-9: 223,194,246;--purple-10: 247,237,255;--pinkpurple-1: 66,0,77;--pinkpurple-2: 101,3,112;--pinkpurple-3: 138,13,147;--pinkpurple-4: 176,27,182;--pinkpurple-5: 217,46,217;--pinkpurple-6: 225,61,219;--pinkpurple-7: 232,102,223;--pinkpurple-8: 240,146,230;--pinkpurple-9: 247,193,240;--pinkpurple-10: 255,242,253;--magenta-1: 77,0,52;--magenta-2: 119,8,80;--magenta-3: 161,23,108;--magenta-4: 203,43,136;--magenta-5: 245,69,166;--magenta-6: 247,86,169;--magenta-7: 249,122,184;--magenta-8: 251,158,200;--magenta-9: 253,195,219;--magenta-10: 255,232,241;--gray-1: 23,23,26;--gray-2: 46,46,48;--gray-3: 72,72,73;--gray-4: 95,95,96;--gray-5: 120,120,122;--gray-6: 146,146,147;--gray-7: 171,171,172;--gray-8: 197,197,197;--gray-9: 223,223,223;--gray-10: 246,246,246;--primary-1: var(--arcoblue-1);--primary-2: var(--arcoblue-2);--primary-3: var(--arcoblue-3);--primary-4: var(--arcoblue-4);--primary-5: var(--arcoblue-5);--primary-6: var(--arcoblue-6);--primary-7: var(--arcoblue-7);--primary-8: var(--arcoblue-8);--primary-9: var(--arcoblue-9);--primary-10: var(--arcoblue-10);--success-1: var(--green-1);--success-2: var(--green-2);--success-3: var(--green-3);--success-4: var(--green-4);--success-5: var(--green-5);--success-6: var(--green-6);--success-7: var(--green-7);--success-8: var(--green-8);--success-9: var(--green-9);--success-10: var(--green-10);--danger-1: var(--red-1);--danger-2: var(--red-2);--danger-3: var(--red-3);--danger-4: var(--red-4);--danger-5: var(--red-5);--danger-6: var(--red-6);--danger-7: var(--red-7);--danger-8: var(--red-8);--danger-9: var(--red-9);--danger-10: var(--red-10);--warning-1: var(--orange-1);--warning-2: var(--orange-2);--warning-3: var(--orange-3);--warning-4: var(--orange-4);--warning-5: var(--orange-5);--warning-6: var(--orange-6);--warning-7: var(--orange-7);--warning-8: var(--orange-8);--warning-9: var(--orange-9);--warning-10: var(--orange-10);--link-1: var(--arcoblue-1);--link-2: var(--arcoblue-2);--link-3: var(--arcoblue-3);--link-4: var(--arcoblue-4);--link-5: var(--arcoblue-5);--link-6: var(--arcoblue-6);--link-7: var(--arcoblue-7);--link-8: var(--arcoblue-8);--link-9: var(--arcoblue-9);--link-10: var(--arcoblue-10)}body{--color-white: #ffffff;--color-black: #000000;--color-border: rgb(var(--gray-3));--color-bg-popup: var(--color-bg-5);--color-bg-1: #fff;--color-bg-2: #fff;--color-bg-3: #fff;--color-bg-4: #fff;--color-bg-5: #fff;--color-bg-white: #fff;--color-neutral-1: rgb(var(--gray-1));--color-neutral-2: rgb(var(--gray-2));--color-neutral-3: rgb(var(--gray-3));--color-neutral-4: rgb(var(--gray-4));--color-neutral-5: rgb(var(--gray-5));--color-neutral-6: rgb(var(--gray-6));--color-neutral-7: rgb(var(--gray-7));--color-neutral-8: rgb(var(--gray-8));--color-neutral-9: rgb(var(--gray-9));--color-neutral-10: rgb(var(--gray-10));--color-text-1: var(--color-neutral-10);--color-text-2: var(--color-neutral-8);--color-text-3: var(--color-neutral-6);--color-text-4: var(--color-neutral-4);--color-border-1: var(--color-neutral-2);--color-border-2: var(--color-neutral-3);--color-border-3: var(--color-neutral-4);--color-border-4: var(--color-neutral-6);--color-fill-1: var(--color-neutral-1);--color-fill-2: 
var(--color-neutral-2);--color-fill-3: var(--color-neutral-3);--color-fill-4: var(--color-neutral-4);--color-primary-light-1: rgb(var(--primary-1));--color-primary-light-2: rgb(var(--primary-2));--color-primary-light-3: rgb(var(--primary-3));--color-primary-light-4: rgb(var(--primary-4));--color-link-light-1: rgb(var(--link-1));--color-link-light-2: rgb(var(--link-2));--color-link-light-3: rgb(var(--link-3));--color-link-light-4: rgb(var(--link-4));--color-secondary: var(--color-neutral-2);--color-secondary-hover: var(--color-neutral-3);--color-secondary-active: var(--color-neutral-4);--color-secondary-disabled: var(--color-neutral-1);--color-danger-light-1: rgb(var(--danger-1));--color-danger-light-2: rgb(var(--danger-2));--color-danger-light-3: rgb(var(--danger-3));--color-danger-light-4: rgb(var(--danger-4));--color-success-light-1: rgb(var(--success-1));--color-success-light-2: rgb(var(--success-2));--color-success-light-3: rgb(var(--success-3));--color-success-light-4: rgb(var(--success-4));--color-warning-light-1: rgb(var(--warning-1));--color-warning-light-2: rgb(var(--warning-2));--color-warning-light-3: rgb(var(--warning-3));--color-warning-light-4: rgb(var(--warning-4));--border-radius-none: 0;--border-radius-small: 2px;--border-radius-medium: 4px;--border-radius-large: 8px;--border-radius-circle: 50%;--color-tooltip-bg: rgb(var(--gray-10));--color-spin-layer-bg: rgba(255, 255, 255, .6);--color-menu-dark-bg: #232324;--color-menu-light-bg: #ffffff;--color-menu-dark-hover: rgba(255, 255, 255, .04);--color-mask-bg: rgba(29, 33, 41, .6)}body[arco-theme=dark]{--color-white: rgba(255, 255, 255, .9);--color-black: #000000;--color-border: #333335;--color-bg-1: #17171a;--color-bg-2: #232324;--color-bg-3: #2a2a2b;--color-bg-4: #313132;--color-bg-5: #373739;--color-bg-white: #f6f6f6;--color-text-1: rgba(255, 255, 255, .9);--color-text-2: rgba(255, 255, 255, .7);--color-text-3: rgba(255, 255, 255, .5);--color-text-4: rgba(255, 255, 255, .3);--color-fill-1: rgba(255, 255, 255, .04);--color-fill-2: rgba(255, 255, 255, .08);--color-fill-3: rgba(255, 255, 255, .12);--color-fill-4: rgba(255, 255, 255, .16);--color-primary-light-1: rgba(var(--primary-6), .2);--color-primary-light-2: rgba(var(--primary-6), .35);--color-primary-light-3: rgba(var(--primary-6), .5);--color-primary-light-4: rgba(var(--primary-6), .65);--color-secondary: rgba(var(--gray-9), .08);--color-secondary-hover: rgba(var(--gray-8), .16);--color-secondary-active: rgba(var(--gray-7), .24);--color-secondary-disabled: rgba(var(--gray-9), .08);--color-danger-light-1: rgba(var(--danger-6), .2);--color-danger-light-2: rgba(var(--danger-6), .35);--color-danger-light-3: rgba(var(--danger-6), .5);--color-danger-light-4: rgba(var(--danger-6), .65);--color-success-light-1: rgb(var(--success-6), .2);--color-success-light-2: rgb(var(--success-6), .35);--color-success-light-3: rgb(var(--success-6), .5);--color-success-light-4: rgb(var(--success-6), .65);--color-warning-light-1: rgb(var(--warning-6), .2);--color-warning-light-2: rgb(var(--warning-6), .35);--color-warning-light-3: rgb(var(--warning-6), .5);--color-warning-light-4: rgb(var(--warning-6), .65);--color-link-light-1: rgb(var(--link-6), .2);--color-link-light-2: rgb(var(--link-6), .35);--color-link-light-3: rgb(var(--link-6), .5);--color-link-light-4: rgb(var(--link-6), .65);--color-tooltip-bg: #373739;--color-spin-layer-bg: rgba(51, 51, 51, .6);--color-menu-dark-bg: #232324;--color-menu-light-bg: #232324;--color-menu-dark-hover: var(--color-fill-2);--color-mask-bg: rgba(23, 23, 26, 
.6)}body{font-size:14px;font-family:Inter,-apple-system,BlinkMacSystemFont,PingFang SC,Hiragino Sans GB,noto sans,Microsoft YaHei,Helvetica Neue,Helvetica,Arial,sans-serif}.arco-trigger-wrapper{display:inline-block}.arco-trigger-popup{position:absolute;z-index:1000}.arco-trigger-arrow{position:absolute;z-index:-1;display:block;box-sizing:border-box;width:8px;height:8px;background-color:var(--color-bg-5);content:""}.arco-trigger-popup[trigger-placement=top] .arco-trigger-arrow,.arco-trigger-popup[trigger-placement=tl] .arco-trigger-arrow,.arco-trigger-popup[trigger-placement=tr] .arco-trigger-arrow{border-top:none;border-left:none;border-bottom-right-radius:var(--border-radius-small)}.arco-trigger-popup[trigger-placement=bottom] .arco-trigger-arrow,.arco-trigger-popup[trigger-placement=bl] .arco-trigger-arrow,.arco-trigger-popup[trigger-placement=br] .arco-trigger-arrow{border-right:none;border-bottom:none;border-top-left-radius:var(--border-radius-small)}.arco-trigger-popup[trigger-placement=left] .arco-trigger-arrow,.arco-trigger-popup[trigger-placement=lt] .arco-trigger-arrow,.arco-trigger-popup[trigger-placement=lb] .arco-trigger-arrow{border-bottom:none;border-left:none;border-top-right-radius:var(--border-radius-small)}.arco-trigger-popup[trigger-placement=right] .arco-trigger-arrow,.arco-trigger-popup[trigger-placement=rt] .arco-trigger-arrow,.arco-trigger-popup[trigger-placement=rb] .arco-trigger-arrow{border-top:none;border-right:none;border-bottom-left-radius:var(--border-radius-small)}.arco-auto-tooltip{display:block;overflow:hidden;white-space:nowrap;text-overflow:ellipsis}.arco-input-label{display:inline-flex;box-sizing:border-box;width:100%;padding-right:12px;padding-left:12px;color:var(--color-text-1);font-size:14px;background-color:var(--color-fill-2);border:1px solid transparent;border-radius:var(--border-radius-small);cursor:text;transition:color .1s cubic-bezier(0,0,1,1),border-color .1s cubic-bezier(0,0,1,1),background-color .1s cubic-bezier(0,0,1,1);cursor:pointer}.arco-input-label.arco-input-label-search{cursor:text}.arco-input-label.arco-input-label-search .arco-input-label-input,.arco-input-label.arco-input-label-search .arco-input-label-value{pointer-events:none}.arco-input-label:hover{background-color:var(--color-fill-3);border-color:transparent}.arco-input-label:focus-within,.arco-input-label.arco-input-label-focus{background-color:var(--color-bg-2);border-color:rgb(var(--primary-6));box-shadow:0 0 0 0 var(--color-primary-light-2)}.arco-input-label.arco-input-label-disabled{color:var(--color-text-4);background-color:var(--color-fill-2);border-color:transparent;cursor:not-allowed}.arco-input-label.arco-input-label-disabled:hover{color:var(--color-text-4);background-color:var(--color-fill-2);border-color:transparent}.arco-input-label.arco-input-label-disabled .arco-input-label-prefix,.arco-input-label.arco-input-label-disabled .arco-input-label-suffix{color:inherit}.arco-input-label.arco-input-label-error{background-color:var(--color-danger-light-1);border-color:transparent}.arco-input-label.arco-input-label-error:hover{background-color:var(--color-danger-light-2);border-color:transparent}.arco-input-label.arco-input-label-error:focus-within,.arco-input-label.arco-input-label-error.arco-input-label-focus{background-color:var(--color-bg-2);border-color:rgb(var(--danger-6));box-shadow:0 0 0 0 var(--color-danger-light-2)}.arco-input-label .arco-input-label-prefix,.arco-input-label 
.arco-input-label-suffix{display:inline-flex;flex-shrink:0;align-items:center;white-space:nowrap;user-select:none}.arco-input-label .arco-input-label-prefix>svg,.arco-input-label .arco-input-label-suffix>svg{font-size:14px}.arco-input-label .arco-input-label-prefix{padding-right:12px;color:var(--color-text-2)}.arco-input-label .arco-input-label-suffix{padding-left:12px;color:var(--color-text-2)}.arco-input-label .arco-input-label-suffix .arco-feedback-icon{display:inline-flex}.arco-input-label .arco-input-label-suffix .arco-feedback-icon-status-validating{color:rgb(var(--primary-6))}.arco-input-label .arco-input-label-suffix .arco-feedback-icon-status-success{color:rgb(var(--success-6))}.arco-input-label .arco-input-label-suffix .arco-feedback-icon-status-warning{color:rgb(var(--warning-6))}.arco-input-label .arco-input-label-suffix .arco-feedback-icon-status-error{color:rgb(var(--danger-6))}.arco-input-label .arco-input-label-clear-btn{align-self:center;color:var(--color-text-2);font-size:12px;visibility:hidden;cursor:pointer}.arco-input-label .arco-input-label-clear-btn>svg{position:relative;transition:color .1s cubic-bezier(0,0,1,1)}.arco-input-label:hover .arco-input-label-clear-btn{visibility:visible}.arco-input-label:not(.arco-input-label-focus) .arco-input-label-icon-hover:hover:before{background-color:var(--color-fill-4)}.arco-input-label .arco-input-label-input{width:100%;padding-right:0;padding-left:0;color:inherit;line-height:1.5715;background:none;border:none;border-radius:0;outline:none;cursor:inherit;-webkit-appearance:none;-webkit-tap-highlight-color:rgba(0,0,0,0)}.arco-input-label .arco-input-label-input::placeholder{color:var(--color-text-3)}.arco-input-label .arco-input-label-input[disabled]::placeholder{color:var(--color-text-4)}.arco-input-label .arco-input-label-input[disabled]{-webkit-text-fill-color:var(--color-text-4)}.arco-input-label .arco-input-label-input-hidden{position:absolute;width:0!important}.arco-input-label .arco-input-label-value{display:flex;align-items:center;box-sizing:border-box;width:100%;overflow:hidden;white-space:nowrap;text-overflow:ellipsis}.arco-input-label .arco-input-label-value:after{font-size:0;line-height:0;visibility:hidden;content:"."}.arco-input-label .arco-input-label-value-hidden{display:none}.arco-input-label.arco-input-label-size-mini .arco-input-label-input,.arco-input-label.arco-input-label-size-mini .arco-input-label-value{padding-top:1px;padding-bottom:1px;font-size:12px;line-height:1.667}.arco-input-label.arco-input-label-size-mini .arco-input-label-value{min-height:22px}.arco-input-label.arco-input-label-size-medium .arco-input-label-input,.arco-input-label.arco-input-label-size-medium .arco-input-label-value{padding-top:4px;padding-bottom:4px;font-size:14px;line-height:1.5715}.arco-input-label.arco-input-label-size-medium .arco-input-label-value{min-height:30px}.arco-input-label.arco-input-label-size-small .arco-input-label-input,.arco-input-label.arco-input-label-size-small .arco-input-label-value{padding-top:2px;padding-bottom:2px;font-size:14px;line-height:1.5715}.arco-input-label.arco-input-label-size-small .arco-input-label-value{min-height:26px}.arco-input-label.arco-input-label-size-large .arco-input-label-input,.arco-input-label.arco-input-label-size-large .arco-input-label-value{padding-top:6px;padding-bottom:6px;font-size:14px;line-height:1.5715}.arco-input-label.arco-input-label-size-large 
.arco-input-label-value{min-height:34px}.arco-picker{position:relative;display:inline-flex;align-items:center;box-sizing:border-box;padding:4px 11px 4px 4px;line-height:1.5715;background-color:var(--color-fill-2);border:1px solid transparent;border-radius:var(--border-radius-small);transition:all .1s cubic-bezier(0,0,1,1)}.arco-picker-input{display:inline-flex;flex:1}.arco-picker input{width:100%;padding:0 0 0 8px;color:var(--color-text-2);line-height:1.5715;text-align:left;background-color:transparent;border:none;outline:none;transition:all .1s cubic-bezier(0,0,1,1)}.arco-picker input::placeholder{color:var(--color-text-3)}.arco-picker input[disabled]{-webkit-text-fill-color:var(--color-text-4)}.arco-picker-has-prefix{padding-left:12px}.arco-picker-prefix{padding-right:4px;color:var(--color-text-2);font-size:14px}.arco-picker-suffix{display:inline-flex;align-items:center;margin-left:4px}.arco-picker-suffix .arco-feedback-icon{display:inline-flex}.arco-picker-suffix .arco-feedback-icon-status-validating{color:rgb(var(--primary-6))}.arco-picker-suffix .arco-feedback-icon-status-success{color:rgb(var(--success-6))}.arco-picker-suffix .arco-feedback-icon-status-warning{color:rgb(var(--warning-6))}.arco-picker-suffix .arco-feedback-icon-status-error{color:rgb(var(--danger-6))}.arco-picker-suffix .arco-feedback-icon{margin-left:4px}.arco-picker-suffix-icon{color:var(--color-text-2)}.arco-picker .arco-picker-clear-icon{display:none;color:var(--color-text-2);font-size:12px}.arco-picker:hover{background-color:var(--color-fill-3);border-color:transparent}.arco-picker:not(.arco-picker-disabled):hover .arco-picker-clear-icon{display:inline-block}.arco-picker:not(.arco-picker-disabled):hover .arco-picker-suffix .arco-picker-clear-icon+span{display:none}.arco-picker input[disabled]{color:var(--color-text-4);cursor:not-allowed}.arco-picker input[disabled]::placeholder{color:var(--color-text-4)}.arco-picker-error{background-color:var(--color-danger-light-1);border-color:transparent}.arco-picker-error:hover{background-color:var(--color-danger-light-2);border-color:transparent}.arco-picker-focused{box-shadow:0 0 0 0 var(--color-primary-light-2)}.arco-picker-focused,.arco-picker-focused:hover{background-color:var(--color-bg-2);border-color:rgb(var(--primary-6))}.arco-picker-focused.arco-picker-error{border-color:rgb(var(--danger-6));box-shadow:0 0 0 0 var(--color-danger-light-2)}.arco-picker-focused .arco-picker-input-active input,.arco-picker-focused:hover .arco-picker-input-active input{background:var(--color-fill-2)}.arco-picker-disabled,.arco-picker-disabled:hover{color:var(--color-text-4);background-color:var(--color-fill-2);border-color:transparent;cursor:not-allowed}.arco-picker-disabled input[disabled],.arco-picker-disabled:hover input[disabled]{color:var(--color-text-4);cursor:not-allowed}.arco-picker-disabled input[disabled]::placeholder,.arco-picker-disabled:hover input[disabled]::placeholder{color:var(--color-text-4)}.arco-picker-separator{min-width:10px;padding:0 8px;color:var(--color-text-3)}.arco-picker-disabled .arco-picker-separator,.arco-picker-disabled .arco-picker-suffix-icon{color:var(--color-text-4)}.arco-picker-size-mini{height:24px}.arco-picker-size-mini input{font-size:12px}.arco-picker-size-small{height:28px}.arco-picker-size-small input{font-size:14px}.arco-picker-size-medium{height:32px}.arco-picker-size-medium input{font-size:14px}.arco-picker-size-large{height:36px}.arco-picker-size-large 
input{font-size:14px}.arco-select-view-single{display:inline-flex;box-sizing:border-box;width:100%;padding-right:12px;padding-left:12px;color:var(--color-text-1);font-size:14px;background-color:var(--color-fill-2);border:1px solid transparent;border-radius:var(--border-radius-small);cursor:text;transition:color .1s cubic-bezier(0,0,1,1),border-color .1s cubic-bezier(0,0,1,1),background-color .1s cubic-bezier(0,0,1,1);cursor:pointer}.arco-select-view-single.arco-select-view-search{cursor:text}.arco-select-view-single.arco-select-view-search .arco-select-view-input,.arco-select-view-single.arco-select-view-search .arco-select-view-value{pointer-events:none}.arco-select-view-single:hover{background-color:var(--color-fill-3);border-color:transparent}.arco-select-view-single:focus-within,.arco-select-view-single.arco-select-view-focus{background-color:var(--color-bg-2);border-color:rgb(var(--primary-6));box-shadow:0 0 0 0 var(--color-primary-light-2)}.arco-select-view-single.arco-select-view-disabled{color:var(--color-text-4);background-color:var(--color-fill-2);border-color:transparent;cursor:not-allowed}.arco-select-view-single.arco-select-view-disabled:hover{color:var(--color-text-4);background-color:var(--color-fill-2);border-color:transparent}.arco-select-view-single.arco-select-view-disabled .arco-select-view-prefix,.arco-select-view-single.arco-select-view-disabled .arco-select-view-suffix{color:inherit}.arco-select-view-single.arco-select-view-error{background-color:var(--color-danger-light-1);border-color:transparent}.arco-select-view-single.arco-select-view-error:hover{background-color:var(--color-danger-light-2);border-color:transparent}.arco-select-view-single.arco-select-view-error:focus-within,.arco-select-view-single.arco-select-view-error.arco-select-view-single-focus{background-color:var(--color-bg-2);border-color:rgb(var(--danger-6));box-shadow:0 0 0 0 var(--color-danger-light-2)}.arco-select-view-single .arco-select-view-prefix,.arco-select-view-single .arco-select-view-suffix{display:inline-flex;flex-shrink:0;align-items:center;white-space:nowrap;user-select:none}.arco-select-view-single .arco-select-view-prefix>svg,.arco-select-view-single .arco-select-view-suffix>svg{font-size:14px}.arco-select-view-single .arco-select-view-prefix{padding-right:12px;color:var(--color-text-2)}.arco-select-view-single .arco-select-view-suffix{padding-left:12px;color:var(--color-text-2)}.arco-select-view-single .arco-select-view-suffix .arco-feedback-icon{display:inline-flex}.arco-select-view-single .arco-select-view-suffix .arco-feedback-icon-status-validating{color:rgb(var(--primary-6))}.arco-select-view-single .arco-select-view-suffix .arco-feedback-icon-status-success{color:rgb(var(--success-6))}.arco-select-view-single .arco-select-view-suffix .arco-feedback-icon-status-warning{color:rgb(var(--warning-6))}.arco-select-view-single .arco-select-view-suffix .arco-feedback-icon-status-error{color:rgb(var(--danger-6))}.arco-select-view-single .arco-select-view-clear-btn{align-self:center;color:var(--color-text-2);font-size:12px;visibility:hidden;cursor:pointer}.arco-select-view-single .arco-select-view-clear-btn>svg{position:relative;transition:color .1s cubic-bezier(0,0,1,1)}.arco-select-view-single:hover .arco-select-view-clear-btn{visibility:visible}.arco-select-view-single:not(.arco-select-view-focus) .arco-select-view-icon-hover:hover:before{background-color:var(--color-fill-4)}.arco-select-view-single 
.arco-select-view-input{width:100%;padding-right:0;padding-left:0;color:inherit;line-height:1.5715;background:none;border:none;border-radius:0;outline:none;cursor:inherit;-webkit-appearance:none;-webkit-tap-highlight-color:rgba(0,0,0,0)}.arco-select-view-single .arco-select-view-input::placeholder{color:var(--color-text-3)}.arco-select-view-single .arco-select-view-input[disabled]::placeholder{color:var(--color-text-4)}.arco-select-view-single .arco-select-view-input[disabled]{-webkit-text-fill-color:var(--color-text-4)}.arco-select-view-single .arco-select-view-input-hidden{position:absolute;width:0!important}.arco-select-view-single .arco-select-view-value{display:flex;align-items:center;box-sizing:border-box;width:100%;overflow:hidden;white-space:nowrap;text-overflow:ellipsis}.arco-select-view-single .arco-select-view-value:after{font-size:0;line-height:0;visibility:hidden;content:"."}.arco-select-view-single .arco-select-view-value-hidden{display:none}.arco-select-view-single.arco-select-view-size-mini .arco-select-view-input,.arco-select-view-single.arco-select-view-size-mini .arco-select-view-value{padding-top:1px;padding-bottom:1px;font-size:12px;line-height:1.667}.arco-select-view-single.arco-select-view-size-mini .arco-select-view-value{min-height:22px}.arco-select-view-single.arco-select-view-size-medium .arco-select-view-input,.arco-select-view-single.arco-select-view-size-medium .arco-select-view-value{padding-top:4px;padding-bottom:4px;font-size:14px;line-height:1.5715}.arco-select-view-single.arco-select-view-size-medium .arco-select-view-value{min-height:30px}.arco-select-view-single.arco-select-view-size-small .arco-select-view-input,.arco-select-view-single.arco-select-view-size-small .arco-select-view-value{padding-top:2px;padding-bottom:2px;font-size:14px;line-height:1.5715}.arco-select-view-single.arco-select-view-size-small .arco-select-view-value{min-height:26px}.arco-select-view-single.arco-select-view-size-large .arco-select-view-input,.arco-select-view-single.arco-select-view-size-large .arco-select-view-value{padding-top:6px;padding-bottom:6px;font-size:14px;line-height:1.5715}.arco-select-view-single.arco-select-view-size-large .arco-select-view-value{min-height:34px}.arco-select-view-multiple{display:inline-flex;box-sizing:border-box;width:100%;padding-right:12px;padding-left:12px;color:var(--color-text-1);font-size:14px;background-color:var(--color-fill-2);border:1px solid transparent;border-radius:var(--border-radius-small);cursor:text;transition:color .1s cubic-bezier(0,0,1,1),border-color .1s cubic-bezier(0,0,1,1),background-color .1s cubic-bezier(0,0,1,1)}.arco-select-view-multiple:hover{background-color:var(--color-fill-3);border-color:transparent}.arco-select-view-multiple:focus-within,.arco-select-view-multiple.arco-select-view-focus{background-color:var(--color-bg-2);border-color:rgb(var(--primary-6));box-shadow:0 0 0 0 var(--color-primary-light-2)}.arco-select-view-multiple.arco-select-view-disabled{color:var(--color-text-4);background-color:var(--color-fill-2);border-color:transparent;cursor:not-allowed}.arco-select-view-multiple.arco-select-view-disabled:hover{color:var(--color-text-4);background-color:var(--color-fill-2);border-color:transparent}.arco-select-view-multiple.arco-select-view-disabled .arco-select-view-prefix,.arco-select-view-multiple.arco-select-view-disabled 
.arco-select-view-suffix{color:inherit}.arco-select-view-multiple.arco-select-view-error{background-color:var(--color-danger-light-1);border-color:transparent}.arco-select-view-multiple.arco-select-view-error:hover{background-color:var(--color-danger-light-2);border-color:transparent}.arco-select-view-multiple.arco-select-view-error:focus-within,.arco-select-view-multiple.arco-select-view-error.arco-select-view-multiple-focus{background-color:var(--color-bg-2);border-color:rgb(var(--danger-6));box-shadow:0 0 0 0 var(--color-danger-light-2)}.arco-select-view-multiple .arco-select-view-prefix,.arco-select-view-multiple .arco-select-view-suffix{display:inline-flex;flex-shrink:0;align-items:center;white-space:nowrap;user-select:none}.arco-select-view-multiple .arco-select-view-prefix>svg,.arco-select-view-multiple .arco-select-view-suffix>svg{font-size:14px}.arco-select-view-multiple .arco-select-view-prefix{padding-right:12px;color:var(--color-text-2)}.arco-select-view-multiple .arco-select-view-suffix{padding-left:12px;color:var(--color-text-2)}.arco-select-view-multiple .arco-select-view-suffix .arco-feedback-icon{display:inline-flex}.arco-select-view-multiple .arco-select-view-suffix .arco-feedback-icon-status-validating{color:rgb(var(--primary-6))}.arco-select-view-multiple .arco-select-view-suffix .arco-feedback-icon-status-success{color:rgb(var(--success-6))}.arco-select-view-multiple .arco-select-view-suffix .arco-feedback-icon-status-warning{color:rgb(var(--warning-6))}.arco-select-view-multiple .arco-select-view-suffix .arco-feedback-icon-status-error{color:rgb(var(--danger-6))}.arco-select-view-multiple .arco-select-view-clear-btn{align-self:center;color:var(--color-text-2);font-size:12px;visibility:hidden;cursor:pointer}.arco-select-view-multiple .arco-select-view-clear-btn>svg{position:relative;transition:color .1s cubic-bezier(0,0,1,1)}.arco-select-view-multiple:hover .arco-select-view-clear-btn{visibility:visible}.arco-select-view-multiple:not(.arco-select-view-focus) .arco-select-view-icon-hover:hover:before{background-color:var(--color-fill-4)}.arco-select-view-multiple.arco-select-view-has-tag{padding-right:4px;padding-left:4px}.arco-select-view-multiple.arco-select-view-has-prefix{padding-left:12px}.arco-select-view-multiple.arco-select-view-has-suffix{padding-right:12px}.arco-select-view-multiple .arco-select-view-inner{flex:1;overflow:hidden;line-height:0}.arco-select-view-multiple .arco-select-view-inner .arco-select-view-tag{display:inline-flex;align-items:center;margin-right:4px;color:var(--color-text-1);font-size:12px;white-space:pre-wrap;word-break:break-word;background-color:var(--color-bg-2);border-color:var(--color-fill-3)}.arco-select-view-multiple .arco-select-view-inner .arco-select-view-tag .arco-icon-hover:hover:before{background-color:var(--color-fill-2)}.arco-select-view-multiple .arco-select-view-inner .arco-select-view-tag.arco-tag-custom-color{color:var(--color-white)}.arco-select-view-multiple .arco-select-view-inner .arco-select-view-tag.arco-tag-custom-color .arco-icon-hover.arco-tag-icon-hover:hover:before{background-color:#fff3}.arco-select-view-multiple .arco-select-view-inner .arco-select-view-input{width:100%;padding-right:0;padding-left:0;color:inherit;line-height:1.5715;background:none;border:none;border-radius:0;outline:none;cursor:inherit;-webkit-appearance:none;-webkit-tap-highlight-color:rgba(0,0,0,0);box-sizing:border-box}.arco-select-view-multiple .arco-select-view-inner 
.arco-select-view-input::placeholder{color:var(--color-text-3)}.arco-select-view-multiple .arco-select-view-inner .arco-select-view-input[disabled]::placeholder{color:var(--color-text-4)}.arco-select-view-multiple .arco-select-view-inner .arco-select-view-input[disabled]{-webkit-text-fill-color:var(--color-text-4)}.arco-select-view-multiple .arco-select-view-mirror{position:absolute;top:0;left:0;white-space:pre;visibility:hidden;pointer-events:none}.arco-select-view-multiple.arco-select-view-focus .arco-select-view-tag{background-color:var(--color-fill-2);border-color:var(--color-fill-2)}.arco-select-view-multiple.arco-select-view-focus .arco-select-view-tag .arco-icon-hover:hover:before{background-color:var(--color-fill-3)}.arco-select-view-multiple.arco-select-view-disabled .arco-select-view-tag{color:var(--color-text-4);background-color:var(--color-fill-2);border-color:var(--color-fill-3)}.arco-select-view-multiple.arco-select-view-readonly,.arco-select-view-multiple.arco-select-view-disabled-input{cursor:default}.arco-select-view-multiple.arco-select-view-size-mini{font-size:12px}.arco-select-view-multiple.arco-select-view-size-mini .arco-select-view-inner{padding-top:0;padding-bottom:0}.arco-select-view-multiple.arco-select-view-size-mini .arco-select-view-tag,.arco-select-view-multiple.arco-select-view-size-mini .arco-select-view-input{margin-top:1px;margin-bottom:1px;line-height:18px;vertical-align:middle}.arco-select-view-multiple.arco-select-view-size-mini .arco-select-view-tag{height:auto;min-height:20px}.arco-select-view-multiple.arco-select-view-size-mini .arco-select-view-input{height:20px}.arco-select-view-multiple.arco-select-view-size-medium{font-size:14px}.arco-select-view-multiple.arco-select-view-size-medium .arco-select-view-inner{padding-top:2px;padding-bottom:2px}.arco-select-view-multiple.arco-select-view-size-medium .arco-select-view-tag,.arco-select-view-multiple.arco-select-view-size-medium .arco-select-view-input{margin-top:1px;margin-bottom:1px;line-height:22px;vertical-align:middle}.arco-select-view-multiple.arco-select-view-size-medium .arco-select-view-tag{height:auto;min-height:24px}.arco-select-view-multiple.arco-select-view-size-medium .arco-select-view-input{height:24px}.arco-select-view-multiple.arco-select-view-size-small{font-size:14px}.arco-select-view-multiple.arco-select-view-size-small .arco-select-view-inner{padding-top:2px;padding-bottom:2px}.arco-select-view-multiple.arco-select-view-size-small .arco-select-view-tag,.arco-select-view-multiple.arco-select-view-size-small .arco-select-view-input{margin-top:1px;margin-bottom:1px;line-height:18px;vertical-align:middle}.arco-select-view-multiple.arco-select-view-size-small .arco-select-view-tag{height:auto;min-height:20px}.arco-select-view-multiple.arco-select-view-size-small .arco-select-view-input{height:20px}.arco-select-view-multiple.arco-select-view-size-large{font-size:14px}.arco-select-view-multiple.arco-select-view-size-large .arco-select-view-inner{padding-top:2px;padding-bottom:2px}.arco-select-view-multiple.arco-select-view-size-large .arco-select-view-tag,.arco-select-view-multiple.arco-select-view-size-large .arco-select-view-input{margin-top:1px;margin-bottom:1px;line-height:26px;vertical-align:middle}.arco-select-view-multiple.arco-select-view-size-large .arco-select-view-tag{height:auto;min-height:28px}.arco-select-view-multiple.arco-select-view-size-large 
.arco-select-view-input{height:28px}.arco-select-view-multiple.arco-select-view-disabled-input{cursor:pointer}.arco-select-view.arco-select-view-borderless{background:none!important;border:none!important;box-shadow:none!important}.arco-select-view-suffix .arco-feedback-icon{margin-left:4px}.arco-select-view-clear-btn svg,.arco-select-view-icon svg{display:block;font-size:12px}.arco-select-view-opened .arco-select-view-arrow-icon{transform:rotate(180deg)}.arco-select-view-expand-icon{transform:rotate(-45deg)}.arco-select-view-clear-btn{display:none;cursor:pointer}.arco-select-view:hover .arco-select-view-clear-btn{display:block}.arco-select-view:hover .arco-select-view-clear-btn~*{display:none}.arco-affix{position:fixed;z-index:999}.arco-alert{display:flex;align-items:center;box-sizing:border-box;width:100%;padding:8px 15px;overflow:hidden;font-size:14px;line-height:1.5715;text-align:left;border-radius:var(--border-radius-small)}.arco-alert-with-title{align-items:flex-start;padding:15px}.arco-alert-normal{background-color:var(--color-neutral-2);border:1px solid transparent}.arco-alert-info{background-color:var(--color-primary-light-1);border:1px solid transparent}.arco-alert-success{background-color:var(--color-success-light-1);border:1px solid transparent}.arco-alert-warning{background-color:var(--color-warning-light-1);border:1px solid transparent}.arco-alert-error{background-color:var(--color-danger-light-1);border:1px solid transparent}.arco-alert-banner{border:none;border-radius:0}.arco-alert-body{position:relative;flex:1}.arco-alert-title{margin-bottom:4px;font-weight:500;font-size:16px;line-height:1.5}.arco-alert-normal .arco-alert-title,.arco-alert-normal .arco-alert-content{color:var(--color-text-1)}.arco-alert-normal.arco-alert-with-title .arco-alert-content{color:var(--color-text-2)}.arco-alert-info .arco-alert-title,.arco-alert-info .arco-alert-content{color:var(--color-text-1)}.arco-alert-info.arco-alert-with-title .arco-alert-content{color:var(--color-text-2)}.arco-alert-success .arco-alert-title,.arco-alert-success .arco-alert-content{color:var(--color-text-1)}.arco-alert-success.arco-alert-with-title .arco-alert-content{color:var(--color-text-2)}.arco-alert-warning .arco-alert-title,.arco-alert-warning .arco-alert-content{color:var(--color-text-1)}.arco-alert-warning.arco-alert-with-title .arco-alert-content{color:var(--color-text-2)}.arco-alert-error .arco-alert-title,.arco-alert-error .arco-alert-content{color:var(--color-text-1)}.arco-alert-error.arco-alert-with-title .arco-alert-content{color:var(--color-text-2)}.arco-alert-icon{margin-right:8px}.arco-alert-icon svg{font-size:16px;vertical-align:-3px}.arco-alert-with-title .arco-alert-icon svg{font-size:18px;vertical-align:-5px}.arco-alert-normal .arco-alert-icon svg{color:var(--color-neutral-4)}.arco-alert-info .arco-alert-icon svg{color:rgb(var(--primary-6))}.arco-alert-success .arco-alert-icon svg{color:rgb(var(--success-6))}.arco-alert-warning .arco-alert-icon svg{color:rgb(var(--warning-6))}.arco-alert-error .arco-alert-icon svg{color:rgb(var(--danger-6))}.arco-alert-close-btn{top:4px;right:0;box-sizing:border-box;margin-left:8px;padding:0;color:var(--color-text-2);font-size:12px;background-color:transparent;border:none;outline:none;cursor:pointer;transition:color .1s cubic-bezier(0,0,1,1)}.arco-alert-close-btn:hover{color:var(--color-text-1)}.arco-alert-action+.arco-alert-close-btn{margin-left:8px}.arco-alert-action{margin-left:8px}.arco-alert-with-title 
.arco-alert-close-btn{margin-top:0;margin-right:0}.arco-anchor{position:relative;width:150px;overflow:auto}.arco-anchor-line-slider{position:absolute;top:0;left:0;z-index:1;width:2px;height:12px;margin-top:9.0005px;background-color:rgb(var(--primary-6));transition:top .2s cubic-bezier(.34,.69,.1,1)}.arco-anchor-list{position:relative;margin-top:0;margin-bottom:0;margin-left:4px;padding-left:0;list-style:none}.arco-anchor-list:before{position:absolute;left:-4px;width:2px;height:100%;background-color:var(--color-fill-3);content:""}.arco-anchor-sublist{margin-top:0;margin-bottom:0;padding-left:0;list-style:none}.arco-anchor-link-item{margin-bottom:2px}.arco-anchor-link-item .arco-anchor-link{display:block;margin-bottom:2px;padding:4px 8px;overflow:hidden;color:var(--color-text-2);font-size:14px;line-height:1.5715;white-space:nowrap;text-decoration:none;text-overflow:ellipsis;border-radius:var(--border-radius-small);cursor:pointer}.arco-anchor-link-item .arco-anchor-link:hover{color:var(--color-text-1);font-weight:500;background-color:var(--color-fill-2)}.arco-anchor-link-active>.arco-anchor-link{color:var(--color-text-1);font-weight:500;transition:all .1s cubic-bezier(0,0,1,1)}.arco-anchor-link-item .arco-anchor-link-item{margin-left:16px}.arco-anchor-line-less .arco-anchor-list{margin-left:0}.arco-anchor-line-less .arco-anchor-list:before{display:none}.arco-anchor-line-less .arco-anchor-link-active>.arco-anchor-link{color:rgb(var(--primary-6));font-weight:500;background-color:var(--color-fill-2)}.arco-autocomplete-popup .arco-select-popup{background-color:var(--color-bg-popup);border:1px solid var(--color-fill-3);border-radius:var(--border-radius-medium);box-shadow:0 4px 10px #0000001a}.arco-autocomplete-popup .arco-select-popup .arco-select-popup-inner{max-height:200px;padding:4px 0}.arco-autocomplete-popup .arco-select-popup .arco-select-option{height:36px;padding:0 12px;font-size:14px;line-height:36px;color:var(--color-text-1);background-color:var(--color-bg-popup)}.arco-autocomplete-popup .arco-select-popup .arco-select-option-selected{color:var(--color-text-1);background-color:var(--color-bg-popup)}.arco-autocomplete-popup .arco-select-popup .arco-select-option-hover{color:var(--color-text-1);background-color:var(--color-fill-2)}.arco-autocomplete-popup .arco-select-popup .arco-select-option-disabled{color:var(--color-text-4);background-color:var(--color-bg-popup)}.arco-autocomplete-popup .arco-select-popup .arco-select-option-selected{font-weight:500}.arco-avatar{position:relative;display:inline-flex;align-items:center;box-sizing:border-box;width:40px;height:40px;color:var(--color-white);font-size:20px;white-space:nowrap;vertical-align:middle;background-color:var(--color-fill-4)}.arco-avatar-circle{border-radius:var(--border-radius-circle)}.arco-avatar-circle .arco-avatar-image{overflow:hidden;border-radius:var(--border-radius-circle)}.arco-avatar-square{border-radius:var(--border-radius-medium)}.arco-avatar-square .arco-avatar-image{overflow:hidden;border-radius:var(--border-radius-medium)}.arco-avatar-text{position:absolute;left:50%;font-weight:500;line-height:1;transform:translate(-50%);transform-origin:0 center}.arco-avatar-image{display:inline-block;width:100%;height:100%}.arco-avatar-image-icon{display:flex;align-items:center;justify-content:center;width:100%;height:100%}.arco-avatar-image img,.arco-avatar-image 
picture{width:100%;height:100%}.arco-avatar-trigger-icon-button{position:absolute;right:-4px;bottom:-4px;z-index:1;width:20px;height:20px;color:var(--color-fill-4);font-size:12px;line-height:20px;text-align:center;background-color:var(--color-neutral-2);border-radius:var(--border-radius-circle);transition:background-color .1s cubic-bezier(0,0,1,1)}.arco-avatar-trigger-icon-mask{position:absolute;top:0;left:0;z-index:0;display:flex;align-items:center;justify-content:center;width:100%;height:100%;color:var(--color-white);font-size:16px;background-color:#1d212999;border-radius:var(--border-radius-medium);opacity:0;transition:all .1s cubic-bezier(0,0,1,1)}.arco-avatar-circle .arco-avatar-trigger-icon-mask{border-radius:var(--border-radius-circle)}.arco-avatar-with-trigger-icon{cursor:pointer}.arco-avatar-with-trigger-icon:hover .arco-avatar-trigger-icon-mask{z-index:2;opacity:1}.arco-avatar-with-trigger-icon:hover .arco-avatar-trigger-icon-button{background-color:var(--color-neutral-3)}.arco-avatar-group{display:inline-block;line-height:0}.arco-avatar-group-max-count-avatar{color:var(--color-white);font-size:20px;cursor:default}.arco-avatar-group .arco-avatar{border:2px solid var(--color-bg-2)}.arco-avatar-group .arco-avatar:not(:first-child){margin-left:-10px}.arco-avatar-group-popover .arco-avatar:not(:first-child){margin-left:4px}.arco-back-top{position:fixed;right:24px;bottom:24px;z-index:100}.arco-back-top-btn{width:40px;height:40px;color:var(--color-white);font-size:12px;text-align:center;background-color:rgb(var(--primary-6));border:none;border-radius:var(--border-radius-circle);outline:none;cursor:pointer;transition:all .2s cubic-bezier(0,0,1,1)}.arco-back-top-btn:hover{background-color:rgb(var(--primary-5))}.arco-back-top-btn svg{font-size:14px}.arco-badge{position:relative;display:inline-block;line-height:1}.arco-badge-number,.arco-badge-dot,.arco-badge-text,.arco-badge-custom-dot{position:absolute;top:2px;right:2px;z-index:2;box-sizing:border-box;overflow:hidden;text-align:center;border-radius:20px;transform:translate(50%,-50%);transform-origin:100% 0%}.arco-badge-custom-dot{background-color:var(--color-bg-2)}.arco-badge-number,.arco-badge-text{min-width:20px;height:20px;padding:0 6px;color:var(--color-white);font-weight:500;font-size:12px;line-height:20px;background-color:rgb(var(--danger-6));box-shadow:0 0 0 2px var(--color-bg-2)}.arco-badge-dot{width:6px;height:6px;background-color:rgb(var(--danger-6));border-radius:var(--border-radius-circle);box-shadow:0 0 0 2px var(--color-bg-2)}.arco-badge-no-children .arco-badge-dot,.arco-badge-no-children .arco-badge-number,.arco-badge-no-children 
.arco-badge-text{position:relative;top:unset;right:unset;display:inline-block;transform:none}.arco-badge-status-wrapper{display:inline-flex;align-items:center}.arco-badge-status-dot{display:inline-block;width:6px;height:6px;border-radius:var(--border-radius-circle)}.arco-badge-status-normal{background-color:var(--color-fill-4)}.arco-badge-status-processing{background-color:rgb(var(--primary-6))}.arco-badge-status-success{background-color:rgb(var(--success-6))}.arco-badge-status-warning{background-color:rgb(var(--warning-6))}.arco-badge-status-danger,.arco-badge-color-red{background-color:rgb(var(--danger-6))}.arco-badge-color-orangered{background-color:#f77234}.arco-badge-color-orange{background-color:rgb(var(--orange-6))}.arco-badge-color-gold{background-color:rgb(var(--gold-6))}.arco-badge-color-lime{background-color:rgb(var(--lime-6))}.arco-badge-color-green{background-color:rgb(var(--success-6))}.arco-badge-color-cyan{background-color:rgb(var(--cyan-6))}.arco-badge-color-arcoblue{background-color:rgb(var(--primary-6))}.arco-badge-color-purple{background-color:rgb(var(--purple-6))}.arco-badge-color-pinkpurple{background-color:rgb(var(--pinkpurple-6))}.arco-badge-color-magenta{background-color:rgb(var(--magenta-6))}.arco-badge-color-gray{background-color:rgb(var(--gray-4))}.arco-badge .arco-badge-status-text{margin-left:8px;color:var(--color-text-1);font-size:12px;line-height:1.5715}.arco-badge-number-text{display:inline-block;animation:arco-badge-scale .5s cubic-bezier(.3,1.3,.3,1)}@keyframes arco-badge-scale{0%{transform:scale(0)}to{transform:scale(1)}}.badge-zoom-enter,.badge-zoom-appear{transform:translate(50%,-50%) scale(.2);transform-origin:center}.badge-zoom-enter-active,.badge-zoom-appear-active{transform:translate(50%,-50%) scale(1);transform-origin:center;opacity:1;transition:opacity .3s cubic-bezier(.3,1.3,.3,1),transform .3s cubic-bezier(.3,1.3,.3,1)}.badge-zoom-exit{transform:translate(50%,-50%) scale(1);transform-origin:center;opacity:1}.badge-zoom-exit-active{transform:translate(50%,-50%) scale(.2);transform-origin:center;opacity:0;transition:opacity .3s cubic-bezier(.3,1.3,.3,1),transform .3s cubic-bezier(.3,1.3,.3,1)}.arco-breadcrumb{display:inline-flex;align-items:center;color:var(--color-text-2);font-size:14px}.arco-breadcrumb-icon{color:var(--color-text-2)}.arco-breadcrumb-item{display:inline-block;padding:0 4px;color:var(--color-text-2);line-height:24px;vertical-align:middle}.arco-breadcrumb-item>.arco-icon{color:var(--color-text-3)}.arco-breadcrumb-item a{display:inline-block;margin:0 -4px;padding:0 4px;color:var(--color-text-2);text-decoration:none;border-radius:var(--border-radius-small);background-color:transparent}.arco-breadcrumb-item a:hover{color:rgb(var(--link-6));background-color:var(--color-fill-2)}.arco-breadcrumb-item:last-child{color:var(--color-text-1);font-weight:500}.arco-breadcrumb-item-ellipses{position:relative;top:-3px;display:inline-block;padding:0 4px;color:var(--color-text-2)}.arco-breadcrumb-item-separator{display:inline-block;margin:0 4px;color:var(--color-text-4);line-height:24px;vertical-align:middle}.arco-breadcrumb-item-with-dropdown{cursor:pointer}.arco-breadcrumb-item-dropdown-icon{margin-left:4px;color:var(--color-text-2);font-size:12px}.arco-breadcrumb-item-dropdown-icon-active svg{transform:rotate(180deg)}.arco-btn{position:relative;display:inline-flex;align-items:center;justify-content:center;box-sizing:border-box;font-weight:400;line-height:1.5715;white-space:nowrap;outline:none;cursor:pointer;transition:all .1s 
cubic-bezier(0,0,1,1);-webkit-appearance:none;user-select:none}.arco-btn>a:only-child{color:currentColor}.arco-btn:active{transition:none}.arco-btn-long{display:flex;width:100%}.arco-btn-link{display:inline-flex;align-items:center;justify-content:center;text-decoration:none}.arco-btn-link:not([href]){color:var(--color-text-4)}.arco-btn-link:hover{text-decoration:none}.arco-btn-link.arco-btn-only-icon{display:inline-flex;align-items:center;justify-content:center;vertical-align:top}.arco-btn.arco-btn-only-icon .arco-btn-icon{display:flex;justify-content:center}.arco-btn-loading{position:relative;cursor:default}.arco-btn-loading:before{position:absolute;top:-1px;right:-1px;bottom:-1px;left:-1px;z-index:1;display:block;background:#fff;border-radius:inherit;opacity:.4;transition:opacity .1s cubic-bezier(0,0,1,1);content:"";pointer-events:none}.arco-btn-loading-fixed-width{transition:none}.arco-btn-two-chinese-chars>*:not(svg){margin-right:-.3em;letter-spacing:.3em}.arco-btn-outline,.arco-btn-outline[type=button],.arco-btn-outline[type=submit]{color:rgb(var(--primary-6));background-color:transparent;border:1px solid rgb(var(--primary-6))}.arco-btn-outline:hover,.arco-btn-outline[type=button]:hover,.arco-btn-outline[type=submit]:hover{color:rgb(var(--primary-5));background-color:transparent;border-color:rgb(var(--primary-5))}.arco-btn-outline:focus-visible,.arco-btn-outline[type=button]:focus-visible,.arco-btn-outline[type=submit]:focus-visible{box-shadow:0 0 0 .25em rgb(var(--primary-3))}.arco-btn-outline:active,.arco-btn-outline[type=button]:active,.arco-btn-outline[type=submit]:active{color:rgb(var(--primary-7));background-color:transparent;border-color:rgb(var(--primary-7))}.arco-btn-outline.arco-btn-loading,.arco-btn-outline[type=button].arco-btn-loading,.arco-btn-outline[type=submit].arco-btn-loading{color:rgb(var(--primary-6));background-color:transparent;border:1px solid rgb(var(--primary-6))}.arco-btn-outline.arco-btn-disabled,.arco-btn-outline[type=button].arco-btn-disabled,.arco-btn-outline[type=submit].arco-btn-disabled{color:var(--color-primary-light-3);background-color:transparent;border:1px solid var(--color-primary-light-3);cursor:not-allowed}.arco-btn-outline.arco-btn-status-warning{color:rgb(var(--warning-6));background-color:transparent;border-color:rgb(var(--warning-6))}.arco-btn-outline.arco-btn-status-warning:hover{color:rgb(var(--warning-5));background-color:transparent;border-color:rgb(var(--warning-5))}.arco-btn-outline.arco-btn-status-warning:focus-visible{box-shadow:0 0 0 .25em rgb(var(--warning-3))}.arco-btn-outline.arco-btn-status-warning:active{color:rgb(var(--warning-7));background-color:transparent;border-color:rgb(var(--warning-7))}.arco-btn-outline.arco-btn-status-warning.arco-btn-loading{color:rgb(var(--warning-6));background-color:transparent;border-color:rgb(var(--warning-6))}.arco-btn-outline.arco-btn-status-warning.arco-btn-disabled{color:var(--color-warning-light-3);background-color:transparent;border:1px solid var(--color-warning-light-3)}.arco-btn-outline.arco-btn-status-danger{color:rgb(var(--danger-6));background-color:transparent;border-color:rgb(var(--danger-6))}.arco-btn-outline.arco-btn-status-danger:hover{color:rgb(var(--danger-5));background-color:transparent;border-color:rgb(var(--danger-5))}.arco-btn-outline.arco-btn-status-danger:focus-visible{box-shadow:0 0 0 .25em 
rgb(var(--danger-3))}.arco-btn-outline.arco-btn-status-danger:active{color:rgb(var(--danger-7));background-color:transparent;border-color:rgb(var(--danger-7))}.arco-btn-outline.arco-btn-status-danger.arco-btn-loading{color:rgb(var(--danger-6));background-color:transparent;border-color:rgb(var(--danger-6))}.arco-btn-outline.arco-btn-status-danger.arco-btn-disabled{color:var(--color-danger-light-3);background-color:transparent;border:1px solid var(--color-danger-light-3)}.arco-btn-outline.arco-btn-status-success{color:rgb(var(--success-6));background-color:transparent;border-color:rgb(var(--success-6))}.arco-btn-outline.arco-btn-status-success:hover{color:rgb(var(--success-5));background-color:transparent;border-color:rgb(var(--success-5))}.arco-btn-outline.arco-btn-status-success:focus-visible{box-shadow:0 0 0 .25em rgb(var(--success-3))}.arco-btn-outline.arco-btn-status-success:active{color:rgb(var(--success-7));background-color:transparent;border-color:rgb(var(--success-7))}.arco-btn-outline.arco-btn-status-success.arco-btn-loading{color:rgb(var(--success-6));background-color:transparent;border-color:rgb(var(--success-6))}.arco-btn-outline.arco-btn-status-success.arco-btn-disabled{color:var(--color-success-light-3);background-color:transparent;border:1px solid var(--color-success-light-3)}.arco-btn-primary,.arco-btn-primary[type=button],.arco-btn-primary[type=submit]{color:#fff;background-color:rgb(var(--primary-6));border:1px solid transparent}.arco-btn-primary:hover,.arco-btn-primary[type=button]:hover,.arco-btn-primary[type=submit]:hover{color:#fff;background-color:rgb(var(--primary-5));border-color:transparent}.arco-btn-primary:focus-visible,.arco-btn-primary[type=button]:focus-visible,.arco-btn-primary[type=submit]:focus-visible{box-shadow:0 0 0 .25em rgb(var(--primary-3))}.arco-btn-primary:active,.arco-btn-primary[type=button]:active,.arco-btn-primary[type=submit]:active{color:#fff;background-color:rgb(var(--primary-7));border-color:transparent}.arco-btn-primary.arco-btn-loading,.arco-btn-primary[type=button].arco-btn-loading,.arco-btn-primary[type=submit].arco-btn-loading{color:#fff;background-color:rgb(var(--primary-6));border:1px solid transparent}.arco-btn-primary.arco-btn-disabled,.arco-btn-primary[type=button].arco-btn-disabled,.arco-btn-primary[type=submit].arco-btn-disabled{color:#fff;background-color:var(--color-primary-light-3);border:1px solid transparent;cursor:not-allowed}.arco-btn-primary.arco-btn-status-warning{color:#fff;background-color:rgb(var(--warning-6));border-color:transparent}.arco-btn-primary.arco-btn-status-warning:hover{color:#fff;background-color:rgb(var(--warning-5));border-color:transparent}.arco-btn-primary.arco-btn-status-warning:focus-visible{box-shadow:0 0 0 .25em rgb(var(--warning-3))}.arco-btn-primary.arco-btn-status-warning:active{color:#fff;background-color:rgb(var(--warning-7));border-color:transparent}.arco-btn-primary.arco-btn-status-warning.arco-btn-loading{color:#fff;background-color:rgb(var(--warning-6));border-color:transparent}.arco-btn-primary.arco-btn-status-warning.arco-btn-disabled{color:#fff;background-color:var(--color-warning-light-3);border:1px solid transparent}.arco-btn-primary.arco-btn-status-danger{color:#fff;background-color:rgb(var(--danger-6));border-color:transparent}.arco-btn-primary.arco-btn-status-danger:hover{color:#fff;background-color:rgb(var(--danger-5));border-color:transparent}.arco-btn-primary.arco-btn-status-danger:focus-visible{box-shadow:0 0 0 .25em 
rgb(var(--danger-3))}.arco-btn-primary.arco-btn-status-danger:active{color:#fff;background-color:rgb(var(--danger-7));border-color:transparent}.arco-btn-primary.arco-btn-status-danger.arco-btn-loading{color:#fff;background-color:rgb(var(--danger-6));border-color:transparent}.arco-btn-primary.arco-btn-status-danger.arco-btn-disabled{color:#fff;background-color:var(--color-danger-light-3);border:1px solid transparent}.arco-btn-primary.arco-btn-status-success{color:#fff;background-color:rgb(var(--success-6));border-color:transparent}.arco-btn-primary.arco-btn-status-success:hover{color:#fff;background-color:rgb(var(--success-5));border-color:transparent}.arco-btn-primary.arco-btn-status-success:focus-visible{box-shadow:0 0 0 .25em rgb(var(--success-3))}.arco-btn-primary.arco-btn-status-success:active{color:#fff;background-color:rgb(var(--success-7));border-color:transparent}.arco-btn-primary.arco-btn-status-success.arco-btn-loading{color:#fff;background-color:rgb(var(--success-6));border-color:transparent}.arco-btn-primary.arco-btn-status-success.arco-btn-disabled{color:#fff;background-color:var(--color-success-light-3);border:1px solid transparent}.arco-btn-secondary,.arco-btn-secondary[type=button],.arco-btn-secondary[type=submit]{color:var(--color-text-2);background-color:var(--color-secondary);border:1px solid transparent}.arco-btn-secondary:hover,.arco-btn-secondary[type=button]:hover,.arco-btn-secondary[type=submit]:hover{color:var(--color-text-2);background-color:var(--color-secondary-hover);border-color:transparent}.arco-btn-secondary:focus-visible,.arco-btn-secondary[type=button]:focus-visible,.arco-btn-secondary[type=submit]:focus-visible{box-shadow:0 0 0 .25em var(--color-neutral-4)}.arco-btn-secondary:active,.arco-btn-secondary[type=button]:active,.arco-btn-secondary[type=submit]:active{color:var(--color-text-2);background-color:var(--color-secondary-active);border-color:transparent}.arco-btn-secondary.arco-btn-loading,.arco-btn-secondary[type=button].arco-btn-loading,.arco-btn-secondary[type=submit].arco-btn-loading{color:var(--color-text-2);background-color:var(--color-secondary);border:1px solid transparent}.arco-btn-secondary.arco-btn-disabled,.arco-btn-secondary[type=button].arco-btn-disabled,.arco-btn-secondary[type=submit].arco-btn-disabled{color:var(--color-text-4);background-color:var(--color-secondary-disabled);border:1px solid transparent;cursor:not-allowed}.arco-btn-secondary.arco-btn-status-warning{color:rgb(var(--warning-6));background-color:var(--color-warning-light-1);border-color:transparent}.arco-btn-secondary.arco-btn-status-warning:hover{color:rgb(var(--warning-6));background-color:var(--color-warning-light-2);border-color:transparent}.arco-btn-secondary.arco-btn-status-warning:focus-visible{box-shadow:0 0 0 .25em rgb(var(--warning-3))}.arco-btn-secondary.arco-btn-status-warning:active{color:rgb(var(--warning-6));background-color:var(--color-warning-light-3);border-color:transparent}.arco-btn-secondary.arco-btn-status-warning.arco-btn-loading{color:rgb(var(--warning-6));background-color:var(--color-warning-light-1);border-color:transparent}.arco-btn-secondary.arco-btn-status-warning.arco-btn-disabled{color:var(--color-warning-light-3);background-color:var(--color-warning-light-1);border:1px solid 
transparent}.arco-btn-secondary.arco-btn-status-danger{color:rgb(var(--danger-6));background-color:var(--color-danger-light-1);border-color:transparent}.arco-btn-secondary.arco-btn-status-danger:hover{color:rgb(var(--danger-6));background-color:var(--color-danger-light-2);border-color:transparent}.arco-btn-secondary.arco-btn-status-danger:focus-visible{box-shadow:0 0 0 .25em rgb(var(--danger-3))}.arco-btn-secondary.arco-btn-status-danger:active{color:rgb(var(--danger-6));background-color:var(--color-danger-light-3);border-color:transparent}.arco-btn-secondary.arco-btn-status-danger.arco-btn-loading{color:rgb(var(--danger-6));background-color:var(--color-danger-light-1);border-color:transparent}.arco-btn-secondary.arco-btn-status-danger.arco-btn-disabled{color:var(--color-danger-light-3);background-color:var(--color-danger-light-1);border:1px solid transparent}.arco-btn-secondary.arco-btn-status-success{color:rgb(var(--success-6));background-color:var(--color-success-light-1);border-color:transparent}.arco-btn-secondary.arco-btn-status-success:hover{color:rgb(var(--success-6));background-color:var(--color-success-light-2);border-color:transparent}.arco-btn-secondary.arco-btn-status-success:focus-visible{box-shadow:0 0 0 .25em rgb(var(--success-3))}.arco-btn-secondary.arco-btn-status-success:active{color:rgb(var(--success-6));background-color:var(--color-success-light-3);border-color:transparent}.arco-btn-secondary.arco-btn-status-success.arco-btn-loading{color:rgb(var(--success-6));background-color:var(--color-success-light-1);border-color:transparent}.arco-btn-secondary.arco-btn-status-success.arco-btn-disabled{color:var(--color-success-light-3);background-color:var(--color-success-light-1);border:1px solid transparent}.arco-btn-dashed,.arco-btn-dashed[type=button],.arco-btn-dashed[type=submit]{color:var(--color-text-2);background-color:var(--color-fill-2);border:1px dashed var(--color-neutral-3)}.arco-btn-dashed:hover,.arco-btn-dashed[type=button]:hover,.arco-btn-dashed[type=submit]:hover{color:var(--color-text-2);background-color:var(--color-fill-3);border-color:var(--color-neutral-4)}.arco-btn-dashed:focus-visible,.arco-btn-dashed[type=button]:focus-visible,.arco-btn-dashed[type=submit]:focus-visible{box-shadow:0 0 0 .25em var(--color-neutral-4)}.arco-btn-dashed:active,.arco-btn-dashed[type=button]:active,.arco-btn-dashed[type=submit]:active{color:var(--color-text-2);background-color:var(--color-fill-4);border-color:var(--color-neutral-5)}.arco-btn-dashed.arco-btn-loading,.arco-btn-dashed[type=button].arco-btn-loading,.arco-btn-dashed[type=submit].arco-btn-loading{color:var(--color-text-2);background-color:var(--color-fill-2);border:1px dashed var(--color-neutral-3)}.arco-btn-dashed.arco-btn-disabled,.arco-btn-dashed[type=button].arco-btn-disabled,.arco-btn-dashed[type=submit].arco-btn-disabled{color:var(--color-text-4);background-color:var(--color-fill-2);border:1px dashed var(--color-neutral-3);cursor:not-allowed}.arco-btn-dashed.arco-btn-status-warning{color:rgb(var(--warning-6));background-color:var(--color-warning-light-1);border-color:var(--color-warning-light-2)}.arco-btn-dashed.arco-btn-status-warning:hover{color:rgb(var(--warning-6));background-color:var(--color-warning-light-2);border-color:var(--color-warning-light-3)}.arco-btn-dashed.arco-btn-status-warning:focus-visible{box-shadow:0 0 0 .25em 
rgb(var(--warning-3))}.arco-btn-dashed.arco-btn-status-warning:active{color:rgb(var(--warning-6));background-color:var(--color-warning-light-3);border-color:var(--color-warning-light-4)}.arco-btn-dashed.arco-btn-status-warning.arco-btn-loading{color:rgb(var(--warning-6));background-color:var(--color-warning-light-1);border-color:var(--color-warning-light-2)}.arco-btn-dashed.arco-btn-status-warning.arco-btn-disabled{color:var(--color-warning-light-3);background-color:var(--color-warning-light-1);border:1px dashed var(--color-warning-light-2)}.arco-btn-dashed.arco-btn-status-danger{color:rgb(var(--danger-6));background-color:var(--color-danger-light-1);border-color:var(--color-danger-light-2)}.arco-btn-dashed.arco-btn-status-danger:hover{color:rgb(var(--danger-6));background-color:var(--color-danger-light-2);border-color:var(--color-danger-light-3)}.arco-btn-dashed.arco-btn-status-danger:focus-visible{box-shadow:0 0 0 .25em rgb(var(--danger-3))}.arco-btn-dashed.arco-btn-status-danger:active{color:rgb(var(--danger-6));background-color:var(--color-danger-light-3);border-color:var(--color-danger-light-4)}.arco-btn-dashed.arco-btn-status-danger.arco-btn-loading{color:rgb(var(--danger-6));background-color:var(--color-danger-light-1);border-color:var(--color-danger-light-2)}.arco-btn-dashed.arco-btn-status-danger.arco-btn-disabled{color:var(--color-danger-light-3);background-color:var(--color-danger-light-1);border:1px dashed var(--color-danger-light-2)}.arco-btn-dashed.arco-btn-status-success{color:rgb(var(--success-6));background-color:var(--color-success-light-1);border-color:var(--color-success-light-2)}.arco-btn-dashed.arco-btn-status-success:hover{color:rgb(var(--success-6));background-color:var(--color-success-light-2);border-color:var(--color-success-light-3)}.arco-btn-dashed.arco-btn-status-success:focus-visible{box-shadow:0 0 0 .25em rgb(var(--success-3))}.arco-btn-dashed.arco-btn-status-success:active{color:rgb(var(--success-6));background-color:var(--color-success-light-3);border-color:var(--color-success-light-4)}.arco-btn-dashed.arco-btn-status-success.arco-btn-loading{color:rgb(var(--success-6));background-color:var(--color-success-light-1);border-color:var(--color-success-light-2)}.arco-btn-dashed.arco-btn-status-success.arco-btn-disabled{color:var(--color-success-light-3);background-color:var(--color-success-light-1);border:1px dashed var(--color-success-light-2)}.arco-btn-text,.arco-btn-text[type=button],.arco-btn-text[type=submit]{color:rgb(var(--primary-6));background-color:transparent;border:1px solid transparent}.arco-btn-text:hover,.arco-btn-text[type=button]:hover,.arco-btn-text[type=submit]:hover{color:rgb(var(--primary-6));background-color:var(--color-fill-2);border-color:transparent}.arco-btn-text:focus-visible,.arco-btn-text[type=button]:focus-visible,.arco-btn-text[type=submit]:focus-visible{box-shadow:0 0 0 .25em var(--color-neutral-4)}.arco-btn-text:active,.arco-btn-text[type=button]:active,.arco-btn-text[type=submit]:active{color:rgb(var(--primary-6));background-color:var(--color-fill-3);border-color:transparent}.arco-btn-text.arco-btn-loading,.arco-btn-text[type=button].arco-btn-loading,.arco-btn-text[type=submit].arco-btn-loading{color:rgb(var(--primary-6));background-color:transparent;border:1px solid transparent}.arco-btn-text.arco-btn-disabled,.arco-btn-text[type=button].arco-btn-disabled,.arco-btn-text[type=submit].arco-btn-disabled{color:var(--color-primary-light-3);background-color:transparent;border:1px solid 
transparent;cursor:not-allowed}.arco-btn-text.arco-btn-status-warning{color:rgb(var(--warning-6));background-color:transparent;border-color:transparent}.arco-btn-text.arco-btn-status-warning:hover{color:rgb(var(--warning-6));background-color:var(--color-fill-2);border-color:transparent}.arco-btn-text.arco-btn-status-warning:focus-visible{box-shadow:0 0 0 .25em rgb(var(--warning-3))}.arco-btn-text.arco-btn-status-warning:active{color:rgb(var(--warning-6));background-color:var(--color-fill-3);border-color:transparent}.arco-btn-text.arco-btn-status-warning.arco-btn-loading{color:rgb(var(--warning-6));background-color:transparent;border-color:transparent}.arco-btn-text.arco-btn-status-warning.arco-btn-disabled{color:var(--color-warning-light-3);background-color:transparent;border:1px solid transparent}.arco-btn-text.arco-btn-status-danger{color:rgb(var(--danger-6));background-color:transparent;border-color:transparent}.arco-btn-text.arco-btn-status-danger:hover{color:rgb(var(--danger-6));background-color:var(--color-fill-2);border-color:transparent}.arco-btn-text.arco-btn-status-danger:focus-visible{box-shadow:0 0 0 .25em rgb(var(--danger-3))}.arco-btn-text.arco-btn-status-danger:active{color:rgb(var(--danger-6));background-color:var(--color-fill-3);border-color:transparent}.arco-btn-text.arco-btn-status-danger.arco-btn-loading{color:rgb(var(--danger-6));background-color:transparent;border-color:transparent}.arco-btn-text.arco-btn-status-danger.arco-btn-disabled{color:var(--color-danger-light-3);background-color:transparent;border:1px solid transparent}.arco-btn-text.arco-btn-status-success{color:rgb(var(--success-6));background-color:transparent;border-color:transparent}.arco-btn-text.arco-btn-status-success:hover{color:rgb(var(--success-6));background-color:var(--color-fill-2);border-color:transparent}.arco-btn-text.arco-btn-status-success:focus-visible{box-shadow:0 0 0 .25em rgb(var(--success-3))}.arco-btn-text.arco-btn-status-success:active{color:rgb(var(--success-6));background-color:var(--color-fill-3);border-color:transparent}.arco-btn-text.arco-btn-status-success.arco-btn-loading{color:rgb(var(--success-6));background-color:transparent;border-color:transparent}.arco-btn-text.arco-btn-status-success.arco-btn-disabled{color:var(--color-success-light-3);background-color:transparent;border:1px solid transparent}.arco-btn-size-mini{height:24px;padding:0 11px;font-size:12px;border-radius:var(--border-radius-small)}.arco-btn-size-mini:not(.arco-btn-only-icon) .arco-btn-icon{margin-right:4px}.arco-btn-size-mini svg{vertical-align:-1px}.arco-btn-size-mini.arco-btn-loading-fixed-width.arco-btn-loading{padding-right:3px;padding-left:3px}.arco-btn-size-mini.arco-btn-only-icon{width:24px;height:24px;padding:0}.arco-btn-size-mini.arco-btn-shape-circle{width:24px;height:24px;padding:0;text-align:center;border-radius:var(--border-radius-circle)}.arco-btn-size-mini.arco-btn-shape-round{border-radius:12px}.arco-btn-size-small{height:28px;padding:0 15px;font-size:14px;border-radius:var(--border-radius-small)}.arco-btn-size-small:not(.arco-btn-only-icon) .arco-btn-icon{margin-right:6px}.arco-btn-size-small 
svg{vertical-align:-2px}.arco-btn-size-small.arco-btn-loading-fixed-width.arco-btn-loading{padding-right:5px;padding-left:5px}.arco-btn-size-small.arco-btn-only-icon{width:28px;height:28px;padding:0}.arco-btn-size-small.arco-btn-shape-circle{width:28px;height:28px;padding:0;text-align:center;border-radius:var(--border-radius-circle)}.arco-btn-size-small.arco-btn-shape-round{border-radius:14px}.arco-btn-size-medium{height:32px;padding:0 15px;font-size:14px;border-radius:var(--border-radius-small)}.arco-btn-size-medium:not(.arco-btn-only-icon) .arco-btn-icon{margin-right:8px}.arco-btn-size-medium svg{vertical-align:-2px}.arco-btn-size-medium.arco-btn-loading-fixed-width.arco-btn-loading{padding-right:4px;padding-left:4px}.arco-btn-size-medium.arco-btn-only-icon{width:32px;height:32px;padding:0}.arco-btn-size-medium.arco-btn-shape-circle{width:32px;height:32px;padding:0;text-align:center;border-radius:var(--border-radius-circle)}.arco-btn-size-medium.arco-btn-shape-round{border-radius:16px}.arco-btn-size-large{height:36px;padding:0 19px;font-size:14px;border-radius:var(--border-radius-small)}.arco-btn-size-large:not(.arco-btn-only-icon) .arco-btn-icon{margin-right:8px}.arco-btn-size-large svg{vertical-align:-2px}.arco-btn-size-large.arco-btn-loading-fixed-width.arco-btn-loading{padding-right:8px;padding-left:8px}.arco-btn-size-large.arco-btn-only-icon{width:36px;height:36px;padding:0}.arco-btn-size-large.arco-btn-shape-circle{width:36px;height:36px;padding:0;text-align:center;border-radius:var(--border-radius-circle)}.arco-btn-size-large.arco-btn-shape-round{border-radius:18px}.arco-btn-group{display:inline-flex;align-items:center}.arco-btn-group .arco-btn-outline:not(:first-child),.arco-btn-group .arco-btn-dashed:not(:first-child){margin-left:-1px}.arco-btn-group .arco-btn-primary:not(:last-child){border-right:1px solid rgb(var(--primary-5))}.arco-btn-group .arco-btn-secondary:not(:last-child){border-right:1px solid var(--color-secondary-hover)}.arco-btn-group .arco-btn-status-warning:not(:last-child){border-right:1px solid rgb(var(--warning-5))}.arco-btn-group .arco-btn-status-danger:not(:last-child){border-right:1px solid rgb(var(--danger-5))}.arco-btn-group .arco-btn-status-success:not(:last-child){border-right:1px solid rgb(var(--success-5))}.arco-btn-group .arco-btn-outline:hover,.arco-btn-group .arco-btn-dashed:hover,.arco-btn-group .arco-btn-outline:active,.arco-btn-group .arco-btn-dashed:active{z-index:2}.arco-btn-group .arco-btn:first-child{border-top-right-radius:0;border-bottom-right-radius:0}.arco-btn-group .arco-btn:last-child{border-top-left-radius:0;border-bottom-left-radius:0}.arco-btn-group .arco-btn:not(:first-child):not(:last-child){border-radius:0}body[arco-theme=dark] .arco-btn-primary.arco-btn-disabled{color:#ffffff4d}.arco-calendar{box-sizing:border-box;border:1px solid var(--color-neutral-3)}.arco-calendar-header{display:flex;padding:24px}.arco-calendar-header-left{position:relative;display:flex;flex:1;align-items:center;height:28px;line-height:28px}.arco-calendar-header-right{position:relative;height:28px}.arco-calendar-header-value{color:var(--color-text-1);font-weight:500;font-size:20px}.arco-calendar-header-icon{width:28px;height:28px;margin-right:12px;color:var(--color-text-2);font-size:12px;line-height:28px;text-align:center;background-color:var(--color-bg-5);border-radius:50%;transition:all .1s cubic-bezier(0,0,1,1);user-select:none}.arco-calendar-header-icon:not(:first-child){margin:0 12px}.arco-calendar-header-icon:focus-visible{box-shadow:0 0 0 2px 
var(--color-primary-light-3)}.arco-calendar-header-icon:not(.arco-calendar-header-icon-hidden){cursor:pointer}.arco-calendar-header-icon:not(.arco-calendar-header-icon-hidden):hover{background-color:var(--color-fill-3)}.arco-calendar .arco-calendar-header-value-year{width:100px;margin-right:8px}.arco-calendar .arco-calendar-header-value-month{width:76px;margin-right:32px}.arco-calendar-month{width:100%}.arco-calendar-month-row{display:flex;height:100px}.arco-calendar-month-row .arco-calendar-cell{flex:1;overflow:hidden;border-bottom:1px solid var(--color-neutral-3)}.arco-calendar-month-row:last-child .arco-calendar-cell{border-bottom:unset}.arco-calendar-month-cell-body{box-sizing:border-box}.arco-calendar-mode-month:not(.arco-calendar-panel) .arco-calendar-cell:not(:last-child){border-right:1px solid var(--color-neutral-3)}.arco-calendar-week-list{display:flex;box-sizing:border-box;width:100%;padding:0;border-bottom:1px solid var(--color-neutral-3)}.arco-calendar-week-list-item{flex:1;padding:20px 16px;color:#7d7d7f;text-align:left}.arco-calendar-cell .arco-calendar-date{box-sizing:border-box;width:100%;height:100%;padding:10px;cursor:pointer}.arco-calendar-cell .arco-calendar-date-circle{width:28px;height:28px;line-height:28px;text-align:center;border-radius:50%}.arco-calendar-date-content{height:70px;overflow-y:auto}.arco-calendar-cell-today .arco-calendar-date-circle{box-sizing:border-box;border:1px solid rgb(var(--primary-6))}.arco-calendar-date-value{color:var(--color-text-4);font-weight:500;font-size:16px}.arco-calendar-cell-in-view .arco-calendar-date-value{color:var(--color-text-1)}.arco-calendar-mode-month .arco-calendar-cell-selected .arco-calendar-date-circle,.arco-calendar-mode-year .arco-calendar-cell-selected .arco-calendar-cell-selected .arco-calendar-date-circle{color:#fff;background-color:rgb(var(--primary-6));border:1px solid rgb(var(--primary-6))}.arco-calendar-mode-year:not(.arco-calendar-panel){min-width:820px}.arco-calendar-mode-year .arco-calendar-header{border-bottom:1px solid var(--color-neutral-3)}.arco-calendar-mode-year .arco-calendar-body{padding:12px}.arco-calendar-mode-year .arco-calendar-year-row{display:flex}.arco-calendar-year-row>.arco-calendar-cell{flex:1;padding:20px 8px}.arco-calendar-year-row>.arco-calendar-cell:not(:last-child){border-right:1px solid var(--color-neutral-3)}.arco-calendar-year-row:not(:last-child)>.arco-calendar-cell{border-bottom:1px solid var(--color-neutral-3)}.arco-calendar-month-with-days .arco-calendar-month-row{height:26px}.arco-calendar-month-with-days .arco-calendar-cell{border-bottom:0}.arco-calendar-month-with-days .arco-calendar-month-cell-body{padding:0}.arco-calendar-month-with-days .arco-calendar-month-title{padding:10px 6px;color:var(--color-text-1);font-weight:500;font-size:16px}.arco-calendar-month-cell{width:100%;font-size:12px}.arco-calendar-month-cell .arco-calendar-week-list{padding:0;border-bottom:unset}.arco-calendar-month-cell .arco-calendar-week-list-item{padding:6px;color:#7d7d7f;text-align:center}.arco-calendar-month-cell .arco-calendar-cell{text-align:center}.arco-calendar-month-cell .arco-calendar-date{padding:2px}.arco-calendar-month-cell .arco-calendar-date-value{font-size:14px}.arco-calendar-month-cell .arco-calendar-date-circle{display:inline-block;width:22px;height:22px;line-height:22px;text-align:center;border-radius:50%}.arco-calendar-panel{background-color:var(--color-bg-5);border:1px solid var(--color-neutral-3)}.arco-calendar-panel .arco-calendar-header{padding:8px 16px;border-bottom:1px solid 
var(--color-neutral-3)}.arco-calendar-panel .arco-calendar-header-value{flex:1;font-size:14px;line-height:24px;text-align:center}.arco-calendar-panel .arco-calendar-header-icon{width:24px;height:24px;margin-right:2px;margin-left:2px;line-height:24px}.arco-calendar-panel .arco-calendar-body{padding:14px 16px}.arco-calendar-panel .arco-calendar-month-cell-body{padding:0}.arco-calendar-panel .arco-calendar-month-row{height:unset}.arco-calendar-panel .arco-calendar-week-list{padding:0;border-bottom:unset}.arco-calendar-panel .arco-calendar-week-list-item{height:32px;padding:0;font-weight:400;line-height:32px;text-align:center}.arco-calendar-panel .arco-calendar-cell,.arco-calendar-panel .arco-calendar-year-row .arco-calendar-cell{box-sizing:border-box;padding:2px 0;text-align:center;border-right:0;border-bottom:0}.arco-calendar-panel .arco-calendar-cell .arco-calendar-date{display:flex;justify-content:center;padding:4px 0}.arco-calendar-panel .arco-calendar-cell .arco-calendar-date-value{min-width:24px;height:24px;font-size:14px;line-height:24px;cursor:pointer}.arco-calendar-panel.arco-calendar-mode-year .arco-calendar-cell{padding:4px 0}.arco-calendar-panel.arco-calendar-mode-year .arco-calendar-cell .arco-calendar-date{padding:4px}.arco-calendar-panel.arco-calendar-mode-year .arco-calendar-cell .arco-calendar-date-value{width:100%;border-radius:12px}.arco-calendar-panel .arco-calendar-cell-selected .arco-calendar-date-value{color:var(--color-white);background-color:rgb(var(--primary-6));border-radius:50%}.arco-calendar-panel .arco-calendar-cell:not(.arco-calendar-cell-selected):not(.arco-calendar-cell-range-start):not(.arco-calendar-cell-range-end):not(.arco-calendar-cell-hover-range-start):not(.arco-calendar-cell-hover-range-end):not(.arco-calendar-cell-disabled):not(.arco-calendar-cell-week) .arco-calendar-date-value:hover{color:rgb(var(--primary-6));background-color:var(--color-primary-light-1);border-radius:50%}.arco-calendar-panel.arco-calendar-mode-year .arco-calendar-cell:not(.arco-calendar-cell-selected):not(.arco-calendar-cell-range-start):not(.arco-calendar-cell-range-end):not(.arco-calendar-cell-hover-range-start):not(.arco-calendar-cell-hover-range-end):not(.arco-calendar-cell-disabled) .arco-calendar-date-value:hover{border-radius:12px}.arco-calendar-panel .arco-calendar-cell-today{position:relative}.arco-calendar-panel .arco-calendar-cell-today:after{position:absolute;bottom:0;left:50%;display:block;width:4px;height:4px;margin-left:-2px;background-color:rgb(var(--primary-6));border-radius:50%;content:""}.arco-calendar-cell-in-range .arco-calendar-date{background-color:var(--color-primary-light-1)}.arco-calendar-cell-range-start .arco-calendar-date{border-radius:16px 0 0 16px}.arco-calendar-cell-range-end .arco-calendar-date{border-radius:0 16px 16px 0}.arco-calendar-cell-in-range-near-hover .arco-calendar-date{border-radius:0}.arco-calendar-cell-range-start .arco-calendar-date-value,.arco-calendar-cell-range-end .arco-calendar-date-value{color:var(--color-white);background-color:rgb(var(--primary-6));border-radius:50%}.arco-calendar-cell-hover-in-range .arco-calendar-date{background-color:var(--color-primary-light-1)}.arco-calendar-cell-hover-range-start .arco-calendar-date{border-radius:16px 0 0 16px}.arco-calendar-cell-hover-range-end .arco-calendar-date{border-radius:0 16px 16px 0}.arco-calendar-cell-hover-range-start .arco-calendar-date-value,.arco-calendar-cell-hover-range-end 
.arco-calendar-date-value{color:var(--color-text-1);background-color:var(--color-primary-light-2);border-radius:50%}.arco-calendar-panel .arco-calendar-cell-disabled>.arco-calendar-date{background-color:var(--color-fill-1);cursor:not-allowed}.arco-calendar-panel .arco-calendar-cell-disabled>.arco-calendar-date>.arco-calendar-date-value{color:var(--color-text-4);background-color:var(--color-fill-1);cursor:not-allowed}.arco-calendar-panel .arco-calendar-footer-btn-wrapper{height:38px;color:var(--color-text-1);line-height:38px;text-align:center;border-top:1px solid var(--color-neutral-3);cursor:pointer}.arco-calendar-rtl{direction:rtl}.arco-calendar-rtl .arco-calendar-header-icon{margin-right:0;margin-left:12px;transform:scaleX(-1)}.arco-calendar-rtl .arco-calendar-week-list-item{text-align:right}.arco-calendar-rtl.arco-calendar-mode-month:not(.arco-calendar-panel) .arco-calendar-cell:not(:last-child){border-right:0;border-left:1px solid var(--color-neutral-3)}.arco-calendar-rtl .arco-calendar-header-value-year{margin-right:0;margin-left:8px}.arco-calendar-rtl .arco-calendar-header-value-month{margin-right:0;margin-left:32px}.arco-card{position:relative;background:var(--color-bg-2);border-radius:var(--border-radius-none);transition:box-shadow .2s cubic-bezier(0,0,1,1)}.arco-card-header{position:relative;display:flex;align-items:center;justify-content:space-between;box-sizing:border-box;overflow:hidden;border-bottom:1px solid var(--color-neutral-3)}.arco-card-header-no-title:before{display:block;content:" "}.arco-card-header-title{flex:1;color:var(--color-text-1);font-weight:500;line-height:1.5715;overflow:hidden;white-space:nowrap;text-overflow:ellipsis}.arco-card-header-extra{color:rgb(var(--primary-6));overflow:hidden;white-space:nowrap;text-overflow:ellipsis}.arco-card-body{color:var(--color-text-2)}.arco-card-cover{overflow:hidden}.arco-card-cover>*{display:block;width:100%}.arco-card-actions{display:flex;align-items:center;justify-content:space-between;margin-top:20px}.arco-card-actions:before{visibility:hidden;content:""}.arco-card-actions-right{display:flex;align-items:center}.arco-card-actions-item{display:flex;align-items:center;justify-content:center;color:var(--color-text-2);cursor:pointer;transition:color .2s cubic-bezier(0,0,1,1);overflow:hidden;white-space:nowrap;text-overflow:ellipsis}.arco-card-actions-item:hover{color:rgb(var(--primary-6))}.arco-card-actions-item:not(:last-child){margin-right:12px}.arco-card-meta-footer{display:flex;align-items:center;justify-content:space-between}.arco-card-meta-footer:last-child{margin-top:20px}.arco-card-meta-footer-only-actions:before{visibility:hidden;content:""}.arco-card-meta-footer .arco-card-actions{margin-top:0}.arco-card-meta-title{color:var(--color-text-1);font-weight:500;overflow:hidden;white-space:nowrap;text-overflow:ellipsis}.arco-card-meta-description:not(:first-child){margin-top:4px}.arco-card-grid{position:relative;box-sizing:border-box;width:33.33%;box-shadow:1px 0 0 0 var(--color-neutral-3),0 1px 0 0 var(--color-neutral-3),1px 1px 0 0 var(--color-neutral-3),1px 0 0 0 var(--color-neutral-3) inset,0 1px 0 0 var(--color-neutral-3) inset}.arco-card-grid:before{position:absolute;top:0;right:0;bottom:0;left:0;transition:box-shadow .2s cubic-bezier(0,0,1,1);content:"";pointer-events:none}.arco-card-grid-hoverable:hover{z-index:1}.arco-card-grid-hoverable:hover:before{box-shadow:0 4px 10px rgb(var(--gray-2))}.arco-card-grid 
.arco-card{background:none;box-shadow:none}.arco-card-contain-grid:not(.arco-card-loading)>.arco-card-body{display:flex;flex-wrap:wrap;margin:0 -1px;padding:0}.arco-card-hoverable:hover{box-shadow:0 4px 10px rgb(var(--gray-2))}.arco-card-bordered{border:1px solid var(--color-neutral-3);border-radius:var(--border-radius-small)}.arco-card-bordered .arco-card-cover{border-radius:var(--border-radius-small) var(--border-radius-small) 0 0}.arco-card-loading .arco-card-body{overflow:hidden;text-align:center}.arco-card-size-medium{font-size:14px}.arco-card-size-medium .arco-card-header{height:46px;padding:10px 16px}.arco-card-size-medium .arco-card-header-title,.arco-card-size-medium .arco-card-meta-title{font-size:16px}.arco-card-size-medium .arco-card-header-extra{font-size:14px}.arco-card-size-medium .arco-card-body{padding:16px}.arco-card-size-small{font-size:14px}.arco-card-size-small .arco-card-header{height:40px;padding:8px 16px}.arco-card-size-small .arco-card-header-title,.arco-card-size-small .arco-card-meta-title{font-size:16px}.arco-card-size-small .arco-card-header-extra{font-size:14px}.arco-card-size-small .arco-card-body{padding:12px 16px}body[arco-theme=dark] .arco-card-grid-hoverable:hover:before,body[arco-theme=dark] .arco-card-hoverable:hover{box-shadow:0 4px 10px rgba(var(--gray-1),40%)}@keyframes arco-carousel-slide-x-in{0%{transform:translate(100%)}to{transform:translate(0)}}@keyframes arco-carousel-slide-x-out{0%{transform:translate(0)}to{transform:translate(-100%)}}@keyframes arco-carousel-slide-x-in-reverse{0%{transform:translate(-100%)}to{transform:translate(0)}}@keyframes arco-carousel-slide-x-out-reverse{0%{transform:translate(0)}to{transform:translate(100%)}}@keyframes arco-carousel-slide-y-in{0%{transform:translateY(100%)}to{transform:translateY(0)}}@keyframes arco-carousel-slide-y-out{0%{transform:translateY(0)}to{transform:translateY(-100%)}}@keyframes arco-carousel-slide-y-in-reverse{0%{transform:translateY(-100%)}to{transform:translateY(0)}}@keyframes arco-carousel-slide-y-out-reverse{0%{transform:translateY(0)}to{transform:translateY(100%)}}@keyframes arco-carousel-card-bottom-to-middle{0%{transform:translate(0) translateZ(-400px);opacity:0}to{transform:translate(0) translateZ(-200px);opacity:.4}}@keyframes arco-carousel-card-middle-to-bottom{0%{transform:translate(-100%) translateZ(-200px);opacity:.4}to{transform:translate(-100%) translateZ(-400px);opacity:0}}@keyframes arco-carousel-card-top-to-middle{0%{transform:translate(-50%) translateZ(0);opacity:1}to{transform:translate(-100%) translateZ(-200px);opacity:.4}}@keyframes arco-carousel-card-middle-to-top{0%{transform:translate(0) translateZ(-200px);opacity:.4}to{transform:translate(-50%) translateZ(0);opacity:1}}@keyframes arco-carousel-card-bottom-to-middle-reverse{0%{transform:translate(-100%) translateZ(-400px);opacity:0}to{transform:translate(-100%) translateZ(-200px);opacity:.4}}@keyframes arco-carousel-card-middle-to-bottom-reverse{0%{transform:translate(0) translateZ(-200px);opacity:.4}to{transform:translate(0) translateZ(-400px);opacity:0}}@keyframes arco-carousel-card-top-to-middle-reverse{0%{transform:translate(-50%) translateZ(0);opacity:1}to{transform:translate(0) translateZ(-200px);opacity:.4}}@keyframes arco-carousel-card-middle-to-top-reverse{0%{transform:translate(-100%) translateZ(-200px);opacity:.4}to{transform:translate(-50%) 
translateZ(0);opacity:1}}.arco-carousel{position:relative}.arco-carousel-indicator-position-outer{margin-bottom:30px}.arco-carousel-slide,.arco-carousel-card,.arco-carousel-fade{position:relative;width:100%;height:100%;overflow:hidden}.arco-carousel-slide>*,.arco-carousel-card>*,.arco-carousel-fade>*{position:absolute;top:0;left:0;width:100%;height:100%;overflow:hidden}.arco-carousel-item-current{z-index:1}.arco-carousel-slide>*:not(.arco-carousel-item-current){display:none;visibility:hidden}.arco-carousel-slide.arco-carousel-horizontal .arco-carousel-item-slide-out{display:block;animation:arco-carousel-slide-x-out}.arco-carousel-slide.arco-carousel-horizontal .arco-carousel-item-slide-in{display:block;animation:arco-carousel-slide-x-in}.arco-carousel-slide.arco-carousel-horizontal.arco-carousel-negative .arco-carousel-item-slide-out{animation:arco-carousel-slide-x-out-reverse}.arco-carousel-slide.arco-carousel-horizontal.arco-carousel-negative .arco-carousel-item-slide-in{animation:arco-carousel-slide-x-in-reverse}.arco-carousel-slide.arco-carousel-vertical .arco-carousel-item-slide-out{display:block;animation:arco-carousel-slide-y-out}.arco-carousel-slide.arco-carousel-vertical .arco-carousel-item-slide-in{display:block;animation:arco-carousel-slide-y-in}.arco-carousel-slide.arco-carousel-vertical.arco-carousel-negative .arco-carousel-item-slide-out{animation:arco-carousel-slide-y-out-reverse}.arco-carousel-slide.arco-carousel-vertical.arco-carousel-negative .arco-carousel-item-slide-in{animation:arco-carousel-slide-y-in-reverse}.arco-carousel-card{perspective:800px}.arco-carousel-card>*{left:50%;transform:translate(-50%) translateZ(-400px);opacity:0;animation:arco-carousel-card-middle-to-bottom}.arco-carousel-card .arco-carousel-item-prev{transform:translate(-100%) translateZ(-200px);opacity:.4;animation:arco-carousel-card-top-to-middle}.arco-carousel-card .arco-carousel-item-next{transform:translate(0) translateZ(-200px);opacity:.4;animation:arco-carousel-card-bottom-to-middle}.arco-carousel-card .arco-carousel-item-current{transform:translate(-50%) translateZ(0);opacity:1;animation:arco-carousel-card-middle-to-top}.arco-carousel-card.arco-carousel-negative>*{animation:arco-carousel-card-middle-to-bottom-reverse}.arco-carousel-card.arco-carousel-negative .arco-carousel-item-prev{animation:arco-carousel-card-bottom-to-middle-reverse}.arco-carousel-card.arco-carousel-negative .arco-carousel-item-next{animation:arco-carousel-card-top-to-middle-reverse}.arco-carousel-card.arco-carousel-negative .arco-carousel-item-current{animation:arco-carousel-card-middle-to-top-reverse}.arco-carousel-fade>*{left:50%;transform:translate(-50%);opacity:0}.arco-carousel-fade .arco-carousel-item-current{opacity:1}.arco-carousel-indicator{position:absolute;display:flex;margin:0;padding:0}.arco-carousel-indicator-wrapper{position:absolute;z-index:2}.arco-carousel-indicator-wrapper-top{top:0;right:0;left:0;height:48px;background:linear-gradient(180deg,rgba(0,0,0,.15) 0%,rgba(0,0,0,0) 87%)}.arco-carousel-indicator-wrapper-bottom{right:0;bottom:0;left:0;height:48px;background:linear-gradient(180deg,rgba(0,0,0,0) 13%,rgba(0,0,0,.15) 100%)}.arco-carousel-indicator-wrapper-left{top:0;left:0;width:48px;height:100%;background:linear-gradient(90deg,rgba(0,0,0,.15) 0%,rgba(0,0,0,0) 87%)}.arco-carousel-indicator-wrapper-right{top:0;right:0;width:48px;height:100%;background:linear-gradient(90deg,rgba(0,0,0,0) 13%,rgba(0,0,0,.15) 
100%)}.arco-carousel-indicator-wrapper-outer{right:0;left:0;background:none}.arco-carousel-indicator-bottom{bottom:12px;left:50%;transform:translate(-50%)}.arco-carousel-indicator-top{top:12px;left:50%;transform:translate(-50%)}.arco-carousel-indicator-left{top:50%;left:12px;transform:translate(-50%,-50%) rotate(90deg)}.arco-carousel-indicator-right{top:50%;right:12px;transform:translate(50%,-50%) rotate(90deg)}.arco-carousel-indicator-outer{left:50%;padding:4px;background-color:transparent;border-radius:20px;transform:translate(-50%)}.arco-carousel-indicator-outer.arco-carousel-indicator-dot{bottom:-22px}.arco-carousel-indicator-outer.arco-carousel-indicator-line{bottom:-20px}.arco-carousel-indicator-outer.arco-carousel-indicator-slider{bottom:-16px;padding:0;background-color:rgba(var(--gray-4),.5)}.arco-carousel-indicator-outer .arco-carousel-indicator-item{background-color:rgba(var(--gray-4),.5)}.arco-carousel-indicator-outer .arco-carousel-indicator-item:hover,.arco-carousel-indicator-outer .arco-carousel-indicator-item-active{background-color:var(--color-fill-4)}.arco-carousel-indicator-item{display:inline-block;background-color:#ffffff4d;border-radius:var(--border-radius-medium);cursor:pointer}.arco-carousel-indicator-item:hover,.arco-carousel-indicator-item-active{background-color:var(--color-white)}.arco-carousel-indicator-dot .arco-carousel-indicator-item{width:6px;height:6px;border-radius:50%}.arco-carousel-indicator-dot .arco-carousel-indicator-item:not(:last-child){margin-right:8px}.arco-carousel-indicator-line .arco-carousel-indicator-item{width:12px;height:4px}.arco-carousel-indicator-line .arco-carousel-indicator-item:not(:last-child){margin-right:8px}.arco-carousel-indicator-slider{width:48px;height:4px;background-color:#ffffff4d;border-radius:var(--border-radius-medium);cursor:pointer}.arco-carousel-indicator-slider .arco-carousel-indicator-item{position:absolute;top:0;height:100%;transition:left .3s}.arco-carousel-arrow>div{position:absolute;z-index:2;display:flex;align-items:center;justify-content:center;width:24px;height:24px;color:var(--color-white);background-color:#ffffff4d;border-radius:50%;cursor:pointer}.arco-carousel-arrow>div>svg{color:var(--color-white);font-size:14px}.arco-carousel-arrow>div:hover{background-color:#ffffff80}.arco-carousel-arrow-left{top:50%;left:12px;transform:translateY(-50%)}.arco-carousel-arrow-right{top:50%;right:12px;transform:translateY(-50%)}.arco-carousel-arrow-top{top:12px;left:50%;transform:translate(-50%)}.arco-carousel-arrow-bottom{bottom:12px;left:50%;transform:translate(-50%)}.arco-carousel-arrow-hover div{opacity:0;transition:all .3s}.arco-carousel:hover .arco-carousel-arrow-hover div{opacity:1}body[arco-theme=dark] .arco-carousel-arrow>div{background-color:#17171a4d}body[arco-theme=dark] .arco-carousel-arrow>div:hover{background-color:#17171a80}body[arco-theme=dark] .arco-carousel-indicator-item,body[arco-theme=dark] .arco-carousel-indicator-slider{background-color:#17171a4d}body[arco-theme=dark] .arco-carousel-indicator-item-active,body[arco-theme=dark] .arco-carousel-indicator-item:hover{background-color:var(--color-white)}body[arco-theme=dark] .arco-carousel-indicator-outer.arco-carousel-indicator-slider{background-color:rgba(var(--gray-4),.5)}body[arco-theme=dark] .arco-carousel-indicator-outer .arco-carousel-indicator-item:hover,body[arco-theme=dark] .arco-carousel-indicator-outer 
.arco-carousel-indicator-item-active{background-color:var(--color-fill-4)}.arco-cascader-panel{display:inline-flex;box-sizing:border-box;height:200px;overflow:hidden;white-space:nowrap;list-style:none;background-color:var(--color-bg-popup);border:1px solid var(--color-fill-3);border-radius:var(--border-radius-medium);box-shadow:0 4px 10px #0000001a}.arco-cascader-search-panel{justify-content:center;width:100%;overflow:auto}.arco-cascader-popup-trigger-hover .arco-cascader-list-item{transition:fontweight 0s}.arco-cascader-highlight{font-weight:500}.arco-cascader-panel-column{position:relative;display:inline-flex;flex-direction:column;min-width:120px;height:100%;max-height:200px;background-color:var(--color-bg-popup)}.arco-cascader-panel-column-loading{display:inline-flex;align-items:center;justify-content:center}.arco-cascader-panel-column:not(:last-of-type){border-right:1px solid var(--color-fill-3)}.arco-cascader-column-content{flex:1;max-height:200px;overflow-y:auto}.arco-cascader-list-wrapper{position:relative;display:flex;flex-direction:column;box-sizing:border-box;height:100%;padding:4px 0}.arco-cascader-list-wrapper-with-footer{padding-bottom:0}.arco-cascader-list-empty{display:flex;align-items:center;height:100%}.arco-cascader-list{flex:1;box-sizing:border-box;margin:0;padding:0;list-style:none}.arco-cascader-list-multiple .arco-cascader-option-label,.arco-cascader-list-strictly .arco-cascader-option-label{padding-left:0}.arco-cascader-list-multiple .arco-cascader-option,.arco-cascader-list-strictly .arco-cascader-option{padding-left:12px}.arco-cascader-list-multiple .arco-cascader-option .arco-checkbox,.arco-cascader-list-strictly .arco-cascader-option .arco-checkbox,.arco-cascader-list-multiple .arco-cascader-option .arco-radio,.arco-cascader-list-strictly .arco-cascader-option .arco-radio{margin-right:8px;padding-left:0}.arco-cascader-search-list.arco-cascader-list-multiple .arco-cascader-option-label{padding-right:12px}.arco-cascader-list-footer{box-sizing:border-box;height:36px;padding-left:12px;line-height:36px;border-top:1px solid var(--color-fill-3)}.arco-cascader-option,.arco-cascader-search-option{position:relative;display:flex;box-sizing:border-box;min-width:100px;height:36px;color:var(--color-text-1);font-size:14px;line-height:36px;background-color:transparent;cursor:pointer}.arco-cascader-option-label,.arco-cascader-search-option-label{flex-grow:1;padding-right:34px;padding-left:12px}.arco-cascader-option .arco-icon-right,.arco-cascader-search-option .arco-icon-right,.arco-cascader-option .arco-icon-check,.arco-cascader-search-option .arco-icon-check{position:absolute;top:50%;right:10px;color:var(--color-text-2);font-size:12px;transform:translateY(-50%)}.arco-cascader-option .arco-icon-check,.arco-cascader-search-option .arco-icon-check{color:rgb(var(--primary-6))}.arco-cascader-option .arco-icon-loading,.arco-cascader-search-option .arco-icon-loading{position:absolute;top:50%;right:10px;margin-top:-6px;color:rgb(var(--primary-6));font-size:12px}.arco-cascader-option:hover,.arco-cascader-search-option-hover{color:var(--color-text-1);background-color:var(--color-fill-2)}.arco-cascader-option:hover .arco-checkbox:not(.arco-checkbox-disabled):not(.arco-checkbox-checked):hover .arco-checkbox-icon-hover:before,.arco-cascader-search-option-hover .arco-checkbox:not(.arco-checkbox-disabled):not(.arco-checkbox-checked):hover .arco-checkbox-icon-hover:before{background-color:var(--color-fill-3)}.arco-cascader-option:hover 
.arco-radio:not(.arco-radio-disabled):not(.arco-radio-checked):hover .arco-radio-icon-hover:before,.arco-cascader-search-option-hover .arco-radio:not(.arco-radio-disabled):not(.arco-radio-checked):hover .arco-radio-icon-hover:before{background-color:var(--color-fill-3)}.arco-cascader-option-disabled,.arco-cascader-search-option-disabled,.arco-cascader-option-disabled:hover,.arco-cascader-search-option-disabled:hover{color:var(--color-text-4);background-color:transparent;cursor:not-allowed}.arco-cascader-option-disabled .arco-icon-right,.arco-cascader-search-option-disabled .arco-icon-right,.arco-cascader-option-disabled:hover .arco-icon-right,.arco-cascader-search-option-disabled:hover .arco-icon-right{color:inherit}.arco-cascader-option-disabled .arco-icon-check,.arco-cascader-search-option-disabled .arco-icon-check,.arco-cascader-option-disabled:hover .arco-icon-check,.arco-cascader-search-option-disabled:hover .arco-icon-check{color:var(--color-primary-light-3)}.arco-cascader-option-active{color:var(--color-text-1);background-color:var(--color-fill-2);transition:all .2s cubic-bezier(0,0,1,1)}.arco-cascader-option-active:hover{color:var(--color-text-1);background-color:var(--color-fill-2)}.arco-cascader-option-active.arco-cascader-option-disabled,.arco-cascader-option-active.arco-cascader-option-disabled:hover{color:var(--color-text-4);background-color:var(--color-fill-2)}.cascader-slide-enter-active,.cascader-slide-leave-active{transition:margin .3s cubic-bezier(.34,.69,.1,1)}.cascader-slide-enter-from,.cascader-slide-leave-to{margin-left:-120px}.cascader-slide-enter-to,.cascader-slide-leave-from{margin-left:0}.arco-icon-hover.arco-checkbox-icon-hover:before{width:24px;height:24px}.arco-checkbox{position:relative;display:inline-flex;align-items:center;box-sizing:border-box;padding-left:5px;font-size:14px;line-height:unset;cursor:pointer}.arco-checkbox>input[type=checkbox]{position:absolute;top:0;left:0;width:0;height:0;opacity:0}.arco-checkbox>input[type=checkbox]:focus-visible+.arco-checkbox-icon-hover:before{background-color:var(--color-fill-2)}.arco-checkbox:hover .arco-checkbox-icon-hover:before{background-color:var(--color-fill-2)}.arco-checkbox-label{margin-left:8px;color:var(--color-text-1)}.arco-checkbox-icon{position:relative;box-sizing:border-box;width:14px;height:14px;background-color:var(--color-bg-2);border:2px solid var(--color-fill-3);border-radius:var(--border-radius-small);user-select:none}.arco-checkbox-icon:after{position:absolute;top:50%;left:50%;display:block;width:6px;height:2px;background:var(--color-white);border-radius:.5px;transform:translate(-50%) translateY(-50%) scale(0);content:""}.arco-checkbox-icon-check{position:relative;display:block;width:8px;height:100%;margin:0 auto;color:var(--color-white);transform:scale(0);transform-origin:center 75%}.arco-checkbox:hover .arco-checkbox-icon{border-color:var(--color-fill-4);transition:border-color .1s cubic-bezier(0,0,1,1),transform .3s cubic-bezier(.3,1.3,.3,1)}.arco-checkbox-checked:hover .arco-checkbox-icon,.arco-checkbox-indeterminate:hover .arco-checkbox-icon{transition:transform .3s cubic-bezier(.3,1.3,.3,1)}.arco-checkbox-checked .arco-checkbox-icon{background-color:rgb(var(--primary-6));border-color:transparent}.arco-checkbox-checked .arco-checkbox-icon-check{transform:scale(1);transition:transform .3s cubic-bezier(.3,1.3,.3,1)}.arco-checkbox-indeterminate .arco-checkbox-icon{background-color:rgb(var(--primary-6));border-color:transparent}.arco-checkbox-indeterminate .arco-checkbox-icon 
svg{transform:scale(0)}.arco-checkbox-indeterminate .arco-checkbox-icon:after{transform:translate(-50%) translateY(-50%) scale(1);transition:transform .3s cubic-bezier(.3,1.3,.3,1)}.arco-checkbox.arco-checkbox-disabled,.arco-checkbox.arco-checkbox-disabled .arco-checkbox-icon-hover{cursor:not-allowed}.arco-checkbox.arco-checkbox-disabled:hover .arco-checkbox-mask{border-color:var(--color-fill-3)}.arco-checkbox-checked:hover .arco-checkbox-icon,.arco-checkbox-indeterminate:hover .arco-checkbox-icon{border-color:transparent}.arco-checkbox-disabled .arco-checkbox-icon{background-color:var(--color-fill-2);border-color:var(--color-fill-3)}.arco-checkbox-disabled.arco-checkbox-checked .arco-checkbox-icon,.arco-checkbox-disabled.arco-checkbox-checked:hover .arco-checkbox-icon{background-color:var(--color-primary-light-3);border-color:transparent}.arco-checkbox-disabled:hover .arco-checkbox-icon-hover:before,.arco-checkbox-checked:hover .arco-checkbox-icon-hover:before,.arco-checkbox-indeterminate:hover .arco-checkbox-icon-hover:before{background-color:transparent}.arco-checkbox-disabled:hover .arco-checkbox-icon{border-color:var(--color-fill-3)}.arco-checkbox-disabled .arco-checkbox-label{color:var(--color-text-4)}.arco-checkbox-disabled .arco-checkbox-icon-check{color:var(--color-fill-3)}.arco-checkbox-group{display:inline-block}.arco-checkbox-group .arco-checkbox{margin-right:16px}.arco-checkbox-group-direction-vertical .arco-checkbox{display:flex;margin-right:0;line-height:32px}.arco-icon-hover.arco-collapse-item-icon-hover:before{width:16px;height:16px}.arco-icon-hover.arco-collapse-item-icon-hover:hover:before{background-color:var(--color-fill-2)}.arco-collapse{overflow:hidden;line-height:1.5715;border:1px solid var(--color-neutral-3);border-radius:var(--border-radius-medium)}.arco-collapse-item{box-sizing:border-box;border-bottom:1px solid var(--color-border-2)}.arco-collapse-item-active>.arco-collapse-item-header{background-color:var(--color-bg-2);border-color:var(--color-neutral-3);transition:border-color 0s ease 0s}.arco-collapse-item-active>.arco-collapse-item-header .arco-collapse-item-header-title{font-weight:500}.arco-collapse-item-active>.arco-collapse-item-header .arco-collapse-item-expand-icon{transform:rotate(90deg)}.arco-collapse-item-active>.arco-collapse-item-header .arco-collapse-item-icon-right .arco-collapse-item-expand-icon{transform:rotate(-90deg)}.arco-collapse-item-header{position:relative;display:flex;align-items:center;justify-content:space-between;box-sizing:border-box;padding-top:8px;padding-bottom:8px;overflow:hidden;color:var(--color-text-1);font-size:14px;line-height:24px;background-color:var(--color-bg-2);border-bottom:1px solid transparent;cursor:pointer;transition:border-color 0s ease .19s}.arco-collapse-item-header-left{padding-right:13px;padding-left:34px}.arco-collapse-item-header-right{padding-right:34px;padding-left:13px}.arco-collapse-item-header-right+.arco-collapse-item-content{padding-left:13px}.arco-collapse-item-header-disabled{color:var(--color-text-4);background-color:var(--color-bg-2);cursor:not-allowed}.arco-collapse-item-header-disabled .arco-collapse-item-header-icon{color:var(--color-text-4)}.arco-collapse-item-header-title{display:inline}.arco-collapse-item-header-extra{float:right}.arco-collapse-item .arco-collapse-item-icon-hover{position:absolute;top:50%;left:13px;text-align:center;transform:translateY(-50%)}.arco-collapse-item .arco-collapse-item-icon-right{right:13px;left:unset}.arco-collapse-item 
.arco-collapse-item-icon-right>.arco-collapse-item-header-icon-down{transform:rotate(-90deg)}.arco-collapse-item .arco-collapse-item-expand-icon{position:relative;display:block;color:var(--color-neutral-7);font-size:14px;vertical-align:middle;transition:transform .2s cubic-bezier(.34,.69,.1,1)}.arco-collapse-item-content{position:relative;padding-right:13px;padding-left:34px;overflow:hidden;color:var(--color-text-1);font-size:14px;background-color:var(--color-fill-1)}.arco-collapse-item-content-expanded{display:block;height:auto}.arco-collapse-item-content-box{padding:8px 0}.arco-collapse-item.arco-collapse-item-disabled>.arco-collapse-item-content{color:var(--color-text-4)}.arco-collapse-item-no-icon>.arco-collapse-item-header{padding-right:13px;padding-left:13px}.arco-collapse-item:last-of-type{border-bottom:none}.arco-collapse.arco-collapse-borderless{border:none}.arco-collapse:after{display:table;clear:both;content:""}.collapse-slider-enter-from,.collapse-slider-leave-to{height:0}.collapse-slider-enter-active,.collapse-slider-leave-active{transition:height .2s cubic-bezier(.34,.69,.1,1)}.arco-comment{display:flex;flex-wrap:nowrap;font-size:14px;line-height:1.5715}.arco-comment:not(:first-of-type),.arco-comment-inner-comment{margin-top:20px}.arco-comment-inner{flex:1}.arco-comment-avatar{flex-shrink:0;margin-right:12px;cursor:pointer}.arco-comment-avatar>img{width:32px;height:32px;border-radius:var(--border-radius-circle)}.arco-comment-author{margin-right:8px;color:var(--color-text-2);font-size:14px}.arco-comment-datetime{color:var(--color-text-3);font-size:12px}.arco-comment-content{color:var(--color-text-1)}.arco-comment-title-align-right{display:flex;justify-content:space-between}.arco-comment-actions{margin-top:8px;color:var(--color-text-2);font-size:14px}.arco-comment-actions>*:not(:last-child){margin-right:8px}.arco-comment-actions-align-right{display:flex;justify-content:flex-end}.arco-picker-container,.arco-picker-range-container{box-sizing:border-box;min-height:60px;overflow:hidden;background-color:var(--color-bg-popup);border:1px solid var(--color-neutral-3);border-radius:var(--border-radius-medium);box-shadow:0 2px 5px #0000001a}.arco-picker-container-shortcuts-placement-left,.arco-picker-range-container-shortcuts-placement-left,.arco-picker-container-shortcuts-placement-right,.arco-picker-range-container-shortcuts-placement-right{display:flex;align-items:flex-start}.arco-picker-container-shortcuts-placement-left>.arco-picker-shortcuts,.arco-picker-range-container-shortcuts-placement-left>.arco-picker-shortcuts,.arco-picker-container-shortcuts-placement-right>.arco-picker-shortcuts,.arco-picker-range-container-shortcuts-placement-right>.arco-picker-shortcuts{display:flex;flex-direction:column;box-sizing:border-box;padding:5px 8px;overflow-x:hidden;overflow-y:auto}.arco-picker-container-shortcuts-placement-left>.arco-picker-shortcuts>*,.arco-picker-range-container-shortcuts-placement-left>.arco-picker-shortcuts>*,.arco-picker-container-shortcuts-placement-right>.arco-picker-shortcuts>*,.arco-picker-range-container-shortcuts-placement-right>.arco-picker-shortcuts>*{margin:5px 0}.arco-picker-container-shortcuts-placement-left .arco-picker-panel-wrapper,.arco-picker-range-container-shortcuts-placement-left .arco-picker-panel-wrapper,.arco-picker-container-shortcuts-placement-left .arco-picker-range-panel-wrapper,.arco-picker-range-container-shortcuts-placement-left .arco-picker-range-panel-wrapper{border-left:1px solid 
var(--color-neutral-3)}.arco-picker-container-shortcuts-placement-right .arco-picker-panel-wrapper,.arco-picker-range-container-shortcuts-placement-right .arco-picker-panel-wrapper,.arco-picker-container-shortcuts-placement-right .arco-picker-range-panel-wrapper,.arco-picker-range-container-shortcuts-placement-right .arco-picker-range-panel-wrapper{border-right:1px solid var(--color-neutral-3)}.arco-picker-panel-only,.arco-picker-range-panel-only{box-shadow:none}.arco-picker-panel-only .arco-panel-date-inner,.arco-picker-range-panel-only .arco-panel-date-inner,.arco-picker-range-panel-only .arco-panel-date{width:100%}.arco-picker-header{display:flex;padding:8px 16px;border-bottom:1px solid var(--color-neutral-3)}.arco-picker-header-title{flex:1;color:var(--color-text-1);font-size:14px;line-height:24px;text-align:center}.arco-picker-header-icon{width:24px;height:24px;margin-right:2px;margin-left:2px;color:var(--color-text-2);font-size:12px;line-height:24px;text-align:center;background-color:var(--color-bg-popup);border-radius:50%;transition:all .1s cubic-bezier(0,0,1,1);user-select:none}.arco-picker-header-icon:not(.arco-picker-header-icon-hidden){cursor:pointer}.arco-picker-header-icon:not(.arco-picker-header-icon-hidden):hover{background-color:var(--color-fill-3)}.arco-picker-header-label{padding:2px;border-radius:2px;cursor:pointer;transition:all .1s}.arco-picker-header-label:hover{background-color:var(--color-fill-3)}.arco-picker-body{padding:14px 16px}.arco-picker-week-list{display:flex;box-sizing:border-box;width:100%;padding:14px 16px 0}.arco-picker-week-list-item{flex:1;height:32px;padding:0;color:#7d7d7f;font-weight:400;line-height:32px;text-align:center}.arco-picker-row{display:flex;padding:2px 0}.arco-picker-cell{flex:1}.arco-picker-cell .arco-picker-date{display:flex;justify-content:center;box-sizing:border-box;width:100%;height:100%;padding:4px 0;cursor:pointer}.arco-picker-date-value{min-width:24px;height:24px;color:var(--color-text-4);font-size:14px;line-height:24px;text-align:center;border-radius:var(--border-radius-circle);cursor:pointer}.arco-picker-cell-in-view .arco-picker-date-value{color:var(--color-text-1);font-weight:500}.arco-picker-cell-selected .arco-picker-date-value{color:var(--color-white);background-color:rgb(var(--primary-6));transition:background-color .1s cubic-bezier(0,0,1,1)}.arco-picker-cell-in-view:not(.arco-picker-cell-selected):not(.arco-picker-cell-range-start):not(.arco-picker-cell-range-end):not(.arco-picker-cell-disabled):not(.arco-picker-cell-week) .arco-picker-date-value:hover{color:var(--color-text-1);background-color:var(--color-fill-3)}.arco-picker-cell-today{position:relative}.arco-picker-cell-today:after{position:absolute;bottom:-2px;left:50%;display:block;width:4px;height:4px;margin-left:-2px;background-color:rgb(var(--primary-6));border-radius:50%;content:""}.arco-picker-cell-in-range .arco-picker-date{background-color:var(--color-primary-light-1)}.arco-picker-cell-range-start .arco-picker-date{border-top-left-radius:24px;border-bottom-left-radius:24px}.arco-picker-cell-range-end .arco-picker-date{border-top-right-radius:24px;border-bottom-right-radius:24px}.arco-picker-cell-in-range-near-hover .arco-picker-date{border-radius:0}.arco-picker-cell-range-start .arco-picker-date-value,.arco-picker-cell-range-end .arco-picker-date-value{color:var(--color-white);background-color:rgb(var(--primary-6));border-radius:var(--border-radius-circle)}.arco-picker-cell-hover-in-range 
.arco-picker-date{background-color:var(--color-primary-light-1)}.arco-picker-cell-hover-range-start .arco-picker-date{border-radius:24px 0 0 24px}.arco-picker-cell-hover-range-end .arco-picker-date{border-radius:0 24px 24px 0}.arco-picker-cell-hover-range-start .arco-picker-date-value,.arco-picker-cell-hover-range-end .arco-picker-date-value{color:var(--color-text-1);background-color:var(--color-primary-light-2);border-radius:50%}.arco-picker-cell-disabled .arco-picker-date{background-color:var(--color-fill-1);cursor:not-allowed}.arco-picker-cell-disabled .arco-picker-date-value{color:var(--color-text-4);background-color:transparent;cursor:not-allowed}.arco-picker-footer{width:min-content;min-width:100%}.arco-picker-footer-btn-wrapper{display:flex;align-items:center;justify-content:space-between;box-sizing:border-box;padding:3px 8px;border-top:1px solid var(--color-neutral-3)}.arco-picker-footer-btn-wrapper :only-child{margin-left:auto}.arco-picker-footer-extra-wrapper{box-sizing:border-box;padding:8px 24px;color:var(--color-text-1);font-size:12px;border-top:1px solid var(--color-neutral-3)}.arco-picker-footer-now-wrapper{box-sizing:border-box;height:36px;line-height:36px;text-align:center;border-top:1px solid var(--color-neutral-3)}.arco-picker-btn-confirm{margin:5px 0}.arco-picker-shortcuts{flex:1}.arco-picker-shortcuts>*{margin:5px 10px 5px 0}.arco-panel-date{display:flex;box-sizing:border-box}.arco-panel-date-inner{width:265px}.arco-panel-date-inner .arco-picker-body{padding-top:0}.arco-panel-date-timepicker{display:flex;flex-direction:column;border-left:1px solid var(--color-neutral-3)}.arco-panel-date-timepicker-title{width:100%;height:40px;color:var(--color-text-1);font-weight:400;font-size:14px;line-height:40px;text-align:center;border-bottom:1px solid var(--color-neutral-3)}.arco-panel-date-timepicker .arco-timepicker{height:276px;padding:0 6px;overflow:hidden}.arco-panel-date-timepicker .arco-timepicker-column{box-sizing:border-box;width:auto;height:100%;padding:0 4px}.arco-panel-date-timepicker .arco-timepicker-column::-webkit-scrollbar{width:0}.arco-panel-date-timepicker .arco-timepicker-column:not(:last-child){border-right:0}.arco-panel-date-timepicker .arco-timepicker ul:after{height:244px}.arco-panel-date-timepicker .arco-timepicker-cell{width:36px}.arco-panel-date-timepicker .arco-timepicker-cell-inner{padding-left:10px}.arco-panel-date-footer{border-right:1px solid var(--color-neutral-3)}.arco-panel-date-with-view-tabs{flex-direction:column;min-width:265px}.arco-panel-date-with-view-tabs .arco-panel-date-timepicker .arco-timepicker-column{flex:1}.arco-panel-date-with-view-tabs .arco-panel-date-timepicker .arco-timepicker-column::-webkit-scrollbar{width:0}.arco-panel-date-with-view-tabs .arco-panel-date-timepicker .arco-timepicker-cell{width:100%;text-align:center}.arco-panel-date-with-view-tabs .arco-panel-date-timepicker .arco-timepicker-cell-inner{padding-left:0}.arco-panel-date-view-tabs{display:flex;border-top:1px solid var(--color-neutral-3)}.arco-panel-date-view-tab-pane{flex:1;height:50px;color:var(--color-text-4);font-size:14px;line-height:50px;text-align:center;border-right:1px solid var(--color-neutral-3);cursor:pointer}.arco-panel-date-view-tab-pane:last-child{border-right:none}.arco-panel-date-view-tab-pane-text{margin-left:8px}.arco-panel-date-view-tab-pane-active{color:var(--color-text-1)}.arco-panel-month,.arco-panel-quarter,.arco-panel-year{box-sizing:border-box;width:265px}.arco-panel-month .arco-picker-date,.arco-panel-quarter 
.arco-picker-date,.arco-panel-year .arco-picker-date{padding:4px}.arco-panel-month .arco-picker-date-value,.arco-panel-quarter .arco-picker-date-value,.arco-panel-year .arco-picker-date-value{width:100%;border-radius:24px}.arco-panel-month .arco-picker-cell:not(.arco-picker-cell-selected):not(.arco-picker-cell-range-start):not(.arco-picker-cell-range-end):not(.arco-picker-cell-disabled):not(.arco-picker-cell-week) .arco-picker-date-value:hover,.arco-panel-quarter .arco-picker-cell:not(.arco-picker-cell-selected):not(.arco-picker-cell-range-start):not(.arco-picker-cell-range-end):not(.arco-picker-cell-disabled):not(.arco-picker-cell-week) .arco-picker-date-value:hover,.arco-panel-year .arco-picker-cell:not(.arco-picker-cell-selected):not(.arco-picker-cell-range-start):not(.arco-picker-cell-range-end):not(.arco-picker-cell-disabled):not(.arco-picker-cell-week) .arco-picker-date-value:hover{border-radius:24px}.arco-panel-year{box-sizing:border-box;width:265px}.arco-panel-week{box-sizing:border-box}.arco-panel-week-wrapper{display:flex}.arco-panel-week-inner{width:298px}.arco-panel-week-inner .arco-picker-body{padding-top:0}.arco-panel-week .arco-picker-row-week{cursor:pointer}.arco-panel-week .arco-picker-row-week .arco-picker-date-value{width:100%;border-radius:0}.arco-panel-week .arco-picker-cell .arco-picker-date{border-radius:0}.arco-panel-week .arco-picker-cell:nth-child(2) .arco-picker-date{padding-left:4px;border-top-left-radius:24px;border-bottom-left-radius:24px}.arco-panel-week .arco-picker-cell:nth-child(2) .arco-picker-date .arco-picker-date-value{border-top-left-radius:24px;border-bottom-left-radius:24px}.arco-panel-week .arco-picker-cell:nth-child(8) .arco-picker-date{padding-right:4px;border-top-right-radius:24px;border-bottom-right-radius:24px}.arco-panel-week .arco-picker-cell:nth-child(8) .arco-picker-date .arco-picker-date-value{border-top-right-radius:24px;border-bottom-right-radius:24px}.arco-panel-week .arco-picker-row-week:hover .arco-picker-cell:not(.arco-picker-cell-week):not(.arco-picker-cell-selected):not(.arco-picker-cell-range-start):not(.arco-picker-cell-range-end) .arco-picker-date-value{background-color:var(--color-fill-3)}.arco-panel-quarter{box-sizing:border-box;width:265px}.arco-picker-range-wrapper{display:flex}.arco-datepicker-shortcuts-wrapper{box-sizing:border-box;width:106px;height:100%;max-height:300px;margin:10px 0 0;padding:0;overflow-y:auto;list-style:none}.arco-datepicker-shortcuts-wrapper>li{box-sizing:border-box;width:100%;padding:6px 16px;cursor:pointer}.arco-datepicker-shortcuts-wrapper>li:hover{color:rgb(var(--primary-6))}.arco-descriptions-table{width:100%;border-collapse:collapse}.arco-descriptions-table-layout-fixed table{table-layout:fixed}.arco-descriptions-title{margin-bottom:16px;color:var(--color-text-1);font-weight:500;font-size:16px;line-height:1.5715}.arco-descriptions-item,.arco-descriptions-item-label,.arco-descriptions-item-value{box-sizing:border-box;font-size:14px;line-height:1.5715;text-align:left}.arco-descriptions-table-layout-fixed .arco-descriptions-item-label{width:auto}.arco-descriptions-item-label-block{width:1px;padding:0 4px 12px 0;color:var(--color-text-3);font-weight:500;white-space:nowrap}.arco-descriptions-item-value-block{padding:0 4px 12px 
0;color:var(--color-text-1);font-weight:400;white-space:pre-wrap;word-break:break-word}.arco-descriptions-item-label-inline,.arco-descriptions-item-value-inline{box-sizing:border-box;font-size:14px;line-height:1.5715;text-align:left}.arco-descriptions-item-label-inline{margin-bottom:2px;color:var(--color-text-3);font-weight:500}.arco-descriptions-item-value-inline{color:var(--color-text-1);font-weight:400}.arco-descriptions-layout-inline-horizontal .arco-descriptions-item-label-inline{margin-right:4px}.arco-descriptions-layout-inline-horizontal .arco-descriptions-item-label-inline,.arco-descriptions-layout-inline-horizontal .arco-descriptions-item-value-inline{display:inline-block;margin-bottom:0}.arco-descriptions-border.arco-descriptions-layout-inline-vertical .arco-descriptions-item{padding:12px 20px}.arco-descriptions-border .arco-descriptions-body{overflow:hidden;border:1px solid var(--color-neutral-3);border-radius:var(--border-radius-medium)}.arco-descriptions-border .arco-descriptions-row:not(:last-child){border-bottom:1px solid var(--color-neutral-3)}.arco-descriptions-border .arco-descriptions-item,.arco-descriptions-border .arco-descriptions-item-label-block,.arco-descriptions-border .arco-descriptions-item-value-block{padding:7px 20px;border-right:1px solid var(--color-neutral-3)}.arco-descriptions-border .arco-descriptions-item-label-block{background-color:var(--color-fill-1)}.arco-descriptions-border .arco-descriptions-item-value-block:last-child{border-right:none}.arco-descriptions-border .arco-descriptions-item:last-child{border-right:none}.arco-descriptions-border.arco-descriptions-layout-vertical .arco-descriptions-item-label-block:last-child{border-right:none}.arco-descriptions-layout-vertical:not(.arco-descriptions-border) .arco-descriptions-item-value-block:first-child{padding-left:0}.arco-descriptions-size-mini .arco-descriptions-title{margin-bottom:6px}.arco-descriptions-size-mini .arco-descriptions-item-label-block,.arco-descriptions-size-mini .arco-descriptions-item-value-block{padding-right:20px;padding-bottom:2px;font-size:12px}.arco-descriptions-size-mini.arco-descriptions-border .arco-descriptions-item-label-block,.arco-descriptions-size-mini.arco-descriptions-border .arco-descriptions-item-value-block{padding:3px 20px}.arco-descriptions-size-mini.arco-descriptions-border.arco-descriptions-layout-inline-vertical .arco-descriptions-item{padding:8px 20px}.arco-descriptions-size-small .arco-descriptions-title{margin-bottom:8px}.arco-descriptions-size-small .arco-descriptions-item-label-block,.arco-descriptions-size-small .arco-descriptions-item-value-block{padding-right:20px;padding-bottom:4px;font-size:14px}.arco-descriptions-size-small.arco-descriptions-border .arco-descriptions-item-label-block,.arco-descriptions-size-small.arco-descriptions-border .arco-descriptions-item-value-block{padding:3px 20px}.arco-descriptions-size-small.arco-descriptions-border.arco-descriptions-layout-inline-vertical .arco-descriptions-item{padding:8px 20px}.arco-descriptions-size-medium .arco-descriptions-title{margin-bottom:12px}.arco-descriptions-size-medium .arco-descriptions-item-label-block,.arco-descriptions-size-medium .arco-descriptions-item-value-block{padding-right:20px;padding-bottom:8px;font-size:14px}.arco-descriptions-size-medium.arco-descriptions-border .arco-descriptions-item-label-block,.arco-descriptions-size-medium.arco-descriptions-border .arco-descriptions-item-value-block{padding:5px 
20px}.arco-descriptions-size-medium.arco-descriptions-border.arco-descriptions-layout-inline-vertical .arco-descriptions-item{padding:10px 20px}.arco-descriptions-size-large .arco-descriptions-title{margin-bottom:20px}.arco-descriptions-size-large .arco-descriptions-item-label-block,.arco-descriptions-size-large .arco-descriptions-item-value-block{padding-right:20px;padding-bottom:16px;font-size:14px}.arco-descriptions-size-large.arco-descriptions-border .arco-descriptions-item-label-block,.arco-descriptions-size-large.arco-descriptions-border .arco-descriptions-item-value-block{padding:9px 20px}.arco-descriptions-size-large.arco-descriptions-border.arco-descriptions-layout-inline-vertical .arco-descriptions-item{padding:14px 20px}.arco-divider-horizontal{position:relative;clear:both;width:100%;min-width:100%;max-width:100%;margin:20px 0;border-bottom:1px solid var(--color-neutral-3)}.arco-divider-horizontal.arco-divider-with-text{margin:20px 0}.arco-divider-vertical{display:inline-block;min-width:1px;max-width:1px;height:1em;margin:0 12px;vertical-align:middle;border-left:1px solid var(--color-neutral-3)}.arco-divider-text{position:absolute;top:50%;box-sizing:border-box;padding:0 16px;color:var(--color-text-1);font-weight:500;font-size:14px;line-height:2;background:var(--color-bg-2);transform:translateY(-50%)}.arco-divider-text-center{left:50%;transform:translate(-50%,-50%)}.arco-divider-text-left{left:24px}.arco-divider-text-right{right:24px}.arco-drawer-container{position:fixed;top:0;right:0;bottom:0;left:0;z-index:1001}.arco-drawer-mask{position:absolute;top:0;right:0;bottom:0;left:0;background-color:var(--color-mask-bg)}.arco-drawer{position:absolute;display:flex;flex-direction:column;width:100%;height:100%;overflow:auto;line-height:1.5715;background-color:var(--color-bg-3)}.arco-drawer-header{display:flex;flex-shrink:0;align-items:center;box-sizing:border-box;width:100%;height:48px;padding:0 16px;border-bottom:1px solid var(--color-neutral-3)}.arco-drawer-header .arco-drawer-title{margin-right:auto;color:var(--color-text-1);font-weight:500;font-size:16px;text-align:left}.arco-drawer-header .arco-drawer-close-btn{margin-left:8px;color:var(--color-text-1);font-size:12px;cursor:pointer}.arco-drawer-footer{flex-shrink:0;box-sizing:border-box;padding:16px;text-align:right;border-top:1px solid var(--color-neutral-3)}.arco-drawer-footer>.arco-btn{margin-left:12px}.arco-drawer-body{position:relative;flex:1;box-sizing:border-box;height:100%;padding:12px 16px;overflow:auto;color:var(--color-text-1)}.fade-drawer-enter-from,.fade-drawer-appear-from{opacity:0}.fade-drawer-enter-to,.fade-drawer-appear-to{opacity:1}.fade-drawer-enter-active,.fade-drawer-appear-active{transition:opacity .3s cubic-bezier(.34,.69,.1,1)}.fade-drawer-leave-from{opacity:1}.fade-drawer-leave-to{opacity:0}.fade-drawer-leave-active{transition:opacity .3s cubic-bezier(.34,.69,.1,1)}.slide-left-drawer-enter-from,.slide-left-drawer-appear-from{transform:translate(-100%)}.slide-left-drawer-enter-to,.slide-left-drawer-appear-to{transform:translate(0)}.slide-left-drawer-enter-active,.slide-left-drawer-appear-active{transition:transform .3s cubic-bezier(.34,.69,.1,1)}.slide-left-drawer-leave-from{transform:translate(0)}.slide-left-drawer-leave-to{transform:translate(-100%)}.slide-left-drawer-leave-active{transition:transform .3s 
cubic-bezier(.34,.69,.1,1)}.slide-right-drawer-enter-from,.slide-right-drawer-appear-from{transform:translate(100%)}.slide-right-drawer-enter-to,.slide-right-drawer-appear-to{transform:translate(0)}.slide-right-drawer-enter-active,.slide-right-drawer-appear-active{transition:transform .3s cubic-bezier(.34,.69,.1,1)}.slide-right-drawer-leave-from{transform:translate(0)}.slide-right-drawer-leave-to{transform:translate(100%)}.slide-right-drawer-leave-active{transition:transform .3s cubic-bezier(.34,.69,.1,1)}.slide-top-drawer-enter,.slide-top-drawer-appear,.slide-top-drawer-enter-from,.slide-top-drawer-appear-from{transform:translateY(-100%)}.slide-top-drawer-enter-to,.slide-top-drawer-appear-to{transform:translateY(0)}.slide-top-drawer-enter-active,.slide-top-drawer-appear-active{transition:transform .3s cubic-bezier(.34,.69,.1,1)}.slide-top-drawer-leave-from{transform:translateY(0)}.slide-top-drawer-leave-to{transform:translateY(-100%)}.slide-top-drawer-leave-active{transition:transform .3s cubic-bezier(.34,.69,.1,1)}.slide-bottom-drawer-enter-from,.slide-bottom-drawer-appear-from{transform:translateY(100%)}.slide-bottom-drawer-enter-to,.slide-bottom-drawer-appear-to{transform:translateY(0)}.slide-bottom-drawer-enter-active,.slide-bottom-drawer-appear-active{transition:transform .3s cubic-bezier(.34,.69,.1,1)}.slide-bottom-drawer-leave-from{transform:translateY(0)}.slide-bottom-drawer-leave-to{transform:translateY(100%)}.slide-bottom-drawer-leave-active{transition:transform .3s cubic-bezier(.34,.69,.1,1)}.arco-dropdown{box-sizing:border-box;padding:4px 0;background-color:var(--color-bg-popup);border:1px solid var(--color-fill-3);border-radius:var(--border-radius-medium);box-shadow:0 4px 10px #0000001a}.arco-dropdown-list{margin-top:0;margin-bottom:0;padding-left:0;list-style:none}.arco-dropdown-list-wrapper{max-height:200px;overflow-y:auto}.arco-dropdown-option{position:relative;z-index:1;display:flex;align-items:center;box-sizing:border-box;width:100%;padding:0 12px;color:var(--color-text-1);font-size:14px;line-height:36px;text-align:left;background-color:transparent;cursor:pointer}.arco-dropdown-option-content{overflow:hidden;white-space:nowrap;text-overflow:ellipsis}.arco-dropdown-option-has-suffix{justify-content:space-between}.arco-dropdown-option-active,.arco-dropdown-option:not(.arco-dropdown-option-disabled):hover{color:var(--color-text-1);background-color:var(--color-fill-2);transition:all .1s cubic-bezier(0,0,1,1)}.arco-dropdown-option-disabled{color:var(--color-text-4);background-color:transparent;cursor:not-allowed}.arco-dropdown-option-icon{display:inline-flex;margin-right:8px}.arco-dropdown-option-suffix{margin-left:12px}.arco-dropdown-group:first-child .arco-dropdown-group-title{margin-top:8px}.arco-dropdown-group-title{box-sizing:border-box;width:100%;margin-top:8px;padding:0 12px;color:var(--color-text-3);font-size:12px;line-height:20px;cursor:default;overflow:hidden;white-space:nowrap;text-overflow:ellipsis}.arco-dropdown-submenu{margin-top:-4px}.arco-dropdown.arco-dropdown-has-footer{padding-bottom:0}.arco-dropdown-footer{border-top:1px solid var(--color-fill-3)}.arco-empty{box-sizing:border-box;width:100%;padding:10px 0;text-align:center}.arco-empty-image{margin-bottom:4px;color:rgb(var(--gray-5));font-size:48px;line-height:1}.arco-empty-image img{height:80px}.arco-empty .arco-empty-description{color:rgb(var(--gray-5));font-size:14px}.arco-form-item-status-validating .arco-input-wrapper:not(.arco-input-disabled),.arco-form-item-status-validating 
.arco-textarea-wrapper:not(.arco-textarea-disabled){background-color:var(--color-fill-2);border-color:transparent}.arco-form-item-status-validating .arco-input-wrapper:not(.arco-input-disabled):hover,.arco-form-item-status-validating .arco-textarea-wrapper:not(.arco-textarea-disabled):hover{background-color:var(--color-fill-3);border-color:transparent}.arco-form-item-status-validating .arco-input-wrapper:not(.arco-input-disabled).arco-input-focus,.arco-form-item-status-validating .arco-textarea-wrapper:not(.arco-textarea-disabled).arco-textarea-focus{background-color:var(--color-bg-2);border-color:rgb(var(--primary-6));box-shadow:0 0 0 0 var(--color-primary-light-2)}.arco-form-item-status-validating .arco-select-view:not(.arco-select-view-disabled),.arco-form-item-status-validating .arco-input-tag:not(.arco-input-tag-disabled){background-color:var(--color-fill-2);border-color:transparent}.arco-form-item-status-validating .arco-select-view:not(.arco-select-view-disabled):hover,.arco-form-item-status-validating .arco-input-tag:not(.arco-input-tag-disabled):hover{background-color:var(--color-fill-3);border-color:transparent}.arco-form-item-status-validating .arco-select-view:not(.arco-select-view-disabled).arco-select-view-focus,.arco-form-item-status-validating .arco-input-tag:not(.arco-input-tag-disabled).arco-input-tag-focus{background-color:var(--color-bg-2);border-color:rgb(var(--primary-6));box-shadow:0 0 0 0 var(--color-primary-light-2)}.arco-form-item-status-validating .arco-picker:not(.arco-picker-disabled){border-color:transparent;background-color:var(--color-fill-2)}.arco-form-item-status-validating .arco-picker:not(.arco-picker-disabled):hover{border-color:transparent;background-color:var(--color-fill-3)}.arco-form-item-status-validating .arco-picker-focused:not(.arco-picker-disabled),.arco-form-item-status-validating .arco-picker-focused:not(.arco-picker-disabled):hover{border-color:rgb(var(--primary-6));background-color:var(--color-bg-2);box-shadow:0 0 0 0 var(--color-primary-light-2)}.arco-form-item-status-validating .arco-form-item-message-help,.arco-form-item-status-validating .arco-form-item-feedback{color:rgb(var(--primary-6))}.arco-form-item-status-success .arco-input-wrapper:not(.arco-input-disabled),.arco-form-item-status-success .arco-textarea-wrapper:not(.arco-textarea-disabled){background-color:var(--color-fill-2);border-color:transparent}.arco-form-item-status-success .arco-input-wrapper:not(.arco-input-disabled):hover,.arco-form-item-status-success .arco-textarea-wrapper:not(.arco-textarea-disabled):hover{background-color:var(--color-fill-3);border-color:transparent}.arco-form-item-status-success .arco-input-wrapper:not(.arco-input-disabled).arco-input-focus,.arco-form-item-status-success .arco-textarea-wrapper:not(.arco-textarea-disabled).arco-textarea-focus{background-color:var(--color-bg-2);border-color:rgb(var(--success-6));box-shadow:0 0 0 0 var(--color-success-light-2)}.arco-form-item-status-success .arco-select-view:not(.arco-select-view-disabled),.arco-form-item-status-success .arco-input-tag:not(.arco-input-tag-disabled){background-color:var(--color-fill-2);border-color:transparent}.arco-form-item-status-success .arco-select-view:not(.arco-select-view-disabled):hover,.arco-form-item-status-success .arco-input-tag:not(.arco-input-tag-disabled):hover{background-color:var(--color-fill-3);border-color:transparent}.arco-form-item-status-success .arco-select-view:not(.arco-select-view-disabled).arco-select-view-focus,.arco-form-item-status-success 
.arco-input-tag:not(.arco-input-tag-disabled).arco-input-tag-focus{background-color:var(--color-bg-2);border-color:rgb(var(--success-6));box-shadow:0 0 0 0 var(--color-success-light-2)}.arco-form-item-status-success .arco-picker:not(.arco-picker-disabled){border-color:transparent;background-color:var(--color-fill-2)}.arco-form-item-status-success .arco-picker:not(.arco-picker-disabled):hover{border-color:transparent;background-color:var(--color-fill-3)}.arco-form-item-status-success .arco-picker-focused:not(.arco-picker-disabled),.arco-form-item-status-success .arco-picker-focused:not(.arco-picker-disabled):hover{border-color:rgb(var(--success-6));background-color:var(--color-bg-2);box-shadow:0 0 0 0 var(--color-success-light-2)}.arco-form-item-status-success .arco-form-item-message-help,.arco-form-item-status-success .arco-form-item-feedback{color:rgb(var(--success-6))}.arco-form-item-status-warning .arco-input-wrapper:not(.arco-input-disabled),.arco-form-item-status-warning .arco-textarea-wrapper:not(.arco-textarea-disabled){background-color:var(--color-warning-light-1);border-color:transparent}.arco-form-item-status-warning .arco-input-wrapper:not(.arco-input-disabled):hover,.arco-form-item-status-warning .arco-textarea-wrapper:not(.arco-textarea-disabled):hover{background-color:var(--color-warning-light-2);border-color:transparent}.arco-form-item-status-warning .arco-input-wrapper:not(.arco-input-disabled).arco-input-focus,.arco-form-item-status-warning .arco-textarea-wrapper:not(.arco-textarea-disabled).arco-textarea-focus{background-color:var(--color-bg-2);border-color:rgb(var(--warning-6));box-shadow:0 0 0 0 var(--color-warning-light-2)}.arco-form-item-status-warning .arco-select-view:not(.arco-select-view-disabled),.arco-form-item-status-warning .arco-input-tag:not(.arco-input-tag-disabled){background-color:var(--color-warning-light-1);border-color:transparent}.arco-form-item-status-warning .arco-select-view:not(.arco-select-view-disabled):hover,.arco-form-item-status-warning .arco-input-tag:not(.arco-input-tag-disabled):hover{background-color:var(--color-warning-light-2);border-color:transparent}.arco-form-item-status-warning .arco-select-view:not(.arco-select-view-disabled).arco-select-view-focus,.arco-form-item-status-warning .arco-input-tag:not(.arco-input-tag-disabled).arco-input-tag-focus{background-color:var(--color-bg-2);border-color:rgb(var(--warning-6));box-shadow:0 0 0 0 var(--color-warning-light-2)}.arco-form-item-status-warning .arco-picker:not(.arco-picker-disabled){border-color:transparent;background-color:var(--color-warning-light-1)}.arco-form-item-status-warning .arco-picker:not(.arco-picker-disabled):hover{border-color:transparent;background-color:var(--color-warning-light-2)}.arco-form-item-status-warning .arco-picker-focused:not(.arco-picker-disabled),.arco-form-item-status-warning .arco-picker-focused:not(.arco-picker-disabled):hover{border-color:rgb(var(--warning-6));background-color:var(--color-bg-2);box-shadow:0 0 0 0 var(--color-warning-light-2)}.arco-form-item-status-warning .arco-form-item-message-help,.arco-form-item-status-warning .arco-form-item-feedback{color:rgb(var(--warning-6))}.arco-form-item-status-error .arco-input-wrapper:not(.arco-input-disabled),.arco-form-item-status-error .arco-textarea-wrapper:not(.arco-textarea-disabled){background-color:var(--color-danger-light-1);border-color:transparent}.arco-form-item-status-error .arco-input-wrapper:not(.arco-input-disabled):hover,.arco-form-item-status-error 
.arco-textarea-wrapper:not(.arco-textarea-disabled):hover{background-color:var(--color-danger-light-2);border-color:transparent}.arco-form-item-status-error .arco-input-wrapper:not(.arco-input-disabled).arco-input-focus,.arco-form-item-status-error .arco-textarea-wrapper:not(.arco-textarea-disabled).arco-textarea-focus{background-color:var(--color-bg-2);border-color:rgb(var(--danger-6));box-shadow:0 0 0 0 var(--color-danger-light-2)}.arco-form-item-status-error .arco-select-view:not(.arco-select-view-disabled),.arco-form-item-status-error .arco-input-tag:not(.arco-input-tag-disabled){background-color:var(--color-danger-light-1);border-color:transparent}.arco-form-item-status-error .arco-select-view:not(.arco-select-view-disabled):hover,.arco-form-item-status-error .arco-input-tag:not(.arco-input-tag-disabled):hover{background-color:var(--color-danger-light-2);border-color:transparent}.arco-form-item-status-error .arco-select-view:not(.arco-select-view-disabled).arco-select-view-focus,.arco-form-item-status-error .arco-input-tag:not(.arco-input-tag-disabled).arco-input-tag-focus{background-color:var(--color-bg-2);border-color:rgb(var(--danger-6));box-shadow:0 0 0 0 var(--color-danger-light-2)}.arco-form-item-status-error .arco-picker:not(.arco-picker-disabled){border-color:transparent;background-color:var(--color-danger-light-1)}.arco-form-item-status-error .arco-picker:not(.arco-picker-disabled):hover{border-color:transparent;background-color:var(--color-danger-light-2)}.arco-form-item-status-error .arco-picker-focused:not(.arco-picker-disabled),.arco-form-item-status-error .arco-picker-focused:not(.arco-picker-disabled):hover{border-color:rgb(var(--danger-6));background-color:var(--color-bg-2);box-shadow:0 0 0 0 var(--color-danger-light-2)}.arco-form-item-status-error .arco-form-item-message-help,.arco-form-item-status-error .arco-form-item-feedback{color:rgb(var(--danger-6))}.arco-form-item-control-children{position:relative}.arco-form-item-feedback{position:absolute;top:50%;right:9px;font-size:14px;transform:translateY(-50%)}.arco-form-item-feedback .arco-icon-loading{font-size:12px}.arco-form-item-has-feedback .arco-input,.arco-form-item-has-feedback .arco-input-inner-wrapper,.arco-form-item-has-feedback .arco-textarea{padding-right:28px}.arco-form-item-has-feedback .arco-input-number-mode-embed .arco-input-number-step-layer{right:24px}.arco-form-item-has-feedback .arco-select.arco-select-multiple .arco-select-view,.arco-form-item-has-feedback .arco-select.arco-select-single .arco-select-view{padding-right:28px}.arco-form-item-has-feedback .arco-select.arco-select-multiple .arco-select-suffix{padding-right:0}.arco-form-item-has-feedback .arco-cascader.arco-cascader-multiple .arco-cascader-view,.arco-form-item-has-feedback .arco-cascader.arco-cascader-single .arco-cascader-view{padding-right:28px}.arco-form-item-has-feedback .arco-cascader.arco-cascader-multiple .arco-cascader-suffix{padding-right:0}.arco-form-item-has-feedback .arco-tree-select.arco-tree-select-multiple .arco-tree-select-view,.arco-form-item-has-feedback .arco-tree-select.arco-tree-select-single .arco-tree-select-view{padding-right:28px}.arco-form-item-has-feedback .arco-tree-select.arco-tree-select-multiple .arco-tree-select-suffix{padding-right:0}.arco-form-item-has-feedback .arco-picker{padding-right:28px}.arco-form-item-has-feedback .arco-picker-suffix .arco-picker-suffix-icon,.arco-form-item-has-feedback .arco-picker-suffix 
.arco-picker-clear-icon{margin-right:0;margin-left:0}.arco-form{display:flex;flex-direction:column;width:100%}.arco-form-layout-inline{flex-direction:row;flex-wrap:wrap}.arco-form-layout-inline .arco-form-item{width:auto;margin-bottom:8px}.arco-form-auto-label-width .arco-form-item-label-col>.arco-form-item-label{white-space:nowrap}.arco-form-item{display:flex;align-items:flex-start;justify-content:flex-start;width:100%;margin-bottom:20px}.arco-form-item-layout-vertical{display:block}.arco-form-item-layout-vertical>.arco-form-item-label-col{justify-content:flex-start;margin-bottom:8px;padding:0;line-height:1.5715;white-space:normal}.arco-form-item-layout-inline{margin-right:24px}.arco-form-item-label-col{padding-right:16px}.arco-form-item.arco-form-item-error,.arco-form-item.arco-form-item-has-help{margin-bottom:0}.arco-form-item-wrapper-flex.arco-col{flex:1}.arco-form-size-mini .arco-form-item-label-col{line-height:24px}.arco-form-size-mini .arco-form-item-label-col>.arco-form-item-label{font-size:12px}.arco-form-size-mini .arco-form-item-content,.arco-form-size-mini .arco-form-item-wrapper-col{min-height:24px}.arco-form-size-small .arco-form-item-label-col{line-height:28px}.arco-form-size-small .arco-form-item-label-col>.arco-form-item-label{font-size:14px}.arco-form-size-small .arco-form-item-content,.arco-form-size-small .arco-form-item-wrapper-col{min-height:28px}.arco-form-size-large .arco-form-item-label-col{line-height:36px}.arco-form-size-large .arco-form-item-label-col>.arco-form-item-label{font-size:14px}.arco-form-size-large .arco-form-item-content,.arco-form-size-large .arco-form-item-wrapper-col{min-height:36px}.arco-form-item-extra{margin-top:4px;color:var(--color-text-3);font-size:12px}.arco-form-item-message{min-height:20px;color:rgb(var(--danger-6));font-size:12px;line-height:20px}.arco-form-item-message-help{color:var(--color-text-3)}.arco-form-item-message+.arco-form-item-extra{margin-top:0;margin-bottom:4px}.arco-form-item-label-col{display:flex;flex-shrink:0;justify-content:flex-end;line-height:32px;white-space:nowrap}.arco-form-item-label-col-left{justify-content:flex-start}.arco-form-item-label-col>.arco-form-item-label{max-width:100%;color:var(--color-text-2);font-size:14px;white-space:normal}.arco-form-item-label-col.arco-form-item-label-col-flex{box-sizing:content-box}.arco-form-item-wrapper-col{display:flex;flex-direction:column;align-items:flex-start;width:100%;min-width:0;min-height:32px}.arco-form-item-content{flex:1;max-width:100%;min-height:32px}.arco-form-item-content-wrapper{display:flex;align-items:center;justify-content:flex-start;width:100%}.arco-form-item-content-flex{display:flex;align-items:center;justify-content:flex-start}.arco-form .arco-slider{display:block}.arco-form-item-label-required-symbol{color:rgb(var(--danger-6));font-size:12px;line-height:1}.arco-form-item-label-required-symbol svg{display:inline-block;transform:scale(.5)}.arco-form-item-label-tooltip{margin-left:4px;color:var(--color-text-4)}.form-blink-enter-from,.form-blink-appear-from{opacity:0}.form-blink-enter-to,.form-blink-appear-to{opacity:1}.form-blink-enter-active,.form-blink-appear-active{transition:opacity .3s cubic-bezier(0,0,1,1);animation:arco-form-blink .5s cubic-bezier(0,0,1,1)}@keyframes arco-form-blink{0%{opacity:1}50%{opacity:.2}to{opacity:1}}.arco-row{display:flex;flex-flow:row 
wrap}.arco-row-nowrap{flex-wrap:nowrap}.arco-row-align-start{align-items:flex-start}.arco-row-align-center{align-items:center}.arco-row-align-end{align-items:flex-end}.arco-row-justify-start{justify-content:flex-start}.arco-row-justify-center{justify-content:center}.arco-row-justify-end{justify-content:flex-end}.arco-row-justify-space-around{justify-content:space-around}.arco-row-justify-space-between{justify-content:space-between}.arco-col{box-sizing:border-box}.arco-col-1{flex:0 0 4.16666667%;width:4.16666667%}.arco-col-2{flex:0 0 8.33333333%;width:8.33333333%}.arco-col-3{flex:0 0 12.5%;width:12.5%}.arco-col-4{flex:0 0 16.66666667%;width:16.66666667%}.arco-col-5{flex:0 0 20.83333333%;width:20.83333333%}.arco-col-6{flex:0 0 25%;width:25%}.arco-col-7{flex:0 0 29.16666667%;width:29.16666667%}.arco-col-8{flex:0 0 33.33333333%;width:33.33333333%}.arco-col-9{flex:0 0 37.5%;width:37.5%}.arco-col-10{flex:0 0 41.66666667%;width:41.66666667%}.arco-col-11{flex:0 0 45.83333333%;width:45.83333333%}.arco-col-12{flex:0 0 50%;width:50%}.arco-col-13{flex:0 0 54.16666667%;width:54.16666667%}.arco-col-14{flex:0 0 58.33333333%;width:58.33333333%}.arco-col-15{flex:0 0 62.5%;width:62.5%}.arco-col-16{flex:0 0 66.66666667%;width:66.66666667%}.arco-col-17{flex:0 0 70.83333333%;width:70.83333333%}.arco-col-18{flex:0 0 75%;width:75%}.arco-col-19{flex:0 0 79.16666667%;width:79.16666667%}.arco-col-20{flex:0 0 83.33333333%;width:83.33333333%}.arco-col-21{flex:0 0 87.5%;width:87.5%}.arco-col-22{flex:0 0 91.66666667%;width:91.66666667%}.arco-col-23{flex:0 0 95.83333333%;width:95.83333333%}.arco-col-24{flex:0 0 100%;width:100%}.arco-col-offset-1{margin-left:4.16666667%}.arco-col-offset-2{margin-left:8.33333333%}.arco-col-offset-3{margin-left:12.5%}.arco-col-offset-4{margin-left:16.66666667%}.arco-col-offset-5{margin-left:20.83333333%}.arco-col-offset-6{margin-left:25%}.arco-col-offset-7{margin-left:29.16666667%}.arco-col-offset-8{margin-left:33.33333333%}.arco-col-offset-9{margin-left:37.5%}.arco-col-offset-10{margin-left:41.66666667%}.arco-col-offset-11{margin-left:45.83333333%}.arco-col-offset-12{margin-left:50%}.arco-col-offset-13{margin-left:54.16666667%}.arco-col-offset-14{margin-left:58.33333333%}.arco-col-offset-15{margin-left:62.5%}.arco-col-offset-16{margin-left:66.66666667%}.arco-col-offset-17{margin-left:70.83333333%}.arco-col-offset-18{margin-left:75%}.arco-col-offset-19{margin-left:79.16666667%}.arco-col-offset-20{margin-left:83.33333333%}.arco-col-offset-21{margin-left:87.5%}.arco-col-offset-22{margin-left:91.66666667%}.arco-col-offset-23{margin-left:95.83333333%}.arco-col-order-1{order:1}.arco-col-order-2{order:2}.arco-col-order-3{order:3}.arco-col-order-4{order:4}.arco-col-order-5{order:5}.arco-col-order-6{order:6}.arco-col-order-7{order:7}.arco-col-order-8{order:8}.arco-col-order-9{order:9}.arco-col-order-10{order:10}.arco-col-order-11{order:11}.arco-col-order-12{order:12}.arco-col-order-13{order:13}.arco-col-order-14{order:14}.arco-col-order-15{order:15}.arco-col-order-16{order:16}.arco-col-order-17{order:17}.arco-col-order-18{order:18}.arco-col-order-19{order:19}.arco-col-order-20{order:20}.arco-col-order-21{order:21}.arco-col-order-22{order:22}.arco-col-order-23{order:23}.arco-col-order-24{order:24}.arco-col-xs-1{flex:0 0 4.16666667%;width:4.16666667%}.arco-col-xs-2{flex:0 0 8.33333333%;width:8.33333333%}.arco-col-xs-3{flex:0 0 12.5%;width:12.5%}.arco-col-xs-4{flex:0 0 16.66666667%;width:16.66666667%}.arco-col-xs-5{flex:0 0 20.83333333%;width:20.83333333%}.arco-col-xs-6{flex:0 0 
25%;width:25%}.arco-col-xs-7{flex:0 0 29.16666667%;width:29.16666667%}.arco-col-xs-8{flex:0 0 33.33333333%;width:33.33333333%}.arco-col-xs-9{flex:0 0 37.5%;width:37.5%}.arco-col-xs-10{flex:0 0 41.66666667%;width:41.66666667%}.arco-col-xs-11{flex:0 0 45.83333333%;width:45.83333333%}.arco-col-xs-12{flex:0 0 50%;width:50%}.arco-col-xs-13{flex:0 0 54.16666667%;width:54.16666667%}.arco-col-xs-14{flex:0 0 58.33333333%;width:58.33333333%}.arco-col-xs-15{flex:0 0 62.5%;width:62.5%}.arco-col-xs-16{flex:0 0 66.66666667%;width:66.66666667%}.arco-col-xs-17{flex:0 0 70.83333333%;width:70.83333333%}.arco-col-xs-18{flex:0 0 75%;width:75%}.arco-col-xs-19{flex:0 0 79.16666667%;width:79.16666667%}.arco-col-xs-20{flex:0 0 83.33333333%;width:83.33333333%}.arco-col-xs-21{flex:0 0 87.5%;width:87.5%}.arco-col-xs-22{flex:0 0 91.66666667%;width:91.66666667%}.arco-col-xs-23{flex:0 0 95.83333333%;width:95.83333333%}.arco-col-xs-24{flex:0 0 100%;width:100%}.arco-col-xs-offset-1{margin-left:4.16666667%}.arco-col-xs-offset-2{margin-left:8.33333333%}.arco-col-xs-offset-3{margin-left:12.5%}.arco-col-xs-offset-4{margin-left:16.66666667%}.arco-col-xs-offset-5{margin-left:20.83333333%}.arco-col-xs-offset-6{margin-left:25%}.arco-col-xs-offset-7{margin-left:29.16666667%}.arco-col-xs-offset-8{margin-left:33.33333333%}.arco-col-xs-offset-9{margin-left:37.5%}.arco-col-xs-offset-10{margin-left:41.66666667%}.arco-col-xs-offset-11{margin-left:45.83333333%}.arco-col-xs-offset-12{margin-left:50%}.arco-col-xs-offset-13{margin-left:54.16666667%}.arco-col-xs-offset-14{margin-left:58.33333333%}.arco-col-xs-offset-15{margin-left:62.5%}.arco-col-xs-offset-16{margin-left:66.66666667%}.arco-col-xs-offset-17{margin-left:70.83333333%}.arco-col-xs-offset-18{margin-left:75%}.arco-col-xs-offset-19{margin-left:79.16666667%}.arco-col-xs-offset-20{margin-left:83.33333333%}.arco-col-xs-offset-21{margin-left:87.5%}.arco-col-xs-offset-22{margin-left:91.66666667%}.arco-col-xs-offset-23{margin-left:95.83333333%}.arco-col-xs-order-1{order:1}.arco-col-xs-order-2{order:2}.arco-col-xs-order-3{order:3}.arco-col-xs-order-4{order:4}.arco-col-xs-order-5{order:5}.arco-col-xs-order-6{order:6}.arco-col-xs-order-7{order:7}.arco-col-xs-order-8{order:8}.arco-col-xs-order-9{order:9}.arco-col-xs-order-10{order:10}.arco-col-xs-order-11{order:11}.arco-col-xs-order-12{order:12}.arco-col-xs-order-13{order:13}.arco-col-xs-order-14{order:14}.arco-col-xs-order-15{order:15}.arco-col-xs-order-16{order:16}.arco-col-xs-order-17{order:17}.arco-col-xs-order-18{order:18}.arco-col-xs-order-19{order:19}.arco-col-xs-order-20{order:20}.arco-col-xs-order-21{order:21}.arco-col-xs-order-22{order:22}.arco-col-xs-order-23{order:23}.arco-col-xs-order-24{order:24}@media (min-width: 576px){.arco-col-sm-1{flex:0 0 4.16666667%;width:4.16666667%}.arco-col-sm-2{flex:0 0 8.33333333%;width:8.33333333%}.arco-col-sm-3{flex:0 0 12.5%;width:12.5%}.arco-col-sm-4{flex:0 0 16.66666667%;width:16.66666667%}.arco-col-sm-5{flex:0 0 20.83333333%;width:20.83333333%}.arco-col-sm-6{flex:0 0 25%;width:25%}.arco-col-sm-7{flex:0 0 29.16666667%;width:29.16666667%}.arco-col-sm-8{flex:0 0 33.33333333%;width:33.33333333%}.arco-col-sm-9{flex:0 0 37.5%;width:37.5%}.arco-col-sm-10{flex:0 0 41.66666667%;width:41.66666667%}.arco-col-sm-11{flex:0 0 45.83333333%;width:45.83333333%}.arco-col-sm-12{flex:0 0 50%;width:50%}.arco-col-sm-13{flex:0 0 54.16666667%;width:54.16666667%}.arco-col-sm-14{flex:0 0 58.33333333%;width:58.33333333%}.arco-col-sm-15{flex:0 0 62.5%;width:62.5%}.arco-col-sm-16{flex:0 0 
66.66666667%;width:66.66666667%}.arco-col-sm-17{flex:0 0 70.83333333%;width:70.83333333%}.arco-col-sm-18{flex:0 0 75%;width:75%}.arco-col-sm-19{flex:0 0 79.16666667%;width:79.16666667%}.arco-col-sm-20{flex:0 0 83.33333333%;width:83.33333333%}.arco-col-sm-21{flex:0 0 87.5%;width:87.5%}.arco-col-sm-22{flex:0 0 91.66666667%;width:91.66666667%}.arco-col-sm-23{flex:0 0 95.83333333%;width:95.83333333%}.arco-col-sm-24{flex:0 0 100%;width:100%}.arco-col-sm-offset-1{margin-left:4.16666667%}.arco-col-sm-offset-2{margin-left:8.33333333%}.arco-col-sm-offset-3{margin-left:12.5%}.arco-col-sm-offset-4{margin-left:16.66666667%}.arco-col-sm-offset-5{margin-left:20.83333333%}.arco-col-sm-offset-6{margin-left:25%}.arco-col-sm-offset-7{margin-left:29.16666667%}.arco-col-sm-offset-8{margin-left:33.33333333%}.arco-col-sm-offset-9{margin-left:37.5%}.arco-col-sm-offset-10{margin-left:41.66666667%}.arco-col-sm-offset-11{margin-left:45.83333333%}.arco-col-sm-offset-12{margin-left:50%}.arco-col-sm-offset-13{margin-left:54.16666667%}.arco-col-sm-offset-14{margin-left:58.33333333%}.arco-col-sm-offset-15{margin-left:62.5%}.arco-col-sm-offset-16{margin-left:66.66666667%}.arco-col-sm-offset-17{margin-left:70.83333333%}.arco-col-sm-offset-18{margin-left:75%}.arco-col-sm-offset-19{margin-left:79.16666667%}.arco-col-sm-offset-20{margin-left:83.33333333%}.arco-col-sm-offset-21{margin-left:87.5%}.arco-col-sm-offset-22{margin-left:91.66666667%}.arco-col-sm-offset-23{margin-left:95.83333333%}.arco-col-sm-order-1{order:1}.arco-col-sm-order-2{order:2}.arco-col-sm-order-3{order:3}.arco-col-sm-order-4{order:4}.arco-col-sm-order-5{order:5}.arco-col-sm-order-6{order:6}.arco-col-sm-order-7{order:7}.arco-col-sm-order-8{order:8}.arco-col-sm-order-9{order:9}.arco-col-sm-order-10{order:10}.arco-col-sm-order-11{order:11}.arco-col-sm-order-12{order:12}.arco-col-sm-order-13{order:13}.arco-col-sm-order-14{order:14}.arco-col-sm-order-15{order:15}.arco-col-sm-order-16{order:16}.arco-col-sm-order-17{order:17}.arco-col-sm-order-18{order:18}.arco-col-sm-order-19{order:19}.arco-col-sm-order-20{order:20}.arco-col-sm-order-21{order:21}.arco-col-sm-order-22{order:22}.arco-col-sm-order-23{order:23}.arco-col-sm-order-24{order:24}}@media (min-width: 768px){.arco-col-md-1{flex:0 0 4.16666667%;width:4.16666667%}.arco-col-md-2{flex:0 0 8.33333333%;width:8.33333333%}.arco-col-md-3{flex:0 0 12.5%;width:12.5%}.arco-col-md-4{flex:0 0 16.66666667%;width:16.66666667%}.arco-col-md-5{flex:0 0 20.83333333%;width:20.83333333%}.arco-col-md-6{flex:0 0 25%;width:25%}.arco-col-md-7{flex:0 0 29.16666667%;width:29.16666667%}.arco-col-md-8{flex:0 0 33.33333333%;width:33.33333333%}.arco-col-md-9{flex:0 0 37.5%;width:37.5%}.arco-col-md-10{flex:0 0 41.66666667%;width:41.66666667%}.arco-col-md-11{flex:0 0 45.83333333%;width:45.83333333%}.arco-col-md-12{flex:0 0 50%;width:50%}.arco-col-md-13{flex:0 0 54.16666667%;width:54.16666667%}.arco-col-md-14{flex:0 0 58.33333333%;width:58.33333333%}.arco-col-md-15{flex:0 0 62.5%;width:62.5%}.arco-col-md-16{flex:0 0 66.66666667%;width:66.66666667%}.arco-col-md-17{flex:0 0 70.83333333%;width:70.83333333%}.arco-col-md-18{flex:0 0 75%;width:75%}.arco-col-md-19{flex:0 0 79.16666667%;width:79.16666667%}.arco-col-md-20{flex:0 0 83.33333333%;width:83.33333333%}.arco-col-md-21{flex:0 0 87.5%;width:87.5%}.arco-col-md-22{flex:0 0 91.66666667%;width:91.66666667%}.arco-col-md-23{flex:0 0 95.83333333%;width:95.83333333%}.arco-col-md-24{flex:0 0 
100%;width:100%}.arco-col-md-offset-1{margin-left:4.16666667%}.arco-col-md-offset-2{margin-left:8.33333333%}.arco-col-md-offset-3{margin-left:12.5%}.arco-col-md-offset-4{margin-left:16.66666667%}.arco-col-md-offset-5{margin-left:20.83333333%}.arco-col-md-offset-6{margin-left:25%}.arco-col-md-offset-7{margin-left:29.16666667%}.arco-col-md-offset-8{margin-left:33.33333333%}.arco-col-md-offset-9{margin-left:37.5%}.arco-col-md-offset-10{margin-left:41.66666667%}.arco-col-md-offset-11{margin-left:45.83333333%}.arco-col-md-offset-12{margin-left:50%}.arco-col-md-offset-13{margin-left:54.16666667%}.arco-col-md-offset-14{margin-left:58.33333333%}.arco-col-md-offset-15{margin-left:62.5%}.arco-col-md-offset-16{margin-left:66.66666667%}.arco-col-md-offset-17{margin-left:70.83333333%}.arco-col-md-offset-18{margin-left:75%}.arco-col-md-offset-19{margin-left:79.16666667%}.arco-col-md-offset-20{margin-left:83.33333333%}.arco-col-md-offset-21{margin-left:87.5%}.arco-col-md-offset-22{margin-left:91.66666667%}.arco-col-md-offset-23{margin-left:95.83333333%}.arco-col-md-order-1{order:1}.arco-col-md-order-2{order:2}.arco-col-md-order-3{order:3}.arco-col-md-order-4{order:4}.arco-col-md-order-5{order:5}.arco-col-md-order-6{order:6}.arco-col-md-order-7{order:7}.arco-col-md-order-8{order:8}.arco-col-md-order-9{order:9}.arco-col-md-order-10{order:10}.arco-col-md-order-11{order:11}.arco-col-md-order-12{order:12}.arco-col-md-order-13{order:13}.arco-col-md-order-14{order:14}.arco-col-md-order-15{order:15}.arco-col-md-order-16{order:16}.arco-col-md-order-17{order:17}.arco-col-md-order-18{order:18}.arco-col-md-order-19{order:19}.arco-col-md-order-20{order:20}.arco-col-md-order-21{order:21}.arco-col-md-order-22{order:22}.arco-col-md-order-23{order:23}.arco-col-md-order-24{order:24}}@media (min-width: 992px){.arco-col-lg-1{flex:0 0 4.16666667%;width:4.16666667%}.arco-col-lg-2{flex:0 0 8.33333333%;width:8.33333333%}.arco-col-lg-3{flex:0 0 12.5%;width:12.5%}.arco-col-lg-4{flex:0 0 16.66666667%;width:16.66666667%}.arco-col-lg-5{flex:0 0 20.83333333%;width:20.83333333%}.arco-col-lg-6{flex:0 0 25%;width:25%}.arco-col-lg-7{flex:0 0 29.16666667%;width:29.16666667%}.arco-col-lg-8{flex:0 0 33.33333333%;width:33.33333333%}.arco-col-lg-9{flex:0 0 37.5%;width:37.5%}.arco-col-lg-10{flex:0 0 41.66666667%;width:41.66666667%}.arco-col-lg-11{flex:0 0 45.83333333%;width:45.83333333%}.arco-col-lg-12{flex:0 0 50%;width:50%}.arco-col-lg-13{flex:0 0 54.16666667%;width:54.16666667%}.arco-col-lg-14{flex:0 0 58.33333333%;width:58.33333333%}.arco-col-lg-15{flex:0 0 62.5%;width:62.5%}.arco-col-lg-16{flex:0 0 66.66666667%;width:66.66666667%}.arco-col-lg-17{flex:0 0 70.83333333%;width:70.83333333%}.arco-col-lg-18{flex:0 0 75%;width:75%}.arco-col-lg-19{flex:0 0 79.16666667%;width:79.16666667%}.arco-col-lg-20{flex:0 0 83.33333333%;width:83.33333333%}.arco-col-lg-21{flex:0 0 87.5%;width:87.5%}.arco-col-lg-22{flex:0 0 91.66666667%;width:91.66666667%}.arco-col-lg-23{flex:0 0 95.83333333%;width:95.83333333%}.arco-col-lg-24{flex:0 0 
100%;width:100%}.arco-col-lg-offset-1{margin-left:4.16666667%}.arco-col-lg-offset-2{margin-left:8.33333333%}.arco-col-lg-offset-3{margin-left:12.5%}.arco-col-lg-offset-4{margin-left:16.66666667%}.arco-col-lg-offset-5{margin-left:20.83333333%}.arco-col-lg-offset-6{margin-left:25%}.arco-col-lg-offset-7{margin-left:29.16666667%}.arco-col-lg-offset-8{margin-left:33.33333333%}.arco-col-lg-offset-9{margin-left:37.5%}.arco-col-lg-offset-10{margin-left:41.66666667%}.arco-col-lg-offset-11{margin-left:45.83333333%}.arco-col-lg-offset-12{margin-left:50%}.arco-col-lg-offset-13{margin-left:54.16666667%}.arco-col-lg-offset-14{margin-left:58.33333333%}.arco-col-lg-offset-15{margin-left:62.5%}.arco-col-lg-offset-16{margin-left:66.66666667%}.arco-col-lg-offset-17{margin-left:70.83333333%}.arco-col-lg-offset-18{margin-left:75%}.arco-col-lg-offset-19{margin-left:79.16666667%}.arco-col-lg-offset-20{margin-left:83.33333333%}.arco-col-lg-offset-21{margin-left:87.5%}.arco-col-lg-offset-22{margin-left:91.66666667%}.arco-col-lg-offset-23{margin-left:95.83333333%}.arco-col-lg-order-1{order:1}.arco-col-lg-order-2{order:2}.arco-col-lg-order-3{order:3}.arco-col-lg-order-4{order:4}.arco-col-lg-order-5{order:5}.arco-col-lg-order-6{order:6}.arco-col-lg-order-7{order:7}.arco-col-lg-order-8{order:8}.arco-col-lg-order-9{order:9}.arco-col-lg-order-10{order:10}.arco-col-lg-order-11{order:11}.arco-col-lg-order-12{order:12}.arco-col-lg-order-13{order:13}.arco-col-lg-order-14{order:14}.arco-col-lg-order-15{order:15}.arco-col-lg-order-16{order:16}.arco-col-lg-order-17{order:17}.arco-col-lg-order-18{order:18}.arco-col-lg-order-19{order:19}.arco-col-lg-order-20{order:20}.arco-col-lg-order-21{order:21}.arco-col-lg-order-22{order:22}.arco-col-lg-order-23{order:23}.arco-col-lg-order-24{order:24}}@media (min-width: 1200px){.arco-col-xl-1{flex:0 0 4.16666667%;width:4.16666667%}.arco-col-xl-2{flex:0 0 8.33333333%;width:8.33333333%}.arco-col-xl-3{flex:0 0 12.5%;width:12.5%}.arco-col-xl-4{flex:0 0 16.66666667%;width:16.66666667%}.arco-col-xl-5{flex:0 0 20.83333333%;width:20.83333333%}.arco-col-xl-6{flex:0 0 25%;width:25%}.arco-col-xl-7{flex:0 0 29.16666667%;width:29.16666667%}.arco-col-xl-8{flex:0 0 33.33333333%;width:33.33333333%}.arco-col-xl-9{flex:0 0 37.5%;width:37.5%}.arco-col-xl-10{flex:0 0 41.66666667%;width:41.66666667%}.arco-col-xl-11{flex:0 0 45.83333333%;width:45.83333333%}.arco-col-xl-12{flex:0 0 50%;width:50%}.arco-col-xl-13{flex:0 0 54.16666667%;width:54.16666667%}.arco-col-xl-14{flex:0 0 58.33333333%;width:58.33333333%}.arco-col-xl-15{flex:0 0 62.5%;width:62.5%}.arco-col-xl-16{flex:0 0 66.66666667%;width:66.66666667%}.arco-col-xl-17{flex:0 0 70.83333333%;width:70.83333333%}.arco-col-xl-18{flex:0 0 75%;width:75%}.arco-col-xl-19{flex:0 0 79.16666667%;width:79.16666667%}.arco-col-xl-20{flex:0 0 83.33333333%;width:83.33333333%}.arco-col-xl-21{flex:0 0 87.5%;width:87.5%}.arco-col-xl-22{flex:0 0 91.66666667%;width:91.66666667%}.arco-col-xl-23{flex:0 0 95.83333333%;width:95.83333333%}.arco-col-xl-24{flex:0 0 
100%;width:100%}.arco-col-xl-offset-1{margin-left:4.16666667%}.arco-col-xl-offset-2{margin-left:8.33333333%}.arco-col-xl-offset-3{margin-left:12.5%}.arco-col-xl-offset-4{margin-left:16.66666667%}.arco-col-xl-offset-5{margin-left:20.83333333%}.arco-col-xl-offset-6{margin-left:25%}.arco-col-xl-offset-7{margin-left:29.16666667%}.arco-col-xl-offset-8{margin-left:33.33333333%}.arco-col-xl-offset-9{margin-left:37.5%}.arco-col-xl-offset-10{margin-left:41.66666667%}.arco-col-xl-offset-11{margin-left:45.83333333%}.arco-col-xl-offset-12{margin-left:50%}.arco-col-xl-offset-13{margin-left:54.16666667%}.arco-col-xl-offset-14{margin-left:58.33333333%}.arco-col-xl-offset-15{margin-left:62.5%}.arco-col-xl-offset-16{margin-left:66.66666667%}.arco-col-xl-offset-17{margin-left:70.83333333%}.arco-col-xl-offset-18{margin-left:75%}.arco-col-xl-offset-19{margin-left:79.16666667%}.arco-col-xl-offset-20{margin-left:83.33333333%}.arco-col-xl-offset-21{margin-left:87.5%}.arco-col-xl-offset-22{margin-left:91.66666667%}.arco-col-xl-offset-23{margin-left:95.83333333%}.arco-col-xl-order-1{order:1}.arco-col-xl-order-2{order:2}.arco-col-xl-order-3{order:3}.arco-col-xl-order-4{order:4}.arco-col-xl-order-5{order:5}.arco-col-xl-order-6{order:6}.arco-col-xl-order-7{order:7}.arco-col-xl-order-8{order:8}.arco-col-xl-order-9{order:9}.arco-col-xl-order-10{order:10}.arco-col-xl-order-11{order:11}.arco-col-xl-order-12{order:12}.arco-col-xl-order-13{order:13}.arco-col-xl-order-14{order:14}.arco-col-xl-order-15{order:15}.arco-col-xl-order-16{order:16}.arco-col-xl-order-17{order:17}.arco-col-xl-order-18{order:18}.arco-col-xl-order-19{order:19}.arco-col-xl-order-20{order:20}.arco-col-xl-order-21{order:21}.arco-col-xl-order-22{order:22}.arco-col-xl-order-23{order:23}.arco-col-xl-order-24{order:24}}@media (min-width: 1600px){.arco-col-xxl-1{flex:0 0 4.16666667%;width:4.16666667%}.arco-col-xxl-2{flex:0 0 8.33333333%;width:8.33333333%}.arco-col-xxl-3{flex:0 0 12.5%;width:12.5%}.arco-col-xxl-4{flex:0 0 16.66666667%;width:16.66666667%}.arco-col-xxl-5{flex:0 0 20.83333333%;width:20.83333333%}.arco-col-xxl-6{flex:0 0 25%;width:25%}.arco-col-xxl-7{flex:0 0 29.16666667%;width:29.16666667%}.arco-col-xxl-8{flex:0 0 33.33333333%;width:33.33333333%}.arco-col-xxl-9{flex:0 0 37.5%;width:37.5%}.arco-col-xxl-10{flex:0 0 41.66666667%;width:41.66666667%}.arco-col-xxl-11{flex:0 0 45.83333333%;width:45.83333333%}.arco-col-xxl-12{flex:0 0 50%;width:50%}.arco-col-xxl-13{flex:0 0 54.16666667%;width:54.16666667%}.arco-col-xxl-14{flex:0 0 58.33333333%;width:58.33333333%}.arco-col-xxl-15{flex:0 0 62.5%;width:62.5%}.arco-col-xxl-16{flex:0 0 66.66666667%;width:66.66666667%}.arco-col-xxl-17{flex:0 0 70.83333333%;width:70.83333333%}.arco-col-xxl-18{flex:0 0 75%;width:75%}.arco-col-xxl-19{flex:0 0 79.16666667%;width:79.16666667%}.arco-col-xxl-20{flex:0 0 83.33333333%;width:83.33333333%}.arco-col-xxl-21{flex:0 0 87.5%;width:87.5%}.arco-col-xxl-22{flex:0 0 91.66666667%;width:91.66666667%}.arco-col-xxl-23{flex:0 0 95.83333333%;width:95.83333333%}.arco-col-xxl-24{flex:0 0 
100%;width:100%}.arco-col-xxl-offset-1{margin-left:4.16666667%}.arco-col-xxl-offset-2{margin-left:8.33333333%}.arco-col-xxl-offset-3{margin-left:12.5%}.arco-col-xxl-offset-4{margin-left:16.66666667%}.arco-col-xxl-offset-5{margin-left:20.83333333%}.arco-col-xxl-offset-6{margin-left:25%}.arco-col-xxl-offset-7{margin-left:29.16666667%}.arco-col-xxl-offset-8{margin-left:33.33333333%}.arco-col-xxl-offset-9{margin-left:37.5%}.arco-col-xxl-offset-10{margin-left:41.66666667%}.arco-col-xxl-offset-11{margin-left:45.83333333%}.arco-col-xxl-offset-12{margin-left:50%}.arco-col-xxl-offset-13{margin-left:54.16666667%}.arco-col-xxl-offset-14{margin-left:58.33333333%}.arco-col-xxl-offset-15{margin-left:62.5%}.arco-col-xxl-offset-16{margin-left:66.66666667%}.arco-col-xxl-offset-17{margin-left:70.83333333%}.arco-col-xxl-offset-18{margin-left:75%}.arco-col-xxl-offset-19{margin-left:79.16666667%}.arco-col-xxl-offset-20{margin-left:83.33333333%}.arco-col-xxl-offset-21{margin-left:87.5%}.arco-col-xxl-offset-22{margin-left:91.66666667%}.arco-col-xxl-offset-23{margin-left:95.83333333%}.arco-col-xxl-order-1{order:1}.arco-col-xxl-order-2{order:2}.arco-col-xxl-order-3{order:3}.arco-col-xxl-order-4{order:4}.arco-col-xxl-order-5{order:5}.arco-col-xxl-order-6{order:6}.arco-col-xxl-order-7{order:7}.arco-col-xxl-order-8{order:8}.arco-col-xxl-order-9{order:9}.arco-col-xxl-order-10{order:10}.arco-col-xxl-order-11{order:11}.arco-col-xxl-order-12{order:12}.arco-col-xxl-order-13{order:13}.arco-col-xxl-order-14{order:14}.arco-col-xxl-order-15{order:15}.arco-col-xxl-order-16{order:16}.arco-col-xxl-order-17{order:17}.arco-col-xxl-order-18{order:18}.arco-col-xxl-order-19{order:19}.arco-col-xxl-order-20{order:20}.arco-col-xxl-order-21{order:21}.arco-col-xxl-order-22{order:22}.arco-col-xxl-order-23{order:23}.arco-col-xxl-order-24{order:24}}.arco-grid{display:grid}.arco-image-trigger{padding:6px 4px;background:var(--color-bg-5);border:1px solid var(--color-neutral-3);border-radius:4px}.arco-image-trigger .arco-trigger-arrow{background-color:var(--color-bg-5);border:1px solid var(--color-neutral-3)}.arco-image{position:relative;display:inline-block;border-radius:var(--border-radius-small)}.arco-image-img{vertical-align:middle;border-radius:inherit}.arco-image-overlay{position:absolute;top:0;left:0;width:100%;height:100%}.arco-image-footer{display:flex;width:100%;max-width:100%}.arco-image-footer-caption{flex:1 1 auto}.arco-image-footer-caption-title{font-weight:500;font-size:16px}.arco-image-footer-caption-description{font-size:14px}.arco-image-footer-extra{flex:0 0 auto;padding-left:12px}.arco-image-with-footer-inner .arco-image-footer{position:absolute;bottom:0;left:0;align-items:center;box-sizing:border-box;padding:9px 16px;color:var(--color-white);background:linear-gradient(360deg,rgba(0,0,0,.3) 0%,rgba(0,0,0,0) 100%);border-bottom-right-radius:var(--border-radius-small);border-bottom-left-radius:var(--border-radius-small)}.arco-image-with-footer-inner .arco-image-footer-caption-title,.arco-image-with-footer-inner .arco-image-footer-caption-description{color:var(--color-white)}.arco-image-with-footer-outer .arco-image-footer{margin-top:4px;color:var(--color-neutral-8)}.arco-image-with-footer-outer .arco-image-footer-caption-title{color:var(--color-text-1)}.arco-image-with-footer-outer 
.arco-image-footer-caption-description{color:var(--color-neutral-6)}.arco-image-error{display:flex;flex-direction:column;align-items:center;justify-content:center;box-sizing:border-box;width:100%;height:100%;color:var(--color-neutral-4);background-color:var(--color-neutral-1)}.arco-image-error-icon{width:60px;max-width:100%;height:60px;max-height:100%}.arco-image-error-icon>svg{width:100%;height:100%}.arco-image-error-alt{padding:8px 16px;font-size:12px;line-height:1.6667;text-align:center}.arco-image-loader{position:absolute;top:0;left:0;width:100%;height:100%;background-color:var(--color-neutral-1)}.arco-image-loader-spin{position:absolute;top:50%;left:50%;color:rgb(var(--primary-6));font-size:32px;text-align:center;transform:translate(-50%,-50%)}.arco-image-loader-spin-text{color:var(--color-neutral-6);font-size:16px}.arco-image-simple.arco-image-with-footer-inner .arco-image-footer{padding:12px 16px}.arco-image-loading .arco-image-img,.arco-image-loading-error .arco-image-img{visibility:hidden}.arco-image-preview{position:fixed;top:0;left:0;z-index:1001;width:100%;height:100%}.arco-image-preview-hide{display:none}.arco-image-preview-mask,.arco-image-preview-wrapper{position:absolute;top:0;left:0;width:100%;height:100%}.arco-image-preview-mask{background-color:var(--color-mask-bg)}.arco-image-preview-img-container{width:100%;height:100%;text-align:center}.arco-image-preview-img-container:before{display:inline-block;width:0;height:100%;vertical-align:middle;content:""}.arco-image-preview-img-container .arco-image-preview-img{display:inline-block;max-width:100%;max-height:100%;vertical-align:middle;cursor:grab;user-select:none}.arco-image-preview-img-container .arco-image-preview-img.arco-image-preview-img-moving{cursor:grabbing}.arco-image-preview-scale-value{box-sizing:border-box;padding:7px 10px;color:var(--color-white);font-size:12px;line-height:initial;background-color:#ffffff14;position:absolute;top:50%;left:50%;transform:translate(-50%,-50%)}.arco-image-preview-toolbar{position:absolute;bottom:46px;left:50%;display:flex;align-items:flex-start;padding:4px 
16px;background-color:var(--color-bg-2);border-radius:var(--border-radius-medium);transform:translate(-50%)}.arco-image-preview-toolbar-action{display:flex;align-items:center;color:var(--color-neutral-8);font-size:14px;background-color:transparent;border-radius:var(--border-radius-small);cursor:pointer}.arco-image-preview-toolbar-action:not(:last-of-type){margin-right:0}.arco-image-preview-toolbar-action:hover{color:rgb(var(--primary-6));background-color:var(--color-neutral-2)}.arco-image-preview-toolbar-action-disabled,.arco-image-preview-toolbar-action-disabled:hover{color:var(--color-text-4);background-color:transparent;cursor:not-allowed}.arco-image-preview-toolbar-action-name{padding-right:12px;font-size:12px}.arco-image-preview-toolbar-action-content{padding:13px;line-height:1}.arco-image-preview-loading{display:flex;align-items:center;justify-content:center;box-sizing:border-box;width:48px;height:48px;padding:10px;color:rgb(var(--primary-6));font-size:18px;background-color:#232324;border-radius:var(--border-radius-medium);position:absolute;top:50%;left:50%;transform:translate(-50%,-50%)}.arco-image-preview-close-btn{position:absolute;top:36px;right:36px;display:flex;align-items:center;justify-content:center;width:32px;height:32px;color:var(--color-white);font-size:14px;line-height:32px;text-align:center;background:rgba(0,0,0,.5);border-radius:50%;cursor:pointer}.arco-image-preview-arrow-left,.arco-image-preview-arrow-right{position:absolute;z-index:2;display:flex;align-items:center;justify-content:center;width:32px;height:32px;color:var(--color-white);background-color:#ffffff4d;border-radius:50%;cursor:pointer}.arco-image-preview-arrow-left>svg,.arco-image-preview-arrow-right>svg{color:var(--color-white);font-size:16px}.arco-image-preview-arrow-left:hover,.arco-image-preview-arrow-right:hover{background-color:#ffffff80}.arco-image-preview-arrow-left{top:50%;left:20px;transform:translateY(-50%)}.arco-image-preview-arrow-right{top:50%;right:20px;transform:translateY(-50%)}.arco-image-preview-arrow-disabled{color:#ffffff4d;background-color:#fff3;cursor:not-allowed}.arco-image-preview-arrow-disabled>svg{color:#ffffff4d}.arco-image-preview-arrow-disabled:hover{background-color:#fff3}.image-fade-enter-from,.image-fade-leave-to{opacity:0}.image-fade-enter-to,.image-fade-leave-from{opacity:1}.image-fade-enter-active,.image-fade-leave-active{transition:opacity .4s cubic-bezier(.3,1.3,.3,1)}.arco-input-number{position:relative;box-sizing:border-box;width:100%;border-radius:var(--border-radius-small)}.arco-input-number-step-button{display:flex;align-items:center;justify-content:center;box-sizing:border-box;padding:0;color:var(--color-text-2);background-color:var(--color-fill-2);cursor:pointer;user-select:none;transition:all .1s cubic-bezier(0,0,1,1)}.arco-input-number-step-button:hover{background-color:var(--color-fill-3);border-color:var(--color-fill-3)}.arco-input-number-step-button:active{background-color:var(--color-fill-4);border-color:var(--color-fill-4)}.arco-input-number-step-button:disabled{color:var(--color-text-4);background-color:var(--color-fill-2);cursor:not-allowed}.arco-input-number-step-button:disabled:hover,.arco-input-number-step-button:disabled:active{background-color:var(--color-fill-2);border-color:var(--color-neutral-3)}.arco-input-number-prefix,.arco-input-number-suffix{transition:all .1s cubic-bezier(0,0,1,1)}.arco-input-number-mode-embed 
.arco-input-number-step{position:absolute;top:4px;right:4px;bottom:4px;width:18px;overflow:hidden;border-radius:1px;opacity:0;transition:all .1s cubic-bezier(0,0,1,1)}.arco-input-number-mode-embed .arco-input-number-step .arco-input-number-step-button{width:100%;height:50%;font-size:10px;border:none;border-color:var(--color-neutral-3)}.arco-input-number-mode-embed .arco-input-suffix{justify-content:flex-end;min-width:6px}.arco-input-number-mode-embed .arco-input-suffix-has-feedback{min-width:32px}.arco-input-number-mode-embed .arco-input-suffix-has-feedback .arco-input-number-step{right:30px}.arco-input-number-mode-embed:not(.arco-input-disabled):not(.arco-input-outer-disabled):hover .arco-input-number-step,.arco-input-number-mode-embed:not(.arco-input-disabled):not(.arco-input-outer-disabled):focus-within .arco-input-number-step{opacity:1}.arco-input-number-mode-embed:not(.arco-input-disabled):not(.arco-input-outer-disabled):hover .arco-input-number-step~.arco-input-suffix,.arco-input-number-mode-embed:not(.arco-input-disabled):not(.arco-input-outer-disabled):focus-within .arco-input-number-step~.arco-input-suffix{opacity:0;pointer-events:none}.arco-input-number-mode-embed.arco-input-wrapper:not(.arco-input-focus) .arco-input-number-step-button:not(.arco-input-number-step-button-disabled):hover{background-color:var(--color-fill-4)}.arco-input-number-mode-button .arco-input-prepend,.arco-input-number-mode-button .arco-input-append{padding:0;border:none}.arco-input-number-mode-button .arco-input-prepend .arco-input-number-step-button{border-right:1px solid transparent;border-top-right-radius:0;border-bottom-right-radius:0}.arco-input-number-mode-button .arco-input-prepend .arco-input-number-step-button:not(.arco-input-number-mode-button .arco-input-prepend .arco-input-number-step-button:active){border-right-color:var(--color-neutral-3)}.arco-input-number-mode-button .arco-input-append .arco-input-number-step-button{border-left:1px solid transparent;border-top-left-radius:0;border-bottom-left-radius:0}.arco-input-number-mode-button .arco-input-append .arco-input-number-step-button:not(.arco-input-number-mode-button .arco-input-append .arco-input-number-step-button:active){border-left-color:var(--color-neutral-3)}.arco-input-tag{display:inline-flex;box-sizing:border-box;width:100%;padding-right:12px;padding-left:12px;color:var(--color-text-1);font-size:14px;background-color:var(--color-fill-2);border:1px solid transparent;border-radius:var(--border-radius-small);cursor:text;transition:color .1s cubic-bezier(0,0,1,1),border-color .1s cubic-bezier(0,0,1,1),background-color .1s cubic-bezier(0,0,1,1)}.arco-input-tag:hover{background-color:var(--color-fill-3);border-color:transparent}.arco-input-tag:focus-within,.arco-input-tag.arco-input-tag-focus{background-color:var(--color-bg-2);border-color:rgb(var(--primary-6));box-shadow:0 0 0 0 var(--color-primary-light-2)}.arco-input-tag.arco-input-tag-disabled{color:var(--color-text-4);background-color:var(--color-fill-2);border-color:transparent;cursor:not-allowed}.arco-input-tag.arco-input-tag-disabled:hover{color:var(--color-text-4);background-color:var(--color-fill-2);border-color:transparent}.arco-input-tag.arco-input-tag-disabled .arco-input-tag-prefix,.arco-input-tag.arco-input-tag-disabled 
.arco-input-tag-suffix{color:inherit}.arco-input-tag.arco-input-tag-error{background-color:var(--color-danger-light-1);border-color:transparent}.arco-input-tag.arco-input-tag-error:hover{background-color:var(--color-danger-light-2);border-color:transparent}.arco-input-tag.arco-input-tag-error:focus-within,.arco-input-tag.arco-input-tag-error.arco-input-tag-focus{background-color:var(--color-bg-2);border-color:rgb(var(--danger-6));box-shadow:0 0 0 0 var(--color-danger-light-2)}.arco-input-tag .arco-input-tag-prefix,.arco-input-tag .arco-input-tag-suffix{display:inline-flex;flex-shrink:0;align-items:center;white-space:nowrap;user-select:none}.arco-input-tag .arco-input-tag-prefix>svg,.arco-input-tag .arco-input-tag-suffix>svg{font-size:14px}.arco-input-tag .arco-input-tag-prefix{padding-right:12px;color:var(--color-text-2)}.arco-input-tag .arco-input-tag-suffix{padding-left:12px;color:var(--color-text-2)}.arco-input-tag .arco-input-tag-suffix .arco-feedback-icon{display:inline-flex}.arco-input-tag .arco-input-tag-suffix .arco-feedback-icon-status-validating{color:rgb(var(--primary-6))}.arco-input-tag .arco-input-tag-suffix .arco-feedback-icon-status-success{color:rgb(var(--success-6))}.arco-input-tag .arco-input-tag-suffix .arco-feedback-icon-status-warning{color:rgb(var(--warning-6))}.arco-input-tag .arco-input-tag-suffix .arco-feedback-icon-status-error{color:rgb(var(--danger-6))}.arco-input-tag .arco-input-tag-clear-btn{align-self:center;color:var(--color-text-2);font-size:12px;visibility:hidden;cursor:pointer}.arco-input-tag .arco-input-tag-clear-btn>svg{position:relative;transition:color .1s cubic-bezier(0,0,1,1)}.arco-input-tag:hover .arco-input-tag-clear-btn{visibility:visible}.arco-input-tag:not(.arco-input-tag-focus) .arco-input-tag-icon-hover:hover:before{background-color:var(--color-fill-4)}.arco-input-tag.arco-input-tag-has-tag{padding-right:4px;padding-left:4px}.arco-input-tag.arco-input-tag-has-prefix{padding-left:12px}.arco-input-tag.arco-input-tag-has-suffix{padding-right:12px}.arco-input-tag .arco-input-tag-inner{flex:1;overflow:hidden;line-height:0}.arco-input-tag .arco-input-tag-inner .arco-input-tag-tag{display:inline-flex;align-items:center;margin-right:4px;color:var(--color-text-1);font-size:12px;white-space:pre-wrap;word-break:break-word;background-color:var(--color-bg-2);border-color:var(--color-fill-3)}.arco-input-tag .arco-input-tag-inner .arco-input-tag-tag .arco-icon-hover:hover:before{background-color:var(--color-fill-2)}.arco-input-tag .arco-input-tag-inner .arco-input-tag-tag.arco-tag-custom-color{color:var(--color-white)}.arco-input-tag .arco-input-tag-inner .arco-input-tag-tag.arco-tag-custom-color .arco-icon-hover.arco-tag-icon-hover:hover:before{background-color:#fff3}.arco-input-tag .arco-input-tag-inner .arco-input-tag-input{width:100%;padding-right:0;padding-left:0;color:inherit;line-height:1.5715;background:none;border:none;border-radius:0;outline:none;cursor:inherit;-webkit-appearance:none;-webkit-tap-highlight-color:rgba(0,0,0,0);box-sizing:border-box}.arco-input-tag .arco-input-tag-inner .arco-input-tag-input::placeholder{color:var(--color-text-3)}.arco-input-tag .arco-input-tag-inner .arco-input-tag-input[disabled]::placeholder{color:var(--color-text-4)}.arco-input-tag .arco-input-tag-inner .arco-input-tag-input[disabled]{-webkit-text-fill-color:var(--color-text-4)}.arco-input-tag .arco-input-tag-mirror{position:absolute;top:0;left:0;white-space:pre;visibility:hidden;pointer-events:none}.arco-input-tag.arco-input-tag-focus 
.arco-input-tag-tag{background-color:var(--color-fill-2);border-color:var(--color-fill-2)}.arco-input-tag.arco-input-tag-focus .arco-input-tag-tag .arco-icon-hover:hover:before{background-color:var(--color-fill-3)}.arco-input-tag.arco-input-tag-disabled .arco-input-tag-tag{color:var(--color-text-4);background-color:var(--color-fill-2);border-color:var(--color-fill-3)}.arco-input-tag.arco-input-tag-readonly,.arco-input-tag.arco-input-tag-disabled-input{cursor:default}.arco-input-tag.arco-input-tag-size-mini{font-size:12px}.arco-input-tag.arco-input-tag-size-mini .arco-input-tag-inner{padding-top:0;padding-bottom:0}.arco-input-tag.arco-input-tag-size-mini .arco-input-tag-tag,.arco-input-tag.arco-input-tag-size-mini .arco-input-tag-input{margin-top:1px;margin-bottom:1px;line-height:18px;vertical-align:middle}.arco-input-tag.arco-input-tag-size-mini .arco-input-tag-tag{height:auto;min-height:20px}.arco-input-tag.arco-input-tag-size-mini .arco-input-tag-input{height:20px}.arco-input-tag.arco-input-tag-size-medium{font-size:14px}.arco-input-tag.arco-input-tag-size-medium .arco-input-tag-inner{padding-top:2px;padding-bottom:2px}.arco-input-tag.arco-input-tag-size-medium .arco-input-tag-tag,.arco-input-tag.arco-input-tag-size-medium .arco-input-tag-input{margin-top:1px;margin-bottom:1px;line-height:22px;vertical-align:middle}.arco-input-tag.arco-input-tag-size-medium .arco-input-tag-tag{height:auto;min-height:24px}.arco-input-tag.arco-input-tag-size-medium .arco-input-tag-input{height:24px}.arco-input-tag.arco-input-tag-size-small{font-size:14px}.arco-input-tag.arco-input-tag-size-small .arco-input-tag-inner{padding-top:2px;padding-bottom:2px}.arco-input-tag.arco-input-tag-size-small .arco-input-tag-tag,.arco-input-tag.arco-input-tag-size-small .arco-input-tag-input{margin-top:1px;margin-bottom:1px;line-height:18px;vertical-align:middle}.arco-input-tag.arco-input-tag-size-small .arco-input-tag-tag{height:auto;min-height:20px}.arco-input-tag.arco-input-tag-size-small .arco-input-tag-input{height:20px}.arco-input-tag.arco-input-tag-size-large{font-size:14px}.arco-input-tag.arco-input-tag-size-large .arco-input-tag-inner{padding-top:2px;padding-bottom:2px}.arco-input-tag.arco-input-tag-size-large .arco-input-tag-tag,.arco-input-tag.arco-input-tag-size-large .arco-input-tag-input{margin-top:1px;margin-bottom:1px;line-height:26px;vertical-align:middle}.arco-input-tag.arco-input-tag-size-large .arco-input-tag-tag{height:auto;min-height:28px}.arco-input-tag.arco-input-tag-size-large .arco-input-tag-input{height:28px}.input-tag-zoom-enter-from{transform:scale(.5);opacity:0}.input-tag-zoom-enter-to{transform:scale(1);opacity:1}.input-tag-zoom-enter-active{transition:all .3s cubic-bezier(.34,.69,.1,1)}.input-tag-zoom-leave-from{transform:scale(1);opacity:1}.input-tag-zoom-leave-to{transform:scale(.5);opacity:0}.input-tag-zoom-leave-active{position:absolute;transition:all .3s cubic-bezier(.3,1.3,.3,1)}.input-tag-zoom-move{transition:all .3s cubic-bezier(.3,1.3,.3,1)}.arco-input-wrapper{display:inline-flex;box-sizing:border-box;width:100%;padding-right:12px;padding-left:12px;color:var(--color-text-1);font-size:14px;background-color:var(--color-fill-2);border:1px solid transparent;border-radius:var(--border-radius-small);cursor:text;transition:color .1s cubic-bezier(0,0,1,1),border-color .1s cubic-bezier(0,0,1,1),background-color .1s 
cubic-bezier(0,0,1,1)}.arco-input-wrapper:hover{background-color:var(--color-fill-3);border-color:transparent}.arco-input-wrapper:focus-within,.arco-input-wrapper.arco-input-focus{background-color:var(--color-bg-2);border-color:rgb(var(--primary-6));box-shadow:0 0 0 0 var(--color-primary-light-2)}.arco-input-wrapper.arco-input-disabled{color:var(--color-text-4);background-color:var(--color-fill-2);border-color:transparent;cursor:not-allowed}.arco-input-wrapper.arco-input-disabled:hover{color:var(--color-text-4);background-color:var(--color-fill-2);border-color:transparent}.arco-input-wrapper.arco-input-disabled .arco-input-prefix,.arco-input-wrapper.arco-input-disabled .arco-input-suffix{color:inherit}.arco-input-wrapper.arco-input-error{background-color:var(--color-danger-light-1);border-color:transparent}.arco-input-wrapper.arco-input-error:hover{background-color:var(--color-danger-light-2);border-color:transparent}.arco-input-wrapper.arco-input-error:focus-within,.arco-input-wrapper.arco-input-error.arco-input-wrapper-focus{background-color:var(--color-bg-2);border-color:rgb(var(--danger-6));box-shadow:0 0 0 0 var(--color-danger-light-2)}.arco-input-wrapper .arco-input-prefix,.arco-input-wrapper .arco-input-suffix{display:inline-flex;flex-shrink:0;align-items:center;white-space:nowrap;user-select:none}.arco-input-wrapper .arco-input-prefix>svg,.arco-input-wrapper .arco-input-suffix>svg{font-size:14px}.arco-input-wrapper .arco-input-prefix{padding-right:12px;color:var(--color-text-2)}.arco-input-wrapper .arco-input-suffix{padding-left:12px;color:var(--color-text-2)}.arco-input-wrapper .arco-input-suffix .arco-feedback-icon{display:inline-flex}.arco-input-wrapper .arco-input-suffix .arco-feedback-icon-status-validating{color:rgb(var(--primary-6))}.arco-input-wrapper .arco-input-suffix .arco-feedback-icon-status-success{color:rgb(var(--success-6))}.arco-input-wrapper .arco-input-suffix .arco-feedback-icon-status-warning{color:rgb(var(--warning-6))}.arco-input-wrapper .arco-input-suffix .arco-feedback-icon-status-error{color:rgb(var(--danger-6))}.arco-input-wrapper .arco-input-clear-btn{align-self:center;color:var(--color-text-2);font-size:12px;visibility:hidden;cursor:pointer}.arco-input-wrapper .arco-input-clear-btn>svg{position:relative;transition:color .1s cubic-bezier(0,0,1,1)}.arco-input-wrapper:hover .arco-input-clear-btn{visibility:visible}.arco-input-wrapper:not(.arco-input-focus) .arco-input-icon-hover:hover:before{background-color:var(--color-fill-4)}.arco-input-wrapper .arco-input{width:100%;padding-right:0;padding-left:0;color:inherit;line-height:1.5715;background:none;border:none;border-radius:0;outline:none;cursor:inherit;-webkit-appearance:none;-webkit-tap-highlight-color:rgba(0,0,0,0)}.arco-input-wrapper .arco-input::placeholder{color:var(--color-text-3)}.arco-input-wrapper .arco-input[disabled]::placeholder{color:var(--color-text-4)}.arco-input-wrapper .arco-input[disabled]{-webkit-text-fill-color:var(--color-text-4)}.arco-input-wrapper .arco-input.arco-input-size-mini{padding-top:1px;padding-bottom:1px;font-size:12px;line-height:1.667}.arco-input-wrapper .arco-input.arco-input-size-small{padding-top:2px;padding-bottom:2px;font-size:14px;line-height:1.5715}.arco-input-wrapper .arco-input.arco-input-size-medium{padding-top:4px;padding-bottom:4px;font-size:14px;line-height:1.5715}.arco-input-wrapper .arco-input.arco-input-size-large{padding-top:6px;padding-bottom:6px;font-size:14px;line-height:1.5715}.arco-input-wrapper 
.arco-input-word-limit{color:var(--color-text-3);font-size:12px}.arco-input-outer{display:inline-flex;width:100%}.arco-input-outer>.arco-input-wrapper{border-radius:0}.arco-input-outer>:first-child{border-top-left-radius:var(--border-radius-small);border-bottom-left-radius:var(--border-radius-small)}.arco-input-outer>:last-child{border-top-right-radius:var(--border-radius-small);border-bottom-right-radius:var(--border-radius-small)}.arco-input-outer.arco-input-outer-size-mini .arco-input-outer,.arco-input-outer.arco-input-outer-size-mini .arco-input-wrapper .arco-input-prefix,.arco-input-outer.arco-input-outer-size-mini .arco-input-wrapper .arco-input-suffix{font-size:12px}.arco-input-outer.arco-input-outer-size-mini .arco-input-wrapper .arco-input-prefix>svg,.arco-input-outer.arco-input-outer-size-mini .arco-input-wrapper .arco-input-suffix>svg{font-size:12px}.arco-input-outer.arco-input-outer-size-mini .arco-input-prepend,.arco-input-outer.arco-input-outer-size-mini .arco-input-append{font-size:12px}.arco-input-outer.arco-input-outer-size-mini .arco-input-prepend>svg,.arco-input-outer.arco-input-outer-size-mini .arco-input-append>svg{font-size:12px}.arco-input-outer.arco-input-outer-size-mini .arco-input-prepend .arco-input{width:auto;height:100%;margin:-1px -13px -1px -12px;border-color:transparent;border-top-left-radius:0;border-bottom-left-radius:0}.arco-input-outer.arco-input-outer-size-mini .arco-input-prepend .arco-select{width:auto;height:100%;margin:-1px -13px -1px -12px}.arco-input-outer.arco-input-outer-size-mini .arco-input-prepend .arco-select .arco-select-view{background-color:inherit;border-color:transparent;border-radius:0}.arco-input-outer.arco-input-outer-size-mini .arco-input-prepend .arco-select.arco-select-single .arco-select-view{height:100%}.arco-input-outer.arco-input-outer-size-mini .arco-input-append .arco-input{width:auto;height:100%;margin:-1px -12px -1px -13px;border-color:transparent;border-top-right-radius:0;border-bottom-right-radius:0}.arco-input-outer.arco-input-outer-size-mini .arco-input-append .arco-select{width:auto;height:100%;margin:-1px -12px -1px -13px}.arco-input-outer.arco-input-outer-size-mini .arco-input-append .arco-select .arco-select-view{background-color:inherit;border-color:transparent;border-radius:0}.arco-input-outer.arco-input-outer-size-mini .arco-input-append .arco-select.arco-select-single .arco-select-view{height:100%}.arco-input-outer.arco-input-outer-size-small .arco-input-outer,.arco-input-outer.arco-input-outer-size-small .arco-input-wrapper .arco-input-prefix,.arco-input-outer.arco-input-outer-size-small .arco-input-wrapper .arco-input-suffix{font-size:14px}.arco-input-outer.arco-input-outer-size-small .arco-input-wrapper .arco-input-prefix>svg,.arco-input-outer.arco-input-outer-size-small .arco-input-wrapper .arco-input-suffix>svg{font-size:14px}.arco-input-outer.arco-input-outer-size-small .arco-input-prepend,.arco-input-outer.arco-input-outer-size-small .arco-input-append{font-size:14px}.arco-input-outer.arco-input-outer-size-small .arco-input-prepend>svg,.arco-input-outer.arco-input-outer-size-small .arco-input-append>svg{font-size:14px}.arco-input-outer.arco-input-outer-size-small .arco-input-prepend .arco-input{width:auto;height:100%;margin:-1px -13px -1px -12px;border-color:transparent;border-top-left-radius:0;border-bottom-left-radius:0}.arco-input-outer.arco-input-outer-size-small .arco-input-prepend .arco-select{width:auto;height:100%;margin:-1px -13px -1px -12px}.arco-input-outer.arco-input-outer-size-small 
.arco-input-prepend .arco-select .arco-select-view{background-color:inherit;border-color:transparent;border-radius:0}.arco-input-outer.arco-input-outer-size-small .arco-input-prepend .arco-select.arco-select-single .arco-select-view{height:100%}.arco-input-outer.arco-input-outer-size-small .arco-input-append .arco-input{width:auto;height:100%;margin:-1px -12px -1px -13px;border-color:transparent;border-top-right-radius:0;border-bottom-right-radius:0}.arco-input-outer.arco-input-outer-size-small .arco-input-append .arco-select{width:auto;height:100%;margin:-1px -12px -1px -13px}.arco-input-outer.arco-input-outer-size-small .arco-input-append .arco-select .arco-select-view{background-color:inherit;border-color:transparent;border-radius:0}.arco-input-outer.arco-input-outer-size-small .arco-input-append .arco-select.arco-select-single .arco-select-view{height:100%}.arco-input-outer.arco-input-outer-size-large .arco-input-outer,.arco-input-outer.arco-input-outer-size-large .arco-input-wrapper .arco-input-prefix,.arco-input-outer.arco-input-outer-size-large .arco-input-wrapper .arco-input-suffix{font-size:14px}.arco-input-outer.arco-input-outer-size-large .arco-input-wrapper .arco-input-prefix>svg,.arco-input-outer.arco-input-outer-size-large .arco-input-wrapper .arco-input-suffix>svg{font-size:14px}.arco-input-outer.arco-input-outer-size-large .arco-input-prepend,.arco-input-outer.arco-input-outer-size-large .arco-input-append{font-size:14px}.arco-input-outer.arco-input-outer-size-large .arco-input-prepend>svg,.arco-input-outer.arco-input-outer-size-large .arco-input-append>svg{font-size:14px}.arco-input-outer.arco-input-outer-size-large .arco-input-prepend .arco-input{width:auto;height:100%;margin:-1px -13px -1px -12px;border-color:transparent;border-top-left-radius:0;border-bottom-left-radius:0}.arco-input-outer.arco-input-outer-size-large .arco-input-prepend .arco-select{width:auto;height:100%;margin:-1px -13px -1px -12px}.arco-input-outer.arco-input-outer-size-large .arco-input-prepend .arco-select .arco-select-view{background-color:inherit;border-color:transparent;border-radius:0}.arco-input-outer.arco-input-outer-size-large .arco-input-prepend .arco-select.arco-select-single .arco-select-view{height:100%}.arco-input-outer.arco-input-outer-size-large .arco-input-append .arco-input{width:auto;height:100%;margin:-1px -12px -1px -13px;border-color:transparent;border-top-right-radius:0;border-bottom-right-radius:0}.arco-input-outer.arco-input-outer-size-large .arco-input-append .arco-select{width:auto;height:100%;margin:-1px -12px -1px -13px}.arco-input-outer.arco-input-outer-size-large .arco-input-append .arco-select .arco-select-view{background-color:inherit;border-color:transparent;border-radius:0}.arco-input-outer.arco-input-outer-size-large .arco-input-append .arco-select.arco-select-single .arco-select-view{height:100%}.arco-input-outer-disabled{cursor:not-allowed}.arco-input-prepend,.arco-input-append{display:inline-flex;flex-shrink:0;align-items:center;box-sizing:border-box;padding:0 12px;color:var(--color-text-1);white-space:nowrap;background-color:var(--color-fill-2);border:1px solid transparent}.arco-input-prepend>svg,.arco-input-append>svg{font-size:14px}.arco-input-prepend{border-right:1px solid var(--color-neutral-3)}.arco-input-prepend .arco-input{width:auto;height:100%;margin:-1px -12px -1px -13px;border-color:transparent;border-top-right-radius:0;border-bottom-right-radius:0}.arco-input-prepend .arco-select{width:auto;height:100%;margin:-1px -12px -1px 
-13px}.arco-input-prepend .arco-select .arco-select-view{background-color:inherit;border-color:transparent;border-radius:0}.arco-input-prepend .arco-select.arco-select-single .arco-select-view{height:100%}.arco-input-append{border-left:1px solid var(--color-neutral-3)}.arco-input-append .arco-input{width:auto;height:100%;margin:-1px -13px -1px -12px;border-color:transparent;border-top-left-radius:0;border-bottom-left-radius:0}.arco-input-append .arco-select{width:auto;height:100%;margin:-1px -13px -1px -12px}.arco-input-append .arco-select .arco-select-view{background-color:inherit;border-color:transparent;border-radius:0}.arco-input-append .arco-select.arco-select-single .arco-select-view{height:100%}.arco-input-group{display:inline-flex;align-items:center}.arco-input-group>*{border-radius:0}.arco-input-group>*.arco-input-outer>:last-child,.arco-input-group>*.arco-input-outer>:first-child{border-radius:0}.arco-input-group>*:not(:last-child){position:relative;box-sizing:border-box}.arco-input-group>*:first-child,.arco-input-group>*:first-child .arco-input-group>*:first-child{border-top-left-radius:var(--border-radius-small);border-bottom-left-radius:var(--border-radius-small)}.arco-input-group>*:first-child .arco-select-view,.arco-input-group>*:first-child .arco-input-group>*:first-child .arco-select-view{border-top-left-radius:var(--border-radius-small);border-bottom-left-radius:var(--border-radius-small)}.arco-input-group>*:last-child,.arco-input-group>*:last-child .arco-input-outer>*:last-child{border-top-right-radius:var(--border-radius-small);border-bottom-right-radius:var(--border-radius-small)}.arco-input-group>*:last-child .arco-select-view,.arco-input-group>*:last-child .arco-input-outer>*:last-child .arco-select-view{border-top-right-radius:var(--border-radius-small);border-bottom-right-radius:var(--border-radius-small)}.arco-input-group>.arco-input-wrapper:not(:last-child),.arco-input-group>.arco-input-outer:not(:last-child),.arco-input-group>.arco-input-tag:not(:last-child),.arco-input-group>.arco-select-view:not(:last-child){border-right:1px solid var(--color-neutral-3)}.arco-input-group>.arco-input-wrapper:not(:last-child):focus-within,.arco-input-group>.arco-input-outer:not(:last-child):focus-within,.arco-input-group>.arco-input-tag:not(:last-child):focus-within,.arco-input-group>.arco-select-view:not(:last-child):focus-within{border-right-color:rgb(var(--primary-6))}.size-height-size-mini{padding-top:1px;padding-bottom:1px;font-size:12px;line-height:1.667}.size-height-size-small{padding-top:2px;padding-bottom:2px;font-size:14px}.size-height-size-large{padding-top:6px;padding-bottom:6px;font-size:14px}.arco-textarea-wrapper{position:relative;display:inline-block;width:100%}.arco-textarea-clear-wrapper:hover .arco-textarea-clear-icon{display:inline-block}.arco-textarea-clear-wrapper .arco-textarea{padding-right:20px}.arco-textarea-word-limit{position:absolute;right:10px;bottom:6px;color:var(--color-text-3);font-size:12px;user-select:none}.arco-textarea-clear-icon{position:absolute;top:10px;right:10px;display:none;font-size:12px}.arco-input-search .arco-input-append{padding:0;border:none}.arco-input-search .arco-input-suffix{color:var(--color-text-2);font-size:14px}.arco-input-search .arco-input-search-btn{border-top-left-radius:0;border-bottom-left-radius:0}.arco-input-wrapper.arco-input-password:not(.arco-input-disabled) 
.arco-input-suffix{color:var(--color-text-2);font-size:12px;cursor:pointer}.arco-layout{display:flex;flex:1;flex-direction:column;margin:0;padding:0}.arco-layout-sider{position:relative;flex:none;width:auto;margin:0;padding:0;background:var(--color-menu-dark-bg);transition:width .2s cubic-bezier(.34,.69,.1,1)}.arco-layout-sider-children{height:100%;overflow:auto}.arco-layout-sider-collapsed .arco-layout-sider-children::-webkit-scrollbar{width:0}.arco-layout-sider-has-trigger{box-sizing:border-box;padding-bottom:48px}.arco-layout-sider-trigger{z-index:1;display:flex;align-items:center;justify-content:center;box-sizing:border-box;width:100%;height:48px;color:var(--color-white);background:rgba(255,255,255,.2);cursor:pointer;transition:width .2s cubic-bezier(.34,.69,.1,1)}.arco-layout-sider-trigger-light{color:var(--color-text-1);background:var(--color-menu-light-bg);border-top:1px solid var(--color-bg-5)}.arco-layout-sider-light{background:var(--color-menu-light-bg);box-shadow:0 2px 5px #00000014}.arco-layout-header{flex:0 0 auto;box-sizing:border-box;margin:0}.arco-layout-content{flex:1}.arco-layout-footer{flex:0 0 auto;margin:0}.arco-layout-has-sider{flex-direction:row}.arco-layout-has-sider>.arco-layout,.arco-layout-has-sider>.arco-layout-content{overflow-x:hidden}.arco-link{display:inline-flex;align-items:center;justify-content:center;padding:1px 4px;color:rgb(var(--link-6));font-size:14px;line-height:1.5715;text-decoration:none;background-color:transparent;border-radius:var(--border-radius-small);cursor:pointer;transition:all .1s cubic-bezier(0,0,1,1)}.arco-link:hover{color:rgb(var(--link-6));background-color:var(--color-fill-2)}.arco-link:active{color:rgb(var(--link-6));background-color:var(--color-fill-3);transition:none}.arco-link.arco-link-hoverless{display:inline;padding:0;background-color:unset}.arco-link.arco-link-hoverless:active,.arco-link.arco-link-hoverless:hover{background-color:unset}.arco-link.arco-link-disabled{color:var(--color-link-light-3);background:none;cursor:not-allowed}.arco-link.arco-link-loading{color:var(--color-link-light-3);background:none;cursor:default}.arco-link-status-success,.arco-link-status-success:hover,.arco-link-status-success:active{color:rgb(var(--success-6))}.arco-link-status-success.arco-link-disabled,.arco-link-status-success.arco-link-loading{color:var(--color-success-light-3)}.arco-link-status-danger,.arco-link-status-danger:hover,.arco-link-status-danger:active{color:rgb(var(--danger-6))}.arco-link-status-danger.arco-link-disabled,.arco-link-status-danger.arco-link-loading{color:var(--color-danger-light-3)}.arco-link-status-warning,.arco-link-status-warning:hover,.arco-link-status-warning:active{color:rgb(var(--warning-6))}.arco-link-status-warning.arco-link-disabled,.arco-link-status-warning.arco-link-loading{color:var(--color-warning-light-2)}.arco-link-icon{margin-right:6px;font-size:12px;vertical-align:middle}.arco-list{display:flex;flex-direction:column;box-sizing:border-box;width:100%;overflow-y:auto;color:var(--color-text-1);font-size:14px;line-height:1.5715;border-radius:var(--border-radius-medium)}.arco-list-wrapper{overflow:hidden}.arco-list-wrapper .arco-list-spin{display:block;height:100%;overflow:hidden}.arco-list-content{overflow:hidden}.arco-list-small .arco-list-content-wrapper .arco-list-header{padding:8px 20px}.arco-list-small .arco-list-content-wrapper .arco-list-footer,.arco-list-small .arco-list-content-wrapper .arco-list-content>.arco-list-item,.arco-list-small .arco-list-content-wrapper .arco-list-content 
.arco-list-col>.arco-list-item,.arco-list-small .arco-list-content-wrapper .arco-list-content.arco-list-virtual .arco-list-item{padding:9px 20px}.arco-list-medium .arco-list-content-wrapper .arco-list-header{padding:12px 20px}.arco-list-medium .arco-list-content-wrapper .arco-list-footer,.arco-list-medium .arco-list-content-wrapper .arco-list-content>.arco-list-item,.arco-list-medium .arco-list-content-wrapper .arco-list-content .arco-list-col>.arco-list-item,.arco-list-medium .arco-list-content-wrapper .arco-list-content.arco-list-virtual .arco-list-item{padding:13px 20px}.arco-list-large .arco-list-content-wrapper .arco-list-header{padding:16px 20px}.arco-list-large .arco-list-content-wrapper .arco-list-footer,.arco-list-large .arco-list-content-wrapper .arco-list-content>.arco-list-item,.arco-list-large .arco-list-content-wrapper .arco-list-content .arco-list-col>.arco-list-item,.arco-list-large .arco-list-content-wrapper .arco-list-content.arco-list-virtual .arco-list-item{padding:17px 20px}.arco-list-bordered{border:1px solid var(--color-neutral-3)}.arco-list-split .arco-list-header,.arco-list-split .arco-list-item:not(:last-child){border-bottom:1px solid var(--color-neutral-3)}.arco-list-split .arco-list-footer{border-top:1px solid var(--color-neutral-3)}.arco-list-header{color:var(--color-text-1);font-weight:500;font-size:16px;line-height:1.5}.arco-list-item{display:flex;justify-content:space-between;box-sizing:border-box;width:100%;overflow:hidden}.arco-list-item-main{flex:1}.arco-list-item-main .arco-list-item-action:not(:first-child){margin-top:4px}.arco-list-item-meta{display:flex;align-items:center;padding:4px 0}.arco-list-item-meta-avatar{display:flex}.arco-list-item-meta-avatar:not(:last-child){margin-right:16px}.arco-list-item-meta-title{color:var(--color-text-1);font-weight:500}.arco-list-item-meta-title:not(:last-child){margin-bottom:2px}.arco-list-item-meta-description{color:var(--color-text-2)}.arco-list-item-action{display:flex;flex-wrap:nowrap;align-self:center;margin:0;padding:0;list-style:none}.arco-list-item-action>li{display:inline-block;cursor:pointer}.arco-list-item-action>li:not(:last-child){margin-right:20px}.arco-list-hover .arco-list-item:hover{background-color:var(--color-fill-1)}.arco-list-pagination{float:right;margin-top:24px}.arco-list-pagination:after{display:block;clear:both;height:0;overflow:hidden;visibility:hidden;content:""}.arco-list-scroll-loading{display:flex;align-items:center;justify-content:center}.arco-list-content{flex:auto}.arco-list-content .arco-empty{display:flex;align-items:center;justify-content:center;height:100%}.arco-mention{position:relative;display:inline-block;box-sizing:border-box;width:100%}.arco-mention-measure{position:absolute;top:0;right:0;bottom:0;left:0;overflow:auto;visibility:hidden;pointer-events:none}.arco-menu{position:relative;box-sizing:border-box;width:100%;font-size:14px;line-height:1.5715;transition:width .2s cubic-bezier(.34,.69,.1,1)}.arco-menu:focus-visible{outline:3px solid var(--color-primary-light-2)}.arco-menu-indent{display:inline-block;width:20px}.arco-menu .arco-menu-item,.arco-menu .arco-menu-group-title,.arco-menu .arco-menu-pop-header,.arco-menu .arco-menu-inline-header{position:relative;box-sizing:border-box;border-radius:var(--border-radius-small);cursor:pointer}.arco-menu .arco-menu-item.arco-menu-disabled,.arco-menu .arco-menu-group-title.arco-menu-disabled,.arco-menu .arco-menu-pop-header.arco-menu-disabled,.arco-menu .arco-menu-inline-header.arco-menu-disabled{cursor:not-allowed}.arco-menu 
.arco-menu-item.arco-menu-selected,.arco-menu .arco-menu-group-title.arco-menu-selected,.arco-menu .arco-menu-pop-header.arco-menu-selected,.arco-menu .arco-menu-inline-header.arco-menu-selected{font-weight:500;transition:color .2s cubic-bezier(0,0,1,1)}.arco-menu .arco-menu-item .arco-icon,.arco-menu .arco-menu-group-title .arco-icon,.arco-menu .arco-menu-pop-header .arco-icon,.arco-menu .arco-menu-inline-header .arco-icon,.arco-menu .arco-menu-item .arco-menu-icon,.arco-menu .arco-menu-group-title .arco-menu-icon,.arco-menu .arco-menu-pop-header .arco-menu-icon,.arco-menu .arco-menu-inline-header .arco-menu-icon{margin-right:16px}.arco-menu .arco-menu-item .arco-menu-icon .arco-icon,.arco-menu .arco-menu-group-title .arco-menu-icon .arco-icon,.arco-menu .arco-menu-pop-header .arco-menu-icon .arco-icon,.arco-menu .arco-menu-inline-header .arco-menu-icon .arco-icon{margin-right:0}.arco-menu-light{background-color:var(--color-menu-light-bg)}.arco-menu-light .arco-menu-item,.arco-menu-light .arco-menu-group-title,.arco-menu-light .arco-menu-pop-header,.arco-menu-light .arco-menu-inline-header{color:var(--color-text-2);background-color:var(--color-menu-light-bg)}.arco-menu-light .arco-menu-item .arco-icon,.arco-menu-light .arco-menu-group-title .arco-icon,.arco-menu-light .arco-menu-pop-header .arco-icon,.arco-menu-light .arco-menu-inline-header .arco-icon,.arco-menu-light .arco-menu-item .arco-menu-icon,.arco-menu-light .arco-menu-group-title .arco-menu-icon,.arco-menu-light .arco-menu-pop-header .arco-menu-icon,.arco-menu-light .arco-menu-inline-header .arco-menu-icon{color:var(--color-text-3)}.arco-menu-light .arco-menu-item:hover,.arco-menu-light .arco-menu-group-title:hover,.arco-menu-light .arco-menu-pop-header:hover,.arco-menu-light .arco-menu-inline-header:hover{color:var(--color-text-2);background-color:var(--color-fill-2)}.arco-menu-light .arco-menu-item:hover .arco-icon,.arco-menu-light .arco-menu-group-title:hover .arco-icon,.arco-menu-light .arco-menu-pop-header:hover .arco-icon,.arco-menu-light .arco-menu-inline-header:hover .arco-icon,.arco-menu-light .arco-menu-item:hover .arco-menu-icon,.arco-menu-light .arco-menu-group-title:hover .arco-menu-icon,.arco-menu-light .arco-menu-pop-header:hover .arco-menu-icon,.arco-menu-light .arco-menu-inline-header:hover .arco-menu-icon{color:var(--color-text-3)}.arco-menu-light .arco-menu-item.arco-menu-selected,.arco-menu-light .arco-menu-group-title.arco-menu-selected,.arco-menu-light .arco-menu-pop-header.arco-menu-selected,.arco-menu-light .arco-menu-inline-header.arco-menu-selected,.arco-menu-light .arco-menu-item.arco-menu-selected .arco-icon,.arco-menu-light .arco-menu-group-title.arco-menu-selected .arco-icon,.arco-menu-light .arco-menu-pop-header.arco-menu-selected .arco-icon,.arco-menu-light .arco-menu-inline-header.arco-menu-selected .arco-icon,.arco-menu-light .arco-menu-item.arco-menu-selected .arco-menu-icon,.arco-menu-light .arco-menu-group-title.arco-menu-selected .arco-menu-icon,.arco-menu-light .arco-menu-pop-header.arco-menu-selected .arco-menu-icon,.arco-menu-light .arco-menu-inline-header.arco-menu-selected .arco-menu-icon{color:rgb(var(--primary-6))}.arco-menu-light .arco-menu-item.arco-menu-disabled,.arco-menu-light .arco-menu-group-title.arco-menu-disabled,.arco-menu-light .arco-menu-pop-header.arco-menu-disabled,.arco-menu-light .arco-menu-inline-header.arco-menu-disabled{color:var(--color-text-4);background-color:var(--color-menu-light-bg)}.arco-menu-light .arco-menu-item.arco-menu-disabled 
.arco-icon,.arco-menu-light .arco-menu-group-title.arco-menu-disabled .arco-icon,.arco-menu-light .arco-menu-pop-header.arco-menu-disabled .arco-icon,.arco-menu-light .arco-menu-inline-header.arco-menu-disabled .arco-icon,.arco-menu-light .arco-menu-item.arco-menu-disabled .arco-menu-icon,.arco-menu-light .arco-menu-group-title.arco-menu-disabled .arco-menu-icon,.arco-menu-light .arco-menu-pop-header.arco-menu-disabled .arco-menu-icon,.arco-menu-light .arco-menu-inline-header.arco-menu-disabled .arco-menu-icon{color:var(--color-text-4)}.arco-menu-light .arco-menu-item.arco-menu-selected{background-color:var(--color-fill-2)}.arco-menu-light .arco-menu-inline-header.arco-menu-selected,.arco-menu-light .arco-menu-inline-header.arco-menu-selected .arco-icon,.arco-menu-light .arco-menu-inline-header.arco-menu-selected .arco-menu-icon{color:rgb(var(--primary-6))}.arco-menu-light .arco-menu-inline-header.arco-menu-selected:hover{background-color:var(--color-fill-2)}.arco-menu-light.arco-menu-horizontal .arco-menu-item.arco-menu-selected,.arco-menu-light.arco-menu-horizontal .arco-menu-group-title.arco-menu-selected,.arco-menu-light.arco-menu-horizontal .arco-menu-pop-header.arco-menu-selected,.arco-menu-light.arco-menu-horizontal .arco-menu-inline-header.arco-menu-selected{background:none;transition:color .2s cubic-bezier(0,0,1,1)}.arco-menu-light.arco-menu-horizontal .arco-menu-item.arco-menu-selected:hover,.arco-menu-light.arco-menu-horizontal .arco-menu-group-title.arco-menu-selected:hover,.arco-menu-light.arco-menu-horizontal .arco-menu-pop-header.arco-menu-selected:hover,.arco-menu-light.arco-menu-horizontal .arco-menu-inline-header.arco-menu-selected:hover{background-color:var(--color-fill-2)}.arco-menu-light .arco-menu-group-title{color:var(--color-text-3);pointer-events:none}.arco-menu-light .arco-menu-collapse-button{color:var(--color-text-3);background-color:var(--color-fill-1)}.arco-menu-light .arco-menu-collapse-button:hover{background-color:var(--color-fill-3)}.arco-menu-dark{background-color:var(--color-menu-dark-bg)}.arco-menu-dark .arco-menu-item,.arco-menu-dark .arco-menu-group-title,.arco-menu-dark .arco-menu-pop-header,.arco-menu-dark .arco-menu-inline-header{color:var(--color-text-4);background-color:var(--color-menu-dark-bg)}.arco-menu-dark .arco-menu-item .arco-icon,.arco-menu-dark .arco-menu-group-title .arco-icon,.arco-menu-dark .arco-menu-pop-header .arco-icon,.arco-menu-dark .arco-menu-inline-header .arco-icon,.arco-menu-dark .arco-menu-item .arco-menu-icon,.arco-menu-dark .arco-menu-group-title .arco-menu-icon,.arco-menu-dark .arco-menu-pop-header .arco-menu-icon,.arco-menu-dark .arco-menu-inline-header .arco-menu-icon{color:var(--color-text-3)}.arco-menu-dark .arco-menu-item:hover,.arco-menu-dark .arco-menu-group-title:hover,.arco-menu-dark .arco-menu-pop-header:hover,.arco-menu-dark .arco-menu-inline-header:hover{color:var(--color-text-4);background-color:var(--color-menu-dark-hover)}.arco-menu-dark .arco-menu-item:hover .arco-icon,.arco-menu-dark .arco-menu-group-title:hover .arco-icon,.arco-menu-dark .arco-menu-pop-header:hover .arco-icon,.arco-menu-dark .arco-menu-inline-header:hover .arco-icon,.arco-menu-dark .arco-menu-item:hover .arco-menu-icon,.arco-menu-dark .arco-menu-group-title:hover .arco-menu-icon,.arco-menu-dark .arco-menu-pop-header:hover .arco-menu-icon,.arco-menu-dark .arco-menu-inline-header:hover .arco-menu-icon{color:var(--color-text-3)}.arco-menu-dark .arco-menu-item.arco-menu-selected,.arco-menu-dark 
.arco-menu-group-title.arco-menu-selected,.arco-menu-dark .arco-menu-pop-header.arco-menu-selected,.arco-menu-dark .arco-menu-inline-header.arco-menu-selected,.arco-menu-dark .arco-menu-item.arco-menu-selected .arco-icon,.arco-menu-dark .arco-menu-group-title.arco-menu-selected .arco-icon,.arco-menu-dark .arco-menu-pop-header.arco-menu-selected .arco-icon,.arco-menu-dark .arco-menu-inline-header.arco-menu-selected .arco-icon,.arco-menu-dark .arco-menu-item.arco-menu-selected .arco-menu-icon,.arco-menu-dark .arco-menu-group-title.arco-menu-selected .arco-menu-icon,.arco-menu-dark .arco-menu-pop-header.arco-menu-selected .arco-menu-icon,.arco-menu-dark .arco-menu-inline-header.arco-menu-selected .arco-menu-icon{color:var(--color-white)}.arco-menu-dark .arco-menu-item.arco-menu-disabled,.arco-menu-dark .arco-menu-group-title.arco-menu-disabled,.arco-menu-dark .arco-menu-pop-header.arco-menu-disabled,.arco-menu-dark .arco-menu-inline-header.arco-menu-disabled{color:var(--color-text-2);background-color:var(--color-menu-dark-bg)}.arco-menu-dark .arco-menu-item.arco-menu-disabled .arco-icon,.arco-menu-dark .arco-menu-group-title.arco-menu-disabled .arco-icon,.arco-menu-dark .arco-menu-pop-header.arco-menu-disabled .arco-icon,.arco-menu-dark .arco-menu-inline-header.arco-menu-disabled .arco-icon,.arco-menu-dark .arco-menu-item.arco-menu-disabled .arco-menu-icon,.arco-menu-dark .arco-menu-group-title.arco-menu-disabled .arco-menu-icon,.arco-menu-dark .arco-menu-pop-header.arco-menu-disabled .arco-menu-icon,.arco-menu-dark .arco-menu-inline-header.arco-menu-disabled .arco-menu-icon{color:var(--color-text-2)}.arco-menu-dark .arco-menu-item.arco-menu-selected{background-color:var(--color-menu-dark-hover)}.arco-menu-dark .arco-menu-inline-header.arco-menu-selected,.arco-menu-dark .arco-menu-inline-header.arco-menu-selected .arco-icon,.arco-menu-dark .arco-menu-inline-header.arco-menu-selected .arco-menu-icon{color:rgb(var(--primary-6))}.arco-menu-dark .arco-menu-inline-header.arco-menu-selected:hover{background-color:var(--color-menu-dark-hover)}.arco-menu-dark.arco-menu-horizontal .arco-menu-item.arco-menu-selected,.arco-menu-dark.arco-menu-horizontal .arco-menu-group-title.arco-menu-selected,.arco-menu-dark.arco-menu-horizontal .arco-menu-pop-header.arco-menu-selected,.arco-menu-dark.arco-menu-horizontal .arco-menu-inline-header.arco-menu-selected{background:none;transition:color .2s cubic-bezier(0,0,1,1)}.arco-menu-dark.arco-menu-horizontal .arco-menu-item.arco-menu-selected:hover,.arco-menu-dark.arco-menu-horizontal .arco-menu-group-title.arco-menu-selected:hover,.arco-menu-dark.arco-menu-horizontal .arco-menu-pop-header.arco-menu-selected:hover,.arco-menu-dark.arco-menu-horizontal .arco-menu-inline-header.arco-menu-selected:hover{background-color:var(--color-menu-dark-hover)}.arco-menu-dark .arco-menu-group-title{color:var(--color-text-3);pointer-events:none}.arco-menu-dark .arco-menu-collapse-button{color:var(--color-white);background-color:rgb(var(--primary-6))}.arco-menu-dark .arco-menu-collapse-button:hover{background-color:rgb(var(--primary-7))}.arco-menu a,.arco-menu a:hover,.arco-menu a:focus,.arco-menu a:active{color:inherit;text-decoration:none;cursor:inherit}.arco-menu-inner{box-sizing:border-box;width:100%;height:100%;overflow:auto}.arco-menu-icon-suffix.is-open{transform:rotate(180deg)}.arco-menu-vertical .arco-menu-item,.arco-menu-vertical .arco-menu-group-title,.arco-menu-vertical .arco-menu-pop-header,.arco-menu-vertical .arco-menu-inline-header{padding:0 
12px;line-height:40px}.arco-menu-vertical .arco-menu-item .arco-menu-icon-suffix .arco-icon,.arco-menu-vertical .arco-menu-group-title .arco-menu-icon-suffix .arco-icon,.arco-menu-vertical .arco-menu-pop-header .arco-menu-icon-suffix .arco-icon,.arco-menu-vertical .arco-menu-inline-header .arco-menu-icon-suffix .arco-icon{margin-right:0}.arco-menu-vertical .arco-menu-item,.arco-menu-vertical .arco-menu-group-title,.arco-menu-vertical .arco-menu-pop-header,.arco-menu-vertical .arco-menu-inline-header{margin-bottom:4px}.arco-menu-vertical .arco-menu-item:not(.arco-menu-has-icon),.arco-menu-vertical .arco-menu-group-title:not(.arco-menu-has-icon),.arco-menu-vertical .arco-menu-pop-header:not(.arco-menu-has-icon),.arco-menu-vertical .arco-menu-inline-header:not(.arco-menu-has-icon){overflow:hidden;white-space:nowrap;text-overflow:ellipsis}.arco-menu-vertical .arco-menu-item.arco-menu-has-icon,.arco-menu-vertical .arco-menu-group-title.arco-menu-has-icon,.arco-menu-vertical .arco-menu-pop-header.arco-menu-has-icon,.arco-menu-vertical .arco-menu-inline-header.arco-menu-has-icon{display:flex;align-items:center}.arco-menu-vertical .arco-menu-item.arco-menu-has-icon>.arco-menu-indent-list,.arco-menu-vertical .arco-menu-group-title.arco-menu-has-icon>.arco-menu-indent-list,.arco-menu-vertical .arco-menu-pop-header.arco-menu-has-icon>.arco-menu-indent-list,.arco-menu-vertical .arco-menu-inline-header.arco-menu-has-icon>.arco-menu-indent-list,.arco-menu-vertical .arco-menu-item.arco-menu-has-icon>.arco-menu-icon,.arco-menu-vertical .arco-menu-group-title.arco-menu-has-icon>.arco-menu-icon,.arco-menu-vertical .arco-menu-pop-header.arco-menu-has-icon>.arco-menu-icon,.arco-menu-vertical .arco-menu-inline-header.arco-menu-has-icon>.arco-menu-icon{flex:none}.arco-menu-vertical .arco-menu-item.arco-menu-has-icon .arco-menu-icon,.arco-menu-vertical .arco-menu-group-title.arco-menu-has-icon .arco-menu-icon,.arco-menu-vertical .arco-menu-pop-header.arco-menu-has-icon .arco-menu-icon,.arco-menu-vertical .arco-menu-inline-header.arco-menu-has-icon .arco-menu-icon{line-height:1}.arco-menu-vertical .arco-menu-item.arco-menu-has-icon .arco-menu-title,.arco-menu-vertical .arco-menu-group-title.arco-menu-has-icon .arco-menu-title,.arco-menu-vertical .arco-menu-pop-header.arco-menu-has-icon .arco-menu-title,.arco-menu-vertical .arco-menu-inline-header.arco-menu-has-icon .arco-menu-title{overflow:hidden;white-space:nowrap;text-overflow:ellipsis}.arco-menu-vertical .arco-menu-item .arco-menu-item-inner,.arco-menu-vertical .arco-menu-group-title .arco-menu-item-inner,.arco-menu-vertical .arco-menu-pop-header .arco-menu-item-inner,.arco-menu-vertical .arco-menu-inline-header .arco-menu-item-inner{overflow:hidden;white-space:nowrap;text-overflow:ellipsis;width:100%}.arco-menu-vertical .arco-menu-item .arco-menu-icon-suffix,.arco-menu-vertical .arco-menu-group-title .arco-menu-icon-suffix,.arco-menu-vertical .arco-menu-pop-header .arco-menu-icon-suffix,.arco-menu-vertical .arco-menu-inline-header .arco-menu-icon-suffix{position:absolute;right:12px}.arco-menu-vertical .arco-menu-inner{padding:4px 8px}.arco-menu-vertical .arco-menu-item.arco-menu-item-indented{display:flex}.arco-menu-vertical .arco-menu-pop-header,.arco-menu-vertical .arco-menu-inline-header{padding-right:28px}.arco-menu-horizontal{width:100%;height:auto}.arco-menu-horizontal .arco-menu-item,.arco-menu-horizontal .arco-menu-group-title,.arco-menu-horizontal .arco-menu-pop-header,.arco-menu-horizontal .arco-menu-inline-header{padding:0 
12px;line-height:30px}.arco-menu-horizontal .arco-menu-item .arco-menu-icon-suffix .arco-icon,.arco-menu-horizontal .arco-menu-group-title .arco-menu-icon-suffix .arco-icon,.arco-menu-horizontal .arco-menu-pop-header .arco-menu-icon-suffix .arco-icon,.arco-menu-horizontal .arco-menu-inline-header .arco-menu-icon-suffix .arco-icon{margin-right:0}.arco-menu-horizontal .arco-menu-item .arco-icon,.arco-menu-horizontal .arco-menu-group-title .arco-icon,.arco-menu-horizontal .arco-menu-pop-header .arco-icon,.arco-menu-horizontal .arco-menu-inline-header .arco-icon,.arco-menu-horizontal .arco-menu-item .arco-menu-icon,.arco-menu-horizontal .arco-menu-group-title .arco-menu-icon,.arco-menu-horizontal .arco-menu-pop-header .arco-menu-icon,.arco-menu-horizontal .arco-menu-inline-header .arco-menu-icon{margin-right:16px}.arco-menu-horizontal .arco-menu-item .arco-menu-icon-suffix,.arco-menu-horizontal .arco-menu-group-title .arco-menu-icon-suffix,.arco-menu-horizontal .arco-menu-pop-header .arco-menu-icon-suffix,.arco-menu-horizontal .arco-menu-inline-header .arco-menu-icon-suffix{margin-left:6px}.arco-menu-horizontal .arco-menu-inner{display:flex;align-items:center;padding:14px 20px}.arco-menu-horizontal .arco-menu-item,.arco-menu-horizontal .arco-menu-pop{display:inline-block;flex-shrink:0;vertical-align:middle}.arco-menu-horizontal .arco-menu-item:not(:first-child),.arco-menu-horizontal .arco-menu-pop:not(:first-child){margin-left:12px}.arco-menu-horizontal .arco-menu-pop:after{position:absolute;bottom:-14px;left:0;width:100%;height:14px;content:" "}.arco-menu-overflow-wrap{width:100%}.arco-menu-overflow-sub-menu-mirror,.arco-menu-overflow-hidden-menu-item{position:absolute!important;white-space:nowrap;visibility:hidden;pointer-events:none}.arco-menu-selected-label{position:absolute;right:12px;bottom:-14px;left:12px;height:3px;background-color:rgb(var(--primary-6))}.arco-menu-pop-button{width:auto;background:none;box-shadow:none}.arco-menu-pop-button.arco-menu-collapsed{width:auto}.arco-menu-pop-button .arco-menu-item,.arco-menu-pop-button .arco-menu-group-title,.arco-menu-pop-button .arco-menu-pop-header,.arco-menu-pop-button .arco-menu-inline-header{width:40px;height:40px;margin-bottom:16px;line-height:40px;border:1px solid transparent;border-radius:50%;box-shadow:0 4px 10px #0000001a}.arco-menu-collapsed{width:48px}.arco-menu-collapsed .arco-menu-inner{padding:4px}.arco-menu-collapsed .arco-menu-icon-suffix{display:none}.arco-menu-collapsed .arco-menu-has-icon>*:not(.arco-menu-icon){opacity:0}.arco-menu-collapsed .arco-menu-item .arco-icon,.arco-menu-collapsed .arco-menu-group-title .arco-icon,.arco-menu-collapsed .arco-menu-pop-header .arco-icon,.arco-menu-collapsed .arco-menu-inline-header .arco-icon{margin-right:100%}.arco-menu-collapse-button{position:absolute;right:12px;bottom:12px;display:flex;align-items:center;justify-content:center;width:24px;height:24px;border-radius:var(--border-radius-small);cursor:pointer}.arco-menu-inline-content{height:auto;overflow:hidden;transition:height .2s cubic-bezier(.34,.69,.1,1)}.arco-menu-inline-content-hide{height:0}.arco-menu-item-tooltip a{color:inherit;cursor:text}.arco-menu-item-tooltip a:hover,.arco-menu-item-tooltip a:focus,.arco-menu-item-tooltip a:active{color:inherit}.arco-menu-pop-trigger.arco-trigger-position-bl{transform:translateY(14px)}.arco-menu-pop-trigger.arco-trigger-position-bl .arco-trigger-arrow{z-index:0;border-top:1px solid var(--color-neutral-3);border-left:1px solid 
var(--color-neutral-3)}.arco-menu-pop-trigger.arco-trigger-position-rt{transform:translate(8px)}.arco-menu-pop-trigger.arco-trigger-position-rt .arco-trigger-arrow{z-index:0;border-bottom:1px solid var(--color-neutral-3);border-left:1px solid var(--color-neutral-3)}.arco-menu-pop-trigger.arco-menu-pop-trigger-dark .arco-trigger-arrow{background-color:var(--color-menu-dark-bg);border-color:var(--color-menu-dark-bg)}.arco-trigger-menu{position:relative;box-sizing:border-box;max-height:200px;padding:4px 0;overflow:auto;background-color:var(--color-bg-popup);border:1px solid var(--color-fill-3);border-radius:var(--border-radius-medium);box-shadow:0 4px 10px #0000001a}.arco-trigger-menu-hidden{display:none}.arco-trigger-menu-item,.arco-trigger-menu-pop-header{position:relative;z-index:1;box-sizing:border-box;width:100%;height:36px;padding:0 12px;color:var(--color-text-1);font-size:14px;line-height:36px;text-align:left;background-color:transparent;cursor:pointer;overflow:hidden;white-space:nowrap;text-overflow:ellipsis}.arco-trigger-menu-item.arco-trigger-menu-selected,.arco-trigger-menu-pop-header.arco-trigger-menu-selected{color:var(--color-text-1);font-weight:500;background-color:transparent;transition:all .1s cubic-bezier(0,0,1,1)}.arco-trigger-menu-item:hover,.arco-trigger-menu-pop-header:hover{color:var(--color-text-1);background-color:var(--color-fill-2)}.arco-trigger-menu-item.arco-trigger-menu-disabled,.arco-trigger-menu-pop-header.arco-trigger-menu-disabled{color:var(--color-text-4);background-color:transparent;cursor:not-allowed}.arco-trigger-menu .arco-trigger-menu-has-icon{display:flex;align-items:center}.arco-trigger-menu .arco-trigger-menu-has-icon .arco-trigger-menu-icon{margin-right:8px;line-height:1}.arco-trigger-menu .arco-trigger-menu-has-icon>*{flex:none}.arco-trigger-menu .arco-trigger-menu-has-icon .arco-trigger-menu-title{flex:auto;overflow:hidden;white-space:nowrap;text-overflow:ellipsis}.arco-trigger-menu-pop-header{display:flex;align-items:center;justify-content:space-between}.arco-trigger-menu-pop-header .arco-trigger-menu-icon-suffix{margin-left:12px}.arco-trigger-menu-group:first-child .arco-trigger-menu-group-title{padding-top:4px}.arco-trigger-menu-group-title{box-sizing:border-box;width:100%;padding:8px 12px 0;color:var(--color-text-3);font-size:12px;line-height:20px;overflow:hidden;white-space:nowrap;text-overflow:ellipsis}.arco-trigger-menu-pop-trigger .arco-trigger-arrow{display:none}.arco-trigger-menu-dark{background-color:var(--color-menu-dark-bg);border-color:var(--color-menu-dark-bg)}.arco-trigger-menu-dark .arco-trigger-menu-item,.arco-trigger-menu-dark .arco-trigger-menu-pop-header{color:var(--color-text-4);background-color:transparent}.arco-trigger-menu-dark .arco-trigger-menu-item.arco-trigger-menu-selected,.arco-trigger-menu-dark .arco-trigger-menu-pop-header.arco-trigger-menu-selected{color:var(--color-white);background-color:transparent}.arco-trigger-menu-dark .arco-trigger-menu-item.arco-trigger-menu-selected:hover,.arco-trigger-menu-dark .arco-trigger-menu-pop-header.arco-trigger-menu-selected:hover{color:var(--color-white)}.arco-trigger-menu-dark .arco-trigger-menu-item:hover,.arco-trigger-menu-dark .arco-trigger-menu-pop-header:hover{color:var(--color-text-4);background-color:var(--color-menu-dark-hover)}.arco-trigger-menu-dark .arco-trigger-menu-item.arco-trigger-menu-disabled,.arco-trigger-menu-dark .arco-trigger-menu-pop-header.arco-trigger-menu-disabled{color:var(--color-text-2);background-color:transparent}.arco-trigger-menu-dark 
.arco-trigger-menu-group-title{color:var(--color-text-3)}.arco-message-list{position:fixed;z-index:1003;display:flex;flex-direction:column;align-items:center;box-sizing:border-box;width:100%;margin:0;padding:0 10px;text-align:center;pointer-events:none}.arco-message-list-top{top:40px}.arco-message-list-bottom{bottom:40px}.arco-message{position:relative;display:inline-flex;align-items:center;margin-bottom:16px;padding:10px 16px;overflow:hidden;line-height:1;text-align:center;list-style:none;background-color:var(--color-bg-popup);border:1px solid var(--color-neutral-3);border-radius:var(--border-radius-small);box-shadow:0 4px 10px #0000001a;transition:all .1s cubic-bezier(0,0,1,1);pointer-events:auto}.arco-message-icon{display:inline-block;margin-right:8px;color:var(--color-text-1);font-size:20px;vertical-align:middle;animation:arco-msg-fade .1s cubic-bezier(0,0,1,1),arco-msg-fade .4s cubic-bezier(.3,1.3,.3,1)}.arco-message-content{font-size:14px;color:var(--color-text-1);vertical-align:middle}.arco-message-info{background-color:var(--color-bg-popup);border-color:var(--color-neutral-3)}.arco-message-info .arco-message-icon{color:rgb(var(--primary-6))}.arco-message-info .arco-message-content{color:var(--color-text-1)}.arco-message-success{background-color:var(--color-bg-popup);border-color:var(--color-neutral-3)}.arco-message-success .arco-message-icon{color:rgb(var(--success-6))}.arco-message-success .arco-message-content{color:var(--color-text-1)}.arco-message-warning{background-color:var(--color-bg-popup);border-color:var(--color-neutral-3)}.arco-message-warning .arco-message-icon{color:rgb(var(--warning-6))}.arco-message-warning .arco-message-content{color:var(--color-text-1)}.arco-message-error{background-color:var(--color-bg-popup);border-color:var(--color-neutral-3)}.arco-message-error .arco-message-icon{color:rgb(var(--danger-6))}.arco-message-error .arco-message-content{color:var(--color-text-1)}.arco-message-loading{background-color:var(--color-bg-popup);border-color:var(--color-neutral-3)}.arco-message-loading .arco-message-icon{color:rgb(var(--primary-6))}.arco-message-loading .arco-message-content{color:var(--color-text-1)}.arco-message-close-btn{margin-left:8px;color:var(--color-text-1);font-size:12px}.arco-message .arco-icon-hover.arco-message-icon-hover:before{width:20px;height:20px}.fade-message-enter-from,.fade-message-appear-from{opacity:0}.fade-message-enter-to,.fade-message-appear-to{opacity:1}.fade-message-enter-active,.fade-message-appear-active{transition:opacity .1s cubic-bezier(0,0,1,1)}.fade-message-leave-from{opacity:1}.fade-message-leave-to{opacity:0}.fade-message-leave-active{position:absolute}.flip-list-move{transition:transform .8s ease}@keyframes arco-msg-fade{0%{opacity:0}to{opacity:1}}@keyframes arco-msg-scale{0%{transform:scale(0)}to{transform:scale(1)}}.arco-modal-container{position:fixed;top:0;right:0;bottom:0;left:0}.arco-modal-mask{position:absolute;top:0;right:0;bottom:0;left:0;background-color:var(--color-mask-bg)}.arco-modal-wrapper{position:absolute;top:0;right:0;bottom:0;left:0;overflow:auto;text-align:center}.arco-modal-wrapper.arco-modal-wrapper-align-center{white-space:nowrap}.arco-modal-wrapper.arco-modal-wrapper-align-center:after{display:inline-block;width:0;height:100%;vertical-align:middle;content:""}.arco-modal-wrapper.arco-modal-wrapper-align-center .arco-modal{top:0;vertical-align:middle}.arco-modal-wrapper.arco-modal-wrapper-moved{text-align:left}.arco-modal-wrapper.arco-modal-wrapper-moved 
.arco-modal{top:0;vertical-align:top}.arco-modal{position:relative;top:100px;display:inline-block;width:520px;margin:0 auto;line-height:1.5715;white-space:initial;text-align:left;background-color:var(--color-bg-3);border-radius:var(--border-radius-medium)}.arco-modal-draggable .arco-modal-header{cursor:move}.arco-modal-header{display:flex;flex-shrink:0;align-items:center;box-sizing:border-box;width:100%;height:48px;padding:0 20px;border-bottom:1px solid var(--color-neutral-3)}.arco-modal-header .arco-modal-title{display:flex;flex:1;align-items:center;justify-content:center}.arco-modal-header .arco-modal-title-align-start{justify-content:flex-start}.arco-modal-header .arco-modal-title-align-center{justify-content:center}.arco-modal-body{position:relative;padding:24px 20px;overflow:auto;color:var(--color-text-1);font-size:14px}.arco-modal-footer{box-sizing:border-box;flex-shrink:0;width:100%;padding:16px 20px;text-align:right;border-top:1px solid var(--color-neutral-3)}.arco-modal-footer>.arco-btn:not(:nth-child(1)){margin-left:12px}.arco-modal-close-btn{margin-left:-12px;color:var(--color-text-1);font-size:12px;cursor:pointer}.arco-modal-title{color:var(--color-text-1);font-weight:500;font-size:16px}.arco-modal-title-icon{margin-right:10px;font-size:18px;vertical-align:-.15em}.arco-modal-title-icon .arco-icon-info-circle-fill{color:rgb(var(--primary-6))}.arco-modal-title-icon .arco-icon-check-circle-fill{color:rgb(var(--success-6))}.arco-modal-title-icon .arco-icon-exclamation-circle-fill{color:rgb(var(--warning-6))}.arco-modal-title-icon .arco-icon-close-circle-fill{color:rgb(var(--danger-6))}.arco-modal-simple{width:400px;padding:24px 32px 32px}.arco-modal-simple .arco-modal-header,.arco-modal-simple .arco-modal-footer{height:unset;padding:0;border:none}.arco-modal-simple .arco-modal-header{margin-bottom:24px}.arco-modal-simple .arco-modal-title{justify-content:center}.arco-modal-simple .arco-modal-title-align-start{justify-content:flex-start}.arco-modal-simple .arco-modal-title-align-center{justify-content:center}.arco-modal-simple .arco-modal-footer{margin-top:32px;text-align:center}.arco-modal-simple .arco-modal-body{padding:0}.arco-modal-fullscreen{top:0;display:inline-flex;flex-direction:column;box-sizing:border-box;width:100%;height:100%}.arco-modal-fullscreen .arco-modal-footer{margin-top:auto}.zoom-modal-enter-from,.zoom-modal-appear-from{transform:scale(.5);opacity:0}.zoom-modal-enter-to,.zoom-modal-appear-to{transform:scale(1);opacity:1}.zoom-modal-enter-active,.zoom-modal-appear-active{transition:opacity .4s cubic-bezier(.3,1.3,.3,1),transform .4s cubic-bezier(.3,1.3,.3,1)}.zoom-modal-leave-from{transform:scale(1);opacity:1}.zoom-modal-leave-to{transform:scale(.5);opacity:0}.zoom-modal-leave-active{transition:opacity .4s cubic-bezier(.3,1.3,.3,1),transform .4s cubic-bezier(.3,1.3,.3,1)}.fade-modal-enter-from,.fade-modal-appear-from{opacity:0}.fade-modal-enter-to,.fade-modal-appear-to{opacity:1}.fade-modal-enter-active,.fade-modal-appear-active{transition:opacity .4s cubic-bezier(.3,1.3,.3,1)}.fade-modal-leave-from{opacity:1}.fade-modal-leave-to{opacity:0}.fade-modal-leave-active{transition:opacity .4s cubic-bezier(.3,1.3,.3,1)}.arco-notification-list{position:fixed;z-index:1003;margin:0;padding-left:0}.arco-notification-list-top-left{top:20px;left:20px}.arco-notification-list-top-right{top:20px;right:20px}.arco-notification-list-top-right
.arco-notification{margin-left:auto}.arco-notification-list-bottom-left{bottom:20px;left:20px}.arco-notification-list-bottom-right{right:20px;bottom:20px}.arco-notification-list-bottom-right .arco-notification{margin-left:auto}.arco-notification{position:relative;display:flex;box-sizing:border-box;width:340px;padding:20px;overflow:hidden;background-color:var(--color-bg-popup);border:1px solid var(--color-neutral-3);border-radius:var(--border-radius-medium);box-shadow:0 4px 12px #00000026;opacity:1;transition:opacity .2s cubic-bezier(0,0,1,1)}.arco-notification:not(:last-child){margin-bottom:20px}.arco-notification-icon{display:flex;align-items:center;font-size:24px}.arco-notification-info{background-color:var(--color-bg-popup);border-color:var(--color-neutral-3)}.arco-notification-info .arco-notification-icon{color:rgb(var(--primary-6))}.arco-notification-success{background-color:var(--color-bg-popup);border-color:var(--color-neutral-3)}.arco-notification-success .arco-notification-icon{color:rgb(var(--success-6))}.arco-notification-warning{background-color:var(--color-bg-popup);border-color:var(--color-neutral-3)}.arco-notification-warning .arco-notification-icon{color:rgb(var(--warning-6))}.arco-notification-error{background-color:var(--color-bg-popup);border-color:var(--color-neutral-3)}.arco-notification-error .arco-notification-icon{color:rgb(var(--danger-6))}.arco-notification-left{padding-right:16px}.arco-notification-right{flex:1;word-break:break-word}.arco-notification-title{color:var(--color-text-1);font-weight:500;font-size:16px}.arco-notification-title+.arco-notification-content{margin-top:4px}.arco-notification-content{color:var(--color-text-1);font-size:14px}.arco-notification-info .arco-notification-title,.arco-notification-info .arco-notification-content,.arco-notification-success .arco-notification-title,.arco-notification-success .arco-notification-content,.arco-notification-warning .arco-notification-title,.arco-notification-warning .arco-notification-content,.arco-notification-error .arco-notification-title,.arco-notification-error .arco-notification-content{color:var(--color-text-1)}.arco-notification-footer{margin-top:16px;text-align:right}.arco-notification-close-btn{position:absolute;top:12px;right:12px;color:var(--color-text-1);font-size:12px;cursor:pointer}.arco-notification-close-btn>svg{position:relative}.arco-notification .arco-icon-hover.arco-notification-icon-hover:before{width:20px;height:20px}.slide-left-notification-enter-from,.slide-left-notification-appear-from{transform:translate(-100%)}.slide-left-notification-enter-to,.slide-left-notification-appear-to{transform:translate(0)}.slide-left-notification-enter-active,.slide-left-notification-appear-active{transition:transform .4s cubic-bezier(.3,1.3,.3,1)}.slide-left-notification-leave-from{opacity:1}.slide-left-notification-leave-to{height:0;margin-top:0;margin-bottom:0;padding-top:0;padding-bottom:0;opacity:0}.slide-left-notification-leave-active{transition:all .3s cubic-bezier(.34,.69,.1,1)}.slide-right-notification-enter-from,.slide-right-notification-appear-from{transform:translate(100%)}.slide-right-notification-enter-to,.slide-right-notification-appear-to{transform:translate(0)}.slide-right-notification-enter-active,.slide-right-notification-appear-active{transition:transform .4s 
cubic-bezier(.3,1.3,.3,1)}.slide-right-notification-leave-from{opacity:1}.slide-right-notification-leave-to{height:0;margin-top:0;margin-bottom:0;padding-top:0;padding-bottom:0;opacity:0}.slide-right-notification-leave-active{transition:all .3s cubic-bezier(.34,.69,.1,1)}.arco-overflow-list{display:flex;align-items:center;justify-content:flex-start}.arco-overflow-list>*:not(:last-child){flex-shrink:0}.arco-overflow-list-spacer{flex:1;min-width:0;height:1px}.arco-page-header{padding:16px 0}.arco-page-header-breadcrumb+.arco-page-header-header{margin-top:4px}.arco-page-header-wrapper{padding-right:20px;padding-left:24px}.arco-page-header-header{display:flex;align-items:center;justify-content:space-between;line-height:28px}.arco-page-header-header-left{display:flex;align-items:center}.arco-page-header-main{display:flex;align-items:center;min-height:30px}.arco-page-header-main-with-back{margin-left:-8px;padding-left:8px}.arco-page-header-extra{overflow:hidden;white-space:nowrap}.arco-page-header .arco-icon-hover.arco-page-header-icon-hover:before{width:30px;height:30px}.arco-page-header .arco-icon-hover.arco-page-header-icon-hover:hover:before{background-color:var(--color-fill-2)}.arco-page-header-back-btn{margin-right:12px;color:var(--color-text-2);font-size:14px}.arco-page-header-back-btn-icon{position:relative}.arco-page-header-title{overflow:hidden;white-space:nowrap;text-overflow:ellipsis;color:var(--color-text-1);font-weight:600;font-size:20px}.arco-page-header-divider{width:1px;height:16px;margin-right:12px;margin-left:12px;background-color:var(--color-fill-3)}.arco-page-header-subtitle{overflow:hidden;white-space:nowrap;text-overflow:ellipsis;color:var(--color-text-3);font-size:14px}.arco-page-header-content{padding:20px 32px;border-top:1px solid var(--color-neutral-3)}.arco-page-header-footer{padding:16px 20px 0 24px}.arco-page-header-with-breadcrumb{padding:12px 0}.arco-page-header-with-breadcrumb .arco-page-header-footer{padding-top:12px}.arco-page-header-with-content .arco-page-header-wrapper{padding-bottom:12px}.arco-page-header-with-footer{padding-bottom:0}.arco-page-header-wrapper .arco-page-header-header{flex-wrap:wrap}.arco-page-header-wrapper .arco-page-header-header .arco-page-header-head-extra{margin-top:4px}.arco-pagination{display:flex;align-items:center;font-size:14px}.arco-pagination-list{display:inline-block;margin:0;padding:0;white-space:nowrap;list-style:none}.arco-pagination-item{display:inline-block;box-sizing:border-box;padding:0 8px;color:var(--color-text-2);text-align:center;vertical-align:middle;list-style:none;background-color:transparent;border:0 solid transparent;border-radius:var(--border-radius-small);outline:0;cursor:pointer;user-select:none;min-width:32px;height:32px;font-size:14px;line-height:32px}.arco-pagination-item-previous,.arco-pagination-item-next{font-size:12px}.arco-pagination-item:hover{color:var(--color-text-2);background-color:var(--color-fill-1);border-color:transparent}.arco-pagination-item-active,.arco-pagination-item-active:hover{color:rgb(var(--primary-6));background-color:var(--color-primary-light-1);border-color:transparent;transition:color .2s cubic-bezier(0,0,1,1),background-color .2s 
cubic-bezier(0,0,1,1)}.arco-pagination-item-disabled,.arco-pagination-item-disabled:hover{color:var(--color-text-4);background-color:transparent;border-color:transparent;cursor:not-allowed}.arco-pagination-item:not(:last-child){margin-right:8px}.arco-pagination-item-previous,.arco-pagination-item-next{color:var(--color-text-2);font-size:12px;background-color:transparent}.arco-pagination-item-previous:not(.arco-pagination-item-disabled):hover,.arco-pagination-item-next:not(.arco-pagination-item-disabled):hover{color:rgb(var(--primary-6));background-color:var(--color-fill-1)}.arco-pagination-item-previous:after,.arco-pagination-item-next:after{display:inline-block;font-size:0;vertical-align:middle;content:"."}.arco-pagination .arco-pagination-item-previous.arco-pagination-item-disabled,.arco-pagination .arco-pagination-item-next.arco-pagination-item-disabled{color:var(--color-text-4);background-color:transparent}.arco-pagination-item-jumper{font-size:16px}.arco-pagination-jumper{display:flex;align-items:center;margin-left:8px}.arco-pagination-jumper>span{font-size:14px}.arco-pagination-jumper-text-goto,.arco-pagination-jumper-prepend,.arco-pagination-jumper-append{color:var(--color-text-3);white-space:nowrap}.arco-pagination-jumper-prepend{margin-right:8px}.arco-pagination-jumper-append{margin-left:8px}.arco-pagination-jumper .arco-pagination-jumper-input{width:40px;padding-right:2px;padding-left:2px}.arco-pagination-jumper .arco-pagination-jumper-input input{text-align:center}.arco-pagination-options{position:relative;display:inline-block;flex:0 0 auto;min-width:0;margin-left:8px;text-align:center;vertical-align:middle}.arco-pagination-options .arco-select{width:auto}.arco-pagination-options .arco-select-view-value{padding-right:6px;overflow:inherit}.arco-pagination-total{display:inline-block;height:100%;margin-right:8px;color:var(--color-text-1);font-size:14px;line-height:32px;white-space:nowrap}.arco-pagination-jumper{flex:0 0 auto}.arco-pagination-jumper-separator{padding:0 12px}.arco-pagination-jumper-total-page{margin-right:8px}.arco-pagination-simple{display:flex;align-items:center}.arco-pagination-simple .arco-pagination-item{margin-right:0}.arco-pagination-simple .arco-pagination-jumper{margin:0 4px;color:var(--color-text-1)}.arco-pagination-simple .arco-pagination-jumper .arco-pagination-jumper-input{width:40px;margin-left:0}.arco-pagination-simple .arco-pagination-item-previous,.arco-pagination-simple .arco-pagination-item-next{color:var(--color-text-2);background-color:transparent}.arco-pagination-simple .arco-pagination-item-previous:not(.arco-pagination-item-disabled):hover,.arco-pagination-simple .arco-pagination-item-next:not(.arco-pagination-item-disabled):hover{color:rgb(var(--primary-6));background-color:var(--color-fill-1)}.arco-pagination-simple .arco-pagination-item-previous.arco-pagination-item-disabled,.arco-pagination-simple .arco-pagination-item-next.arco-pagination-item-disabled{color:var(--color-text-4);background-color:transparent}.arco-pagination-disabled{cursor:not-allowed}.arco-pagination-disabled .arco-pagination-item,.arco-pagination-disabled .arco-pagination-item:not(.arco-pagination-item-disabled):not(.arco-pagination-item-active):hover{color:var(--color-text-4);background-color:transparent;border-color:transparent;cursor:not-allowed}.arco-pagination.arco-pagination-disabled .arco-pagination-item-active{color:var(--color-primary-light-3);background-color:var(--color-fill-1);border-color:transparent}.arco-pagination-size-mini 
.arco-pagination-item{min-width:24px;height:24px;font-size:12px;line-height:24px}.arco-pagination-size-mini .arco-pagination-item-previous,.arco-pagination-size-mini .arco-pagination-item-next{font-size:12px}.arco-pagination-size-mini .arco-pagination-total{font-size:12px;line-height:24px}.arco-pagination-size-mini .arco-pagination-option{height:24px;font-size:12px;line-height:0}.arco-pagination-size-mini .arco-pagination-jumper>span{font-size:12px}.arco-pagination-size-small .arco-pagination-item{min-width:28px;height:28px;font-size:14px;line-height:28px}.arco-pagination-size-small .arco-pagination-item-previous,.arco-pagination-size-small .arco-pagination-item-next{font-size:12px}.arco-pagination-size-small .arco-pagination-total{font-size:14px;line-height:28px}.arco-pagination-size-small .arco-pagination-option{height:28px;font-size:14px;line-height:0}.arco-pagination-size-small .arco-pagination-jumper>span{font-size:14px}.arco-pagination-size-large .arco-pagination-item{min-width:36px;height:36px;font-size:14px;line-height:36px}.arco-pagination-size-large .arco-pagination-item-previous,.arco-pagination-size-large .arco-pagination-item-next{font-size:14px}.arco-pagination-size-large .arco-pagination-total{font-size:14px;line-height:36px}.arco-pagination-size-large .arco-pagination-option{height:36px;font-size:14px;line-height:0}.arco-pagination-size-large .arco-pagination-jumper>span{font-size:14px}.arco-popconfirm-popup-content{box-sizing:border-box;padding:16px;color:var(--color-text-2);font-size:14px;line-height:1.5715;background-color:var(--color-bg-popup);border:1px solid var(--color-neutral-3);border-radius:var(--border-radius-medium);box-shadow:0 4px 10px #0000001a}.arco-popconfirm-popup-content .arco-popconfirm-body{position:relative;display:flex;align-items:flex-start;margin-bottom:16px;color:var(--color-text-1);font-size:14px}.arco-popconfirm-popup-content .arco-popconfirm-body .arco-popconfirm-icon{display:inline-flex;align-items:center;height:22.001px;margin-right:8px;font-size:18px}.arco-popconfirm-popup-content .arco-popconfirm-body .arco-popconfirm-icon .arco-icon-exclamation-circle-fill{color:rgb(var(--warning-6))}.arco-popconfirm-popup-content .arco-popconfirm-body .arco-popconfirm-icon .arco-icon-check-circle-fill{color:rgb(var(--success-6))}.arco-popconfirm-popup-content .arco-popconfirm-body .arco-popconfirm-icon .arco-icon-info-circle-fill{color:rgb(var(--primary-6))}.arco-popconfirm-popup-content .arco-popconfirm-body .arco-popconfirm-icon .arco-icon-close-circle-fill{color:rgb(var(--danger-6))}.arco-popconfirm-popup-content .arco-popconfirm-body .arco-popconfirm-content{text-align:left;word-wrap:break-word}.arco-popconfirm-popup-content .arco-popconfirm-footer{text-align:right}.arco-popconfirm-popup-content .arco-popconfirm-footer>button{margin-left:8px}.arco-popconfirm-popup-arrow{z-index:1;background-color:var(--color-bg-popup);border:1px solid var(--color-neutral-3)}.arco-popover-popup-content{box-sizing:border-box;padding:12px 16px;color:var(--color-text-2);font-size:14px;line-height:1.5715;background-color:var(--color-bg-popup);border:1px solid var(--color-neutral-3);border-radius:var(--border-radius-medium);box-shadow:0 4px 10px #0000001a}.arco-popover-title{color:var(--color-text-1);font-weight:500;font-size:16px}.arco-popover-content{margin-top:4px;text-align:left;word-wrap:break-word}.arco-popover-popup-arrow{z-index:1;background-color:var(--color-bg-popup);border:1px solid 
var(--color-neutral-3)}.arco-progress{position:relative;line-height:1;font-size:12px}.arco-progress-type-line,.arco-progress-type-steps{display:inline-block;max-width:100%;width:100%}.arco-progress-type-line.arco-progress-size-mini{width:auto}.arco-progress-line-wrapper,.arco-progress-steps-wrapper{display:flex;align-items:center;width:100%;max-width:100%;height:100%}.arco-progress-line-text,.arco-progress-steps-text{font-size:12px;margin-left:16px;color:var(--color-text-2);white-space:nowrap;text-align:right;flex-grow:1;flex-shrink:0;min-width:32px}.arco-progress-line-text .arco-icon,.arco-progress-steps-text .arco-icon{font-size:12px;margin-left:4px}.arco-progress-line{background-color:var(--color-fill-3);border-radius:100px;width:100%;position:relative;display:inline-block;overflow:hidden}.arco-progress-line-bar{height:100%;border-radius:100px;background-color:rgb(var(--primary-6));position:relative;transition:width .6s cubic-bezier(.34,.69,.1,1),background .3s cubic-bezier(.34,.69,.1,1);max-width:100%}.arco-progress-line-bar-buffer{position:absolute;background-color:var(--color-primary-light-3);height:100%;top:0;left:0;border-radius:0 100px 100px 0;max-width:100%;transition:all .6s cubic-bezier(.34,.69,.1,1)}.arco-progress-line-bar-animate:after{content:"";display:block;position:absolute;top:0;width:100%;height:100%;border-radius:inherit;background:linear-gradient(90deg,transparent 25%,rgba(255,255,255,.5) 50%,transparent 75%);background-size:400% 100%;animation:arco-progress-loading 1.5s cubic-bezier(.34,.69,.1,1) infinite}.arco-progress-line-text .arco-icon{color:var(--color-text-2)}.arco-progress-type-steps.arco-progress-size-small{width:auto}.arco-progress-type-steps.arco-progress-size-small .arco-progress-steps-item{width:2px;flex:unset;border-radius:2px}.arco-progress-type-steps.arco-progress-size-small .arco-progress-steps-item:not(:last-of-type){margin-right:3px}.arco-progress-steps{display:flex;width:100%}.arco-progress-steps-text{margin-left:8px;min-width:unset}.arco-progress-steps-text .arco-icon{color:var(--color-text-2)}.arco-progress-steps-item{height:100%;flex:1;background-color:var(--color-fill-3);position:relative;display:inline-block}.arco-progress-steps-item:not(:last-of-type){margin-right:3px}.arco-progress-steps-item:last-of-type{border-top-right-radius:100px;border-bottom-right-radius:100px}.arco-progress-steps-item:first-of-type{border-top-left-radius:100px;border-bottom-left-radius:100px}.arco-progress-steps-item-active{background-color:rgb(var(--primary-6))}.arco-progress-status-warning .arco-progress-line-bar,.arco-progress-status-warning .arco-progress-steps-item-active{background-color:rgb(var(--warning-6))}.arco-progress-status-warning .arco-progress-line-text .arco-icon,.arco-progress-status-warning .arco-progress-steps-text .arco-icon{color:rgb(var(--warning-6))}.arco-progress-status-success .arco-progress-line-bar,.arco-progress-status-success .arco-progress-steps-item-active{background-color:rgb(var(--success-6))}.arco-progress-status-success .arco-progress-line-text .arco-icon,.arco-progress-status-success .arco-progress-steps-text .arco-icon{color:rgb(var(--success-6))}.arco-progress-status-danger .arco-progress-line-bar,.arco-progress-status-danger .arco-progress-steps-item-active{background-color:rgb(var(--danger-6))}.arco-progress-status-danger .arco-progress-line-text .arco-icon,.arco-progress-status-danger .arco-progress-steps-text .arco-icon{color:rgb(var(--danger-6))}.arco-progress-size-small 
.arco-progress-line-text{font-size:12px;margin-left:16px}.arco-progress-size-small .arco-progress-line-text .arco-icon{font-size:12px}.arco-progress-size-large .arco-progress-line-text{font-size:16px;margin-left:16px}.arco-progress-size-large .arco-progress-line-text .arco-icon{font-size:14px}.arco-progress-type-circle{display:inline-block}.arco-progress-circle-wrapper{position:relative;text-align:center;line-height:1;display:inline-block;vertical-align:text-bottom}.arco-progress-circle-svg{transform:rotate(-90deg)}.arco-progress-circle-text{position:absolute;top:50%;left:50%;color:var(--color-text-3);transform:translate(-50%,-50%);font-size:14px}.arco-progress-circle-text .arco-icon{font-size:16px;color:var(--color-text-2)}.arco-progress-circle-bg{stroke:var(--color-fill-3)}.arco-progress-circle-bar{stroke:rgb(var(--primary-6));transition:stroke-dashoffset .6s cubic-bezier(0,0,1,1) 0s,stroke .6s cubic-bezier(0,0,1,1)}.arco-progress-size-mini .arco-progress-circle-bg{stroke:var(--color-primary-light-3)}.arco-progress-size-mini .arco-progress-circle-bar{stroke:rgb(var(--primary-6))}.arco-progress-size-mini.arco-progress-status-warning .arco-progress-circle-bg{stroke:var(--color-warning-light-3)}.arco-progress-size-mini.arco-progress-status-danger .arco-progress-circle-bg{stroke:var(--color-danger-light-3)}.arco-progress-size-mini.arco-progress-status-success .arco-progress-circle-bg{stroke:var(--color-success-light-3)}.arco-progress-size-mini .arco-progress-circle-wrapper .arco-icon-check{position:absolute;top:50%;left:50%;transform:translate(-50%) translateY(-50%)}.arco-progress-size-mini .arco-progress-circle-text{position:static;top:unset;left:unset;transform:unset}.arco-progress-size-small .arco-progress-circle-text{font-size:13px}.arco-progress-size-small .arco-progress-circle-text .arco-icon{font-size:14px}.arco-progress-size-large .arco-progress-circle-text,.arco-progress-size-large .arco-progress-circle-text .arco-icon{font-size:16px}.arco-progress-status-warning .arco-progress-circle-bar{stroke:rgb(var(--warning-6))}.arco-progress-status-warning .arco-icon{color:rgb(var(--warning-6))}.arco-progress-status-success .arco-progress-circle-bar{stroke:rgb(var(--success-6))}.arco-progress-status-success .arco-icon{color:rgb(var(--success-6))}.arco-progress-status-danger .arco-progress-circle-bar{stroke:rgb(var(--danger-6))}.arco-progress-status-danger .arco-icon{color:rgb(var(--danger-6))}@keyframes arco-progress-loading{0%{background-position:100% 50%}to{background-position:0 50%}}.arco-radio>input[type=radio],.arco-radio-button>input[type=radio]{position:absolute;top:0;left:0;width:0;height:0;opacity:0}.arco-radio>input[type=radio]:focus+.arco-radio-icon-hover:before,.arco-radio-button>input[type=radio]:focus+.arco-radio-icon-hover:before{background-color:var(--color-fill-2)}.arco-icon-hover.arco-radio-icon-hover:before{width:24px;height:24px}.arco-radio{position:relative;display:inline-flex;align-items:center;padding-left:5px;font-size:14px;line-height:unset;cursor:pointer}.arco-radio-label{margin-left:8px;color:var(--color-text-1)}.arco-radio-icon{position:relative;display:block;box-sizing:border-box;width:14px;height:14px;line-height:14px;border:2px solid var(--color-neutral-3);border-radius:var(--border-radius-circle)}.arco-radio-icon:after{position:absolute;top:0;left:0;display:inline-block;box-sizing:border-box;width:10px;height:10px;background-color:var(--color-bg-2);border-radius:var(--border-radius-circle);transform:scale(1);transition:transform .3s 
cubic-bezier(.3,1.3,.3,1);content:""}.arco-radio:hover .arco-radio-icon{border-color:var(--color-neutral-3)}.arco-radio-checked .arco-radio-icon{background-color:rgb(var(--primary-6));border-color:rgb(var(--primary-6))}.arco-radio-checked .arco-radio-icon:after{background-color:var(--color-white);transform:scale(.4)}.arco-radio-checked:hover .arco-radio-icon{border-color:rgb(var(--primary-6))}.arco-radio-disabled,.arco-radio-disabled .arco-radio-icon-hover{cursor:not-allowed}.arco-radio-disabled .arco-radio-label{color:var(--color-text-4)}.arco-radio-disabled .arco-radio-icon{border-color:var(--color-neutral-3)}.arco-radio-disabled .arco-radio-icon:after{background-color:var(--color-fill-2)}.arco-radio-disabled:hover .arco-radio-icon{border-color:var(--color-neutral-3)}.arco-radio-checked.arco-radio-disabled .arco-radio-icon,.arco-radio-checked.arco-radio-disabled:hover .arco-radio-icon{background-color:var(--color-primary-light-3);border-color:transparent}.arco-radio-checked.arco-radio-disabled .arco-radio-icon:after{background-color:var(--color-fill-2)}.arco-radio-checked.arco-radio-disabled .arco-radio-label{color:var(--color-text-4)}.arco-radio:hover .arco-radio-icon-hover:before{background-color:var(--color-fill-2)}.arco-radio-group{display:inline-block;box-sizing:border-box}.arco-radio-group .arco-radio{margin-right:20px}.arco-radio-group-button{display:inline-flex;padding:1.5px;line-height:26px;background-color:var(--color-fill-2);border-radius:var(--border-radius-small)}.arco-radio-button{position:relative;display:inline-block;margin:1.5px;color:var(--color-text-2);font-size:14px;line-height:26px;background-color:transparent;border-radius:var(--border-radius-small);cursor:pointer;transition:all .1s cubic-bezier(0,0,1,1)}.arco-radio-button-content{position:relative;display:block;padding:0 12px}.arco-radio-button:not(:first-of-type):before{position:absolute;top:50%;left:-2px;display:block;width:1px;height:14px;background-color:var(--color-neutral-3);transform:translateY(-50%);transition:all .1s cubic-bezier(0,0,1,1);content:""}.arco-radio-button:hover:before,.arco-radio-button:hover+.arco-radio-button:before,.arco-radio-button.arco-radio-checked:before,.arco-radio-button.arco-radio-checked+.arco-radio-button:before{opacity:0}.arco-radio-button:hover{color:var(--color-text-1);background-color:var(--color-bg-5)}.arco-radio-button.arco-radio-checked{color:rgb(var(--primary-6));background-color:var(--color-bg-5)}.arco-radio-button.arco-radio-disabled{color:var(--color-text-4);background-color:transparent;cursor:not-allowed}.arco-radio-button.arco-radio-disabled.arco-radio-checked{color:var(--color-primary-light-3);background-color:var(--color-bg-5)}.arco-radio-group-size-small{line-height:28px}.arco-radio-group-size-small.arco-radio-group-button,.arco-radio-group-size-small .arco-radio-button{font-size:14px;line-height:22px}.arco-radio-group-size-large{line-height:36px}.arco-radio-group-size-large.arco-radio-group-button,.arco-radio-group-size-large .arco-radio-button{font-size:14px;line-height:30px}.arco-radio-group-size-mini{line-height:24px}.arco-radio-group-size-mini.arco-radio-group-button,.arco-radio-group-size-mini .arco-radio-button{font-size:12px;line-height:18px}.arco-radio-group-direction-vertical .arco-radio{display:flex;margin-right:0;line-height:32px}body[arco-theme=dark] .arco-radio-button.arco-radio-checked,body[arco-theme=dark] .arco-radio-button:not(.arco-radio-disabled):hover{background-color:var(--color-fill-3)}body[arco-theme=dark] 
.arco-radio-button:after{background-color:var(--color-bg-3)}.arco-rate{display:inline-flex;align-items:center;min-height:32px;font-size:24px;line-height:1;user-select:none}.arco-rate-disabled{cursor:not-allowed}.arco-rate-character{position:relative;color:var(--color-fill-3);transition:transform .2s cubic-bezier(.34,.69,.1,1)}.arco-rate-character:not(:last-child){margin-right:8px}.arco-rate-character-left,.arco-rate-character-right{transition:inherit}.arco-rate-character-left>*,.arco-rate-character-right>*{float:left}.arco-rate-character-left{position:absolute;top:0;left:0;width:50%;overflow:hidden;white-space:nowrap;opacity:0}.arco-rate-character-scale{animation:arco-rate-scale .4s cubic-bezier(.34,.69,.1,1)}.arco-rate-character-full .arco-rate-character-right{color:rgb(var(--gold-6))}.arco-rate-character-half .arco-rate-character-left{color:rgb(var(--gold-6));opacity:1}.arco-rate-character-disabled{cursor:not-allowed}.arco-rate:not(.arco-rate-readonly):not(.arco-rate-disabled) .arco-rate-character{cursor:pointer}.arco-rate:not(.arco-rate-readonly):not(.arco-rate-disabled) .arco-rate-character:hover,.arco-rate:not(.arco-rate-readonly):not(.arco-rate-disabled) .arco-rate-character:focus{transform:scale(1.2)}@keyframes arco-rate-scale{0%{transform:scale(1)}50%{transform:scale(1.2)}to{transform:scale(1)}}.arco-resizebox{position:relative;width:100%;overflow:hidden}.arco-resizebox-direction-left,.arco-resizebox-direction-right,.arco-resizebox-direction-top,.arco-resizebox-direction-bottom{position:absolute;top:0;left:0;box-sizing:border-box;user-select:none}.arco-resizebox-direction-right{right:0;left:unset}.arco-resizebox-direction-bottom{top:unset;bottom:0}.arco-resizebox-trigger-icon-wrapper{display:flex;align-items:center;justify-content:center;height:100%;color:var(--color-text-1);font-size:12px;line-height:1;background-color:var(--color-neutral-3)}.arco-resizebox-trigger-icon{display:inline-block;margin:-3px}.arco-resizebox-trigger-vertical{height:100%;cursor:col-resize}.arco-resizebox-trigger-horizontal{width:100%;cursor:row-resize}.arco-result{box-sizing:border-box;width:100%;padding:32px 32px 24px}.arco-result-icon{margin-bottom:16px;font-size:20px;text-align:center}.arco-result-icon-tip{display:flex;width:45px;height:45px;align-items:center;justify-content:center;border-radius:50%;margin:0 auto}.arco-result-icon-custom .arco-result-icon-tip{font-size:45px;color:inherit;width:unset;height:unset}.arco-result-icon-success .arco-result-icon-tip{color:rgb(var(--success-6));background-color:var(--color-success-light-1)}.arco-result-icon-error .arco-result-icon-tip{color:rgb(var(--danger-6));background-color:var(--color-danger-light-1)}.arco-result-icon-info .arco-result-icon-tip{color:rgb(var(--primary-6));background-color:var(--color-primary-light-1)}.arco-result-icon-warning .arco-result-icon-tip{color:rgb(var(--warning-6));background-color:var(--color-warning-light-1)}.arco-result-icon-404,.arco-result-icon-403,.arco-result-icon-500{padding-top:24px}.arco-result-icon-404 .arco-result-icon-tip,.arco-result-icon-403 .arco-result-icon-tip,.arco-result-icon-500 
.arco-result-icon-tip{width:92px;height:92px;line-height:92px}.arco-result-title{color:var(--color-text-1);font-weight:500;font-size:14px;line-height:1.5715;text-align:center}.arco-result-subtitle{color:var(--color-text-2);font-size:14px;line-height:1.5715;text-align:center}.arco-result-extra{margin-top:20px;text-align:center}.arco-result-content{margin-top:20px}.arco-scrollbar{position:relative}.arco-scrollbar-container{position:relative;scrollbar-width:none}.arco-scrollbar-container::-webkit-scrollbar{display:none}.arco-scrollbar-track{position:absolute;z-index:100}.arco-scrollbar-track-direction-horizontal{bottom:0;left:0;box-sizing:border-box;width:100%;height:15px}.arco-scrollbar-track-direction-vertical{top:0;right:0;box-sizing:border-box;width:15px;height:100%}.arco-scrollbar-thumb{position:absolute;display:block;box-sizing:border-box}.arco-scrollbar-thumb-bar{width:100%;height:100%;background-color:var(--color-neutral-4);border-radius:6px}.arco-scrollbar-thumb:hover .arco-scrollbar-thumb-bar,.arco-scrollbar-thumb-dragging .arco-scrollbar-thumb-bar{background-color:var(--color-neutral-6)}.arco-scrollbar-thumb-direction-horizontal .arco-scrollbar-thumb-bar{height:9px;margin:3px 0}.arco-scrollbar-thumb-direction-vertical .arco-scrollbar-thumb-bar{width:9px;margin:0 3px}.arco-scrollbar.arco-scrollbar-type-embed .arco-scrollbar-thumb{opacity:0;transition:opacity ease .2s}.arco-scrollbar.arco-scrollbar-type-embed .arco-scrollbar-thumb-dragging,.arco-scrollbar.arco-scrollbar-type-embed:hover .arco-scrollbar-thumb{opacity:.8}.arco-scrollbar.arco-scrollbar-type-track .arco-scrollbar-track{background-color:var(--color-neutral-1)}.arco-scrollbar.arco-scrollbar-type-track .arco-scrollbar-track-direction-horizontal{border-top:1px solid var(--color-neutral-3);border-bottom:1px solid var(--color-neutral-3)}.arco-scrollbar.arco-scrollbar-type-track .arco-scrollbar-track-direction-vertical{border-right:1px solid var(--color-neutral-3);border-left:1px solid var(--color-neutral-3)}.arco-scrollbar.arco-scrollbar-type-track .arco-scrollbar-thumb-direction-horizontal{margin:-1px 0}.arco-scrollbar.arco-scrollbar-type-track .arco-scrollbar-thumb-direction-vertical{margin:0 -1px}.arco-scrollbar.arco-scrollbar-type-track.arco-scrollbar-both .arco-scrollbar-track-direction-vertical:after{position:absolute;right:-1px;bottom:0;display:block;box-sizing:border-box;width:15px;height:15px;background-color:var(--color-neutral-1);border-right:1px solid var(--color-neutral-3);border-bottom:1px solid var(--color-neutral-3);content:""}.arco-select-dropdown{box-sizing:border-box;padding:4px 0;background-color:var(--color-bg-popup);border:1px solid var(--color-fill-3);border-radius:var(--border-radius-medium);box-shadow:0 4px 10px #0000001a}.arco-select-dropdown .arco-select-dropdown-loading{display:flex;align-items:center;justify-content:center;min-height:50px}.arco-select-dropdown-list{margin-top:0;margin-bottom:0;padding-left:0;list-style:none}.arco-select-dropdown-list-wrapper{max-height:200px;overflow-y:auto}.arco-select-dropdown .arco-select-option{position:relative;z-index:1;display:flex;align-items:center;box-sizing:border-box;width:100%;padding:0 12px;color:var(--color-text-1);font-size:14px;line-height:36px;text-align:left;background-color:var(--color-bg-popup);cursor:pointer}.arco-select-dropdown .arco-select-option-content{overflow:hidden;white-space:nowrap;text-overflow:ellipsis}.arco-select-dropdown .arco-select-option-checkbox{overflow:hidden}.arco-select-dropdown .arco-select-option-checkbox 
.arco-checkbox-label{overflow:hidden;white-space:nowrap;text-overflow:ellipsis}.arco-select-dropdown .arco-select-option-has-suffix{justify-content:space-between}.arco-select-dropdown .arco-select-option-active,.arco-select-dropdown .arco-select-option:not(.arco-select-dropdown .arco-select-option-disabled):hover{color:var(--color-text-1);background-color:var(--color-fill-2);transition:all .1s cubic-bezier(0,0,1,1)}.arco-select-dropdown .arco-select-option-disabled{color:var(--color-text-4);background-color:var(--color-bg-popup);cursor:not-allowed}.arco-select-dropdown .arco-select-option-icon{display:inline-flex;margin-right:8px}.arco-select-dropdown .arco-select-option-suffix{margin-left:12px}.arco-select-dropdown .arco-select-group:first-child .arco-select-dropdown .arco-select-group-title{margin-top:8px}.arco-select-dropdown .arco-select-group-title{box-sizing:border-box;width:100%;margin-top:8px;padding:0 12px;color:var(--color-text-3);font-size:12px;line-height:20px;cursor:default;overflow:hidden;white-space:nowrap;text-overflow:ellipsis}.arco-select-dropdown.arco-select-dropdown-has-header{padding-top:0}.arco-select-dropdown-header{border-bottom:1px solid var(--color-fill-3)}.arco-select-dropdown.arco-select-dropdown-has-footer{padding-bottom:0}.arco-select-dropdown-footer{border-top:1px solid var(--color-fill-3)}.arco-skeleton-shape{width:48px;height:48px;background-color:var(--color-fill-2);border-radius:var(--border-radius-small)}.arco-skeleton-shape-circle{border-radius:50%}.arco-skeleton-shape-small{width:36px;height:36px}.arco-skeleton-shape-large{width:60px;height:60px}.arco-skeleton-line{margin:0;padding:0;list-style:none}.arco-skeleton-line-row{height:16px;background-color:var(--color-fill-2)}.arco-skeleton-line-row:not(:last-child){margin-bottom:16px}.arco-skeleton-animation .arco-skeleton-shape,.arco-skeleton-animation .arco-skeleton-line-row{background:linear-gradient(90deg,var(--color-fill-2) 25%,var(--color-fill-3) 37%,var(--color-fill-2) 63%);background-size:400% 100%;animation:arco-skeleton-circle 1.5s cubic-bezier(0,0,1,1) infinite}@keyframes arco-skeleton-circle{0%{background-position:100% 50%}to{background-position:0 50%}}.arco-slider{display:inline-flex;align-items:center;width:100%}.arco-slider-vertical{display:inline-block;width:auto;min-width:22px;height:auto}.arco-slider-vertical .arco-slider-wrapper{flex-direction:column}.arco-slider-with-marks{margin-bottom:24px;padding:20px}.arco-slider-vertical.arco-slider-with-marks{margin-bottom:0;padding:0}.arco-slider-track{position:relative;flex:1;width:100%;height:12px;cursor:pointer}.arco-slider-track:before{position:absolute;top:50%;display:block;width:100%;height:2px;background-color:var(--color-fill-3);border-radius:2px;transform:translateY(-50%);content:""}.arco-slider-track.arco-slider-track-vertical{width:12px;max-width:12px;height:100%;min-height:200px;margin-right:0;margin-bottom:6px;margin-top:6px;transform:translateY(0)}.arco-slider-track.arco-slider-track-vertical:before{top:unset;left:50%;width:2px;height:100%;transform:translate(-50%)}.arco-slider-track.arco-slider-track-disabled:before{background-color:var(--color-fill-2)}.arco-slider-track.arco-slider-track-disabled .arco-slider-bar{background-color:var(--color-fill-3)}.arco-slider-track.arco-slider-track-disabled .arco-slider-btn{cursor:not-allowed}.arco-slider-track.arco-slider-track-disabled .arco-slider-btn:after{border-color:var(--color-fill-3)}.arco-slider-track.arco-slider-track-disabled .arco-slider-dots 
.arco-slider-dot{border-color:var(--color-fill-2)}.arco-slider-track.arco-slider-track-disabled .arco-slider-dots .arco-slider-dot-active{border-color:var(--color-fill-3)}.arco-slider-track.arco-slider-track-disabled .arco-slider-ticks .arco-slider-tick{background:var(--color-fill-2)}.arco-slider-track.arco-slider-track-disabled .arco-slider-ticks .arco-slider-tick-active{background:var(--color-fill-3)}.arco-slider-bar{position:absolute;top:50%;height:2px;background-color:rgb(var(--primary-6));border-radius:2px;transform:translateY(-50%)}.arco-slider-track-vertical .arco-slider-bar{top:unset;left:50%;width:2px;height:unset;transform:translate(-50%)}.arco-slider-btn{position:absolute;top:0;left:0;width:12px;height:12px;transform:translate(-50%)}.arco-slider-btn:after{position:absolute;top:0;left:0;display:inline-block;box-sizing:border-box;width:12px;height:12px;background:var(--color-bg-2);border:2px solid rgb(var(--primary-6));border-radius:50%;transition:all .3s cubic-bezier(.3,1.3,.3,1);content:""}.arco-slider-btn.arco-slider-btn-active:after,.arco-slider-btn:hover:after{box-shadow:0 2px 5px #0000001a;transform:scale(1.16666667)}.arco-slider-track-vertical .arco-slider-btn{top:unset;bottom:0;left:0;transform:translateY(50%)}.arco-slider-marks{position:absolute;top:12px;width:100%}.arco-slider-marks .arco-slider-mark{position:absolute;color:var(--color-text-3);font-size:14px;line-height:1;transform:translate(-50%);cursor:pointer}.arco-slider-track-vertical .arco-slider-marks{top:0;left:15px;height:100%}.arco-slider-track-vertical .arco-slider-marks .arco-slider-mark{transform:translateY(50%)}.arco-slider-dots{height:100%}.arco-slider-dots .arco-slider-dot-wrapper{position:absolute;top:50%;font-size:12px;transform:translate(-50%,-50%)}.arco-slider-track-vertical .arco-slider-dots .arco-slider-dot-wrapper{top:unset;left:50%;transform:translate(-50%,50%)}.arco-slider-dots .arco-slider-dot-wrapper .arco-slider-dot{box-sizing:border-box;width:8px;height:8px;background-color:var(--color-bg-2);border:2px solid var(--color-fill-3);border-radius:50%}.arco-slider-dots .arco-slider-dot-wrapper .arco-slider-dot-active{border-color:rgb(var(--primary-6))}.arco-slider-ticks .arco-slider-tick{position:absolute;top:50%;width:1px;height:3px;margin-top:-1px;background:var(--color-fill-3);transform:translate(-50%,-100%)}.arco-slider-ticks .arco-slider-tick-active{background:rgb(var(--primary-6))}.arco-slider-vertical .arco-slider-ticks .arco-slider-tick{top:unset;left:50%;width:3px;height:1px;margin-top:unset;transform:translate(1px,50%)}.arco-slider-input{display:flex;align-items:center;margin-left:20px}.arco-slider-vertical .arco-slider-input{margin-left:0}.arco-slider-input>.arco-input-number{width:60px;height:32px;overflow:visible;line-height:normal}.arco-slider-input>.arco-input-number input{text-align:center}.arco-slider-input-hyphens{margin:0 6px;width:8px;height:2px;background:rgb(var(--gray-6))}.arco-space{display:inline-flex}.arco-space-horizontal 
.arco-space-item{display:flex;align-items:center}.arco-space-vertical{flex-direction:column}.arco-space-align-baseline{align-items:baseline}.arco-space-align-start{align-items:flex-start}.arco-space-align-end{align-items:flex-end}.arco-space-align-center{align-items:center}.arco-space-wrap{flex-wrap:wrap}.arco-space-fill{display:flex}.arco-dot-loading{position:relative;display:inline-block;width:56px;height:8px;transform-style:preserve-3d;perspective:200px}.arco-dot-loading-item{position:absolute;top:0;left:50%;width:8px;height:8px;background-color:rgb(var(--primary-6));border-radius:var(--border-radius-circle);transform:translate(-50%) scale(0);animation:arco-dot-loading 2s cubic-bezier(0,0,1,1) infinite forwards}.arco-dot-loading-item:nth-child(2){background-color:rgb(var(--primary-5));animation-delay:.4s}.arco-dot-loading-item:nth-child(3){background-color:rgb(var(--primary-4));animation-delay:.8s}.arco-dot-loading-item:nth-child(4){background-color:rgb(var(--primary-4));animation-delay:1.2s}.arco-dot-loading-item:nth-child(5){background-color:rgb(var(--primary-2));animation-delay:1.6s}@keyframes arco-dot-loading{0%{transform:translate3D(-48.621%,0,-.985px) scale(.511)}2.778%{transform:translate3D(-95.766%,0,-.94px) scale(.545)}5.556%{transform:translate3D(-140%,0,-.866px) scale(.6)}8.333%{transform:translate3D(-179.981%,0,-.766px) scale(.675)}11.111%{transform:translate3D(-214.492%,0,-.643px) scale(.768)}13.889%{transform:translate3D(-242.487%,0,-.5px) scale(.875)}16.667%{transform:translate3D(-263.114%,0,-.342px) scale(.993)}19.444%{transform:translate3D(-275.746%,0,-.174px) scale(1.12)}22.222%{transform:translate3D(-280%,0,0) scale(1.25)}25%{transform:translate3D(-275.746%,0,.174px) scale(1.38)}27.778%{transform:translate3D(-263.114%,0,.342px) scale(1.507)}30.556%{transform:translate3D(-242.487%,0,.5px) scale(1.625)}33.333%{transform:translate3D(-214.492%,0,.643px) scale(1.732)}36.111%{transform:translate3D(-179.981%,0,.766px) scale(1.825)}38.889%{transform:translate3D(-140%,0,.866px) scale(1.9)}41.667%{transform:translate3D(-95.766%,0,.94px) scale(1.955)}44.444%{transform:translate3D(-48.621%,0,.985px) scale(1.989)}47.222%{transform:translateZ(1px) scale(2)}50%{transform:translate3D(48.621%,0,.985px) scale(1.989)}52.778%{transform:translate3D(95.766%,0,.94px) scale(1.955)}55.556%{transform:translate3D(140%,0,.866px) scale(1.9)}58.333%{transform:translate3D(179.981%,0,.766px) scale(1.825)}61.111%{transform:translate3D(214.492%,0,.643px) scale(1.732)}63.889%{transform:translate3D(242.487%,0,.5px) scale(1.625)}66.667%{transform:translate3D(263.114%,0,.342px) scale(1.507)}69.444%{transform:translate3D(275.746%,0,.174px) scale(1.38)}72.222%{transform:translate3D(280%,0,0) scale(1.25)}75%{transform:translate3D(275.746%,0,-.174px) scale(1.12)}77.778%{transform:translate3D(263.114%,0,-.342px) scale(.993)}80.556%{transform:translate3D(242.487%,0,-.5px) scale(.875)}83.333%{transform:translate3D(214.492%,0,-.643px) scale(.768)}86.111%{transform:translate3D(179.981%,0,-.766px) scale(.675)}88.889%{transform:translate3D(140%,0,-.866px) scale(.6)}91.667%{transform:translate3D(95.766%,0,-.94px) scale(.545)}94.444%{transform:translate3D(48.621%,0,-.985px) scale(.511)}97.222%{transform:translateZ(-1px) 
scale(.5)}}.arco-spin{display:inline-block}.arco-spin-with-tip{text-align:center}.arco-spin-icon{color:rgb(var(--primary-6));font-size:20px}.arco-spin-tip{margin-top:6px;color:rgb(var(--primary-6));font-weight:500;font-size:14px}.arco-spin-mask{position:absolute;top:0;right:0;bottom:0;left:0;z-index:11;text-align:center;background-color:var(--color-spin-layer-bg);transition:opacity .1s cubic-bezier(0,0,1,1);user-select:none}.arco-spin-loading{position:relative;user-select:none}.arco-spin-loading .arco-spin-mask-icon{position:absolute;top:50%;left:50%;z-index:12;transform:translate(-50%,-50%)}.arco-spin-loading .arco-spin-children:after{opacity:1;pointer-events:auto}.arco-split{display:flex}.arco-split-pane{overflow:auto}.arco-split-pane-second{flex:1}.arco-split-horizontal{flex-direction:row}.arco-split-vertical{flex-direction:column}.arco-split-trigger-icon-wrapper{display:flex;align-items:center;justify-content:center;height:100%;color:var(--color-text-1);font-size:12px;line-height:1;background-color:var(--color-neutral-3)}.arco-split-trigger-icon{display:inline-block;margin:-3px}.arco-split-trigger-vertical{height:100%;cursor:col-resize}.arco-split-trigger-horizontal{width:100%;cursor:row-resize}.arco-statistic{display:inline-block;color:var(--color-text-2);line-height:1.5715}.arco-statistic-title{margin-bottom:8px;font-size:14px;color:var(--color-text-2)}.arco-statistic-content .arco-statistic-value{color:var(--color-text-1);font-weight:500;font-size:26px;white-space:nowrap}.arco-statistic-content .arco-statistic-value-integer{font-size:26px;white-space:nowrap}.arco-statistic-content .arco-statistic-value-decimal{display:inline-block;font-size:26px}.arco-statistic-prefix,.arco-statistic-suffix{font-size:14px}.arco-statistic-extra{margin-top:8px;color:var(--color-text-2)}.arco-steps-item{position:relative;flex:1;margin-right:12px;overflow:hidden;white-space:nowrap;text-align:left}.arco-steps-item:last-child{flex:none;margin-right:0}.arco-steps-item-active .arco-steps-item-title{font-weight:500}.arco-steps-item-node{display:inline-block;margin-right:12px;font-weight:500;font-size:16px;vertical-align:top}.arco-steps-icon{box-sizing:border-box;width:28px;height:28px;line-height:26px;text-align:center;border-radius:var(--border-radius-circle);font-size:16px}.arco-steps-item-wait .arco-steps-icon{color:var(--color-text-2);background-color:var(--color-fill-2);border:1px solid transparent}.arco-steps-item-process .arco-steps-icon{color:var(--color-white);background-color:rgb(var(--primary-6));border:1px solid transparent}.arco-steps-item-finish .arco-steps-icon{color:rgb(var(--primary-6));background-color:var(--color-primary-light-1);border:1px solid transparent}.arco-steps-item-error .arco-steps-icon{color:var(--color-white);background-color:rgb(var(--danger-6));border:1px solid transparent}.arco-steps-item-title{position:relative;display:inline-block;padding-right:12px;color:var(--color-text-2);font-size:16px;line-height:28px;white-space:nowrap}.arco-steps-item-wait .arco-steps-item-title{color:var(--color-text-2)}.arco-steps-item-process .arco-steps-item-title,.arco-steps-item-finish .arco-steps-item-title,.arco-steps-item-error .arco-steps-item-title{color:var(--color-text-1)}.arco-steps-item-content{display:inline-block}.arco-steps-item-description{max-width:140px;margin-top:2px;color:var(--color-text-3);font-size:12px;white-space:normal}.arco-steps-item-wait .arco-steps-item-description,.arco-steps-item-process .arco-steps-item-description,.arco-steps-item-finish 
.arco-steps-item-description,.arco-steps-item-error .arco-steps-item-description{color:var(--color-text-3)}.arco-steps-label-horizontal .arco-steps-item:not(:last-child) .arco-steps-item-title:after{position:absolute;top:13.5px;left:100%;display:block;box-sizing:border-box;width:5000px;height:1px;background-color:var(--color-neutral-3);content:""}.arco-steps-label-horizontal .arco-steps-item.arco-steps-item-process .arco-steps-item-title:after{background-color:var(--color-neutral-3)}.arco-steps-label-horizontal .arco-steps-item.arco-steps-item-finish .arco-steps-item-title:after{background-color:rgb(var(--primary-6))}.arco-steps-label-horizontal .arco-steps-item.arco-steps-item-next-error .arco-steps-item-title:after{background-color:rgb(var(--danger-6))}.arco-steps-item:not(:last-child) .arco-steps-item-tail{position:absolute;top:13.5px;box-sizing:border-box;width:100%;height:1px}.arco-steps-item:not(:last-child) .arco-steps-item-tail:after{display:block;width:100%;height:100%;background-color:var(--color-neutral-3);content:""}.arco-steps-vertical .arco-steps-item:not(:last-child) .arco-steps-item-tail{position:absolute;top:0;left:13.5px;box-sizing:border-box;width:1px;height:100%;padding:34px 0 6px}.arco-steps-vertical .arco-steps-item:not(:last-child) .arco-steps-item-tail:after{display:block;width:100%;height:100%;background-color:var(--color-neutral-3);content:""}.arco-steps-size-small.arco-steps-vertical .arco-steps-item:not(:last-child) .arco-steps-item-tail{left:11.5px;padding:30px 0 6px}.arco-steps-item:not(:last-child).arco-steps-item-finish .arco-steps-item-tail:after{background-color:rgb(var(--primary-6))}.arco-steps-item:not(:last-child).arco-steps-item-next-error .arco-steps-item-tail:after{background-color:rgb(var(--danger-6))}.arco-steps-size-small:not(.arco-steps-vertical) .arco-steps-item:not(:last-child) .arco-steps-item-tail{top:11.5px}.arco-steps-size-small .arco-steps-item-node{font-size:14px}.arco-steps-size-small .arco-steps-item-title{font-size:14px;line-height:24px}.arco-steps-size-small .arco-steps-item-description{font-size:12px}.arco-steps-size-small .arco-steps-icon{width:24px;height:24px;font-size:14px;line-height:22px}.arco-steps-size-small.arco-steps-label-horizontal .arco-steps-item:not(:last-child) .arco-steps-item-title:after{top:11.5px}.arco-steps-label-vertical .arco-steps-item{overflow:visible}.arco-steps-label-vertical .arco-steps-item-title{margin-top:2px;padding-right:0}.arco-steps-label-vertical .arco-steps-item-node{margin-left:56px}.arco-steps-label-vertical .arco-steps-item-tail{left:96px;padding-right:40px}.arco-steps-label-vertical.arco-steps-size-small .arco-steps-item-node{margin-left:58px}.arco-steps-label-vertical.arco-steps-size-small .arco-steps-item-tail{left:94px;padding-right:36px}.arco-steps-mode-dot .arco-steps-item{position:relative;flex:1;margin-right:16px;overflow:visible;white-space:nowrap;text-align:left}.arco-steps-mode-dot .arco-steps-item:last-child{flex:none;margin-right:0}.arco-steps-mode-dot .arco-steps-item-active .arco-steps-item-title{font-weight:500}.arco-steps-mode-dot .arco-steps-item-node{display:inline-block;box-sizing:border-box;width:8px;height:8px;vertical-align:top;border-radius:var(--border-radius-circle)}.arco-steps-mode-dot .arco-steps-item-active .arco-steps-item-node{width:10px;height:10px}.arco-steps-mode-dot .arco-steps-item-wait .arco-steps-item-node{background-color:var(--color-fill-4);border-color:var(--color-fill-4)}.arco-steps-mode-dot .arco-steps-item-process 
.arco-steps-item-node,.arco-steps-mode-dot .arco-steps-item-finish .arco-steps-item-node{background-color:rgb(var(--primary-6));border-color:rgb(var(--primary-6))}.arco-steps-mode-dot .arco-steps-item-error .arco-steps-item-node{background-color:rgb(var(--danger-6));border-color:rgb(var(--danger-6))}.arco-steps-mode-dot.arco-steps-horizontal .arco-steps-item-node{margin-left:66px}.arco-steps-mode-dot.arco-steps-horizontal .arco-steps-item-active .arco-steps-item-node{margin-top:-1px;margin-left:65px}.arco-steps-mode-dot .arco-steps-item-content{display:inline-block}.arco-steps-mode-dot .arco-steps-item-title{position:relative;display:inline-block;margin-top:4px;font-size:16px}.arco-steps-mode-dot .arco-steps-item-wait .arco-steps-item-title{color:var(--color-text-2)}.arco-steps-mode-dot .arco-steps-item-process .arco-steps-item-title,.arco-steps-mode-dot .arco-steps-item-finish .arco-steps-item-title,.arco-steps-mode-dot .arco-steps-item-error .arco-steps-item-title{color:var(--color-text-1)}.arco-steps-mode-dot .arco-steps-item-description{margin-top:4px;font-size:12px;white-space:normal}.arco-steps-mode-dot .arco-steps-item-wait .arco-steps-item-description,.arco-steps-mode-dot .arco-steps-item-process .arco-steps-item-description,.arco-steps-mode-dot .arco-steps-item-finish .arco-steps-item-description,.arco-steps-mode-dot .arco-steps-item-error .arco-steps-item-description{color:var(--color-text-3)}.arco-steps-mode-dot .arco-steps-item:not(:last-child) .arco-steps-item-tail{position:absolute;top:3.5px;left:78px;box-sizing:border-box;width:100%;height:1px;background-color:var(--color-neutral-3)}.arco-steps-mode-dot .arco-steps-item:not(:last-child).arco-steps-item-process .arco-steps-item-tail{background-color:var(--color-neutral-3)}.arco-steps-mode-dot .arco-steps-item:not(:last-child).arco-steps-item-finish .arco-steps-item-tail{background-color:rgb(var(--primary-6))}.arco-steps-mode-dot .arco-steps-item:not(:last-child).arco-steps-item-next-error .arco-steps-item-tail{background-color:rgb(var(--danger-6))}.arco-steps-mode-dot.arco-steps-vertical .arco-steps-item-node{margin-right:16px}.arco-steps-mode-dot.arco-steps-vertical .arco-steps-item-content{overflow:hidden}.arco-steps-mode-dot.arco-steps-vertical .arco-steps-item-title{margin-top:-2px}.arco-steps-mode-dot.arco-steps-vertical .arco-steps-item-description{margin-top:4px}.arco-steps-mode-dot.arco-steps-vertical .arco-steps-item:not(:last-child) .arco-steps-item-tail{position:absolute;bottom:0;left:4px;box-sizing:border-box;width:1px;height:100%;padding-top:16px;padding-bottom:2px;background-color:transparent;transform:translate(-50%)}.arco-steps-mode-dot.arco-steps-vertical .arco-steps-item:not(:last-child) .arco-steps-item-tail:after{display:block;width:100%;height:100%;background-color:var(--color-neutral-3);content:""}.arco-steps-mode-dot.arco-steps-vertical .arco-steps-item:not(:last-child).arco-steps-item-process .arco-steps-item-tail:after{background-color:var(--color-neutral-3)}.arco-steps-mode-dot.arco-steps-vertical .arco-steps-item:not(:last-child).arco-steps-item-finish .arco-steps-item-tail:after{background-color:rgb(var(--primary-6))}.arco-steps-mode-dot.arco-steps-vertical .arco-steps-item:not(:last-child).arco-steps-item-next-error .arco-steps-item-tail:after{background-color:rgb(var(--danger-6))}.arco-steps-mode-dot.arco-steps-vertical .arco-steps-item .arco-steps-item-node{margin-top:8px}.arco-steps-mode-dot.arco-steps-vertical .arco-steps-item-active 
.arco-steps-item-node{margin-top:6px;margin-left:-1px}.arco-steps-mode-arrow .arco-steps-item{position:relative;display:flex;flex:1;align-items:center;height:72px;overflow:visible;white-space:nowrap}.arco-steps-mode-arrow .arco-steps-item:not(:last-child){margin-right:4px}.arco-steps-mode-arrow .arco-steps-item-wait{background-color:var(--color-fill-1)}.arco-steps-mode-arrow .arco-steps-item-process{background-color:rgb(var(--primary-6))}.arco-steps-mode-arrow .arco-steps-item-finish{background-color:var(--color-primary-light-1)}.arco-steps-mode-arrow .arco-steps-item-error{background-color:rgb(var(--danger-6))}.arco-steps-mode-arrow .arco-steps-item-content{display:inline-block;box-sizing:border-box}.arco-steps-mode-arrow .arco-steps-item:first-child .arco-steps-item-content{padding-left:16px}.arco-steps-mode-arrow .arco-steps-item:not(:first-child) .arco-steps-item-content{padding-left:52px}.arco-steps-mode-arrow .arco-steps-item-title{position:relative;display:inline-block;font-size:16px;white-space:nowrap}.arco-steps-mode-arrow .arco-steps-item-title:after{display:none!important}.arco-steps-mode-arrow .arco-steps-item-wait .arco-steps-item-title{color:var(--color-text-2)}.arco-steps-mode-arrow .arco-steps-item-process .arco-steps-item-title{color:var(--color-white)}.arco-steps-mode-arrow .arco-steps-item-finish .arco-steps-item-title{color:var(--color-text-1)}.arco-steps-mode-arrow .arco-steps-item-error .arco-steps-item-title{color:var(--color-white)}.arco-steps-mode-arrow .arco-steps-item-active .arco-steps-item-title{font-weight:500}.arco-steps-mode-arrow .arco-steps-item-description{max-width:none;margin-top:0;font-size:12px;white-space:nowrap}.arco-steps-mode-arrow .arco-steps-item-wait .arco-steps-item-description{color:var(--color-text-3)}.arco-steps-mode-arrow .arco-steps-item-process .arco-steps-item-description{color:var(--color-white)}.arco-steps-mode-arrow .arco-steps-item-finish .arco-steps-item-description{color:var(--color-text-3)}.arco-steps-mode-arrow .arco-steps-item-error .arco-steps-item-description{color:var(--color-white)}.arco-steps-mode-arrow .arco-steps-item:not(:first-child):before{position:absolute;top:0;left:0;z-index:1;display:block;width:0;height:0;border-top:36px solid transparent;border-bottom:36px solid transparent;border-left:36px solid var(--color-bg-2);content:""}.arco-steps-mode-arrow .arco-steps-item:not(:last-child):after{position:absolute;top:0;right:-36px;z-index:2;display:block;clear:both;width:0;height:0;border-top:36px solid transparent;border-bottom:36px solid transparent;content:""}.arco-steps-mode-arrow .arco-steps-item:not(:last-child).arco-steps-item-wait:after{border-left:36px solid var(--color-fill-1)}.arco-steps-mode-arrow .arco-steps-item:not(:last-child).arco-steps-item-process:after{border-left:36px solid rgb(var(--primary-6))}.arco-steps-mode-arrow .arco-steps-item:not(:last-child).arco-steps-item-error:after{border-left:36px solid rgb(var(--danger-6))}.arco-steps-mode-arrow .arco-steps-item:not(:last-child).arco-steps-item-finish:after{border-left:36px solid var(--color-primary-light-1)}.arco-steps-mode-arrow.arco-steps-size-small .arco-steps-item{height:40px}.arco-steps-mode-arrow.arco-steps-size-small .arco-steps-item-title{font-size:14px}.arco-steps-mode-arrow.arco-steps-size-small .arco-steps-item-description{display:none}.arco-steps-mode-arrow.arco-steps-size-small .arco-steps-item:not(:first-child):before{border-top:20px solid transparent;border-bottom:20px solid transparent;border-left:20px solid 
var(--color-bg-2)}.arco-steps-mode-arrow.arco-steps-size-small .arco-steps-item:not(:last-child):after{right:-20px;border-top:20px solid transparent;border-bottom:20px solid transparent;border-left:20px solid var(--color-fill-1)}.arco-steps-mode-arrow.arco-steps-size-small .arco-steps-item:first-child .arco-steps-item-content{padding-left:20px}.arco-steps-mode-arrow.arco-steps-size-small .arco-steps-item:not(:first-child) .arco-steps-item-content{padding-left:40px}.arco-steps-mode-arrow.arco-steps-size-small .arco-steps-item-error:not(:last-child):after{border-left:20px solid rgb(var(--danger-6))}.arco-steps-mode-arrow.arco-steps-size-small .arco-steps-item:not(:last-child).arco-steps-item-wait:after{border-left:20px solid var(--color-fill-1)}.arco-steps-mode-arrow.arco-steps-size-small .arco-steps-item:not(:last-child).arco-steps-item-process:after{border-left:20px solid rgb(var(--primary-6))}.arco-steps-mode-arrow.arco-steps-size-small .arco-steps-item:not(:last-child).arco-steps-item-finish:after{border-left:20px solid var(--color-primary-light-1)}.arco-steps-mode-navigation.arco-steps-label-horizontal .arco-steps-item:not(:last-child) .arco-steps-item-title:after{display:none}.arco-steps-mode-navigation .arco-steps-item{padding-left:20px;padding-right:10px;margin-right:32px}.arco-steps-mode-navigation .arco-steps-item:last-child{flex:1}.arco-steps-mode-navigation .arco-steps-item-content{margin-bottom:20px}.arco-steps-mode-navigation .arco-steps-item-description{padding-right:20px}.arco-steps-mode-navigation .arco-steps-item-active:after{content:"";position:absolute;display:block;height:2px;left:0;right:30px;bottom:0;background-color:rgb(var(--primary-6))}.arco-steps-mode-navigation .arco-steps-item-active:last-child:after{width:100%}.arco-steps-mode-navigation .arco-steps-item:not(:last-child) .arco-steps-item-content:after{position:absolute;top:10px;right:30px;display:inline-block;width:6px;height:6px;background-color:var(--color-bg-2);border:2px solid var(--color-text-4);border-bottom:none;border-left:none;-webkit-transform:rotate(45deg);transform:rotate(45deg);content:""}.arco-steps{display:flex}.arco-steps-changeable .arco-steps-item-title,.arco-steps-changeable .arco-steps-item-description{transition:all .1s cubic-bezier(0,0,1,1)}.arco-steps-changeable .arco-steps-item:not(.arco-steps-item-active):not(.arco-steps-item-disabled){cursor:pointer}.arco-steps-changeable .arco-steps-item:not(.arco-steps-item-active):not(.arco-steps-item-disabled):hover .arco-steps-item-content .arco-steps-item-title,.arco-steps-changeable .arco-steps-item:not(.arco-steps-item-active):not(.arco-steps-item-disabled):hover .arco-steps-item-content .arco-steps-item-description{color:rgb(var(--primary-6))}.arco-steps-line-less .arco-steps-item-title:after{display:none!important}.arco-steps-vertical{flex-direction:column}.arco-steps-vertical .arco-steps-item:not(:last-child){min-height:90px}.arco-steps-vertical .arco-steps-item-title:after{display:none!important}.arco-steps-vertical .arco-steps-item-description{max-width:none}.arco-steps-label-vertical .arco-steps-item-content{display:block;width:140px;text-align:center}.arco-steps-label-vertical .arco-steps-item-description{max-width:none}.switch-slide-text-enter-from{left:-100%!important}.switch-slide-text-enter-to{left:8px!important}.switch-slide-text-enter-active{transition:left .2s cubic-bezier(.34,.69,.1,1)}.switch-slide-text-leave-from{left:100%!important}.switch-slide-text-leave-to{left:26px!important}.switch-slide-text-leave-active{transition:left 
.2s cubic-bezier(.34,.69,.1,1)}.arco-switch{position:relative;box-sizing:border-box;min-width:40px;height:24px;padding:0;overflow:hidden;line-height:24px;vertical-align:middle;background-color:var(--color-fill-4);border:none;border-radius:12px;outline:none;cursor:pointer;transition:background-color .2s cubic-bezier(.34,.69,.1,1)}.arco-switch-handle{position:absolute;top:4px;left:4px;display:flex;align-items:center;justify-content:center;width:16px;height:16px;color:var(--color-neutral-3);font-size:12px;background-color:var(--color-bg-white);border-radius:50%;transition:all .2s cubic-bezier(.34,.69,.1,1)}.arco-switch-checked{background-color:rgb(var(--primary-6))}.arco-switch-checked .arco-switch-handle{left:calc(100% - 20px);color:rgb(var(--primary-6))}.arco-switch[disabled] .arco-switch-handle{color:var(--color-fill-2)}.arco-switch[disabled].arco-switch-checked .arco-switch-handle{color:var(--color-primary-light-3)}.arco-switch-text-holder{margin:0 8px 0 26px;font-size:12px;opacity:0}.arco-switch-text{position:absolute;top:0;left:26px;color:var(--color-white);font-size:12px}.arco-switch-checked .arco-switch-text-holder{margin:0 26px 0 8px}.arco-switch-checked .arco-switch-text{left:8px;color:var(--color-white)}.arco-switch[disabled]{background-color:var(--color-fill-2);cursor:not-allowed}.arco-switch[disabled] .arco-switch-text{color:var(--color-white)}.arco-switch[disabled].arco-switch-checked{background-color:var(--color-primary-light-3)}.arco-switch[disabled].arco-switch-checked .arco-switch-text{color:var(--color-white)}.arco-switch-loading{background-color:var(--color-fill-2)}.arco-switch-loading .arco-switch-handle{color:var(--color-neutral-3)}.arco-switch-loading .arco-switch-text{color:var(--color-white)}.arco-switch-loading.arco-switch-checked{background-color:var(--color-primary-light-3)}.arco-switch-loading.arco-switch-checked .arco-switch-handle{color:var(--color-primary-light-3)}.arco-switch-loading.arco-switch-checked .arco-switch-text{color:var(--color-primary-light-1)}.arco-switch-small{min-width:28px;height:16px;line-height:16px}.arco-switch-small.arco-switch-checked{padding-left:-2px}.arco-switch-small .arco-switch-handle{top:2px;left:2px;width:12px;height:12px;border-radius:8px}.arco-switch-small .arco-switch-handle-icon{position:absolute;top:50%;left:50%;transform:translate(-50%,-50%) scale(.66667)}.arco-switch-small.arco-switch-checked .arco-switch-handle{left:calc(100% - 14px)}.arco-switch-type-round{min-width:40px;border-radius:var(--border-radius-small)}.arco-switch-type-round .arco-switch-handle{border-radius:2px}.arco-switch-type-round.arco-switch-small{min-width:28px;height:16px;line-height:16px;border-radius:2px}.arco-switch-type-round.arco-switch-small .arco-switch-handle{border-radius:1px}.arco-switch-type-line{min-width:36px;overflow:unset;background-color:transparent}.arco-switch-type-line:after{display:block;width:100%;height:6px;background-color:var(--color-fill-4);border-radius:3px;transition:background-color .2s cubic-bezier(.34,.69,.1,1);content:""}.arco-switch-type-line .arco-switch-handle{top:2px;left:0;width:20px;height:20px;background-color:var(--color-bg-white);border-radius:10px;box-shadow:0 1px 3px var(--color-neutral-6)}.arco-switch-type-line.arco-switch-checked{background-color:transparent}.arco-switch-type-line.arco-switch-checked:after{background-color:rgb(var(--primary-6))}.arco-switch-type-line.arco-switch-custom-color{--custom-color: 
var(--color-fill-4)}.arco-switch-type-line.arco-switch-custom-color:after{background-color:var(--custom-color)}.arco-switch-type-line.arco-switch-custom-color.arco-switch-checked{--custom-color: rgb(var(--primary-6))}.arco-switch-type-line.arco-switch-checked .arco-switch-handle{left:calc(100% - 20px)}.arco-switch-type-line[disabled]{background-color:transparent;cursor:not-allowed}.arco-switch-type-line[disabled]:after{background-color:var(--color-fill-2)}.arco-switch-type-line[disabled].arco-switch-checked{background-color:transparent}.arco-switch-type-line[disabled].arco-switch-checked:after{background-color:var(--color-primary-light-3)}.arco-switch-type-line.arco-switch-loading{background-color:transparent}.arco-switch-type-line.arco-switch-loading:after{background-color:var(--color-fill-2)}.arco-switch-type-line.arco-switch-loading.arco-switch-checked{background-color:transparent}.arco-switch-type-line.arco-switch-loading.arco-switch-checked:after{background-color:var(--color-primary-light-3)}.arco-switch-type-line.arco-switch-small{min-width:28px;height:16px;line-height:16px}.arco-switch-type-line.arco-switch-small.arco-switch-checked{padding-left:0}.arco-switch-type-line.arco-switch-small .arco-switch-handle{top:0px;width:16px;height:16px;border-radius:8px}.arco-switch-type-line.arco-switch-small .arco-switch-handle-icon{transform:translate(-50%,-50%) scale(1)}.arco-switch-type-line.arco-switch-small.arco-switch-checked .arco-switch-handle{left:calc(100% - 16px)}.arco-table-filters-content{box-sizing:border-box;min-width:100px;background:var(--color-bg-5);border:1px solid var(--color-neutral-3);border-radius:var(--border-radius-medium);box-shadow:0 2px 5px #0000001a}.arco-table-filters-list{max-height:200px;padding:4px 0;overflow-y:auto}.arco-table-filters-item{height:32px;padding:0 12px;font-size:14px;line-height:32px}.arco-table-filters-text{width:100%;max-width:160px;height:34px;margin-right:0;padding-left:10px;overflow:hidden;line-height:32px;white-space:nowrap;text-overflow:ellipsis;cursor:pointer}.arco-table-filters-bottom{box-sizing:border-box;height:38px;padding:0 12px;overflow:hidden;line-height:38px;border-top:1px solid var(--color-neutral-3)}.arco-table-filters-bottom>*:not(*:last-child){margin-right:8px}.arco-table{position:relative}.arco-table-column-handle{position:absolute;top:0;right:-4px;z-index:1;width:8px;height:100%;cursor:col-resize}.arco-table .arco-spin{display:flex;flex-direction:column;height:100%}.arco-table>.arco-spin>.arco-spin-children:after{z-index:2}.arco-table-footer{border-radius:0 0 var(--border-radius-medium) var(--border-radius-medium)}.arco-table-scroll-position-right .arco-table-col-fixed-left-last:after,.arco-table-scroll-position-middle .arco-table-col-fixed-left-last:after{box-shadow:inset 6px 0 8px -3px #00000026}.arco-table-scroll-position-left .arco-table-col-fixed-right-first:after,.arco-table-scroll-position-middle .arco-table-col-fixed-right-first:after{box-shadow:inset -6px 0 8px -3px #00000026}.arco-table-layout-fixed .arco-table-element{table-layout:fixed}.arco-table .arco-table-element{width:100%;min-width:100%;margin:0;border-collapse:separate;border-spacing:0}.arco-table-th{position:relative;box-sizing:border-box;color:rgb(var(--gray-10));font-weight:500;line-height:1.5715;text-align:left;background-color:var(--color-neutral-2)}.arco-table-th[colspan]{text-align:center}.arco-table-th-align-right{text-align:right}.arco-table-th-align-right 
.arco-table-cell-with-sorter{justify-content:flex-end}.arco-table-th-align-center{text-align:center}.arco-table-th-align-center .arco-table-cell-with-sorter{justify-content:center}.arco-table-td{box-sizing:border-box;color:rgb(var(--gray-10));line-height:1.5715;text-align:left;word-break:break-all;background-color:var(--color-bg-2);border-bottom:1px solid var(--color-neutral-3)}.arco-table-td-align-right{text-align:right}.arco-table-td-align-center{text-align:center}.arco-table-td.arco-table-drag-handle{cursor:move}.arco-table-cell{display:flex;align-items:center}.arco-table-cell-align-right{justify-content:flex-end;text-align:right}.arco-table-cell-align-center{justify-content:center;text-align:center}.arco-table-text-ellipsis{overflow:hidden;white-space:nowrap;text-overflow:ellipsis}.arco-table-td-content{display:block;width:100%}.arco-table-th.arco-table-col-sorted{background-color:var(--color-neutral-3)}.arco-table-td.arco-table-col-sorted{background-color:var(--color-fill-1)}.arco-table-col-fixed-left,.arco-table-col-fixed-right{position:sticky;z-index:10}.arco-table-col-fixed-left-last:after,.arco-table-col-fixed-right-first:after{position:absolute;top:0;bottom:-1px;left:0;width:10px;box-shadow:none;transform:translate(-100%);transition:box-shadow .1s cubic-bezier(0,0,1,1);content:"";pointer-events:none}.arco-table-col-fixed-left-last:after{right:0;left:unset;transform:translate(100%)}.arco-table-cell-text-ellipsis{overflow:hidden;white-space:nowrap;text-overflow:ellipsis}.arco-table-editable-row .arco-table-cell-wrap-value{border:1px solid var(--color-white);border-radius:var(--border-radius-medium);cursor:pointer;transition:all .1s cubic-bezier(0,0,1,1)}.arco-table-editable-row:hover .arco-table-cell-wrap-value{border:1px solid var(--color-neutral-3)}.arco-table .arco-table-expand-btn{display:inline-flex;align-items:center;justify-content:center;width:14px;height:14px;padding:0;color:var(--color-text-2);font-size:12px;line-height:14px;background-color:var(--color-neutral-3);border:1px solid transparent;border-radius:2px;outline:none;cursor:pointer;transition:background-color .1s cubic-bezier(0,0,1,1)}.arco-table .arco-table-expand-btn:hover{color:var(--color-text-1);background-color:var(--color-neutral-4);border-color:transparent}.arco-table-cell-expand-icon{display:flex;align-items:center}.arco-table-cell-expand-icon .arco-table-cell-inline-icon{display:inline-flex;margin-right:4px}.arco-table-cell-expand-icon .arco-table-cell-inline-icon .arco-icon-loading{color:rgb(var(--primary-6))}.arco-table-cell-expand-icon-hidden{display:inline-block;width:14px;height:14px;margin-right:4px}.arco-table-tr-expand .arco-table-td{background-color:var(--color-fill-1)}.arco-table-cell-fixed-expand{position:sticky;left:0;box-sizing:border-box}.arco-table-tr-expand .arco-table-td .arco-table .arco-table-container{border:none}.arco-table-tr-expand .arco-table-td .arco-table .arco-table-th{border-bottom:1px solid var(--color-neutral-3)}.arco-table-tr-expand .arco-table-td .arco-table .arco-table-th,.arco-table-tr-expand .arco-table-td .arco-table .arco-table-td{background-color:transparent}.arco-table-tr-expand .arco-table-td .arco-table .arco-table-pagination{margin-bottom:12px}.arco-table-th.arco-table-operation,.arco-table-td.arco-table-operation{text-align:center}.arco-table-th.arco-table-operation .arco-table-cell,.arco-table-td.arco-table-operation .arco-table-cell{display:flex;justify-content:center;padding:0}.arco-table-radio,.arco-table-checkbox{justify-content:center}.arco-table-checkbox 
.arco-checkbox,.arco-table-radio .arco-radio{padding-left:0}.arco-table-selection-checkbox-col,.arco-table-selection-radio-col,.arco-table-expand-col,.arco-table-drag-handle-col{width:40px;min-width:40px;max-width:40px}.arco-table-th{transition:background-color .1s cubic-bezier(0,0,1,1)}.arco-table-cell-with-sorter{display:flex;align-items:center;cursor:pointer}.arco-table-cell-with-sorter:hover{background-color:rgba(var(--gray-4),.5)}.arco-table-cell-with-filter{display:flex;align-items:center}.arco-table-cell-next-ascend .arco-table-sorter-icon .arco-icon-caret-up,.arco-table-cell-next-descend .arco-table-sorter-icon .arco-icon-caret-down{color:var(--color-neutral-6)}.arco-table-sorter{display:inline-block;margin-left:8px;vertical-align:-3px}.arco-table-sorter.arco-table-sorter-direction-one{vertical-align:0}.arco-table-sorter-icon{position:relative;width:14px;height:8px;overflow:hidden;line-height:8px}.arco-table-sorter-icon .arco-icon-caret-up,.arco-table-sorter-icon .arco-icon-caret-down{position:absolute;top:50%;color:var(--color-neutral-5);font-size:12px;transition:all .1s cubic-bezier(0,0,1,1)}.arco-table-sorter-icon .arco-icon-caret-up{top:-2px;left:1px}.arco-table-sorter-icon .arco-icon-caret-down{top:-3px;left:1px}.arco-table-sorter-icon.arco-table-sorter-icon-active svg{color:rgb(var(--primary-6))}.arco-table-filters{position:absolute;top:0;right:0;display:flex;align-items:center;justify-content:center;width:24px;height:100%;line-height:1;vertical-align:0;background-color:transparent;cursor:pointer;transition:all .1s cubic-bezier(0,0,1,1)}.arco-table-filters:hover,.arco-table-filters-open{background-color:var(--color-neutral-4)}.arco-table-filters svg{color:var(--color-text-2);font-size:16px;transition:all .1s cubic-bezier(0,0,1,1)}.arco-table-filters-active svg{color:rgb(var(--primary-6))}.arco-table-filters-align-left{position:relative;width:auto;margin-left:8px}.arco-table-filters-align-left svg{font-size:12px}.arco-table-filters-align-left:hover,.arco-table-filters-align-left-open{background:none}.arco-table-filters-align-left:hover:before,.arco-table-filters-align-left.arco-table-filters-open:before{background:var(--color-fill-4)}.arco-table-container{position:relative;border-radius:var(--border-radius-medium) var(--border-radius-medium) 0 0}.arco-table-header{flex-shrink:0;border-radius:var(--border-radius-medium) var(--border-radius-medium) 0 0}.arco-table-container{box-sizing:border-box;width:100%;min-height:0}.arco-table-container .arco-table-content{display:flex;flex-direction:column;width:auto;height:100%}.arco-table-container .arco-table-content-scroll-x{overflow-x:auto;overflow-y:hidden}.arco-table-container:before,.arco-table-container:after{position:absolute;z-index:1;width:10px;height:100%;box-shadow:none;transition:box-shadow .1s cubic-bezier(0,0,1,1);content:"";pointer-events:none}.arco-table-container:before{top:0;left:0;border-top-left-radius:var(--border-radius-medium)}.arco-table-container:after{top:0;right:0;border-top-right-radius:var(--border-radius-medium)}.arco-table-container:not(.arco-table-has-fixed-col-left).arco-table-scroll-position-right:before,.arco-table-container:not(.arco-table-has-fixed-col-left).arco-table-scroll-position-middle:before{box-shadow:inset 6px 0 8px -3px #00000026}.arco-table-container:not(.arco-table-has-fixed-col-right).arco-table-scroll-position-left:after,.arco-table-container:not(.arco-table-has-fixed-col-right).arco-table-scroll-position-middle:after{box-shadow:inset -6px 0 8px -3px 
#00000026}.arco-table-header{overflow-x:hidden;overflow-y:hidden;background-color:var(--color-neutral-2);scrollbar-color:transparent transparent}.arco-table-header-sticky{position:sticky;top:0;z-index:100}.arco-table:not(.arco-table-empty) .arco-table-header::-webkit-scrollbar{height:0;background-color:transparent}.arco-table.arco-table-empty .arco-table-header{overflow-x:auto}.arco-table-body{position:relative;width:100%;min-height:40px;overflow:auto;background-color:var(--color-bg-2)}.arco-table-border .arco-table-container{border-top:1px solid var(--color-neutral-3);border-left:1px solid var(--color-neutral-3)}.arco-table-border .arco-table-scroll-y{border-bottom:1px solid var(--color-neutral-3)}.arco-table-border .arco-table-scroll-y .arco-table-body .arco-table-tr:last-of-type .arco-table-td,.arco-table-border .arco-table-scroll-y tfoot .arco-table-tr:last-of-type .arco-table-td{border-bottom:none}.arco-table-border .arco-table-scroll-y .arco-table-body .arco-table-tr:last-of-type .arco-table-td.arco-table-col-fixed-left-last:after,.arco-table-border .arco-table-scroll-y tfoot .arco-table-tr:last-of-type .arco-table-td.arco-table-col-fixed-left-last:after,.arco-table-border .arco-table-scroll-y .arco-table-body .arco-table-tr:last-of-type .arco-table-td.arco-table-col-fixed-right-first:after,.arco-table-border .arco-table-scroll-y tfoot .arco-table-tr:last-of-type .arco-table-td.arco-table-col-fixed-right-first:after{bottom:0}.arco-table-border .arco-table-tr .arco-table-th{border-bottom:1px solid var(--color-neutral-3)}.arco-table-border .arco-table-footer{border:1px solid var(--color-neutral-3);border-top:0}.arco-table-border:not(.arco-table-border-cell) .arco-table-container{border-right:1px solid var(--color-neutral-3)}.arco-table-border-cell .arco-table-th,.arco-table-border-cell .arco-table-td:not(.arco-table-tr-expand){border-right:1px solid var(--color-neutral-3)}.arco-table-border-cell .arco-table-th-resizing,.arco-table-border-cell .arco-table-td-resizing:not(.arco-table-tr-expand){border-right-color:rgb(var(--primary-6))}.arco-table-border-header-cell .arco-table-th{border-right:1px solid var(--color-neutral-3);border-bottom:1px solid var(--color-neutral-3)}.arco-table-border.arco-table-border-header-cell thead .arco-table-tr:first-child .arco-table-th:last-child{border-right:0}.arco-table-border-body-cell .arco-table-td:not(:last-child):not(.arco-table-tr-expand){border-right:1px solid var(--color-neutral-3)}.arco-table-stripe:not(.arco-table-dragging) .arco-table-tr:not(.arco-table-tr-empty):not(.arco-table-tr-summary):nth-child(even) .arco-table-td:not(.arco-table-col-fixed-left):not(.arco-table-col-fixed-right),.arco-table-stripe .arco-table-tr-drag .arco-table-td:not(.arco-table-col-fixed-left):not(.arco-table-col-fixed-right){background-color:var(--color-fill-1)}.arco-table-stripe:not(.arco-table-dragging) .arco-table-tr:not(.arco-table-tr-empty):not(.arco-table-tr-summary):nth-child(even) .arco-table-td.arco-table-col-fixed-left:before,.arco-table-stripe .arco-table-tr-drag .arco-table-td.arco-table-col-fixed-left:before,.arco-table-stripe:not(.arco-table-dragging) .arco-table-tr:not(.arco-table-tr-empty):not(.arco-table-tr-summary):nth-child(even) .arco-table-td.arco-table-col-fixed-right:before,.arco-table-stripe .arco-table-tr-drag .arco-table-td.arco-table-col-fixed-right:before{position:absolute;top:0;left:0;z-index:-1;width:100%;height:100%;background-color:var(--color-fill-1);content:""}.arco-table 
.arco-table-tr-draggable{cursor:move}.arco-table-hover:not(.arco-table-dragging) .arco-table-tr:not(.arco-table-tr-empty):not(.arco-table-tr-summary):hover .arco-table-td:not(.arco-table-col-fixed-left):not(.arco-table-col-fixed-right),.arco-table-hover .arco-table-tr-drag .arco-table-td:not(.arco-table-col-fixed-left):not(.arco-table-col-fixed-right){background-color:var(--color-fill-1)}.arco-table-hover:not(.arco-table-dragging) .arco-table-tr:not(.arco-table-tr-empty):not(.arco-table-tr-summary):hover .arco-table-td.arco-table-col-fixed-left:before,.arco-table-hover .arco-table-tr-drag .arco-table-td.arco-table-col-fixed-left:before,.arco-table-hover:not(.arco-table-dragging) .arco-table-tr:not(.arco-table-tr-empty):not(.arco-table-tr-summary):hover .arco-table-td.arco-table-col-fixed-right:before,.arco-table-hover .arco-table-tr-drag .arco-table-td.arco-table-col-fixed-right:before{position:absolute;top:0;left:0;z-index:-1;width:100%;height:100%;background-color:var(--color-fill-1);content:""}.arco-table-hover .arco-table-tr-expand:not(.arco-table-tr-empty):hover .arco-table-td:not(.arco-table-col-fixed-left):not(.arco-table-col-fixed-right){background-color:var(--color-fill-1)}.arco-table-tr-expand .arco-table-td .arco-table-hover .arco-table-tr:not(.arco-table-tr-empty) .arco-table-td:not(.arco-table-col-fixed-left):not(.arco-table-col-fixed-right){background-color:transparent}.arco-table-tr-expand .arco-table-td .arco-table-hover .arco-table-tr:not(.arco-table-tr-empty) .arco-table-td.arco-table-col-fixed-left:before,.arco-table-tr-expand .arco-table-td .arco-table-hover .arco-table-tr:not(.arco-table-tr-empty) .arco-table-td.arco-table-col-fixed-right:before{background-color:transparent}.arco-table-tfoot{position:relative;z-index:1;flex-shrink:0;width:100%;overflow-x:auto;background-color:var(--color-neutral-2);box-shadow:0 -1px 0 var(--color-neutral-3);scrollbar-color:transparent transparent}.arco-table-tfoot::-webkit-scrollbar{height:0;background-color:transparent}.arco-table tfoot .arco-table-td{background-color:var(--color-neutral-2)}.arco-table-tr-checked .arco-table-td{background-color:var(--color-fill-1)}.arco-table .arco-table-cell{padding:9px 16px}.arco-table .arco-table-th,.arco-table .arco-table-td{font-size:14px}.arco-table .arco-table-footer{padding:9px 16px}.arco-table .arco-table-tr-expand .arco-table-td .arco-table{margin:-9px -16px -10px}.arco-table .arco-table-editable-row .arco-table-cell-wrap-value{padding:9px 16px}.arco-table-size-medium .arco-table-cell{padding:7px 16px}.arco-table-size-medium .arco-table-th,.arco-table-size-medium .arco-table-td{font-size:14px}.arco-table-size-medium .arco-table-footer{padding:7px 16px}.arco-table-size-medium .arco-table-tr-expand .arco-table-td .arco-table{margin:-7px -16px -8px}.arco-table-size-medium .arco-table-editable-row .arco-table-cell-wrap-value{padding:7px 16px}.arco-table-size-small .arco-table-cell{padding:5px 16px}.arco-table-size-small .arco-table-th,.arco-table-size-small .arco-table-td{font-size:14px}.arco-table-size-small .arco-table-footer{padding:5px 16px}.arco-table-size-small .arco-table-tr-expand .arco-table-td .arco-table{margin:-5px -16px -6px}.arco-table-size-small .arco-table-editable-row .arco-table-cell-wrap-value{padding:5px 16px}.arco-table-size-mini .arco-table-cell{padding:2px 16px}.arco-table-size-mini .arco-table-th,.arco-table-size-mini .arco-table-td{font-size:12px}.arco-table-size-mini .arco-table-footer{padding:2px 16px}.arco-table-size-mini .arco-table-tr-expand .arco-table-td 
.arco-table{margin:-2px -16px -3px}.arco-table-size-mini .arco-table-editable-row .arco-table-cell-wrap-value{padding:2px 16px}.arco-table-virtualized .arco-table-element{table-layout:fixed}.arco-table-virtualized div.arco-table-body div.arco-table-tr{display:flex}.arco-table-virtualized div.arco-table-body div.arco-table-td{display:flex;flex:1;align-items:center}.arco-table-pagination{display:flex;align-items:center;justify-content:flex-end;margin-top:12px}.arco-table-pagination-left{justify-content:flex-start}.arco-table-pagination-center{justify-content:center}.arco-table-pagination-top{margin-top:0;margin-bottom:12px}.arco-icon-hover.arco-tabs-icon-hover:before{width:16px;height:16px}.arco-tabs .arco-tabs-icon-hover{color:var(--color-text-2);font-size:12px;user-select:none}.arco-tabs-dropdown-icon{margin-left:6px;font-size:12px;user-select:none}.arco-tabs-tab-close-btn{margin-left:8px;user-select:none}.arco-tabs-nav-add-btn{display:inline-flex;align-items:center;justify-content:center;padding:0 8px;font-size:12px;user-select:none}.arco-tabs-add{position:relative}.arco-tabs-nav-button-left{margin-right:6px;margin-left:10px}.arco-tabs-nav-button-right{margin-right:10px;margin-left:6px}.arco-tabs-nav-button-up{margin-bottom:10px}.arco-tabs-nav-button-down{margin-top:10px}.arco-tabs-nav-button-disabled{color:var(--color-text-4);cursor:not-allowed}.arco-tabs{position:relative;overflow:hidden}.arco-tabs-nav{position:relative;flex-shrink:0}.arco-tabs-nav:before{position:absolute;right:0;bottom:0;left:0;display:block;clear:both;height:1px;background-color:var(--color-neutral-3);content:""}.arco-tabs-nav-tab{display:flex;flex:1;overflow:hidden}.arco-tabs-nav-tab-list{position:relative;display:inline-block;white-space:nowrap;transition:transform .2s cubic-bezier(.34,.69,.1,1)}.arco-tabs-nav-extra{display:flex;align-items:center;width:auto;line-height:32px}.arco-tabs-nav-extra .arco-tabs-nav-add-btn{padding-left:0}.arco-tabs-tab{display:inline-flex;align-items:center;box-sizing:border-box;padding:4px 0;color:var(--color-text-2);font-size:14px;line-height:1.5715;outline:none;cursor:pointer;transition:color .2s cubic-bezier(0,0,1,1)}.arco-tabs-tab-title{display:inline-block}.arco-tabs-tab:hover{color:var(--color-text-2);font-weight:400}.arco-tabs-tab-disabled,.arco-tabs-tab-disabled:hover{color:var(--color-text-4);cursor:not-allowed}.arco-tabs-tab-active,.arco-tabs-tab-active:hover{color:rgb(var(--primary-6));font-weight:500}.arco-tabs-tab-active.arco-tabs-tab-disabled,.arco-tabs-tab-active:hover.arco-tabs-tab-disabled{color:var(--color-primary-light-3)}.arco-tabs-nav-ink{position:absolute;top:initial;right:initial;bottom:0;height:2px;background-color:rgb(var(--primary-6));transition:left .2s cubic-bezier(.34,.69,.1,1),width .2s cubic-bezier(.34,.69,.1,1)}.arco-tabs-nav-ink.arco-tabs-header-ink-no-animation{transition:none}.arco-tabs-nav-ink-disabled{background-color:var(--color-primary-light-3)}.arco-tabs-nav-type-line .arco-tabs-nav-extra{line-height:40px}.arco-tabs-nav-type-line .arco-tabs-tab{margin:0 16px;padding:8px 0;line-height:1.5715}.arco-tabs-nav-type-line .arco-tabs-tab-title{position:relative;display:inline-block;padding:1px 0}.arco-tabs-nav-type-line .arco-tabs-tab-title:before{position:absolute;top:0;right:-8px;bottom:0;left:-8px;z-index:-1;background-color:transparent;border-radius:var(--border-radius-small);opacity:1;transition:background-color,opacity .2s cubic-bezier(0,0,1,1);content:""}.arco-tabs-nav-type-line .arco-tabs-tab:hover 
.arco-tabs-tab-title:before{background-color:var(--color-fill-2)}.arco-tabs-nav-type-line .arco-tabs-tab-active .arco-tabs-tab-title:before,.arco-tabs-nav-type-line .arco-tabs-tab-active:hover .arco-tabs-tab-title:before{background-color:transparent}.arco-tabs-nav-type-line .arco-tabs-tab-disabled .arco-tabs-tab-title:before,.arco-tabs-nav-type-line .arco-tabs-tab-disabled:hover .arco-tabs-tab-title:before{opacity:0}.arco-tabs-nav-type-line .arco-tabs-tab:focus-visible .arco-tabs-tab-title:before{border:2px solid rgb(var(--primary-6))}.arco-tabs-nav-type-line.arco-tabs-nav-horizontal>.arco-tabs-tab:first-of-type{margin-left:16px}.arco-tabs-nav-type-line.arco-tabs-nav-horizontal .arco-tabs-nav-tab-list-no-padding>.arco-tabs-tab:first-of-type,.arco-tabs-nav-text.arco-tabs-nav-horizontal .arco-tabs-nav-tab-list-no-padding>.arco-tabs-tab:first-of-type{margin-left:0}.arco-tabs-nav-type-card .arco-tabs-tab,.arco-tabs-nav-type-card-gutter .arco-tabs-tab{position:relative;padding:4px 16px;font-size:14px;border:1px solid var(--color-neutral-3);transition:padding .2s cubic-bezier(0,0,1,1),color .2s cubic-bezier(0,0,1,1)}.arco-tabs-nav-type-card .arco-tabs-tab-closable,.arco-tabs-nav-type-card-gutter .arco-tabs-tab-closable{padding-right:12px}.arco-tabs-nav-type-card .arco-tabs-tab-closable:not(.arco-tabs-tab-active):hover .arco-icon-hover:hover:before,.arco-tabs-nav-type-card-gutter .arco-tabs-tab-closable:not(.arco-tabs-tab-active):hover .arco-icon-hover:hover:before{background-color:var(--color-fill-4)}.arco-tabs-nav-type-card .arco-tabs-tab:focus-visible:before,.arco-tabs-nav-type-card-gutter .arco-tabs-tab:focus-visible:before{position:absolute;top:-1px;right:0;bottom:-1px;left:-1px;border:2px solid rgb(var(--primary-6));content:""}.arco-tabs-nav-type-card .arco-tabs-tab:last-child:focus-visible:before,.arco-tabs-nav-type-card-gutter .arco-tabs-tab:last-child:focus-visible:before{right:-1px}.arco-tabs-nav-type-card .arco-tabs-nav-add-btn,.arco-tabs-nav-type-card-gutter .arco-tabs-nav-add-btn{height:32px}.arco-tabs-nav-type-card .arco-tabs-tab{background-color:transparent;border-right:none}.arco-tabs-nav-type-card .arco-tabs-tab:last-child{border-right:1px solid var(--color-neutral-3);border-top-right-radius:var(--border-radius-small)}.arco-tabs-nav-type-card .arco-tabs-tab:first-child{border-top-left-radius:var(--border-radius-small)}.arco-tabs-nav-type-card .arco-tabs-tab:hover{background-color:var(--color-fill-3)}.arco-tabs-nav-type-card .arco-tabs-tab-disabled,.arco-tabs-nav-type-card .arco-tabs-tab-disabled:hover{background-color:transparent}.arco-tabs-nav-type-card .arco-tabs-tab-active,.arco-tabs-nav-type-card .arco-tabs-tab-active:hover{background-color:transparent;border-bottom-color:var(--color-bg-2)}.arco-tabs-nav-type-card-gutter .arco-tabs-tab{margin-left:4px;background-color:var(--color-fill-1);border-right:1px solid var(--color-neutral-3);border-radius:var(--border-radius-small) var(--border-radius-small) 0 0}.arco-tabs-nav-type-card-gutter .arco-tabs-tab:hover{background-color:var(--color-fill-3)}.arco-tabs-nav-type-card-gutter .arco-tabs-tab-disabled,.arco-tabs-nav-type-card-gutter .arco-tabs-tab-disabled:hover{background-color:var(--color-fill-1)}.arco-tabs-nav-type-card-gutter .arco-tabs-tab-active,.arco-tabs-nav-type-card-gutter .arco-tabs-tab-active:hover{background-color:transparent;border-bottom-color:var(--color-bg-2)}.arco-tabs-nav-type-card-gutter .arco-tabs-tab:first-child{margin-left:0}.arco-tabs-nav-type-text:before{display:none}.arco-tabs-nav-type-text 
.arco-tabs-tab{position:relative;margin:0 9px;padding:5px 0;font-size:14px;line-height:1.5715}.arco-tabs-nav-type-text .arco-tabs-tab:not(:first-of-type):before{position:absolute;top:50%;left:-9px;display:block;width:2px;height:12px;background-color:var(--color-fill-3);transform:translateY(-50%);content:""}.arco-tabs-nav-type-text .arco-tabs-tab-title{padding-right:8px;padding-left:8px;background-color:transparent}.arco-tabs-nav-type-text .arco-tabs-tab-title:hover{background-color:var(--color-fill-2)}.arco-tabs-nav-type-text .arco-tabs-tab-active .arco-tabs-tab-title,.arco-tabs-nav-type-text .arco-tabs-tab-active .arco-tabs-tab-title:hover,.arco-tabs-nav-type-text .arco-tabs-tab-disabled .arco-tabs-tab-title,.arco-tabs-nav-type-text .arco-tabs-tab-disabled .arco-tabs-tab-title:hover{background-color:transparent}.arco-tabs-nav-type-text .arco-tabs-tab-active.arco-tabs-nav-type-text .arco-tabs-tab-disabled .arco-tabs-tab-title,.arco-tabs-nav-type-text .arco-tabs-tab-active.arco-tabs-nav-type-text .arco-tabs-tab-disabled .arco-tabs-tab-title:hover{background-color:var(--color-primary-light-3)}.arco-tabs-nav-type-text .arco-tabs-tab:focus-visible .arco-tabs-tab-title{margin:-2px;border:2px solid rgb(var(--primary-6))}.arco-tabs-nav-type-rounded:before{display:none}.arco-tabs-nav-type-rounded .arco-tabs-tab{margin:0 6px;padding:5px 16px;font-size:14px;background-color:transparent;border-radius:32px}.arco-tabs-nav-type-rounded .arco-tabs-tab:hover{background-color:var(--color-fill-2)}.arco-tabs-nav-type-rounded .arco-tabs-tab-disabled:hover{background-color:transparent}.arco-tabs-nav-type-rounded .arco-tabs-tab-active,.arco-tabs-nav-type-rounded .arco-tabs-tab-active:hover{background-color:var(--color-fill-2)}.arco-tabs-nav-type-rounded .arco-tabs-tab:focus-visible{border-color:rgb(var(--primary-6))}.arco-tabs-nav-type-capsule:before{display:none}.arco-tabs-nav-type-capsule .arco-tabs-nav-tab:not(.arco-tabs-nav-tab-scroll){justify-content:flex-end}.arco-tabs-nav-type-capsule .arco-tabs-nav-tab-list{padding:3px;line-height:1;background-color:var(--color-fill-2);border-radius:var(--border-radius-small)}.arco-tabs-nav-type-capsule .arco-tabs-tab{position:relative;padding:0 10px;font-size:14px;line-height:26px;background-color:transparent}.arco-tabs-nav-type-capsule .arco-tabs-tab:hover{background-color:var(--color-bg-2)}.arco-tabs-nav-type-capsule .arco-tabs-tab-disabled:hover{background-color:unset}.arco-tabs-nav-type-capsule .arco-tabs-tab-active,.arco-tabs-nav-type-capsule .arco-tabs-tab-active:hover{background-color:var(--color-bg-2)}.arco-tabs-nav-type-capsule .arco-tabs-tab-active:before,.arco-tabs-nav-type-capsule .arco-tabs-tab-active:hover:before,.arco-tabs-nav-type-capsule .arco-tabs-tab-active+.arco-tabs-tab:before,.arco-tabs-nav-type-capsule .arco-tabs-tab-active:hover+.arco-tabs-tab:before{opacity:0}.arco-tabs-nav-type-capsule .arco-tabs-tab:focus-visible{border-color:rgb(var(--primary-6))}.arco-tabs-nav-type-capsule.arco-tabs-nav-horizontal .arco-tabs-tab:not(:first-of-type){margin-left:3px}.arco-tabs-nav-type-capsule.arco-tabs-nav-horizontal .arco-tabs-tab:not(:first-of-type):before{position:absolute;top:50%;left:-4px;display:block;width:1px;height:14px;background-color:var(--color-fill-3);transform:translateY(-50%);transition:all .2s 
cubic-bezier(0,0,1,1);content:""}.arco-tabs-nav{position:relative;display:flex;align-items:center;overflow:hidden}.arco-tabs-content{box-sizing:border-box;width:100%;padding-top:16px;overflow:hidden}.arco-tabs-content-hide{display:none}.arco-tabs-content .arco-tabs-content-list{display:flex;width:100%}.arco-tabs-content .arco-tabs-content-item{flex-shrink:0;width:100%;height:0;overflow:hidden}.arco-tabs-content .arco-tabs-content-item.arco-tabs-content-item-active{height:auto}.arco-tabs-type-card>.arco-tabs-content,.arco-tabs-type-card-gutter>.arco-tabs-content{border:1px solid var(--color-neutral-3);border-top:none}.arco-tabs-content-animation{transition:all .2s cubic-bezier(.34,.69,.1,1)}.arco-tabs-horizontal.arco-tabs-justify{display:flex;flex-direction:column;height:100%}.arco-tabs-horizontal.arco-tabs-justify .arco-tabs-content,.arco-tabs-horizontal.arco-tabs-justify .arco-tabs-content-list,.arco-tabs-horizontal.arco-tabs-justify .arco-tabs-pane{height:100%}.arco-tabs-nav-size-mini.arco-tabs-nav-type-line .arco-tabs-tab{padding-top:6px;padding-bottom:6px;font-size:12px}.arco-tabs-nav-size-mini.arco-tabs-nav-type-line .arco-tabs-nav-extra{font-size:12px;line-height:32px}.arco-tabs-nav-size-mini.arco-tabs-nav-type-card .arco-tabs-tab,.arco-tabs-nav-size-mini.arco-tabs-nav-type-card-gutter .arco-tabs-tab{padding-top:1px;padding-bottom:1px;font-size:12px}.arco-tabs-nav-size-mini.arco-tabs-nav-type-card .arco-tabs-nav-extra,.arco-tabs-nav-size-mini.arco-tabs-nav-type-card-gutter .arco-tabs-nav-extra{font-size:12px;line-height:24px}.arco-tabs-nav-size-mini.arco-tabs-nav-type-card .arco-tabs-nav-add-btn,.arco-tabs-nav-size-mini.arco-tabs-nav-type-card-gutter .arco-tabs-nav-add-btn{height:24px}.arco-tabs-nav-size-mini.arco-tabs-nav-type-capsule .arco-tabs-tab{font-size:12px;line-height:18px}.arco-tabs-nav-size-mini.arco-tabs-nav-type-capsule .arco-tabs-nav-extra{font-size:12px;line-height:24px}.arco-tabs-nav-size-mini.arco-tabs-nav-type-rounded .arco-tabs-tab{padding-top:3px;padding-bottom:3px;font-size:12px}.arco-tabs-nav-size-mini.arco-tabs-nav-type-rounded .arco-tabs-nav-extra{font-size:12px;line-height:24px}.arco-tabs-nav-size-small.arco-tabs-nav-type-line .arco-tabs-tab{padding-top:6px;padding-bottom:6px;font-size:14px}.arco-tabs-nav-size-small.arco-tabs-nav-type-line .arco-tabs-nav-extra{font-size:14px;line-height:36px}.arco-tabs-nav-size-small.arco-tabs-nav-type-card .arco-tabs-tab,.arco-tabs-nav-size-small.arco-tabs-nav-type-card-gutter .arco-tabs-tab{padding-top:1px;padding-bottom:1px;font-size:14px}.arco-tabs-nav-size-small.arco-tabs-nav-type-card .arco-tabs-nav-extra,.arco-tabs-nav-size-small.arco-tabs-nav-type-card-gutter .arco-tabs-nav-extra{font-size:14px;line-height:28px}.arco-tabs-nav-size-small.arco-tabs-nav-type-card .arco-tabs-nav-add-btn,.arco-tabs-nav-size-small.arco-tabs-nav-type-card-gutter .arco-tabs-nav-add-btn{height:28px}.arco-tabs-nav-size-small.arco-tabs-nav-type-capsule .arco-tabs-tab{font-size:14px;line-height:22px}.arco-tabs-nav-size-small.arco-tabs-nav-type-capsule .arco-tabs-nav-extra{font-size:14px;line-height:28px}.arco-tabs-nav-size-small.arco-tabs-nav-type-rounded .arco-tabs-tab{padding-top:3px;padding-bottom:3px;font-size:14px}.arco-tabs-nav-size-small.arco-tabs-nav-type-rounded .arco-tabs-nav-extra{font-size:14px;line-height:28px}.arco-tabs-nav-size-large.arco-tabs-nav-type-line .arco-tabs-tab{padding-top:10px;padding-bottom:10px;font-size:14px}.arco-tabs-nav-size-large.arco-tabs-nav-type-line 
.arco-tabs-nav-extra{font-size:14px;line-height:44px}.arco-tabs-nav-size-large.arco-tabs-nav-type-card .arco-tabs-tab,.arco-tabs-nav-size-large.arco-tabs-nav-type-card-gutter .arco-tabs-tab{padding-top:5px;padding-bottom:5px;font-size:14px}.arco-tabs-nav-size-large.arco-tabs-nav-type-card .arco-tabs-nav-extra,.arco-tabs-nav-size-large.arco-tabs-nav-type-card-gutter .arco-tabs-nav-extra{font-size:14px;line-height:36px}.arco-tabs-nav-size-large.arco-tabs-nav-type-card .arco-tabs-nav-add-btn,.arco-tabs-nav-size-large.arco-tabs-nav-type-card-gutter .arco-tabs-nav-add-btn{height:36px}.arco-tabs-nav-size-large.arco-tabs-nav-type-capsule .arco-tabs-tab{font-size:14px;line-height:30px}.arco-tabs-nav-size-large.arco-tabs-nav-type-capsule .arco-tabs-nav-extra{font-size:14px;line-height:36px}.arco-tabs-nav-size-large.arco-tabs-nav-type-rounded .arco-tabs-tab{padding-top:7px;padding-bottom:7px;font-size:14px}.arco-tabs-nav-size-large.arco-tabs-nav-type-rounded .arco-tabs-nav-extra{font-size:14px;line-height:36px}.arco-tabs-nav-vertical{float:left;height:100%}.arco-tabs-nav-vertical:before{position:absolute;top:0;right:0;bottom:0;left:initial;clear:both;width:1px;height:100%}.arco-tabs-nav-vertical .arco-tabs-nav-add-btn{height:auto;margin-top:8px;margin-left:0;padding:0 16px}.arco-tabs-nav-right{float:right}.arco-tabs-nav-vertical{flex-direction:column}.arco-tabs-nav-vertical .arco-tabs-nav-tab{flex-direction:column;height:100%}.arco-tabs-nav-vertical .arco-tabs-nav-ink{position:absolute;right:0;bottom:initial;left:initial;width:2px;transition:top .2s cubic-bezier(.34,.69,.1,1),height .2s cubic-bezier(.34,.69,.1,1)}.arco-tabs-nav-vertical .arco-tabs-nav-tab-list{height:auto}.arco-tabs-nav-vertical .arco-tabs-nav-tab-list-overflow-scroll{padding:6px 0}.arco-tabs-nav-vertical .arco-tabs-tab{display:block;margin:12px 0 0;white-space:nowrap}.arco-tabs-nav-vertical .arco-tabs-tab:first-of-type{margin-top:0}.arco-tabs-nav-right:before{right:unset;left:0}.arco-tabs-nav-right .arco-tabs-nav-ink{right:unset;left:0}.arco-tabs-nav-vertical{position:relative;box-sizing:border-box;height:100%}.arco-tabs-nav-vertical.arco-tabs-nav-type-line .arco-tabs-tab{padding:0 20px}.arco-tabs-nav-vertical.arco-tabs-nav-type-card .arco-tabs-tab{position:relative;margin:0;border:1px solid var(--color-neutral-3);border-bottom-color:transparent}.arco-tabs-nav-vertical.arco-tabs-nav-type-card .arco-tabs-tab:first-child{border-top-left-radius:var(--border-radius-small)}.arco-tabs-nav-vertical.arco-tabs-nav-type-card .arco-tabs-tab-active,.arco-tabs-nav-vertical.arco-tabs-nav-type-card .arco-tabs-tab-active:hover{border-right-color:var(--color-bg-2);border-bottom-color:transparent}.arco-tabs-nav-vertical.arco-tabs-nav-type-card .arco-tabs-tab:last-child{border-bottom:1px solid var(--color-neutral-3);border-bottom-left-radius:var(--border-radius-small)}.arco-tabs-nav-vertical.arco-tabs-nav-type-card-gutter .arco-tabs-tab{position:relative;margin-left:0;border-radius:var(--border-radius-small) 0 0 var(--border-radius-small)}.arco-tabs-nav-vertical.arco-tabs-nav-type-card-gutter .arco-tabs-tab:not(:first-of-type){margin-top:4px}.arco-tabs-nav-vertical.arco-tabs-nav-type-card-gutter .arco-tabs-tab-active,.arco-tabs-nav-vertical.arco-tabs-nav-type-card-gutter .arco-tabs-tab-active:hover{border-right-color:var(--color-bg-2);border-bottom-color:var(--color-neutral-3)}.arco-tabs-vertical .arco-tabs-content{width:auto;height:100%;padding:0}.arco-tabs-right.arco-tabs-vertical 
.arco-tabs-content{padding-right:16px}.arco-tabs-left.arco-tabs-vertical .arco-tabs-content{padding-left:16px}.arco-tabs-vertical.arco-tabs-type-card>.arco-tabs-content,.arco-tabs-vertical.arco-tabs-type-card-gutter>.arco-tabs-content{border:1px solid var(--color-neutral-3);border-left:none}body[arco-theme=dark] .arco-tabs-nav-type-capsule .arco-tabs-tab-active,body[arco-theme=dark] .arco-tabs-nav-type-capsule .arco-tabs-tab:hover{background-color:var(--color-fill-3)}.arco-tag{display:inline-flex;align-items:center;box-sizing:border-box;height:24px;padding:0 8px;color:var(--color-text-1);font-weight:500;font-size:12px;line-height:22px;vertical-align:middle;border:1px solid transparent;border-radius:var(--border-radius-small);overflow:hidden;white-space:nowrap;text-overflow:ellipsis}.arco-tag .arco-icon-hover.arco-tag-icon-hover:before{width:16px;height:16px}.arco-tag .arco-icon-hover.arco-tag-icon-hover:hover:before{background-color:var(--color-fill-3)}.arco-tag-checkable{cursor:pointer;transition:all .1s cubic-bezier(0,0,1,1)}.arco-tag-checkable:hover{background-color:var(--color-fill-2)}.arco-tag-checked{background-color:var(--color-fill-2);border-color:transparent}.arco-tag-checkable.arco-tag-checked:hover{background-color:var(--color-fill-3);border-color:transparent}.arco-tag-bordered,.arco-tag-checkable.arco-tag-checked.arco-tag-bordered:hover{border-color:var(--color-border-2)}.arco-tag-size-small{height:20px;font-size:12px;line-height:18px}.arco-tag-size-medium{height:24px;font-size:12px;line-height:22px}.arco-tag-size-large{height:32px;font-size:14px;line-height:30px}.arco-tag-hide{display:none}.arco-tag-loading{cursor:default;opacity:.8}.arco-tag-icon{margin-right:4px;color:var(--color-text-2)}.arco-tag.arco-tag-checked.arco-tag-red{color:rgb(var(--red-6));background-color:rgb(var(--red-1));border:1px solid transparent}.arco-tag.arco-tag-checked.arco-tag-red .arco-icon-hover.arco-tag-icon-hover:hover:before{background-color:rgb(var(--red-2))}.arco-tag.arco-tag-checkable.arco-tag-checked.arco-tag-red.arco-tag:hover{background-color:rgb(var(--red-2));border-color:transparent}.arco-tag-checked.arco-tag-red.arco-tag-bordered,.arco-tag-checked.arco-tag-red.arco-tag-bordered:hover{border-color:rgb(var(--red-6))}.arco-tag.arco-tag-checked.arco-tag-red .arco-tag-icon,.arco-tag.arco-tag-checked.arco-tag-red .arco-tag-close-btn,.arco-tag.arco-tag-checked.arco-tag-red .arco-tag-loading-icon{color:rgb(var(--red-6))}.arco-tag.arco-tag-checked.arco-tag-orangered{color:rgb(var(--orangered-6));background-color:rgb(var(--orangered-1));border:1px solid transparent}.arco-tag.arco-tag-checked.arco-tag-orangered .arco-icon-hover.arco-tag-icon-hover:hover:before{background-color:rgb(var(--orangered-2))}.arco-tag.arco-tag-checkable.arco-tag-checked.arco-tag-orangered.arco-tag:hover{background-color:rgb(var(--orangered-2));border-color:transparent}.arco-tag-checked.arco-tag-orangered.arco-tag-bordered,.arco-tag-checked.arco-tag-orangered.arco-tag-bordered:hover{border-color:rgb(var(--orangered-6))}.arco-tag.arco-tag-checked.arco-tag-orangered .arco-tag-icon,.arco-tag.arco-tag-checked.arco-tag-orangered .arco-tag-close-btn,.arco-tag.arco-tag-checked.arco-tag-orangered .arco-tag-loading-icon{color:rgb(var(--orangered-6))}.arco-tag.arco-tag-checked.arco-tag-orange{color:rgb(var(--orange-6));background-color:rgb(var(--orange-1));border:1px solid transparent}.arco-tag.arco-tag-checked.arco-tag-orange 
.arco-icon-hover.arco-tag-icon-hover:hover:before{background-color:rgb(var(--orange-2))}.arco-tag.arco-tag-checkable.arco-tag-checked.arco-tag-orange.arco-tag:hover{background-color:rgb(var(--orange-2));border-color:transparent}.arco-tag-checked.arco-tag-orange.arco-tag-bordered,.arco-tag-checked.arco-tag-orange.arco-tag-bordered:hover{border-color:rgb(var(--orange-6))}.arco-tag.arco-tag-checked.arco-tag-orange .arco-tag-icon,.arco-tag.arco-tag-checked.arco-tag-orange .arco-tag-close-btn,.arco-tag.arco-tag-checked.arco-tag-orange .arco-tag-loading-icon{color:rgb(var(--orange-6))}.arco-tag.arco-tag-checked.arco-tag-gold{color:rgb(var(--gold-6));background-color:rgb(var(--gold-1));border:1px solid transparent}.arco-tag.arco-tag-checked.arco-tag-gold .arco-icon-hover.arco-tag-icon-hover:hover:before{background-color:rgb(var(--gold-2))}.arco-tag.arco-tag-checkable.arco-tag-checked.arco-tag-gold.arco-tag:hover{background-color:rgb(var(--gold-3));border-color:transparent}.arco-tag-checked.arco-tag-gold.arco-tag-bordered,.arco-tag-checked.arco-tag-gold.arco-tag-bordered:hover{border-color:rgb(var(--gold-6))}.arco-tag.arco-tag-checked.arco-tag-gold .arco-tag-icon,.arco-tag.arco-tag-checked.arco-tag-gold .arco-tag-close-btn,.arco-tag.arco-tag-checked.arco-tag-gold .arco-tag-loading-icon{color:rgb(var(--gold-6))}.arco-tag.arco-tag-checked.arco-tag-lime{color:rgb(var(--lime-6));background-color:rgb(var(--lime-1));border:1px solid transparent}.arco-tag.arco-tag-checked.arco-tag-lime .arco-icon-hover.arco-tag-icon-hover:hover:before{background-color:rgb(var(--lime-2))}.arco-tag.arco-tag-checkable.arco-tag-checked.arco-tag-lime.arco-tag:hover{background-color:rgb(var(--lime-2));border-color:transparent}.arco-tag-checked.arco-tag-lime.arco-tag-bordered,.arco-tag-checked.arco-tag-lime.arco-tag-bordered:hover{border-color:rgb(var(--lime-6))}.arco-tag.arco-tag-checked.arco-tag-lime .arco-tag-icon,.arco-tag.arco-tag-checked.arco-tag-lime .arco-tag-close-btn,.arco-tag.arco-tag-checked.arco-tag-lime .arco-tag-loading-icon{color:rgb(var(--lime-6))}.arco-tag.arco-tag-checked.arco-tag-green{color:rgb(var(--green-6));background-color:rgb(var(--green-1));border:1px solid transparent}.arco-tag.arco-tag-checked.arco-tag-green .arco-icon-hover.arco-tag-icon-hover:hover:before{background-color:rgb(var(--green-2))}.arco-tag.arco-tag-checkable.arco-tag-checked.arco-tag-green.arco-tag:hover{background-color:rgb(var(--green-2));border-color:transparent}.arco-tag-checked.arco-tag-green.arco-tag-bordered,.arco-tag-checked.arco-tag-green.arco-tag-bordered:hover{border-color:rgb(var(--green-6))}.arco-tag.arco-tag-checked.arco-tag-green .arco-tag-icon,.arco-tag.arco-tag-checked.arco-tag-green .arco-tag-close-btn,.arco-tag.arco-tag-checked.arco-tag-green .arco-tag-loading-icon{color:rgb(var(--green-6))}.arco-tag.arco-tag-checked.arco-tag-cyan{color:rgb(var(--cyan-6));background-color:rgb(var(--cyan-1));border:1px solid transparent}.arco-tag.arco-tag-checked.arco-tag-cyan .arco-icon-hover.arco-tag-icon-hover:hover:before{background-color:rgb(var(--cyan-2))}.arco-tag.arco-tag-checkable.arco-tag-checked.arco-tag-cyan.arco-tag:hover{background-color:rgb(var(--cyan-2));border-color:transparent}.arco-tag-checked.arco-tag-cyan.arco-tag-bordered,.arco-tag-checked.arco-tag-cyan.arco-tag-bordered:hover{border-color:rgb(var(--cyan-6))}.arco-tag.arco-tag-checked.arco-tag-cyan .arco-tag-icon,.arco-tag.arco-tag-checked.arco-tag-cyan .arco-tag-close-btn,.arco-tag.arco-tag-checked.arco-tag-cyan 
.arco-tag-loading-icon{color:rgb(var(--cyan-6))}.arco-tag.arco-tag-checked.arco-tag-blue{color:rgb(var(--blue-6));background-color:rgb(var(--blue-1));border:1px solid transparent}.arco-tag.arco-tag-checked.arco-tag-blue .arco-icon-hover.arco-tag-icon-hover:hover:before{background-color:rgb(var(--blue-2))}.arco-tag.arco-tag-checkable.arco-tag-checked.arco-tag-blue.arco-tag:hover{background-color:rgb(var(--blue-2));border-color:transparent}.arco-tag-checked.arco-tag-blue.arco-tag-bordered,.arco-tag-checked.arco-tag-blue.arco-tag-bordered:hover{border-color:rgb(var(--blue-6))}.arco-tag.arco-tag-checked.arco-tag-blue .arco-tag-icon,.arco-tag.arco-tag-checked.arco-tag-blue .arco-tag-close-btn,.arco-tag.arco-tag-checked.arco-tag-blue .arco-tag-loading-icon{color:rgb(var(--blue-6))}.arco-tag.arco-tag-checked.arco-tag-arcoblue{color:rgb(var(--arcoblue-6));background-color:rgb(var(--arcoblue-1));border:1px solid transparent}.arco-tag.arco-tag-checked.arco-tag-arcoblue .arco-icon-hover.arco-tag-icon-hover:hover:before{background-color:rgb(var(--arcoblue-2))}.arco-tag.arco-tag-checkable.arco-tag-checked.arco-tag-arcoblue.arco-tag:hover{background-color:rgb(var(--arcoblue-2));border-color:transparent}.arco-tag-checked.arco-tag-arcoblue.arco-tag-bordered,.arco-tag-checked.arco-tag-arcoblue.arco-tag-bordered:hover{border-color:rgb(var(--arcoblue-6))}.arco-tag.arco-tag-checked.arco-tag-arcoblue .arco-tag-icon,.arco-tag.arco-tag-checked.arco-tag-arcoblue .arco-tag-close-btn,.arco-tag.arco-tag-checked.arco-tag-arcoblue .arco-tag-loading-icon{color:rgb(var(--arcoblue-6))}.arco-tag.arco-tag-checked.arco-tag-purple{color:rgb(var(--purple-6));background-color:rgb(var(--purple-1));border:1px solid transparent}.arco-tag.arco-tag-checked.arco-tag-purple .arco-icon-hover.arco-tag-icon-hover:hover:before{background-color:rgb(var(--purple-2))}.arco-tag.arco-tag-checkable.arco-tag-checked.arco-tag-purple.arco-tag:hover{background-color:rgb(var(--purple-2));border-color:transparent}.arco-tag-checked.arco-tag-purple.arco-tag-bordered,.arco-tag-checked.arco-tag-purple.arco-tag-bordered:hover{border-color:rgb(var(--purple-6))}.arco-tag.arco-tag-checked.arco-tag-purple .arco-tag-icon,.arco-tag.arco-tag-checked.arco-tag-purple .arco-tag-close-btn,.arco-tag.arco-tag-checked.arco-tag-purple .arco-tag-loading-icon{color:rgb(var(--purple-6))}.arco-tag.arco-tag-checked.arco-tag-pinkpurple{color:rgb(var(--pinkpurple-6));background-color:rgb(var(--pinkpurple-1));border:1px solid transparent}.arco-tag.arco-tag-checked.arco-tag-pinkpurple .arco-icon-hover.arco-tag-icon-hover:hover:before{background-color:rgb(var(--pinkpurple-2))}.arco-tag.arco-tag-checkable.arco-tag-checked.arco-tag-pinkpurple.arco-tag:hover{background-color:rgb(var(--pinkpurple-2));border-color:transparent}.arco-tag-checked.arco-tag-pinkpurple.arco-tag-bordered,.arco-tag-checked.arco-tag-pinkpurple.arco-tag-bordered:hover{border-color:rgb(var(--pinkpurple-6))}.arco-tag.arco-tag-checked.arco-tag-pinkpurple .arco-tag-icon,.arco-tag.arco-tag-checked.arco-tag-pinkpurple .arco-tag-close-btn,.arco-tag.arco-tag-checked.arco-tag-pinkpurple .arco-tag-loading-icon{color:rgb(var(--pinkpurple-6))}.arco-tag.arco-tag-checked.arco-tag-magenta{color:rgb(var(--magenta-6));background-color:rgb(var(--magenta-1));border:1px solid transparent}.arco-tag.arco-tag-checked.arco-tag-magenta 
.arco-icon-hover.arco-tag-icon-hover:hover:before{background-color:rgb(var(--magenta-2))}.arco-tag.arco-tag-checkable.arco-tag-checked.arco-tag-magenta.arco-tag:hover{background-color:rgb(var(--magenta-2));border-color:transparent}.arco-tag-checked.arco-tag-magenta.arco-tag-bordered,.arco-tag-checked.arco-tag-magenta.arco-tag-bordered:hover{border-color:rgb(var(--magenta-6))}.arco-tag.arco-tag-checked.arco-tag-magenta .arco-tag-icon,.arco-tag.arco-tag-checked.arco-tag-magenta .arco-tag-close-btn,.arco-tag.arco-tag-checked.arco-tag-magenta .arco-tag-loading-icon{color:rgb(var(--magenta-6))}.arco-tag.arco-tag-checked.arco-tag-gray{color:rgb(var(--gray-6));background-color:rgb(var(--gray-2));border:1px solid transparent}.arco-tag.arco-tag-checked.arco-tag-gray .arco-icon-hover.arco-tag-icon-hover:hover:before{background-color:rgb(var(--gray-3))}.arco-tag.arco-tag-checkable.arco-tag-checked.arco-tag-gray.arco-tag:hover{background-color:rgb(var(--gray-3));border-color:transparent}.arco-tag-checked.arco-tag-gray.arco-tag-bordered,.arco-tag-checked.arco-tag-gray.arco-tag-bordered:hover{border-color:rgb(var(--gray-6))}.arco-tag.arco-tag-checked.arco-tag-gray .arco-tag-icon,.arco-tag.arco-tag-checked.arco-tag-gray .arco-tag-close-btn,.arco-tag.arco-tag-checked.arco-tag-gray .arco-tag-loading-icon{color:rgb(var(--gray-6))}.arco-tag.arco-tag-custom-color{color:var(--color-white)}.arco-tag.arco-tag-custom-color .arco-icon-hover.arco-tag-icon-hover:hover:before{background-color:#fff3}.arco-tag .arco-tag-close-btn{margin-left:4px;font-size:12px}.arco-tag .arco-tag-close-btn>svg{position:relative}.arco-tag .arco-tag-loading-icon{margin-left:4px;font-size:12px}body[arco-theme=dark] .arco-tag-checked{color:#ffffffe6}body[arco-theme=dark] .arco-tag-checked.arco-tag-red{background-color:rgba(var(--red-6),.2)}body[arco-theme=dark] .arco-tag-checked.arco-tag-red .arco-icon-hover.arco-tag-icon-hover:hover:before{background-color:rgba(var(--red-6),.35)}body[arco-theme=dark] .arco-tag-checkable.arco-tag-checked.arco-tag-red:hover{background-color:rgba(var(--red-6),.35)}body[arco-theme=dark] .arco-tag-checked.arco-tag-orangered{background-color:rgba(var(--orangered-6),.2)}body[arco-theme=dark] .arco-tag-checked.arco-tag-orangered .arco-icon-hover.arco-tag-icon-hover:hover:before{background-color:rgba(var(--orangered-6),.35)}body[arco-theme=dark] .arco-tag-checkable.arco-tag-checked.arco-tag-orangered:hover{background-color:rgba(var(--orangered-6),.35)}body[arco-theme=dark] .arco-tag-checked.arco-tag-orange{background-color:rgba(var(--orange-6),.2)}body[arco-theme=dark] .arco-tag-checked.arco-tag-orange .arco-icon-hover.arco-tag-icon-hover:hover:before{background-color:rgba(var(--orange-6),.35)}body[arco-theme=dark] .arco-tag-checkable.arco-tag-checked.arco-tag-orange:hover{background-color:rgba(var(--orange-6),.35)}body[arco-theme=dark] .arco-tag-checked.arco-tag-gold{background-color:rgba(var(--gold-6),.2)}body[arco-theme=dark] .arco-tag-checked.arco-tag-gold .arco-icon-hover.arco-tag-icon-hover:hover:before{background-color:rgba(var(--gold-6),.35)}body[arco-theme=dark] .arco-tag-checkable.arco-tag-checked.arco-tag-gold:hover{background-color:rgba(var(--gold-6),.35)}body[arco-theme=dark] .arco-tag-checked.arco-tag-lime{background-color:rgba(var(--lime-6),.2)}body[arco-theme=dark] .arco-tag-checked.arco-tag-lime .arco-icon-hover.arco-tag-icon-hover:hover:before{background-color:rgba(var(--lime-6),.35)}body[arco-theme=dark] 
.arco-tag-checkable.arco-tag-checked.arco-tag-lime:hover{background-color:rgba(var(--lime-6),.35)}body[arco-theme=dark] .arco-tag-checked.arco-tag-green{background-color:rgba(var(--green-6),.2)}body[arco-theme=dark] .arco-tag-checked.arco-tag-green .arco-icon-hover.arco-tag-icon-hover:hover:before{background-color:rgba(var(--green-6),.35)}body[arco-theme=dark] .arco-tag-checkable.arco-tag-checked.arco-tag-green:hover{background-color:rgba(var(--green-6),.35)}body[arco-theme=dark] .arco-tag-checked.arco-tag-cyan{background-color:rgba(var(--cyan-6),.2)}body[arco-theme=dark] .arco-tag-checked.arco-tag-cyan .arco-icon-hover.arco-tag-icon-hover:hover:before{background-color:rgba(var(--cyan-6),.35)}body[arco-theme=dark] .arco-tag-checkable.arco-tag-checked.arco-tag-cyan:hover{background-color:rgba(var(--cyan-6),.35)}body[arco-theme=dark] .arco-tag-checked.arco-tag-blue{background-color:rgba(var(--blue-6),.2)}body[arco-theme=dark] .arco-tag-checked.arco-tag-blue .arco-icon-hover.arco-tag-icon-hover:hover:before{background-color:rgba(var(--blue-6),.35)}body[arco-theme=dark] .arco-tag-checkable.arco-tag-checked.arco-tag-blue:hover{background-color:rgba(var(--blue-6),.35)}body[arco-theme=dark] .arco-tag-checked.arco-tag-arcoblue{background-color:rgba(var(--arcoblue-6),.2)}body[arco-theme=dark] .arco-tag-checked.arco-tag-arcoblue .arco-icon-hover.arco-tag-icon-hover:hover:before{background-color:rgba(var(--arcoblue-6),.35)}body[arco-theme=dark] .arco-tag-checkable.arco-tag-checked.arco-tag-arcoblue:hover{background-color:rgba(var(--arcoblue-6),.35)}body[arco-theme=dark] .arco-tag-checked.arco-tag-purple{background-color:rgba(var(--purple-6),.2)}body[arco-theme=dark] .arco-tag-checked.arco-tag-purple .arco-icon-hover.arco-tag-icon-hover:hover:before{background-color:rgba(var(--purple-6),.35)}body[arco-theme=dark] .arco-tag-checkable.arco-tag-checked.arco-tag-purple:hover{background-color:rgba(var(--purple-6),.35)}body[arco-theme=dark] .arco-tag-checked.arco-tag-pinkpurple{background-color:rgba(var(--pinkpurple-6),.2)}body[arco-theme=dark] .arco-tag-checked.arco-tag-pinkpurple .arco-icon-hover.arco-tag-icon-hover:hover:before{background-color:rgba(var(--pinkpurple-6),.35)}body[arco-theme=dark] .arco-tag-checkable.arco-tag-checked.arco-tag-pinkpurple:hover{background-color:rgba(var(--pinkpurple-6),.35)}body[arco-theme=dark] .arco-tag-checked.arco-tag-magenta{background-color:rgba(var(--magenta-6),.2)}body[arco-theme=dark] .arco-tag-checked.arco-tag-magenta .arco-icon-hover.arco-tag-icon-hover:hover:before{background-color:rgba(var(--magenta-6),.35)}body[arco-theme=dark] .arco-tag-checkable.arco-tag-checked.arco-tag-magenta:hover{background-color:rgba(var(--magenta-6),.35)}body[arco-theme=dark] .arco-tag-checked.arco-tag-gray{background-color:rgba(var(--gray-6),.2)}body[arco-theme=dark] .arco-tag-checked.arco-tag-gray .arco-icon-hover.arco-tag-icon-hover:hover:before{background-color:rgba(var(--gray-6),.35)}body[arco-theme=dark] .arco-tag-checkable.arco-tag-checked.arco-tag-gray:hover{background-color:rgba(var(--gray-6),.35)}.arco-textarea-wrapper{display:inline-flex;box-sizing:border-box;color:var(--color-text-1);font-size:14px;background-color:var(--color-fill-2);border:1px solid transparent;border-radius:var(--border-radius-small);cursor:text;transition:color .1s cubic-bezier(0,0,1,1),border-color .1s cubic-bezier(0,0,1,1),background-color .1s 
cubic-bezier(0,0,1,1);position:relative;display:inline-block;width:100%;padding-right:0;padding-left:0;overflow:hidden}.arco-textarea-wrapper:hover{background-color:var(--color-fill-3);border-color:transparent}.arco-textarea-wrapper:focus-within,.arco-textarea-wrapper.arco-textarea-focus{background-color:var(--color-bg-2);border-color:rgb(var(--primary-6));box-shadow:0 0 0 0 var(--color-primary-light-2)}.arco-textarea-wrapper.arco-textarea-disabled{color:var(--color-text-4);background-color:var(--color-fill-2);border-color:transparent;cursor:not-allowed}.arco-textarea-wrapper.arco-textarea-disabled:hover{color:var(--color-text-4);background-color:var(--color-fill-2);border-color:transparent}.arco-textarea-wrapper.arco-textarea-disabled .arco-textarea-prefix,.arco-textarea-wrapper.arco-textarea-disabled .arco-textarea-suffix{color:inherit}.arco-textarea-wrapper.arco-textarea-error{background-color:var(--color-danger-light-1);border-color:transparent}.arco-textarea-wrapper.arco-textarea-error:hover{background-color:var(--color-danger-light-2);border-color:transparent}.arco-textarea-wrapper.arco-textarea-error:focus-within,.arco-textarea-wrapper.arco-textarea-error.arco-textarea-wrapper-focus{background-color:var(--color-bg-2);border-color:rgb(var(--danger-6));box-shadow:0 0 0 0 var(--color-danger-light-2)}.arco-textarea-wrapper .arco-textarea-prefix,.arco-textarea-wrapper .arco-textarea-suffix{display:inline-flex;flex-shrink:0;align-items:center;white-space:nowrap;user-select:none}.arco-textarea-wrapper .arco-textarea-prefix>svg,.arco-textarea-wrapper .arco-textarea-suffix>svg{font-size:14px}.arco-textarea-wrapper .arco-textarea-prefix{padding-right:12px;color:var(--color-text-2)}.arco-textarea-wrapper .arco-textarea-suffix{padding-left:12px;color:var(--color-text-2)}.arco-textarea-wrapper .arco-textarea-suffix .arco-feedback-icon{display:inline-flex}.arco-textarea-wrapper .arco-textarea-suffix .arco-feedback-icon-status-validating{color:rgb(var(--primary-6))}.arco-textarea-wrapper .arco-textarea-suffix .arco-feedback-icon-status-success{color:rgb(var(--success-6))}.arco-textarea-wrapper .arco-textarea-suffix .arco-feedback-icon-status-warning{color:rgb(var(--warning-6))}.arco-textarea-wrapper .arco-textarea-suffix .arco-feedback-icon-status-error{color:rgb(var(--danger-6))}.arco-textarea-wrapper .arco-textarea-clear-btn{align-self:center;color:var(--color-text-2);font-size:12px;visibility:hidden;cursor:pointer}.arco-textarea-wrapper .arco-textarea-clear-btn>svg{position:relative;transition:color .1s cubic-bezier(0,0,1,1)}.arco-textarea-wrapper:hover .arco-textarea-clear-btn{visibility:visible}.arco-textarea-wrapper:not(.arco-textarea-focus) .arco-textarea-icon-hover:hover:before{background-color:var(--color-fill-4)}.arco-textarea-wrapper .arco-textarea-word-limit{position:absolute;right:10px;bottom:6px;color:var(--color-text-3);font-size:12px;user-select:none}.arco-textarea-wrapper.arco-textarea-scroll .arco-textarea-word-limit{right:25px}.arco-textarea-wrapper .arco-textarea-clear-btn{position:absolute;top:50%;right:10px;transform:translateY(-50%)}.arco-textarea-wrapper.arco-textarea-scroll .arco-textarea-clear-btn{right:25px}.arco-textarea-wrapper:hover .arco-textarea-clear-btn{display:block}.arco-textarea-wrapper 
.arco-textarea-mirror{position:absolute;visibility:hidden}.arco-textarea{width:100%;color:inherit;background:none;border:none;border-radius:0;outline:none;cursor:inherit;-webkit-appearance:none;-webkit-tap-highlight-color:rgba(0,0,0,0);display:block;box-sizing:border-box;height:100%;min-height:32px;padding:4px 12px;font-size:14px;line-height:1.5715;vertical-align:top;resize:vertical}.arco-textarea::placeholder{color:var(--color-text-3)}.arco-textarea[disabled]::placeholder{color:var(--color-text-4)}.arco-textarea[disabled]{-webkit-text-fill-color:var(--color-text-4)}.arco-timepicker{position:relative;display:flex;box-sizing:border-box;padding:0}.arco-timepicker-container{overflow:hidden;background-color:var(--color-bg-popup);border:1px solid var(--color-neutral-3);border-radius:var(--border-radius-medium);box-shadow:0 2px 5px #0000001a}.arco-timepicker-column{box-sizing:border-box;width:64px;height:224px;overflow:hidden}.arco-timepicker-column:not(:last-child){border-right:1px solid var(--color-neutral-3)}.arco-timepicker-column:hover{overflow-y:auto}.arco-timepicker-column ul{box-sizing:border-box;margin:0;padding:0;list-style:none}.arco-timepicker-column ul:after{display:block;width:100%;height:192px;content:""}.arco-timepicker-cell{padding:4px 0;color:var(--color-text-1);font-weight:500;cursor:pointer}.arco-timepicker-cell-inner{height:24px;padding-left:24px;font-size:14px;line-height:24px}.arco-timepicker-cell:not(.arco-timepicker-cell-selected):not(.arco-timepicker-cell-disabled):hover .arco-timepicker-cell-inner{background-color:var(--color-fill-2)}.arco-timepicker-cell-selected .arco-timepicker-cell-inner{font-weight:500;background-color:var(--color-fill-2)}.arco-timepicker-cell-disabled{color:var(--color-text-4);cursor:not-allowed}.arco-timepicker-footer-extra-wrapper{padding:8px;color:var(--color-text-1);font-size:12px;border-top:1px solid var(--color-neutral-3)}.arco-timepicker-footer-btn-wrapper{display:flex;justify-content:space-between;padding:8px;border-top:1px solid var(--color-neutral-3)}.arco-timepicker-footer-btn-wrapper :only-child{margin-left:auto}.arco-timeline{display:flex;flex-direction:column}.arco-timeline-item{position:relative;min-height:78px;padding-left:6px;color:var(--color-text-1);font-size:14px}.arco-timeline-item-label{color:var(--color-text-3);font-size:12px;line-height:1.667}.arco-timeline-item-content{margin-bottom:4px;color:var(--color-text-1);font-size:14px;line-height:1.5715}.arco-timeline-item-content-wrapper{position:relative;margin-left:16px}.arco-timeline-item.arco-timeline-item-last>.arco-timeline-item-dot-wrapper .arco-timeline-item-dot-line{display:none}.arco-timeline-item-dot-wrapper{position:absolute;left:0;height:100%;text-align:center}.arco-timeline-item-dot-wrapper .arco-timeline-item-dot-content{position:relative;width:6px;height:22.001px;line-height:22.001px}.arco-timeline-item-dot{position:relative;top:50%;box-sizing:border-box;width:6px;height:6px;margin-top:-50%;color:rgb(var(--primary-6));border-radius:var(--border-radius-circle)}.arco-timeline-item-dot-solid{background-color:rgb(var(--primary-6))}.arco-timeline-item-dot-hollow{background-color:var(--color-bg-2);border:2px solid rgb(var(--primary-6))}.arco-timeline-item-dot-custom{position:absolute;top:50%;left:50%;display:inline-flex;box-sizing:border-box;color:rgb(var(--primary-6));background-color:var(--color-bg-2);transform:translate(-50%) translateY(-50%);transform-origin:center}.arco-timeline-item-dot-custom 
svg{color:inherit}.arco-timeline-item-dot-line{position:absolute;top:18.0005px;bottom:-4.0005px;left:50%;box-sizing:border-box;width:1px;border-color:var(--color-neutral-3);border-left-width:1px;transform:translate(-50%)}.arco-timeline-is-reverse{flex-direction:column-reverse}.arco-timeline-alternate{overflow:hidden}.arco-timeline-alternate .arco-timeline-item-vertical-left{padding-left:0}.arco-timeline-alternate .arco-timeline-item-vertical-left>.arco-timeline-item-dot-wrapper{left:50%}.arco-timeline-alternate .arco-timeline-item-vertical-left>.arco-timeline-item-content-wrapper{left:50%;width:50%;margin-left:22px;padding-right:22px}.arco-timeline-alternate .arco-timeline-item-vertical-right{padding-right:0}.arco-timeline-alternate .arco-timeline-item-vertical-right>.arco-timeline-item-dot-wrapper{left:50%}.arco-timeline-alternate .arco-timeline-item-vertical-right>.arco-timeline-item-content-wrapper{left:0;width:50%;margin-right:0;margin-left:-16px;padding-right:16px;text-align:right}.arco-timeline-right .arco-timeline-item-vertical-right{padding-right:6px}.arco-timeline-right .arco-timeline-item-vertical-right>.arco-timeline-item-dot-wrapper{right:0;left:unset}.arco-timeline-right .arco-timeline-item-vertical-right>.arco-timeline-item-content-wrapper{margin-right:16px;margin-left:0;text-align:right}.arco-timeline-item-label-relative>.arco-timeline-item-label{position:absolute;top:0;box-sizing:border-box;max-width:100px}.arco-timeline-item-vertical-left.arco-timeline-item-label-relative{margin-left:100px}.arco-timeline-item-vertical-left.arco-timeline-item-label-relative>.arco-timeline-item-label{left:0;padding-right:16px;text-align:right;transform:translate(-100%)}.arco-timeline-item-vertical-right.arco-timeline-item-label-relative{margin-right:100px}.arco-timeline-item-vertical-right.arco-timeline-item-label-relative>.arco-timeline-item-label{right:0;padding-left:16px;text-align:left;transform:translate(100%)}.arco-timeline-item-horizontal-top.arco-timeline-item-label-relative{margin-top:50px}.arco-timeline-item-horizontal-top.arco-timeline-item-label-relative>.arco-timeline-item-label{padding-bottom:16px;transform:translateY(-100%)}.arco-timeline-item-horizontal-top.arco-timeline-item-label-relative>.arco-timeline-item-content{margin-bottom:0}.arco-timeline-item-horizontal-bottom.arco-timeline-item-label-relative{margin-bottom:50px}.arco-timeline-item-horizontal-bottom.arco-timeline-item-label-relative>.arco-timeline-item-content{margin-bottom:0}.arco-timeline-item-horizontal-bottom.arco-timeline-item-label-relative>.arco-timeline-item-label{top:unset;bottom:0;padding-top:16px;text-align:left;transform:translateY(100%)}.arco-timeline-alternate .arco-timeline-item-vertical-left.arco-timeline-item-label-relative{margin-left:0}.arco-timeline-alternate .arco-timeline-item-vertical-left.arco-timeline-item-label-relative>.arco-timeline-item-label{left:0;width:50%;max-width:unset;transform:none}.arco-timeline-alternate .arco-timeline-item-vertical-right.arco-timeline-item-label-relative{margin-right:0}.arco-timeline-alternate .arco-timeline-item-vertical-right.arco-timeline-item-label-relative>.arco-timeline-item-label{right:0;width:50%;max-width:unset;transform:none}.arco-timeline-alternate .arco-timeline-item-horizontal-top.arco-timeline-item-label-relative{margin-top:0}.arco-timeline-alternate 
.arco-timeline-item-horizontal-bottom.arco-timeline-item-label-relative{margin-bottom:0}.arco-timeline-direction-horizontal{display:flex;flex-direction:row}.arco-timeline-direction-horizontal.arco-timeline-is-reverse{flex-direction:row-reverse}.arco-timeline-item-dot-line-is-horizontal{top:50%;right:4px;left:12px;width:unset;height:1px;border-top-width:1px;border-left:none;transform:translateY(-50%)}.arco-timeline-item-horizontal-bottom,.arco-timeline-item-horizontal-top{flex:1;min-height:unset;padding-right:0;padding-left:0}.arco-timeline-item-horizontal-bottom>.arco-timeline-item-dot-wrapper,.arco-timeline-item-horizontal-top>.arco-timeline-item-dot-wrapper{top:0;width:100%;height:auto}.arco-timeline-item-horizontal-bottom>.arco-timeline-item-dot-wrapper .arco-timeline-item-dot,.arco-timeline-item-horizontal-top>.arco-timeline-item-dot-wrapper .arco-timeline-item-dot{top:unset;margin-top:unset}.arco-timeline-item-horizontal-bottom>.arco-timeline-item-dot-wrapper .arco-timeline-item-dot-content,.arco-timeline-item-horizontal-top>.arco-timeline-item-dot-wrapper .arco-timeline-item-dot-content{height:6px;line-height:6px}.arco-timeline-item-horizontal-top{padding-top:6px}.arco-timeline-item-horizontal-top>.arco-timeline-item-dot-wrapper{top:0;bottom:unset}.arco-timeline-item-horizontal-top>.arco-timeline-item-content-wrapper{margin-top:16px;margin-left:0}.arco-timeline-item-horizontal-bottom{padding-bottom:6px}.arco-timeline-item-horizontal-bottom>.arco-timeline-item-dot-wrapper{top:unset;bottom:0}.arco-timeline-item-horizontal-bottom>.arco-timeline-item-content-wrapper{margin-bottom:16px;margin-left:0}.arco-timeline-alternate.arco-timeline-direction-horizontal{align-items:center;min-height:200px;overflow:visible}.arco-timeline-alternate.arco-timeline-direction-horizontal .arco-timeline-item-horizontal-bottom{margin-top:6px;transform:translateY(-50%)}.arco-timeline-alternate.arco-timeline-direction-horizontal .arco-timeline-item-horizontal-top{margin-top:-6px;transform:translateY(50%)}.arco-tooltip-content{max-width:350px;padding:8px 12px;color:#fff;font-size:14px;line-height:1.5715;text-align:left;word-wrap:break-word;background-color:var(--color-tooltip-bg);border-radius:var(--border-radius-small)}.arco-tooltip-mini{padding:4px 12px;font-size:14px}.arco-tooltip-popup-arrow{background-color:var(--color-tooltip-bg)}.arco-transfer{display:flex;align-items:center}.arco-transfer-view{display:flex;flex-direction:column;box-sizing:border-box;width:200px;height:224px;border:1px solid var(--color-neutral-3);border-radius:var(--border-radius-small)}.arco-transfer-view-search{padding:8px 12px 4px}.arco-transfer-view-list{flex:1}.arco-transfer-view-custom-list{flex:1;overflow:auto}.arco-transfer-view-header{display:flex;align-items:center;padding:0 10px}.arco-transfer-view-header>*:first-child{flex:1;overflow:hidden;white-space:nowrap;text-overflow:ellipsis}.arco-transfer-view-header>*:first-child:not(:last-child){margin-right:8px}.arco-transfer-view-header{height:40px;color:var(--color-text-1);font-weight:500;font-size:14px;line-height:40px;background-color:var(--color-fill-1)}.arco-transfer-view-header-title{display:flex;align-items:center}.arco-transfer-view-header-title .arco-checkbox{overflow:hidden;white-space:nowrap;text-overflow:ellipsis;font-size:inherit}.arco-transfer-view-header-title 
.arco-checkbox-text{color:inherit}.arco-transfer-view-header-clear-btn{color:var(--color-text-2);font-size:12px;cursor:pointer}.arco-transfer-view-header-clear-btn:hover:before{background-color:var(--color-fill-3)}.arco-transfer-view-header-count{margin-right:2px;color:var(--color-text-3);font-weight:400;font-size:12px}.arco-transfer-view-body{flex:1 1 auto;overflow:hidden}.arco-transfer-view-body .arco-transfer-view-empty{display:flex;flex-direction:column;align-items:center;justify-content:center;height:100%}.arco-transfer-view .arco-scrollbar{height:100%}.arco-transfer-view .arco-scrollbar-container{height:100%;overflow:auto}.arco-transfer-view .arco-list{border-radius:0}.arco-transfer-view .arco-list-footer{position:relative;display:flex;align-items:center;box-sizing:border-box;height:40px;padding:0 8px}.arco-transfer-view .arco-list .arco-pagination{position:absolute;top:50%;right:8px;margin:0;transform:translateY(-50%)}.arco-transfer-view .arco-list .arco-pagination-jumper-input{width:24px}.arco-transfer-view .arco-list .arco-pagination-jumper-separator{padding:0 8px}.arco-transfer-view .arco-checkbox{padding-left:6px}.arco-transfer-view .arco-checkbox-wrapper{display:inline}.arco-transfer-view .arco-checkbox .arco-icon-hover:hover:before{background-color:var(--color-fill-3)}.arco-transfer-list-item{position:relative;display:flex;align-items:center;height:36px;padding:0 10px;color:var(--color-text-1);line-height:36px;list-style:none;background-color:transparent;cursor:default}.arco-transfer-list-item-content{font-size:14px;overflow:hidden;white-space:nowrap;text-overflow:ellipsis}.arco-transfer-list-item-checkbox .arco-checkbox-label{overflow:hidden;white-space:nowrap;text-overflow:ellipsis}.arco-transfer-list-item-disabled{color:var(--color-text-4);background-color:transparent;cursor:not-allowed}.arco-transfer-list-item:not(.arco-transfer-list-item-disabled):hover{color:var(--color-text-1);background-color:var(--color-fill-2)}.arco-transfer-list-item .arco-checkbox{width:100%}.arco-transfer-list-item .arco-checkbox-text{color:inherit}.arco-transfer-list-item-remove-btn{margin-left:auto;color:var(--color-text-2);font-size:12px;cursor:pointer}.arco-transfer-list-item-remove-btn:hover:before{background-color:var(--color-fill-3)}.arco-transfer-list-item-draggable:before{position:absolute;right:0;left:0;display:block;height:2px;border-radius:1px;content:""}.arco-transfer-list-item-gap-bottom:before{bottom:-2px;background-color:rgb(var(--primary-6))}.arco-transfer-list-item-gap-top:before{top:-2px;background-color:rgb(var(--primary-6))}.arco-transfer-list-item-dragging{color:var(--color-text-4)!important;background-color:var(--color-fill-1)!important}.arco-transfer-list-item-dragged{animation:arco-transfer-drag-item-blink .4s;animation-timing-function:cubic-bezier(0,0,1,1)}.arco-transfer-operations{padding:0 20px}.arco-transfer-operations .arco-btn{display:block}.arco-transfer-operations .arco-btn:last-child{margin-top:12px}.arco-transfer-operations-words .arco-btn{width:100%;padding:0 12px;text-align:left}.arco-transfer-simple .arco-transfer-view-source{border-right:none;border-top-right-radius:0;border-bottom-right-radius:0}.arco-transfer-simple .arco-transfer-view-target{border-top-left-radius:0;border-bottom-left-radius:0}.arco-transfer-disabled .arco-transfer-view-header{color:var(--color-text-4)}@keyframes arco-transfer-drag-item-blink{0%{background-color:var(--color-primary-light-1)}to{background-color:transparent}}.arco-tree-select-popup{box-sizing:border-box;padding:4px 
0;background-color:var(--color-bg-popup);border:1px solid var(--color-fill-3);border-radius:var(--border-radius-medium);box-shadow:0 4px 10px #0000001a}.arco-tree-select-popup .arco-tree-select-tree-wrapper{height:100%;max-height:200px;padding-right:4px;padding-left:10px;overflow:auto}.arco-tree-select-popup .arco-tree-node{padding-left:0}.arco-tree-select-highlight{font-weight:500}.arco-icon-hover.arco-tree-node-icon-hover:before{width:16px;height:16px}.arco-tree-node-switcher{position:relative;display:flex;flex-shrink:0;align-items:center;width:12px;height:32px;margin-right:10px;color:var(--color-text-2);font-size:12px;cursor:pointer;user-select:none}.arco-tree-node-switcher-icon{position:relative;margin:0 auto}.arco-tree-node-switcher-icon svg{position:relative;transform:rotate(-90deg);transition:transform .2s cubic-bezier(.34,.69,.1,1)}.arco-tree-node-expanded .arco-tree-node-switcher-icon svg,.arco-tree-node-is-leaf .arco-tree-node-switcher-icon svg{transform:rotate(0)}.arco-tree-node-drag-icon{margin-left:120px;color:rgb(var(--primary-6));opacity:0}.arco-tree-node-custom-icon{margin-right:10px;font-size:inherit;line-height:1;cursor:pointer;user-select:none}.arco-tree-node .arco-icon-loading{color:rgb(var(--primary-6))}.arco-tree-node-minus-icon,.arco-tree-node-plus-icon{position:relative;display:block;width:14px;height:14px;background:var(--color-fill-2);border-radius:var(--border-radius-small);cursor:pointer}.arco-tree-node-minus-icon:after,.arco-tree-node-plus-icon:after{position:absolute;top:50%;left:50%;display:block;width:6px;height:2px;margin-top:-1px;margin-left:-3px;color:var(--color-text-2);background-color:var(--color-text-2);border-radius:.5px;content:""}.arco-tree-node-plus-icon:before{position:absolute;top:50%;left:50%;display:block;width:2px;height:6px;margin-top:-3px;margin-left:-1px;color:var(--color-text-2);background-color:var(--color-text-2);border-radius:.5px;content:""}.arco-tree{color:var(--color-text-1)}.arco-tree .arco-checkbox{margin-right:10px;padding-left:0;line-height:32px}.arco-tree-node{position:relative;display:flex;flex-wrap:nowrap;align-items:center;padding-left:2px;color:var(--color-text-1);line-height:1.5715;cursor:pointer}.arco-tree-node-selected .arco-tree-node-title,.arco-tree-node-selected .arco-tree-node-title:hover{color:rgb(var(--primary-6));transition:color .2s cubic-bezier(0,0,1,1)}.arco-tree-node-disabled-selectable .arco-tree-node-title,.arco-tree-node-disabled .arco-tree-node-title,.arco-tree-node-disabled-selectable .arco-tree-node-title:hover,.arco-tree-node-disabled .arco-tree-node-title:hover{color:var(--color-text-4);background:none;cursor:not-allowed}.arco-tree-node-disabled.arco-tree-node-selected .arco-tree-node-title{color:var(--color-primary-light-3)}.arco-tree-node-title-block{flex:1;box-sizing:content-box}.arco-tree-node-title-block .arco-tree-node-drag-icon{position:absolute;right:12px}.arco-tree-node-indent{position:relative;flex-shrink:0;align-self:stretch}.arco-tree-node-indent-block{position:relative;display:inline-block;width:12px;height:100%;margin-right:10px}.arco-tree-node-draggable{margin-top:2px}.arco-tree-node-title{position:relative;display:flex;align-items:center;margin-left:-4px;padding:5px 4px;font-size:14px;border-radius:var(--border-radius-small)}.arco-tree-node-title:hover{color:var(--color-text-1);background-color:var(--color-fill-2)}.arco-tree-node-title:hover 
.arco-tree-node-drag-icon{opacity:1}.arco-tree-node-title-draggable:before{position:absolute;top:-2px;right:0;left:0;display:block;height:2px;border-radius:1px;content:""}.arco-tree-node-title-gap-bottom:before{top:unset;bottom:-2px;background-color:rgb(var(--primary-6))}.arco-tree-node-title-gap-top:before{background-color:rgb(var(--primary-6))}.arco-tree-node-title-highlight{color:var(--color-text-1);background-color:var(--color-primary-light-1)}.arco-tree-node-title-dragging,.arco-tree-node-title-dragging:hover{color:var(--color-text-4);background-color:var(--color-fill-1)}.arco-tree-show-line{padding-left:1px}.arco-tree-show-line .arco-tree-node-switcher{width:14px;text-align:center}.arco-tree-show-line .arco-tree-node-switcher .arco-tree-node-icon-hover{width:100%}.arco-tree-show-line .arco-tree-node-indent-block{width:14px}.arco-tree-show-line .arco-tree-node-indent-block:before{position:absolute;left:50%;box-sizing:border-box;width:1px;border-left:1px solid var(--color-neutral-3);transform:translate(-50%);content:"";top:-5px;bottom:-5px}.arco-tree-show-line .arco-tree-node-is-leaf:not(.arco-tree-node-is-tail) .arco-tree-node-indent:after{position:absolute;right:-7px;box-sizing:border-box;width:1px;border-left:1px solid var(--color-neutral-3);transform:translate(50%);content:"";top:27px;bottom:-5px}.arco-tree-show-line .arco-tree-node-indent-block-lineless:before{display:none}.arco-tree-size-mini .arco-tree-node-switcher{height:24px}.arco-tree-size-mini .arco-checkbox{line-height:24px}.arco-tree-size-mini .arco-tree-node-title{padding-top:2px;padding-bottom:2px;font-size:12px;line-height:1.667}.arco-tree-size-mini .arco-tree-node-indent-block:after{top:23px;bottom:-1px}.arco-tree-size-mini .arco-tree-node-is-leaf:not(.arco-tree-node-is-tail) .arco-tree-node-indent:before{top:-1px;bottom:-1px}.arco-tree-size-small .arco-tree-node-switcher{height:28px}.arco-tree-size-small .arco-checkbox{line-height:28px}.arco-tree-size-small .arco-tree-node-title{padding-top:3px;padding-bottom:3px;font-size:14px}.arco-tree-size-small .arco-tree-node-indent-block:after{top:25px;bottom:-3px}.arco-tree-size-small .arco-tree-node-is-leaf:not(.arco-tree-node-is-tail) .arco-tree-node-indent:before{top:-3px;bottom:-3px}.arco-tree-size-large .arco-tree-node-switcher{height:36px}.arco-tree-size-large .arco-checkbox{line-height:36px}.arco-tree-size-large .arco-tree-node-title{padding-top:7px;padding-bottom:7px;font-size:14px}.arco-tree-size-large .arco-tree-node-indent-block:after{top:29px;bottom:-7px}.arco-tree-size-large .arco-tree-node-is-leaf:not(.arco-tree-node-is-tail) .arco-tree-node-indent:before{top:-7px;bottom:-7px}.arco-tree-node-list{overflow:hidden;transition:height .2s 
cubic-bezier(.34,.69,.1,1)}.arco-typography{color:var(--color-text-1);line-height:1.5715}h1.arco-typography,h2.arco-typography,h3.arco-typography,h4.arco-typography,h5.arco-typography,h6.arco-typography{margin-top:1em;margin-bottom:.5em;font-weight:500}h1.arco-typography{font-size:36px;line-height:1.23}h2.arco-typography{font-size:32px;line-height:1.25}h3.arco-typography{font-size:28px;line-height:1.29}h4.arco-typography{font-size:24px;line-height:1.33}h5.arco-typography{font-size:20px;line-height:1.4}h6.arco-typography{font-size:16px;line-height:1.5}div.arco-typography,p.arco-typography{margin-top:0;margin-bottom:1em}.arco-typography-primary{color:rgb(var(--primary-6))}.arco-typography-secondary{color:var(--color-text-2)}.arco-typography-success{color:rgb(var(--success-6))}.arco-typography-warning{color:rgb(var(--warning-6))}.arco-typography-danger{color:rgb(var(--danger-6))}.arco-typography-disabled{color:var(--color-text-4);cursor:not-allowed}.arco-typography mark{background-color:rgb(var(--yellow-4))}.arco-typography u{text-decoration:underline}.arco-typography del{text-decoration:line-through}.arco-typography b{font-weight:500}.arco-typography code{margin:0 2px;padding:2px 8px;color:var(--color-text-2);font-size:85%;background-color:var(--color-neutral-2);border:1px solid var(--color-neutral-3);border-radius:2px}.arco-typography blockquote{margin:0 0 1em;padding-left:8px;background-color:var(--color-bg-2);border-left:2px solid var(--color-neutral-6)}.arco-typography ol,.arco-typography ul{margin:0;padding:0}.arco-typography ul li,.arco-typography ol li{margin-left:20px}.arco-typography ul{list-style:circle}.arco-typography-spacing-close{line-height:1.3}.arco-typography-operation-copy,.arco-typography-operation-copied{margin-left:2px;padding:2px}.arco-typography-operation-copy{color:var(--color-text-2);background-color:transparent;border-radius:2px;cursor:pointer;transition:background-color .1s cubic-bezier(0,0,1,1)}.arco-typography-operation-copy:hover{color:var(--color-text-2);background-color:var(--color-fill-2)}.arco-typography-operation-copied{color:rgb(var(--success-6))}.arco-typography-operation-edit{margin-left:2px;padding:2px;color:var(--color-text-2);background-color:transparent;border-radius:2px;cursor:pointer;transition:background-color .1s cubic-bezier(0,0,1,1)}.arco-typography-operation-edit:hover{color:var(--color-text-2);background-color:var(--color-fill-2)}.arco-typography-operation-expand{margin:0 4px;color:rgb(var(--primary-6));cursor:pointer}.arco-typography-operation-expand:hover{color:rgb(var(--primary-5))}.arco-typography-edit-content{position:relative;left:-13px;margin-top:-5px;margin-right:-13px;margin-bottom:calc(1em - 5px)}.arco-typography-css-operation{margin-top:-1em;margin-bottom:1em;text-align:right}.arco-upload{display:inline-block;max-width:100%;cursor:pointer}.arco-upload.arco-upload-draggable{width:100%}.arco-upload-tip{margin-top:4px;overflow:hidden;color:var(--color-text-3);font-size:12px;line-height:1.5;white-space:nowrap;text-overflow:ellipsis}.arco-upload-picture-card{display:flex;flex-direction:column;justify-content:center;min-width:80px;height:80px;margin-bottom:0;color:var(--color-text-2);text-align:center;background:var(--color-fill-2);border:1px dashed var(--color-neutral-3);border-radius:var(--border-radius-small);transition:all .1s cubic-bezier(0,0,1,1)}.arco-upload-picture-card:hover{color:var(--color-text-2);background-color:var(--color-fill-3);border-color:var(--color-neutral-4)}.arco-upload-drag{width:100%;padding:50px 
0;color:var(--color-text-1);text-align:center;background-color:var(--color-fill-1);border:1px dashed var(--color-neutral-3);border-radius:var(--border-radius-small);transition:all .2s ease}.arco-upload-drag .arco-icon-plus{margin-bottom:24px;color:var(--color-text-2);font-size:14px}.arco-upload-drag:hover{background-color:var(--color-fill-3);border-color:var(--color-neutral-4)}.arco-upload-drag:hover .arco-upload-drag-text{color:var(--color-text-1)}.arco-upload-drag:hover .arco-icon-plus{color:var(--color-text-2)}.arco-upload-drag-active{color:var(--color-text-1);background-color:var(--color-primary-light-1);border-color:rgb(var(--primary-6))}.arco-upload-drag-active .arco-upload-drag-text{color:var(--color-text-1)}.arco-upload-drag-active .arco-icon-plus{color:rgb(var(--primary-6))}.arco-upload-drag .arco-upload-tip{margin-top:0}.arco-upload-drag-text{color:var(--color-text-1);font-size:14px;line-height:1.5}.arco-upload-wrapper{width:100%}.arco-upload-wrapper.arco-upload-wrapper-type-picture-card{display:flex;justify-content:flex-start}.arco-upload-drag{width:100%}.arco-upload-hide{display:none}.arco-upload-disabled .arco-upload-picture-card,.arco-upload-disabled .arco-upload-picture-card:hover{color:var(--color-text-4);background-color:var(--color-fill-1);border-color:var(--color-neutral-4);cursor:not-allowed}.arco-upload-disabled .arco-upload-drag,.arco-upload-disabled .arco-upload-drag:hover{background-color:var(--color-fill-1);border-color:var(--color-text-4);cursor:not-allowed}.arco-upload-disabled .arco-upload-drag .arco-icon-plus,.arco-upload-disabled .arco-upload-drag:hover .arco-icon-plus,.arco-upload-disabled .arco-upload-drag .arco-upload-drag-text,.arco-upload-disabled .arco-upload-drag:hover .arco-upload-drag-text,.arco-upload-disabled .arco-upload-tip{color:var(--color-text-4)}.arco-upload-icon{cursor:pointer}.arco-upload-icon-error{margin-left:4px;color:rgb(var(--danger-6))}.arco-upload-icon-success{color:rgb(var(--success-6));font-size:14px;line-height:14px}.arco-upload-icon-remove{position:relative;font-size:14px}.arco-upload-icon-start,.arco-upload-icon-cancel{position:absolute;top:50%;left:50%;color:var(--color-white);font-size:12px;transform:translate(-50%) translateY(-50%)}.arco-upload-icon-upload{color:rgb(var(--primary-6));font-size:14px;cursor:pointer;transition:all .2s ease}.arco-upload-icon-upload:active,.arco-upload-icon-upload:hover{color:rgb(var(--primary-7))}.arco-upload-list{margin:0;padding:0;list-style:none}.arco-upload-list.arco-upload-list-type-text,.arco-upload-list.arco-upload-list-type-picture{width:100%}.arco-upload-list.arco-upload-list-type-text .arco-upload-list-item:first-of-type,.arco-upload-list.arco-upload-list-type-picture .arco-upload-list-item:first-of-type{margin-top:24px}.arco-upload-list-item-done .arco-upload-list-item-file-icon{color:rgb(var(--primary-6))}.arco-upload-list-item{position:relative;display:flex;align-items:center;box-sizing:border-box;margin-top:12px}.arco-upload-list-item-content{display:flex;flex:1;flex-wrap:nowrap;align-items:center;box-sizing:border-box;width:100%;padding:8px 10px 8px 12px;overflow:hidden;font-size:14px;background-color:var(--color-fill-1);border-radius:var(--border-radius-small);transition:background-color .1s cubic-bezier(0,0,1,1)}.arco-upload-list-item-file-icon{margin-right:12px;color:rgb(var(--primary-6));font-size:16px;line-height:16px}.arco-upload-list-item-thumbnail{flex-shrink:0;width:40px;height:40px;margin-right:12px}.arco-upload-list-item-thumbnail 
img{width:100%;height:100%}.arco-upload-list-item-name{display:flex;flex:1;align-items:center;margin-right:10px;overflow:hidden;color:var(--color-text-1);font-size:14px;line-height:1.4286;white-space:nowrap;text-overflow:ellipsis}.arco-upload-list-item-name-link{overflow:hidden;color:rgb(var(--link-6));text-decoration:none;text-overflow:ellipsis;cursor:pointer}.arco-upload-list-item-name-text{overflow:hidden;text-overflow:ellipsis;cursor:pointer}.arco-upload-list-item .arco-upload-progress{position:relative;margin-left:auto;line-height:12px}.arco-upload-list-item .arco-upload-progress:hover .arco-progress-circle-bg{stroke:rgba(var(--gray-10),.2)}.arco-upload-list-item .arco-upload-progress:hover .arco-progress-circle-bar{stroke:rgb(var(--primary-7))}.arco-upload-list-item-operation{margin-left:12px;color:var(--color-text-2);font-size:12px}.arco-upload-list-item-operation .arco-upload-icon-remove{font-size:inherit}.arco-upload-list-item-error .arco-upload-list-status,.arco-upload-list-item-done .arco-upload-list-status{display:none}.arco-upload-list-type-text .arco-upload-list-item-error .arco-upload-list-item-name-link,.arco-upload-list-type-text .arco-upload-list-item-error .arco-upload-list-item-name{color:rgb(var(--danger-6))}.arco-upload-list.arco-upload-list-type-picture-card{display:flex;flex-wrap:wrap;vertical-align:top}.arco-upload-list.arco-upload-list-type-picture-card .arco-upload-list-status{top:50%;margin-left:0;transform:translateY(-50%)}.arco-upload-list-picture{display:inline-block;margin-top:0;margin-right:8px;margin-bottom:8px;padding-right:0;overflow:hidden;vertical-align:top;transition:all .2s cubic-bezier(.34,.69,.1,1)}.arco-upload-list-picture-status-error .arco-upload-list-picture-mask{opacity:1}.arco-upload-list-picture{position:relative;box-sizing:border-box;width:80px;height:80px;overflow:hidden;line-height:80px;text-align:center;vertical-align:top;border-radius:var(--border-radius-small)}.arco-upload-list-picture img{width:100%;height:100%}.arco-upload-list-picture-mask{position:absolute;top:0;right:0;bottom:0;left:0;color:var(--color-white);font-size:16px;line-height:80px;text-align:center;background:rgba(0,0,0,.5);cursor:pointer;opacity:0;transition:opacity .1s cubic-bezier(0,0,1,1)}.arco-upload-list-picture-operation{display:none;font-size:14px}.arco-upload-list-picture-operation .arco-upload-icon-retry{color:var(--color-white)}.arco-upload-list-picture-error-tip .arco-upload-icon-error{color:var(--color-white);font-size:26px}.arco-upload-list-picture-mask:hover{opacity:1}.arco-upload-list-picture-mask:hover .arco-upload-list-picture-operation{display:flex;justify-content:space-evenly}.arco-upload-list-picture-mask:hover .arco-upload-list-picture-error-tip{display:none}.arco-upload-list-type-picture .arco-upload-list-item-content{padding-top:8px;padding-bottom:8px}.arco-upload-list-type-picture .arco-upload-list-item-error .arco-upload-list-item-content{background-color:var(--color-danger-light-1)}.arco-upload-list-type-picture .arco-upload-list-item-error .arco-upload-list-item-name-link,.arco-upload-list-type-picture .arco-upload-list-item-error .arco-upload-list-item-name{color:rgb(var(--danger-6))}.arco-upload-hide+.arco-upload-list .arco-upload-list-item:first-of-type{margin-top:0}.arco-upload-slide-up-enter{opacity:0}.arco-upload-slide-up-enter-active{opacity:1;transition:opacity .2s cubic-bezier(.34,.69,.1,1)}.arco-upload-slide-up-exit{opacity:1}.arco-upload-slide-up-exit-active{margin:0;overflow:hidden;opacity:0;transition:opacity .1s 
cubic-bezier(0,0,1,1),height .3s cubic-bezier(.34,.69,.1,1) .1s,margin .3s cubic-bezier(.34,.69,.1,1) .1s}.arco-upload-list-item.arco-upload-slide-inline-enter{opacity:0}.arco-upload-list-item.arco-upload-slide-inline-enter-active{opacity:1;transition:opacity .2s cubic-bezier(0,0,1,1)}.arco-upload-list-item.arco-upload-slide-inline-exit{opacity:1}.arco-upload-list-item.arco-upload-slide-inline-exit-active{margin:0;overflow:hidden;opacity:0;transition:opacity .1s cubic-bezier(0,0,1,1),width .3s cubic-bezier(.34,.69,.1,1) .1s,margin .3s cubic-bezier(.34,.69,.1,1) .1s}body{font-family:Nunito Sans-SemiBold,Nunito Sans}html,body,#app{height:100%}.arco-table-td-content{color:#4e5969;font-size:12px;font-weight:400;padding:8px 0}.arco-table-th-title{color:#1d2129;font-size:12px;font-weight:500}.loadingDirectiveElement{position:absolute;left:0;right:0;top:0;bottom:0;z-index:10;display:flex;justify-content:center;align-items:center;text-align:center;background-color:#fff9;transition:opacity .1s cubic-bezier(0,0,1,1);user-select:none}.loadingDirectiveElement.fullScreen{position:fixed;z-index:1000}.posRelative{position:relative}.spaceBTW{justify-content:space-between}.headerInner{height:100%;padding:0 16px}.headerInner .title{color:#1d2129;font-size:14px;font-weight:500}.v-binder-follower-content{max-width:300px}.typing-pre>.code_container:last-child pre code:after{display:block;color:#fff;content:"▋";margin-left:4px;animation:blink 1s steps(5,start) infinite}.typing-text>*:last-child:after{content:"▋";margin-left:4px;vertical-align:baseline;animation:blink 1s steps(5,start) infinite}@keyframes blink{to{visibility:hidden}}.rotate{animation:rotate 1.5s infinite linear}@keyframes rotate{0%{transform:rotate(0)}to{transform:rotate(360deg)}}.avatarWrap{height:40px;width:40px;border-radius:20px;box-sizing:border-box;overflow:hidden}.avatarWrap img{width:100%;height:100%;object-fit:cover}::-webkit-scrollbar{height:16px;width:8px}::-webkit-scrollbar:horizontal{height:8px;width:16px}::-webkit-scrollbar-track{background-color:transparent;border-radius:9999px}::-webkit-scrollbar-thumb{background-color:#d9d9e3cc;border-color:#fff;border-radius:9999px;border-width:1px}::-webkit-scrollbar-thumb:hover{background-color:#ececf1}.hide-scrollbar{-ms-overflow-style:none;scrollbar-width:none}.hide-scrollbar ::-webkit-scrollbar{display:none}.login{height:100%;display:flex}.login .loginbg{flex:2;background-size:cover;position:relative}.login .loginbg .logiWhite{position:absolute;top:20px;left:20px}.login .loginbg .title{color:var(--fill-color-bg-white, #fff);font-family:PingFang SC;font-size:34px;font-style:normal;font-weight:600;line-height:normal;margin-left:20%;margin-top:40%}.login .loginform{flex:3;display:flex;justify-content:center;align-items:center}.login .loginform .formTitle{color:#2d2a2a;font-size:24px;font-weight:500}.login .loginform .toolBox{line-height:50px}.login .loginform .toolBox .toolBoxBtn{cursor:pointer;font-weight:400}.login .loginform .desc{font-size:12px;font-weight:400;color:#86909c}.login .loginform .desc .arco-link{font-size:12px}.IconCommon{fill:currentColor;outline:none;width:1em;height:1em}.IconCommon.iconDisabled{filter:opacity(.5);cursor:not-allowed!important}dialog[data-v-6fddb6c7]:not([open]){opacity:0;visibility:hidden;display:block}.customDialog[data-v-6fddb6c7]{opacity:1;padding:20px;box-sizing:border-box;border:none;border-radius:20px;filter:drop-shadow(0px 0px 40px rgba(168,168,168,.25));transition:opacity .3s ease;display:flex;flex-direction:column;outline:none}.customDialog 
.header[data-v-6fddb6c7]{width:100%;display:flex;justify-content:flex-end}.customDialog .content[data-v-6fddb6c7]{flex:1}.wechatModal[data-v-e442bd8c]{height:407px;padding:0 20px;box-sizing:border-box;display:flex;flex-direction:column;align-items:center;justify-content:flex-start;gap:12px}.wechatModal .title[data-v-e442bd8c]{display:flex;align-items:center;justify-content:center;gap:10px}.wechatModal .title .titleText[data-v-e442bd8c]{color:#000;font-family:Helvetica Neue;font-size:24px;font-style:normal;font-weight:500;line-height:normal}.wechatModal .desc[data-v-e442bd8c]{color:var(--light-text-color-text-2, #4e5969);font-family:Helvetica Neue;font-size:16px;font-style:normal;font-weight:400;line-height:150%}.wechatModal .qrCode[data-v-e442bd8c]{width:242px;height:263.868px;flex-shrink:0;border-radius:20px;background:#fff;box-shadow:0 4px 40px 10px #0000000d;padding:20px;box-sizing:border-box;margin-top:4px}.wechatModal .qrCode .scanText[data-v-e442bd8c]{display:flex;flex-direction:row;align-items:center;gap:10px}.wechatModal .qrCode .scanText span[data-v-e442bd8c]{color:var(--light-text-color-text-1, #1d2129);font-family:Helvetica Neue;font-size:16px;font-style:normal;font-weight:400;line-height:normal}.baseFont[data-v-8c3ce136],.heroWrapper .links .link .linkName[data-v-8c3ce136],.heroWrapper .affiliations .affiliationsItem[data-v-8c3ce136],.heroWrapper .affiliations[data-v-8c3ce136],.heroWrapper .affiliationIndex[data-v-8c3ce136],.heroWrapper .contributor[data-v-8c3ce136],.heroWrapper h1[data-v-8c3ce136]{font-size:20px;font-weight:400;line-height:32px;letter-spacing:0em}.heroWrapper[data-v-8c3ce136]{width:100%;display:flex;align-items:center;flex-direction:column;color:#1d2129;font-family:Helvetica Neue;padding-bottom:50px;overflow:hidden}.heroWrapper h1[data-v-8c3ce136]{font-size:56px;font-weight:700;line-height:70px;text-align:center;color:#1d2129;max-width:1350px;margin:72px 0 0}.heroWrapper .contributors[data-v-8c3ce136]{max-width:854px;text-align:center;margin-top:24px}@media screen and (max-width: 854px){.heroWrapper .contributors[data-v-8c3ce136]{margin:0 12px;width:750px}}.heroWrapper .contributor[data-v-8c3ce136]{text-align:center;color:#4080ff}.heroWrapper .affiliationIndex[data-v-8c3ce136]{text-align:center;font-family:PingFang SC;font-size:14px;vertical-align:top;position:relative;top:-6px}.heroWrapper .affiliationIndex .extra[data-v-8c3ce136]{position:absolute;font-size:12px;top:-8px;right:-5px;top:-3px}.heroWrapper .affiliations[data-v-8c3ce136]{text-align:center;color:#1d2129;margin-top:10px;max-width:947px}@media screen and (max-width: 947px){.heroWrapper .affiliations[data-v-8c3ce136]{width:800px}}.heroWrapper .affiliations .affiliationsItem[data-v-8c3ce136]{text-align:center;font-family:PingFang SC}.heroWrapper .affiliations .affiliationsItemIndex[data-v-8c3ce136]{font-size:12px;vertical-align:text-bottom}.heroWrapper .links[data-v-8c3ce136]{display:flex;flex-direction:row;gap:16px;margin-top:40px;z-index:1;flex-wrap:wrap;justify-content:center}.heroWrapper .links .link[data-v-8c3ce136]{height:42px;padding:8px 16px;border-radius:50px;box-sizing:border-box;background:linear-gradient(90deg,#e8f3ff -1.99%,#e2e8ff 100%);color:#1d2129;display:flex;align-items:center;justify-content:center;gap:8px;user-select:none;cursor:not-allowed;transition:all .3s}.heroWrapper .links .link .linkName[data-v-8c3ce136]{line-height:24px;color:#1d2129}.heroWrapper .links .enabled[data-v-8c3ce136]{cursor:pointer}.heroWrapper .links 
.enabled[data-v-8c3ce136]:hover{transform:scale(1.1)}.heroWrapper .bigTex[data-v-8c3ce136]{margin-top:50px;width:100%;max-width:1251px;height:356px;flex-shrink:0;border-radius:20px;background:linear-gradient(129deg,#1d1c48 15.07%,#252436 76.51%);padding-top:19.49px;box-sizing:border-box;overflow:hidden;position:relative}.heroWrapper .bigTex .bigTexContent[data-v-8c3ce136]{width:100%;height:301px;box-sizing:border-box;padding:29px 43px;z-index:999;position:absolute;margin-top:46px;color:var(--light-text-color-white, #fff);font-family:Helvetica Neue;font-size:20px;font-style:normal;font-weight:400;line-height:160%}.heroWrapper .bigTex .header[data-v-8c3ce136]{position:absolute;top:46px;left:64px;z-index:999;color:#fff;font-size:24px;font-weight:700;line-height:0%}.heroWrapper .bigTex .copyBtn[data-v-8c3ce136]{position:absolute;top:24px;right:22px;color:red;z-index:999;color:#fff;font-size:24px;font-weight:700;line-height:0%;cursor:pointer}.heroWrapper .bigTex .copyBtn[data-v-8c3ce136]:active{scale:.9}.heroWrapper .galance[data-v-8c3ce136]{margin-top:50px;max-width:1440px;position:relative;overflow:hidden;user-select:none;pointer-events:none}@media screen and (max-width: 1440px){.heroWrapper .galance[data-v-8c3ce136]{margin:12px}}.wechatModal[data-v-d5d425dc]{height:407px;padding:0 20px;box-sizing:border-box;display:flex;flex-direction:column;align-items:center;justify-content:flex-start;gap:12px;font-family:Helvetica Neue}.wechatModal .title[data-v-d5d425dc]{display:flex;align-items:center;justify-content:center;gap:10px}.wechatModal .title .titleText[data-v-d5d425dc]{color:#000;font-size:24px;font-style:normal;font-weight:500;line-height:normal}.wechatModal .desc[data-v-d5d425dc]{color:var(--light-text-color-text-2, #4e5969);font-size:16px;font-style:normal;font-weight:400;line-height:150%}.wechatModal .links[data-v-d5d425dc]{width:100%;padding-top:8px;display:flex;gap:8px;flex-direction:column;color:var(--light-text-color-text-1, #1d2129);font-size:16px;font-style:normal;font-weight:400;line-height:normal;border-top:1px dashed #e5e6eb}.wechatModal .links .link[data-v-d5d425dc]{color:var(--light-text-color-text-1, #1d2129);font-size:16px;font-style:normal;font-weight:400;line-height:normal;cursor:pointer}.wechatModal .links .link[data-v-d5d425dc]:hover{text-decoration:underline}.wechatModal .viwer[data-v-d5d425dc]{width:548px;height:304.385px;flex-shrink:0;border-radius:20px;margin-top:8px}.wechatModal .button[data-v-d5d425dc]{display:flex;height:42px;padding:18px 24px;box-sizing:border-box;justify-content:center;align-items:center;gap:10px;border-radius:10px;background:linear-gradient(270deg,#5772ff 0%,#165dff 89.78%);color:#fff;margin-top:8px;cursor:pointer;transition:transform .3s}.wechatModal .button[data-v-d5d425dc]:hover{transform:scale(1.1)}.wechatModal .welcomText[data-v-d5d425dc]{color:var(--light-text-color-text-1, #1d2129);font-size:16px;font-style:normal;font-weight:500;line-height:normal;margin-top:10px}.wechatModal .contributor[data-v-d5d425dc]{color:var(--light-text-color-text-2, #4e5969);font-size:16px;font-style:normal;font-weight:400;line-height:normal;display:flex;align-items:center;gap:4px;margin-top:6px}.wechatModal .contributor .count[data-v-d5d425dc]{display:flex;padding:2px 5px;align-items:center;gap:4px;border-radius:40px;background:var(--color-fill-2, #f2f3f5)}.roleListWrapper[data-v-dc72f28c]{width:429px;height:100%;background-image:linear-gradient(170.11deg,#e9e9ff 1.21%,#ffffff 10.31%,#ffffff 98.31%);font-family:Helvetica 
Neue;display:flex;flex-direction:column;overflow:hidden}.roleListWrapper[data-v-dc72f28c] .arco-select-view-size-large{height:48px;width:369px;margin:20px 30px 0;border-radius:10px}.roleListWrapper .title[data-v-dc72f28c]{font-size:24px;font-weight:700;line-height:29px;letter-spacing:0em;text-align:left;display:flex;align-items:center;gap:10px;padding:16px 32px;box-sizing:border-box}.roleListWrapper .keyFill[data-v-dc72f28c]{margin:0 30px;height:78px;padding:0 28.75px;box-sizing:border-box;border-radius:10px;border:1px;text-align:left;border:1px solid #e5e6eb;box-shadow:0 2px 10px #0000001a;background:linear-gradient(0deg,#f7f8fa,#f7f8fa),linear-gradient(0deg,#e5e6eb,#e5e6eb);display:flex;align-items:center}.roleListWrapper .keyFill input[data-v-dc72f28c]{width:100%;height:19px;resize:none;outline:none;border:none;background:none;color:var(--light-text-color-text-2, #4e5969);font-family:Helvetica Neue;font-size:16px;font-style:normal;font-weight:400;line-height:normal}.roleListWrapper .keyFill .placeholder[data-v-dc72f28c]{color:#86909c;font-size:16px;font-weight:400;line-height:19px;letter-spacing:0em}.roleListWrapper .keyFill .showPassword[data-v-dc72f28c]{width:50px;color:#86909c;font-size:16px;font-weight:400;line-height:19px;letter-spacing:0em;cursor:pointer;display:flex;justify-content:flex-end}.roleListWrapper .keyFilled[data-v-dc72f28c]{border-radius:10px;border:1px solid var(--light-line-color-border-2, #e5e6eb);background:linear-gradient(90deg,#e8f3ff 0%,#e2e8ff 100%)}.roleListWrapper .shake[data-v-dc72f28c]{animation:shake-dc72f28c .5s 1}@keyframes shake-dc72f28c{0%,to{transform:translate(0)}10%,30%,50%,70%,90%{transform:translate(-10px)}20%,40%,60%,80%{transform:translate(10px)}}.roleListWrapper .roleList[data-v-dc72f28c]{width:100%;overflow:hidden;display:flex;flex-direction:column;gap:14px;padding:3px 32px 32px;box-sizing:border-box}.roleListWrapper .roleList .role[data-v-dc72f28c]{width:100%;height:92px;padding:0 0 16px;box-sizing:border-box;border-radius:4.8px;gap:12px;display:flex;flex-direction:row}.roleListWrapper .roleList .role .avatar[data-v-dc72f28c]{width:54px;height:54px;border-radius:50%;border:3px solid #c9cdd4;position:relative}.roleListWrapper .roleList .role .avatar .innerPie[data-v-dc72f28c]{margin:3px;border-radius:50%;position:absolute;width:calc(100% - 6px);height:calc(100% - 6px);background:linear-gradient(0deg,#e5e6eb,#e5e6eb),linear-gradient(0deg,#ffffff,#ffffff)}.roleListWrapper .roleList .role .avatar .rightPoint[data-v-dc72f28c]{position:absolute;content:"";width:10px;height:10px;top:40px;left:40px;border:2px;border-radius:50%;background:linear-gradient(0deg,#c9cdd4,#c9cdd4),linear-gradient(0deg,#ffffff,#ffffff);border:2px solid #ffffff}.roleListWrapper .roleList .role .avatar .pointActive[data-v-dc72f28c]{background:#0fd267}.roleListWrapper .roleList .role .avatar img[data-v-dc72f28c]{width:32px;margin:8px 12px;position:absolute}.roleListWrapper .roleList .role .infomation[data-v-dc72f28c]{flex:1;display:flex;flex-direction:column;justify-content:space-between;overflow:hidden}.roleListWrapper .roleList .role .infomation .job[data-v-dc72f28c]{flex:1;display:flex;flex-direction:row;justify-content:space-between;overflow:hidden}.roleListWrapper .roleList .role .infomation .job .jobName[data-v-dc72f28c]{font-size:16px;font-weight:500;line-height:20px;letter-spacing:.01em;text-align:left;color:#1d2129;margin-top:5px}.roleListWrapper .roleList .role .infomation .job 
.jobStatus[data-v-dc72f28c]{font-size:16px;font-weight:400;letter-spacing:0em;text-align:right;color:#86909c}.roleListWrapper .roleList .role .infomation .tags[data-v-dc72f28c]{flex:1;display:flex;flex-direction:row;justify-content:space-between}.roleListWrapper .roleList .role .infomation .tags .tagItem[data-v-dc72f28c]{width:auto;height:21px;padding:2px 10px;box-sizing:border-box;border-radius:5px;gap:4px;background:#f2f3f5;font-family:Helvetica Neue;font-size:14px;font-weight:500;line-height:17px;letter-spacing:0em;text-align:left}.roleListWrapper .roleList .role .infomation .tags .action[data-v-dc72f28c]{font-family:Helvetica Neue;font-size:16px;font-weight:500;line-height:20px;letter-spacing:0em;text-align:left;color:#165dff;cursor:pointer;user-select:none}.roleListWrapper .roleList .role .infomation .tags .action[data-v-dc72f28c]:hover{text-decoration:underline}.loading_wrap[data-v-491f84be]{display:inline-flex;align-items:center}.loading[data-v-491f84be],.loading>span[data-v-491f84be]{position:relative;box-sizing:border-box}.loading[data-v-491f84be]{display:inline-block;font-size:0;color:inherit}.loading>span[data-v-491f84be]{display:inline-block;float:none;background-color:currentColor;border:0 solid inherit}.loading[data-v-491f84be]{width:27px;height:9px}.loading>span[data-v-491f84be]{width:5px;height:5px;margin:2px;border-radius:100%;animation:ball-beat-491f84be .7s -.15s infinite linear}.loading>span[data-v-491f84be]:nth-child(2n-1){animation-delay:-.5s}@keyframes ball-beat-491f84be{50%{opacity:.2;transform:scale(.75)}to{opacity:1;transform:scale(1)}}.message_info[data-v-9c433b71]{display:flex;padding:16px;gap:16px}.message_info .agentAvatar[data-v-9c433b71]{padding:6px;box-sizing:border-box;background:black;border-radius:50%}.message_info .avatar[data-v-9c433b71]{width:40px;height:40px;position:relative;flex-shrink:0}.message_info .avatar[data-v-9c433b71]:after{content:"";position:absolute;width:8px;height:8px;border-radius:50%;background-color:currentColor;right:6px;bottom:0}.message_info[data-v-9c433b71] .avatar .arco-avatar-text{color:#fff}.message_info .info_box[data-v-9c433b71]{display:flex;flex-direction:column;gap:8px;width:100%;overflow:hidden}.message_info .info_box .item_info[data-v-9c433b71]{display:flex;gap:16px;font-size:14px;font-weight:400;color:#4e5969}.message_info .info_box .item_info .name[data-v-9c433b71]{font-weight:500;color:var(--light-text-color-text-1, #1d2129)}.message_info .info_box .item_info .time[data-v-9c433b71]{color:#86909c;font-size:14px;font-weight:400}.message_info .info_box .item_info .responseSwitcher[data-v-9c433b71]{display:flex;align-items:center;column-gap:4px;color:#4e5969;user-select:none;font-size:14px;font-weight:400;margin-left:-8px;margin-right:-8px}.message_info .info_box .item_info .responseSwitcher>svg[data-v-9c433b71]{cursor:pointer}.message_info .info_box .item_info .responseSwitcher .disabled[data-v-9c433b71]{cursor:not-allowed;color:#c9cdd4}.message_info .info_box .item_info .rate_wrap[data-v-9c433b71]{position:relative;display:none;align-items:center;height:22px;gap:8px}.message_info .info_box .item_info .rate_wrap[data-v-9c433b71] .rate_box{background-color:#fff;border-radius:4px;padding:8px 16px;height:32px;font-size:12px;box-sizing:border-box}.message_info .info_box .item_info .rate_wrap[data-v-9c433b71] .rate_box .arco-rate{font-size:16px;min-height:16px}.message_info .info_box:hover .rate_wrap[data-v-9c433b71]{display:flex}.message_info .info_box .message_wrap[data-v-9c433b71]{width:100%}.message_info .info_box 
.answer_feedback[data-v-9c433b71]{position:relative;min-width:440px;max-width:min(700px,70%);display:flex;flex-direction:row;justify-content:flex-start;align-items:center;column-gap:32px;padding:12px 16px;box-sizing:border-box;color:var(--color-text-1);background-color:#fff;font-size:14px;font-weight:400;line-height:22px}.message_info .info_box .answer_feedback .icon_close[data-v-9c433b71]{position:absolute;top:5px;right:5px;font-size:12px;font-weight:300}.message_info .info_box .answer_feedback .feedback[data-v-9c433b71]{display:flex;align-items:center;column-gap:6px;cursor:pointer}.message_info .info_box .answer_feedback .feedback.active[data-v-9c433b71],.message_info .info_box .answer_feedback .feedback[data-v-9c433b71]:hover{color:#165dff}.message_info .right_pos[data-v-9c433b71]{justify-content:flex-end;align-items:flex-end}.message_container[data-v-6f899d6f]{width:100%;display:flex;flex-direction:column;align-items:flex-end}.message_container .user_message[data-v-6f899d6f]{min-width:440px;max-width:min(700px,70%);position:relative;background:#eaf3ff;padding:16px 64px 16px 16px;border-radius:4px;box-sizing:border-box;display:flex;flex-direction:column;row-gap:30px}.message_container .user_message .msg_wrap[data-v-6f899d6f]{display:flex;align-items:center}.message_container .user_message[data-v-6f899d6f] .msg_wrap .arco-textarea-wrapper{background-color:transparent;border-color:transparent;box-shadow:none;padding:0}.message_container .user_message[data-v-6f899d6f] .msg_wrap .arco-textarea-wrapper .arco-textarea{padding:0}.message_container .user_message .icon_more_wrap[data-v-6f899d6f]{position:absolute;right:16px;top:16px;cursor:pointer}.message_container .user_message .icon_more_wrap .icon_more[data-v-6f899d6f]{display:none}.message_container .user_message .btn_group[data-v-6f899d6f]{align-self:flex-end;display:flex;column-gap:8px;margin-right:-48px}.message_container:hover .icon_more_wrap .icon_more[data-v-6f899d6f]{display:block}.step_skill[data-v-17bf8a16]{display:flex;align-items:center;margin-left:54px;font-weight:400;color:#1d2129;font-size:14px;line-height:22px;column-gap:8px}.step_skill .trigger[data-v-17bf8a16]{color:#4e5969;margin-right:16px;display:flex;align-items:center;column-gap:8px}.step_skill .link_group[data-v-17bf8a16]{margin-left:16px;display:flex;column-gap:8px}.step_skill .link_group>a[data-v-17bf8a16]{display:flex;align-items:center;column-gap:4px}.step_item[data-v-690b1166]{width:100%;height:100%}.step_item+.step_item[data-v-690b1166]{margin-top:16px}.step_item .step[data-v-690b1166]{width:100%;display:flex;align-items:center;min-height:initial!important}.step_item .step .step_title_wrap[data-v-690b1166]{width:100%;display:flex;flex-direction:row;align-items:center;font-weight:400}.step_item .step .step_title_wrap .title[data-v-690b1166]{color:#1d2129;font-size:16px;line-height:24px}.step_item .step .step_title_wrap .icon_loading[data-v-690b1166]{display:inline-flex;align-items:center;margin:0 8px}.step_item .step .step_title_wrap .description[data-v-690b1166]{margin-left:auto;color:#4e5969;font-size:14px;line-height:22px;text-wrap:wrap;max-width:500px}.step_item[data-v-690b1166] .step .arco-steps-item-content{flex:1}.step_item[data-v-690b1166] .step .arco-steps-item-content .arco-steps-item-title{width:100%;height:100%}.step_item .step_info[data-v-690b1166]{height:100%;margin-top:4px;display:flex;flex-direction:column;row-gap:8px}.step_item .step_info .step_content_wrap[data-v-690b1166]{display:flex;column-gap:28px;min-height:50px}.step_item .step_info 
.step_content_wrap .divider[data-v-690b1166]{flex-shrink:0;height:inherit;background:#165dff}.step_item .step_info .step_content_wrap .divider.active[data-v-690b1166]{background:#e5e6eb}.step_item .step_info .step_content_wrap .step_content[data-v-690b1166]{width:calc(100% - 54px);padding-top:3.5px;box-sizing:border-box}pre code.hljs{display:block;overflow-x:auto;padding:1em}code.hljs{padding:3px 5px}.hljs{color:#abb2bf;background:#282c34}.hljs-comment,.hljs-quote{color:#5c6370;font-style:italic}.hljs-doctag,.hljs-formula,.hljs-keyword{color:#c678dd}.hljs-deletion,.hljs-name,.hljs-section,.hljs-selector-tag,.hljs-subst{color:#e06c75}.hljs-literal{color:#56b6c2}.hljs-addition,.hljs-attribute,.hljs-meta .hljs-string,.hljs-regexp,.hljs-string{color:#98c379}.hljs-attr,.hljs-number,.hljs-selector-attr,.hljs-selector-class,.hljs-selector-pseudo,.hljs-template-variable,.hljs-type,.hljs-variable{color:#d19a66}.hljs-bullet,.hljs-link,.hljs-meta,.hljs-selector-id,.hljs-symbol,.hljs-title{color:#61aeee}.hljs-built_in,.hljs-class .hljs-title,.hljs-title.class_{color:#e6c07b}.hljs-emphasis{font-style:italic}.hljs-strong{font-weight:700}.hljs-link{text-decoration:underline}.operation_wrap[data-v-ea9cc5f9]{width:100%;position:relative}.operation_wrap .operate_icon[data-v-ea9cc5f9]{display:inline-block;position:absolute;left:calc(100% + 8px);top:0px;width:32px;height:32px;text-align:center;line-height:32px;box-sizing:border-box;border-radius:4px;background-color:#fff;color:#4e5969;cursor:pointer;visibility:hidden;transition:all .2s ease}.operation_wrap .operate_icon[data-v-ea9cc5f9]:hover{background-color:var(--color-fill-2)}.operation_wrap .operate_icon:hover svg[data-v-ea9cc5f9]{transform:scale(1.1)}.operation_wrap:hover .operate_icon[data-v-ea9cc5f9]{visibility:visible}.code_container[data-v-4f00a864]{display:flex;flex-direction:column;border-radius:6px;overflow:hidden}.code_container .tool_wrap[data-v-4f00a864]{font-size:10px;line-height:24px;display:flex;color:#d9d9e3;background:rgb(52,53,65);padding:8px 16px}.code_container .tool_wrap .copy_btn[data-v-4f00a864]{font-size:12px;color:#d9d9e3;background-color:transparent;margin-left:auto;cursor:pointer;outline:none;border:none;display:flex;padding:0;gap:6px;align-items:center}.code_container .tool_wrap .copy_btn .copy_icon[data-v-4f00a864]{width:16px;height:16px;display:block}.markdown_wrap{padding:16px;border-radius:4px;box-sizing:border-box;font-size:14px;font-weight:400;color:#1d2129;line-height:22px;background-color:#fff}.markdown_wrap>p:first-child{margin-top:0}.markdown_wrap>p:last-child{margin-bottom:0}.markdown_wrap pre{margin:0;padding:0}.markdown_wrap .hljs_code{width:100%;box-sizing:border-box;padding:15px;overflow-x:auto}.chatHistoryImageItem{background-color:#fff;display:inline-flex;flex-wrap:wrap;max-width:324px;padding:8px;gap:4px}.chatHistoryImageItem .imageItem{width:160px;height:160px;position:relative;display:flex;justify-content:center;align-items:center}.chatHistoryImageItem .imageItem .n-image{height:100%;width:100%}.chatHistoryImageItem .imageItem img{width:100%;height:100%}.chatHistoryImageItem .imageItem .maxCover{height:100%;width:100%;position:absolute;left:0;top:0;background:rgba(0,0,0,.4);display:flex;justify-content:center;align-items:center}.chatHistoryAudioItem{width:574px;display:inline-block;padding:4px 16px;background-color:#fff}.chatHistoryAudioItem .audio{display:flex;align-items:center;gap:16px;color:#4e5969}.chatHistoryAudioItem .audio .control{font-size:32px;color:#165dff;cursor:pointer}.chatHistoryAudioItem 
audio{display:none}.error_msg[data-v-84a7773a]{border:1px solid #f53f3f;background-color:#f53f3f1a;padding:16px;border-radius:4px;font-size:14px;font-weight:400;color:#1d2129;line-height:22px;box-sizing:border-box;white-space:normal;overflow-wrap:break-word}.agent_message_wrap[data-v-898355de]{display:flex;flex-direction:column;row-gap:8px}.steps_container[data-v-77479ba1]{width:100%;margin-top:12px}.steps_container .steps_wrap[data-v-77479ba1]{min-width:440px;max-width:min(700px,70%)}.steps_container .steps_wrap[data-v-77479ba1] .arco-steps-icon{background-color:#fff0!important}.status_btn[data-v-240aae5d]{position:absolute;top:0;right:0;transform:translateY(calc(-100% - 16px))}.status_btn .error_msg[data-v-240aae5d]{font-size:14px;font-weight:400;line-height:22px;color:#86909c}.chatRoomWrapper[data-v-ae197aef]{flex:1;height:100%;background-color:#f8faff;padding:40px 100px;box-sizing:border-box;display:flex;flex-direction:column;align-items:center;font-family:Helvetica Neue;overflow:hidden}.chatRoomWrapper .visionWrapper[data-v-ae197aef]{width:100%;flex:1;display:flex;overflow:hidden}.chatRoomWrapper .visionWrapper .emptyWrapper[data-v-ae197aef]{padding:53px 0;box-sizing:border-box;display:flex;flex-direction:column;align-items:center}.chatRoomWrapper .visionWrapper .emptyWrapper .descWrapper[data-v-ae197aef]{flex:1;width:100%;display:flex;flex-direction:column;align-items:center;justify-content:center;gap:16px}.chatRoomWrapper .visionWrapper .emptyWrapper .descWrapper .title[data-v-ae197aef]{font-size:40px;font-weight:500;line-height:49px;letter-spacing:0em;text-align:center;color:#86909c}.chatRoomWrapper .visionWrapper .emptyWrapper .descWrapper .desc[data-v-ae197aef]{text-align:center;font-size:20px;line-height:32px}.chatRoomWrapper .visionWrapper .emptyWrapper .descWrapper .desc .text2[data-v-ae197aef]{color:#86909c;font-weight:500}.chatRoomWrapper .visionWrapper .emptyWrapper .descWrapper .desc .text3[data-v-ae197aef]{font-weight:400;color:#c9cdd4}.chatRoomWrapper .visionWrapper .emptyWrapper .actionWrapper[data-v-ae197aef]{width:900px;height:136px;display:flex;flex-direction:row;flex-wrap:wrap;gap:20px;align-items:center;justify-content:center}.chatRoomWrapper .visionWrapper .emptyWrapper .actionWrapper .button[data-v-ae197aef]{width:440px;height:58px;padding:18px 24px;box-sizing:border-box;border-radius:10px;gap:10px;background:linear-gradient(180deg,#ffffff 0%,#f4f4f4 100%);border:1.32px solid #e5e6eb;cursor:pointer;transition:all .3s;display:flex;align-items:center;justify-content:center}.chatRoomWrapper .visionWrapper .emptyWrapper .actionWrapper .button span[data-v-ae197aef]{color:#1d2129;font-size:16px;font-style:normal;font-weight:500;line-height:normal;word-wrap:normal;word-break:keep-all;white-space:nowrap}.chatRoomWrapper .visionWrapper .emptyWrapper .actionWrapper .button[data-v-ae197aef]:hover{border-radius:10px;background:linear-gradient(0deg,#2c67f7 0%,#5486ff 100%)!important}.chatRoomWrapper .visionWrapper .emptyWrapper .actionWrapper .button:hover span[data-v-ae197aef]{color:#fff!important}.chatRoomWrapper .visionWrapper .emptyWrapper .actionWrapper .button[data-v-ae197aef]:active{transform:scale(.98)}.chatRoomWrapper .visionWrapper .chatWrapper[data-v-ae197aef]{width:100%;flex:1;display:flex;overflow:hidden;position:relative}.chatRoomWrapper .visionWrapper .chatWrapper .msg_history_area[data-v-ae197aef]{flex:1;display:flex;flex-direction:column;overflow:scroll;position:relative;padding-top:10px}.chatRoomWrapper .visionWrapper .chatWrapper .msg_history_area 
.scroll_wrap[data-v-ae197aef]{width:100%;height:100%;overflow-y:auto}.chatRoomWrapper .visionWrapper .chatWrapper .msg_history_area .msg_text[data-v-ae197aef]{min-width:440px;max-width:min(700px,70%);padding:16px;border-radius:4px;background-color:#fff;font-size:14px;font-weight:400;color:#1d2129;line-height:22px;box-sizing:border-box;white-space:normal;overflow-wrap:break-word}.chatRoomWrapper .visionWrapper .chatWrapper .msg_history_area .bottom_trigger[data-v-ae197aef]{width:200px;position:absolute;bottom:0px;left:calc(50% - 150px)}.chatRoomWrapper .inputWrapper[data-v-ae197aef]{width:100%;height:59px}.chatRoomWrapper .inputWrapper .inputInner[data-v-ae197aef]{display:flex;width:100%;height:100%;align-items:center;gap:10px;box-sizing:border-box;padding:12px;border-radius:10px;border:2px solid #e1e3e8;background:#fff;box-shadow:2.6px 2.6px 8px #00000014 inset}.chatRoomWrapper .inputWrapper .inputInner input[data-v-ae197aef]{flex:1;height:100%;color:#1d2129;font-size:16px;font-style:normal;font-weight:400;line-height:0%;border:none;outline:none}.chatRoomWrapper .inputWrapper .inputInner input[data-v-ae197aef]:is(:disabled){background:none;cursor:not-allowed}.chatRoomWrapper .inputWrapper .inputInner input[data-v-ae197aef]:is(:disabled)::placeholder{color:#86909c}.chatRoomWrapper .inputWrapper .inputInner .sendBtn[data-v-ae197aef]{width:42px;height:42px;flex-shrink:0;background:#165dff;border-radius:8px;margin-right:-4px;display:flex;align-items:center;justify-content:center;user-select:none;cursor:pointer;transition:transform .3s;border:none}.chatRoomWrapper .inputWrapper .inputInner .sendBtn[data-v-ae197aef]:is(:not(:disabled)):hover{transform:scale(1.1)}.chatRoomWrapper .inputWrapper .inputInner .sendBtn[data-v-ae197aef]:is(:not(:disabled)):active{transform:scale(.98)}.chatRoomWrapper .emptyWrapper[data-v-ae197aef]{width:100%;height:100%;display:flex}.chatWrapper[data-v-7d0d8d24]{width:100%;height:100vh;display:flex;flex-direction:row;overflow:hidden}.hfHomeWrapper[data-v-7444cb54]{width:100%;height:100vh} diff --git a/spaces/diacanFperku/AutoGPT/Download Extra Qualityannefrankdiarymalayalamedition.md b/spaces/diacanFperku/AutoGPT/Download Extra Qualityannefrankdiarymalayalamedition.md deleted file mode 100644 index 719ee8c66dd9f61561b60f489237c8d2bbba8ce6..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Download Extra Qualityannefrankdiarymalayalamedition.md +++ /dev/null @@ -1,6 +0,0 @@ -

      downloadannefrankdiarymalayalamedition


      Download: https://gohhs.com/2uFTWi



      -
      -Downloadannefrankdiarymalayalamedition DOWNLOAD: https://geags.com/1ic9rj Download Anne Frank Diary Malayalam Edition ...
      -
      -
      -

      diff --git "a/spaces/diacanFperku/AutoGPT/KMS Nano V.16.1 Automatic\302\240Activator.md" "b/spaces/diacanFperku/AutoGPT/KMS Nano V.16.1 Automatic\302\240Activator.md" deleted file mode 100644 index 9d536706e912999a607550714911f1ceaab71bdf..0000000000000000000000000000000000000000 --- "a/spaces/diacanFperku/AutoGPT/KMS Nano V.16.1 Automatic\302\240Activator.md" +++ /dev/null @@ -1,12 +0,0 @@ - -

      KMS Nano activator is a software that is used to activate Windows. It has been designed to ensure that your operating system is activated to its full potential, and it is simple and easy to use. If you're looking to activate Windows 7, Windows 8, Windows 8.1 or Windows 10, then you're in the right place.

      -

      KMS Nano V.16.1 Automatic Activator


      Download 🌟 https://gohhs.com/2uFUbh



      -

      KMS Nano activator is a great piece of software used to activate the Windows operating system. It is very easy to use, and you can activate the Windows 7, Windows 8, Windows 8.1 and Windows 10 operating systems with it.

      -

      KMS Nano activator is a software that allows you to activate your computer. It is simple and easy to use, and you can activate the Windows 7, Windows 8, Windows 8.1 and Windows 10 operating systems with it.

      -

      KMS Nano activator is a very good software which is used to activate the Windows operating system. It is simple and easy to use, and you can activate Windows 7, Windows 8, Windows 8.1 and Windows 10 with it.

      -

      -

      KMS Nano activator is a software which allows you to activate your computer. It is very simple and easy to use, and you can activate the Windows 7, Windows 8, Windows 8.1 and Windows 10 operating systems with it.

      -

      KMS Nano activator is a software that allows you to activate your computer. It is very simple and easy to use, and you can activate the Windows 7, Windows 8, Windows 8.1 and Windows 10 operating systems with it.

      -

      This is a cool device that looks great and is small enough that I can carry it around with me in my bag. The design reminds me of the Super Nunchuck, which was a great device, and the design of the Aegis Nano is very similar. I like that they even included a little LED that lets you know when the battery is fully charged.

      -
      -
      \ No newline at end of file diff --git a/spaces/diacanFperku/AutoGPT/Mastizaade Full Movie Fix Download 720p Khatrimaza.md b/spaces/diacanFperku/AutoGPT/Mastizaade Full Movie Fix Download 720p Khatrimaza.md deleted file mode 100644 index 561e05ded8afea187b436f537c9db22726bf0b12..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Mastizaade Full Movie Fix Download 720p Khatrimaza.md +++ /dev/null @@ -1,115 +0,0 @@ - -

      Mastizaade Full Movie Download 720p Khatrimaza

      - -

      If you are looking for a comedy movie that will make you laugh out loud, then you should watch Mastizaade. Mastizaade is a 2016 Bollywood movie starring Sunny Leone, Tusshar Kapoor and Vir Das. The movie is about two sex addicts who fall in love with twin sisters who run a sex addiction clinic. The movie is full of hilarious scenes and dialogues that will tickle your funny bone.

      - -

      Mastizaade was released on 29 January 2016 and received mixed reviews from critics and audiences. The movie was a moderate success at the box office, earning around 38 crores on a budget of 30 crores. The movie was also criticized for its vulgar and offensive content by some sections of society.

      -

      Mastizaade Full Movie Download 720p Khatrimaza


      Download: https://gohhs.com/2uFVoG



      - -

      However, if you are a fan of adult comedy movies, then you might enjoy Mastizaade. The movie has some catchy songs and Sunny Leone's double role as Laila and Lily Lele. The movie also has some cameo appearances by Riteish Deshmukh, Shaad Randhawa, Suresh Menon and Asrani.

      - -

      How to Download Mastizaade Full Movie in 720p Quality?

      - -

      If you want to download Mastizaade full movie in 720p quality, then you have come to the right place. In this article, we will tell you how to download Mastizaade full movie in 720p quality from Khatrimaza. Khatrimaza is a popular website that provides free movies download in various formats and qualities.

      - -

      Khatrimaza has a huge collection of Bollywood, Hollywood, South Indian, Punjabi and other regional movies. You can find almost any movie on Khatrimaza and download it for free. However, you should be aware that Khatrimaza is an illegal website that uploads pirated copies of movies without the permission of the makers. Downloading movies from Khatrimaza can land you in legal trouble and also expose your device to malware and viruses.

      - -

      Therefore, we do not recommend downloading Mastizaade full movie in 720p quality from Khatrimaza. Instead, you should watch Mastizaade legally on streaming platforms like Netflix, Amazon Prime Video or Hotstar. You can also buy or rent Mastizaade on YouTube, Google Play Movies or iTunes.

      - -

      Disclaimer

      - -

      We do not support or promote piracy in any form. This article is for informational purposes only and does not intend to encourage anyone to download Mastizaade full movie in 720p quality from Khatrimaza or any other illegal website. We respect the hard work and creativity of the filmmakers and urge our readers to watch movies legally and ethically.

      -

      What are the Benefits of Downloading Mastizaade Full Movie in 720p Quality?

      - -

      Downloading Mastizaade full movie in 720p quality has many benefits. First of all, you can enjoy the movie in high definition and clear sound. You can also watch the movie offline without any buffering or interruptions. You can also save your data and bandwidth by downloading the movie once and watching it multiple times. You can also share the movie with your friends and family and have a fun time together.

      -

      - -

      Downloading Mastizaade full movie in 720p quality also allows you to watch the movie on any device of your choice. You can watch the movie on your laptop, desktop, smartphone, tablet or smart TV. You can also transfer the movie to a pen drive or a hard disk and watch it on a bigger screen. You can also adjust the brightness, contrast, volume and subtitles of the movie according to your preference.

      - -
      How to Download Mastizaade Full Movie in 720p Quality from Khatrimaza?
      - -

      Downloading Mastizaade full movie in 720p quality from Khatrimaza is very easy and simple. You just need to follow these steps:

      - -
      -
      1. Go to Khatrimaza.zone website and search for Mastizaade full movie in 720p quality.
      2. Select the movie from the search results and click on the download button.
      3. Choose a download server from the list and wait for a few seconds.
      4. The download will start automatically and you can see the progress on your screen.
      5. Once the download is complete, you can find the movie file in your download folder.
      6. Enjoy watching Mastizaade full movie in 720p quality on your device.
      - -

      Note: You may need to use a VPN service or a proxy site to access Khatrimaza website as it may be blocked by your ISP or government. You may also need to disable your antivirus or firewall software as they may interfere with the download process.

      - -
      Conclusion
      - -

      Mastizaade is a hilarious comedy movie that will make you laugh till you drop. The movie has a star cast of Sunny Leone, Tusshar Kapoor and Vir Das who deliver amazing performances. The movie has some catchy songs and Sunny Leone's double role as Laila and Lily Lele. The movie also has some cameo appearances by Riteish Deshmukh, Shaad Randhawa, Suresh Menon and Asrani.

      - -

      If you want to download Mastizaade full movie in 720p quality, then you can use Khatrimaza website. Khatrimaza is a popular website that provides free movies download in various formats and qualities. However, you should be careful while downloading movies from Khatrimaza as it is an illegal website that uploads pirated copies of movies without the permission of the makers. Downloading movies from Khatrimaza can land you in legal trouble and also expose your device to malware and viruses.

      - -

      Therefore, we suggest you watch Mastizaade legally on streaming platforms like Netflix, Amazon Prime Video or Hotstar. You can also buy or rent Mastizaade on YouTube, Google Play Movies or iTunes.

      - -

      We hope this article was helpful for you. If you have any questions or feedback, please let us know in the comments section below. Thank you for reading!

      -What are the Features of Mastizaade Full Movie in 720p Quality? - -

      Mastizaade full movie in 720p quality has many features that make it worth watching. Some of the features are:

      - -
      -
      • The movie has a crisp and clear picture quality that enhances the visual appeal of the movie.
      • The movie has a good sound quality that matches the mood and tone of the movie.
      • The movie has a fast and smooth download speed that saves your time and hassle.
      • The movie has a small file size that does not occupy much space on your device.
      • The movie has a compatible format that can be played on any device of your choice.
      - -

      These features make Mastizaade full movie in 720p quality a great choice for downloading and watching.

      - -What are the Reviews of Mastizaade Full Movie in 720p Quality? - -

      Mastizaade full movie in 720p quality has received mixed reviews from critics and audiences. Some of the reviews are:

      - -
      "Mastizaade is a sex comedy that tries too hard to be funny and ends up being crass and vulgar. The movie has no plot, no logic and no sense of humor. The movie is a waste of time and money." - Times of India
      - -
      "Mastizaade is a hilarious comedy that will make you laugh till you drop. The movie has a star cast of Sunny Leone, Tusshar Kapoor and Vir Das who deliver amazing performances. The movie has some catchy songs and Sunny Leone's double role as Laila and Lily Lele. The movie also has some cameo appearances by Riteish Deshmukh, Shaad Randhawa, Suresh Menon and Asrani." - Bollywood Hungama
      - -
      "Mastizaade is a mediocre comedy that fails to impress. The movie has a weak script, poor direction and lame jokes. The movie relies on cheap thrills and vulgar dialogues to entertain the audience. The movie is not for the faint-hearted or the family audience." - Hindustan Times
      - -

      These reviews show that opinions about Mastizaade full movie in 720p quality are divided. You can watch the movie and decide for yourself whether you like it or not.

      -What are the Alternatives of Khatrimaza for Downloading Mastizaade Full Movie in 720p Quality? - -

      Khatrimaza is not the only website that offers free movies download in various formats and qualities. There are many other websites that provide similar services. However, you should be careful while using these websites as they are also illegal and unsafe. Some of the alternatives of Khatrimaza for downloading Mastizaade full movie in 720p quality are:

      - -
      -
      • Filmywap: Filmywap is a popular website that provides free movies download in different languages and genres. You can find Mastizaade full movie in 720p quality on Filmywap and download it easily.
      • Filmyzilla: Filmyzilla is another website that offers free movies download in various formats and qualities. You can download Mastizaade full movie in 720p quality from Filmyzilla and enjoy watching it.
      • Bolly4u: Bolly4u is a website that specializes in Bollywood movies download. You can find Mastizaade full movie in 720p quality on Bolly4u and download it for free.
      • Worldfree4u: Worldfree4u is a website that provides free movies download in 300mb, 480p, 720p and 1080p qualities. You can download Mastizaade full movie in 720p quality from Worldfree4u and watch it offline.
      • MoviesKiDuniya: MoviesKiDuniya is a website that provides free movies download in Hindi, English, Tamil, Telugu and other languages. You can download Mastizaade full movie in 720p quality from MoviesKiDuniya and have a fun time.
      - -

      These are some of the alternatives of Khatrimaza for downloading Mastizaade full movie in 720p quality. However, we do not recommend using these websites as they are illegal and unsafe. You should always watch movies legally and ethically on streaming platforms or official websites.

      - -What are the Tips for Downloading Mastizaade Full Movie in 720p Quality Safely? - -

      If you still want to download Mastizaade full movie in 720p quality from Khatrimaza or any other illegal website, then you should follow some tips to avoid any trouble or risk. Some of the tips are:

      - -
      -
      1. Use a VPN service or a proxy site to access the website as it may be blocked by your ISP or government.
      2. Disable your antivirus or firewall software as they may interfere with the download process or delete the movie file.
      3. Do not click on any pop-ups or ads that may appear on the website as they may redirect you to malicious sites or install malware on your device.
      4. Do not provide any personal or financial information on the website as it may be used for fraud or identity theft.
      5. Delete the movie file after watching it and clear your browser history and cache to erase any traces of your activity.
      - -

      These are some of the tips for downloading Mastizaade full movie in 720p quality safely from Khatrimaza or any other illegal website. However, we do not endorse or support piracy in any form. We advise you to watch movies legally and ethically on streaming platforms or official websites.

      -Conclusion - -

      In this article, we have discussed everything you need to know about Mastizaade full movie download in 720p quality. We have told you what the movie is about, how to download it legally and illegally, what are the benefits, features, reviews and alternatives of downloading it, and what are the tips for downloading it safely. We hope this article was helpful for you.

      - -

      However, we would like to remind you that downloading Mastizaade full movie in 720p quality from Khatrimaza or any other illegal website is a crime and a violation of the copyright law. You may face legal action and penalties for doing so. You may also harm your device and data by exposing them to malware and viruses.

      - -

      Therefore, we strongly recommend you to watch Mastizaade legally on streaming platforms like Netflix, Amazon Prime Video or Hotstar. You can also buy or rent Mastizaade on YouTube, Google Play Movies or iTunes. This way, you can support the filmmakers and enjoy the movie in high quality and safety.

      - -

      Thank you for reading this article. If you have any questions or feedback, please let us know in the comments section below. Have a great day!

      -
      -
      \ No newline at end of file diff --git a/spaces/diacanFperku/AutoGPT/Oracle Primavera P6 V7 Sp3 Full Torrent.md b/spaces/diacanFperku/AutoGPT/Oracle Primavera P6 V7 Sp3 Full Torrent.md deleted file mode 100644 index 2c32267445cb4a3cae06a10ebdffb1e3f5264547..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Oracle Primavera P6 V7 Sp3 Full Torrent.md +++ /dev/null @@ -1,24 +0,0 @@ -
      -

      How to Download and Install Oracle Primavera P6 v7 SP3 for Free

      -

      Oracle Primavera P6 is a powerful project management software that helps you plan, schedule, and control complex projects. It is widely used in various industries such as construction, engineering, oil and gas, aerospace, and more. However, Oracle Primavera P6 is not cheap and requires a license to use. If you want to try it out for free, you can download a torrent file that contains the full version of Oracle Primavera P6 v7 SP3.

      -

      In this article, we will show you how to download and install Oracle Primavera P6 v7 SP3 full torrent in a few simple steps. Please note that this is for educational purposes only and we do not condone piracy or illegal use of software. You should always purchase a legitimate license from Oracle if you want to use Oracle Primavera P6 for your projects.

      -

      oracle primavera p6 v7 sp3 full torrent


      Download File: https://gohhs.com/2uFVKJ



      -

      Step 1: Download the Torrent File

      -

      The first step is to download the torrent file that contains Oracle Primavera P6 v7 SP3 full version. You can find it on various torrent sites such as SolidTorrents[^1^], CloudTorrents[^2^], or Pastebin[^4^]. You will need a torrent client such as uTorrent or BitTorrent to download the file. Once you have downloaded the torrent file, open it with your torrent client and start downloading the content.

      -

      Step 2: Extract the Content

      -

      The second step is to extract the downloaded content. You will need software such as WinRAR or 7-Zip to extract the files. The download should contain a folder named "Oracle Primavera P6 v7" with several files inside. Extract the folder to your desired location on your computer.

      -

      Step 3: Install Oracle Primavera P6 v7 SP3

      -

      The third step is to install Oracle Primavera P6 v7 SP3 on your computer. To do this, follow these instructions:

      -
      -
      • Open the folder "Oracle Primavera P6 v7" and double-click on the file "Oracle Primavera P6 v7.exe". This will launch the installation wizard.
      • Follow the on-screen instructions and accept the terms and conditions. Choose the destination folder where you want to install Oracle Primavera P6 v7 SP3.
      • When prompted, enter the serial number that is provided in the file "Info.nfo" or "Readme!.txt". You can open these files with Notepad or any text editor.
      • Wait for the installation to complete. You may need to restart your computer after the installation.
      • Congratulations! You have successfully installed Oracle Primavera P6 v7 SP3 on your computer.
      -

      Step 4: Enjoy Oracle Primavera P6 v7 SP3

      -

      The final step is to enjoy Oracle Primavera P6 v7 SP3 for free. You can launch the software from your desktop shortcut or start menu. You can use it to create and manage your projects, assign resources, track progress, generate reports, and more. However, please remember that this is an illegal copy of Oracle Primavera P6 v7 SP3 and you should not use it for commercial purposes or share it with others. If you like the software and want to use it legally, you should buy a license from Oracle or their authorized resellers.

      -

      We hope this article was helpful and informative. If you have any questions or comments, please let us know in the comments section below.

      -

      d5da3c52bf
      -
      -
      \ No newline at end of file diff --git a/spaces/diacanFperku/AutoGPT/Portable-Wondertouch-Particle-Illusion-V3041-Emitters-Pro-TT.md b/spaces/diacanFperku/AutoGPT/Portable-Wondertouch-Particle-Illusion-V3041-Emitters-Pro-TT.md deleted file mode 100644 index 9b8dfdbe6ca46f407085b4bbc91b2fc79fbaf4e7..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Portable-Wondertouch-Particle-Illusion-V3041-Emitters-Pro-TT.md +++ /dev/null @@ -1,88 +0,0 @@ -## Portable Wondertouch Particle Illusion v3.0.4.1 Emitters Pro T-T - - - - - - ![Portable Wondertouch Particle Illusion V3.0.4.1 Emitters Pro T-T](https://nerdsmagazine.com/wp-content/uploads/2012/09/ABBYYFineReaderOnline.jpg) - - - - - -**CLICK HERE >>> [https://urlca.com/2tyxog](https://urlca.com/2tyxog)** - - - - - - - - - - - - - -# Portable Wondertouch Particle Illusion v3.0.4.1 Emitters Pro T-T: A Review - - - -Portable Wondertouch Particle Illusion v3.0.4.1 Emitters Pro T-T is a software that allows you to create stunning visual effects with particle systems. It is a portable version of the original Wondertouch Particle Illusion, which means you can run it from a USB drive or any other removable media without installing it on your computer. - - - -Particle Illusion is a powerful and easy-to-use tool that lets you create realistic fire, smoke, explosions, sparks, rain, snow, and more with just a few clicks. You can also customize the appearance and behavior of the particles with various parameters and presets. You can use Particle Illusion as a standalone application or as a plug-in for Adobe After Effects, Premiere Pro, Final Cut Pro, and other video editing software. - - - -One of the main features of Portable Wondertouch Particle Illusion v3.0.4.1 Emitters Pro T-T is that it includes six of the Pro Emitter libraries that were previously sold separately by Wondertouch[^1^]. These libraries contain hundreds of high-quality emitters that were created by professional artists and showcase what Particle Illusion can really do. The libraries are: Abstract, Eclectic 01, Logo and Text, Graphics Elements 2, Artistic Backgrounds, and Pyro 01. - - - -These emitters cover a wide range of styles and effects, from abstract shapes and patterns to realistic fire and smoke. They can be used for creating titles, logos, backgrounds, transitions, and more. You can also modify them to suit your needs or create your own emitters from scratch. - - - -Portable Wondertouch Particle Illusion v3.0.4.1 Emitters Pro T-T is a great software for anyone who wants to add some magic to their videos or animations. It is fast, easy, and fun to use, and it produces amazing results. It is also portable, which means you can take it with you wherever you go and use it on any computer without installation or registration. - - - -If you are interested in Portable Wondertouch Particle Illusion v3.0.4.1 Emitters Pro T-T, you can download it from [here](https://portableapps.com/node/24124)[^3^]. You will need a USB drive with at least 2 GB of free space to run it. - - - -In this article, we will show you how to use Portable Wondertouch Particle Illusion v3.0.4.1 Emitters Pro T-T as a plug-in for Adobe After Effects. This way, you can integrate the particle effects with your video footage and apply other effects and adjustments. - - - -To use Particle Illusion as a plug-in, you need to have Adobe After Effects installed on your computer. You also need to copy the Particle Illusion folder from your USB drive to your hard drive. 
Then, follow these steps: - - - -1. Open Adobe After Effects and create a new project or open an existing one. - -2. Import the video footage that you want to add particle effects to and drag it to the timeline. - -3. Create a new solid layer by going to Layer > New > Solid. Choose any color and size that matches your composition. - -4. Go to Effect > Wondertouch > particleIllusion. This will open the Particle Illusion interface in a separate window. - -5. In the Particle Illusion window, you can choose an emitter from the library or load a custom one. You can also adjust the position, scale, rotation, and other parameters of the emitter. - -6. Click on the Preview button to see how the particle effect looks on your video. You can also scrub the timeline to see how it animates. - -7. When you are happy with the result, click on OK to apply the effect to the solid layer. - -8. Back in After Effects, you can change the blending mode and opacity of the solid layer to blend it with your video. You can also add masks, filters, or other effects to enhance the look. - - - -That's it! You have successfully added particle effects to your video using Portable Wondertouch Particle Illusion v3.0.4.1 Emitters Pro T-T as a plug-in for Adobe After Effects. You can repeat the process for any other video clips or layers that you want to add particle effects to. - - dfd1c89656 - - - - - diff --git a/spaces/digitalxingtong/Azuma-Bert-VITS2/text/__init__.py b/spaces/digitalxingtong/Azuma-Bert-VITS2/text/__init__.py deleted file mode 100644 index 7566bf351ca9b95af9cdc6d729557a9da083800f..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Azuma-Bert-VITS2/text/__init__.py +++ /dev/null @@ -1,28 +0,0 @@ -from text.symbols import * - - -_symbol_to_id = {s: i for i, s in enumerate(symbols)} - -def cleaned_text_to_sequence(cleaned_text, tones, language): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. 
- Args: - text: string to convert to a sequence - Returns: - List of integers corresponding to the symbols in the text - ''' - phones = [_symbol_to_id[symbol] for symbol in cleaned_text] - tone_start = language_tone_start_map[language] - tones = [i + tone_start for i in tones] - lang_id = language_id_map[language] - lang_ids = [lang_id for i in phones] - return phones, tones, lang_ids - -def get_bert(norm_text, word2ph, language): - from .chinese_bert import get_bert_feature as zh_bert - from .english_bert_mock import get_bert_feature as en_bert - lang_bert_func_map = { - 'ZH': zh_bert, - 'EN': en_bert - } - bert = lang_bert_func_map[language](norm_text, word2ph) - return bert diff --git a/spaces/digitalxingtong/Jiaran-Bert-VITS2/train_ms.py b/spaces/digitalxingtong/Jiaran-Bert-VITS2/train_ms.py deleted file mode 100644 index 5d109003d40497ea4493e7c73f47c1eb7370a81e..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Jiaran-Bert-VITS2/train_ms.py +++ /dev/null @@ -1,402 +0,0 @@ -import os -import json -import argparse -import itertools -import math -import torch -import shutil -from torch import nn, optim -from torch.nn import functional as F -from torch.utils.data import DataLoader -from torch.utils.tensorboard import SummaryWriter -import torch.multiprocessing as mp -import torch.distributed as dist -from torch.nn.parallel import DistributedDataParallel as DDP -from torch.cuda.amp import autocast, GradScaler -from tqdm import tqdm -import logging -logging.getLogger('numba').setLevel(logging.WARNING) -import commons -import utils -from data_utils import ( - TextAudioSpeakerLoader, - TextAudioSpeakerCollate, - DistributedBucketSampler -) -from models import ( - SynthesizerTrn, - MultiPeriodDiscriminator, - DurationDiscriminator, -) -from losses import ( - generator_loss, - discriminator_loss, - feature_loss, - kl_loss -) -from mel_processing import mel_spectrogram_torch, spec_to_mel_torch -from text.symbols import symbols - -torch.backends.cudnn.benchmark = True -torch.backends.cuda.matmul.allow_tf32 = True -torch.backends.cudnn.allow_tf32 = True -torch.set_float32_matmul_precision('medium') -global_step = 0 - - -def main(): - """Assume Single Node Multi GPUs Training Only""" - assert torch.cuda.is_available(), "CPU training is not allowed." 
- - n_gpus = torch.cuda.device_count() - os.environ['MASTER_ADDR'] = 'localhost' - os.environ['MASTER_PORT'] = '65280' - - hps = utils.get_hparams() - if not hps.cont: - shutil.copy('./pretrained_models/D_0.pth','./logs/OUTPUT_MODEL/D_0.pth') - shutil.copy('./pretrained_models/G_0.pth','./logs/OUTPUT_MODEL/G_0.pth') - shutil.copy('./pretrained_models/DUR_0.pth','./logs/OUTPUT_MODEL/DUR_0.pth') - mp.spawn(run, nprocs=n_gpus, args=(n_gpus, hps,)) - - -def run(rank, n_gpus, hps): - global global_step - if rank == 0: - logger = utils.get_logger(hps.model_dir) - logger.info(hps) - utils.check_git_hash(hps.model_dir) - writer = SummaryWriter(log_dir=hps.model_dir) - writer_eval = SummaryWriter(log_dir=os.path.join(hps.model_dir, "eval")) - - dist.init_process_group(backend= 'gloo' if os.name == 'nt' else 'nccl', init_method='env://', world_size=n_gpus, rank=rank) - torch.manual_seed(hps.train.seed) - torch.cuda.set_device(rank) - - train_dataset = TextAudioSpeakerLoader(hps.data.training_files, hps.data) - train_sampler = DistributedBucketSampler( - train_dataset, - hps.train.batch_size, - [32, 300, 400, 500, 600, 700, 800, 900, 1000], - num_replicas=n_gpus, - rank=rank, - shuffle=True) - collate_fn = TextAudioSpeakerCollate() - train_loader = DataLoader(train_dataset, num_workers=2, shuffle=False, pin_memory=True, - collate_fn=collate_fn, batch_sampler=train_sampler) - if rank == 0: - eval_dataset = TextAudioSpeakerLoader(hps.data.validation_files, hps.data) - eval_loader = DataLoader(eval_dataset, num_workers=0, shuffle=False, - batch_size=1, pin_memory=True, - drop_last=False, collate_fn=collate_fn) - if "use_noise_scaled_mas" in hps.model.keys() and hps.model.use_noise_scaled_mas == True: - print("Using noise scaled MAS for VITS2") - use_noise_scaled_mas = True - mas_noise_scale_initial = 0.01 - noise_scale_delta = 2e-6 - else: - print("Using normal MAS for VITS1") - use_noise_scaled_mas = False - mas_noise_scale_initial = 0.0 - noise_scale_delta = 0.0 - if "use_duration_discriminator" in hps.model.keys() and hps.model.use_duration_discriminator == True: - print("Using duration discriminator for VITS2") - use_duration_discriminator = True - net_dur_disc = DurationDiscriminator( - hps.model.hidden_channels, - hps.model.hidden_channels, - 3, - 0.1, - gin_channels=hps.model.gin_channels if hps.data.n_speakers != 0 else 0, - ).cuda(rank) - if "use_spk_conditioned_encoder" in hps.model.keys() and hps.model.use_spk_conditioned_encoder == True: - if hps.data.n_speakers == 0: - raise ValueError("n_speakers must be > 0 when using spk conditioned encoder to train multi-speaker model") - use_spk_conditioned_encoder = True - else: - print("Using normal encoder for VITS1") - use_spk_conditioned_encoder = False - - net_g = SynthesizerTrn( - len(symbols), - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - n_speakers=hps.data.n_speakers, - mas_noise_scale_initial = mas_noise_scale_initial, - noise_scale_delta = noise_scale_delta, - **hps.model).cuda(rank) - - freeze_enc = getattr(hps.model, "freeze_enc", False) - if freeze_enc: - print("freeze encoder !!!") - for param in net_g.enc_p.parameters(): - param.requires_grad = False - - net_d = MultiPeriodDiscriminator(hps.model.use_spectral_norm).cuda(rank) - optim_g = torch.optim.AdamW( - filter(lambda p: p.requires_grad, net_g.parameters()), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps) - optim_d = torch.optim.AdamW( - net_d.parameters(), - hps.train.learning_rate, - betas=hps.train.betas, - 
eps=hps.train.eps) - if net_dur_disc is not None: - optim_dur_disc = torch.optim.AdamW( - net_dur_disc.parameters(), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps) - else: - optim_dur_disc = None - net_g = DDP(net_g, device_ids=[rank], find_unused_parameters=True) - net_d = DDP(net_d, device_ids=[rank], find_unused_parameters=True) - if net_dur_disc is not None: - net_dur_disc = DDP(net_dur_disc, device_ids=[rank], find_unused_parameters=True) - - pretrain_dir = None - if pretrain_dir is None: - try: - if net_dur_disc is not None: - _, optim_dur_disc, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "DUR_*.pth"), net_dur_disc, optim_dur_disc, skip_optimizer=not hps.cont) - _, optim_g, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "G_*.pth"), net_g, - optim_g, skip_optimizer=not hps.cont) - _, optim_d, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "D_*.pth"), net_d, - optim_d, skip_optimizer=not hps.cont) - - epoch_str = max(epoch_str, 1) - global_step = (epoch_str - 1) * len(train_loader) - except Exception as e: - print(e) - epoch_str = 1 - global_step = 0 - else: - _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(pretrain_dir, "G_*.pth"), net_g, - optim_g, True) - _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(pretrain_dir, "D_*.pth"), net_d, - optim_d, True) - - - - scheduler_g = torch.optim.lr_scheduler.ExponentialLR(optim_g, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2) - scheduler_d = torch.optim.lr_scheduler.ExponentialLR(optim_d, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2) - if net_dur_disc is not None: - scheduler_dur_disc = torch.optim.lr_scheduler.ExponentialLR(optim_dur_disc, gamma=hps.train.lr_decay, last_epoch=epoch_str-2) - else: - scheduler_dur_disc = None - scaler = GradScaler(enabled=hps.train.fp16_run) - - for epoch in range(epoch_str, hps.train.epochs + 1): - if rank == 0: - train_and_evaluate(rank, epoch, hps, [net_g, net_d, net_dur_disc], [optim_g, optim_d, optim_dur_disc], [scheduler_g, scheduler_d, scheduler_dur_disc], scaler, [train_loader, eval_loader], logger, [writer, writer_eval]) - else: - train_and_evaluate(rank, epoch, hps, [net_g, net_d, net_dur_disc], [optim_g, optim_d, optim_dur_disc], [scheduler_g, scheduler_d, scheduler_dur_disc], scaler, [train_loader, None], None, None) - scheduler_g.step() - scheduler_d.step() - if net_dur_disc is not None: - scheduler_dur_disc.step() - - -def train_and_evaluate(rank, epoch, hps, nets, optims, schedulers, scaler, loaders, logger, writers): - net_g, net_d, net_dur_disc = nets - optim_g, optim_d, optim_dur_disc = optims - scheduler_g, scheduler_d, scheduler_dur_disc = schedulers - train_loader, eval_loader = loaders - if writers is not None: - writer, writer_eval = writers - - train_loader.batch_sampler.set_epoch(epoch) - global global_step - - net_g.train() - net_d.train() - if net_dur_disc is not None: - net_dur_disc.train() - for batch_idx, (x, x_lengths, spec, spec_lengths, y, y_lengths, speakers, tone, language, bert) in tqdm(enumerate(train_loader)): - if net_g.module.use_noise_scaled_mas: - current_mas_noise_scale = net_g.module.mas_noise_scale_initial - net_g.module.noise_scale_delta * global_step - net_g.module.current_mas_noise_scale = max(current_mas_noise_scale, 0.0) - x, x_lengths = x.cuda(rank, non_blocking=True), x_lengths.cuda(rank, non_blocking=True) - spec, spec_lengths = spec.cuda(rank, non_blocking=True), 
spec_lengths.cuda(rank, non_blocking=True) - y, y_lengths = y.cuda(rank, non_blocking=True), y_lengths.cuda(rank, non_blocking=True) - speakers = speakers.cuda(rank, non_blocking=True) - tone = tone.cuda(rank, non_blocking=True) - language = language.cuda(rank, non_blocking=True) - bert = bert.cuda(rank, non_blocking=True) - - with autocast(enabled=hps.train.fp16_run): - y_hat, l_length, attn, ids_slice, x_mask, z_mask, \ - (z, z_p, m_p, logs_p, m_q, logs_q), (hidden_x, logw, logw_) = net_g(x, x_lengths, spec, spec_lengths, speakers, tone, language, bert) - mel = spec_to_mel_torch( - spec, - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.mel_fmin, - hps.data.mel_fmax) - y_mel = commons.slice_segments(mel, ids_slice, hps.train.segment_size // hps.data.hop_length) - y_hat_mel = mel_spectrogram_torch( - y_hat.squeeze(1), - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.hop_length, - hps.data.win_length, - hps.data.mel_fmin, - hps.data.mel_fmax - ) - - y = commons.slice_segments(y, ids_slice * hps.data.hop_length, hps.train.segment_size) # slice - - # Discriminator - y_d_hat_r, y_d_hat_g, _, _ = net_d(y, y_hat.detach()) - with autocast(enabled=False): - loss_disc, losses_disc_r, losses_disc_g = discriminator_loss(y_d_hat_r, y_d_hat_g) - loss_disc_all = loss_disc - if net_dur_disc is not None: - y_dur_hat_r, y_dur_hat_g = net_dur_disc(hidden_x.detach(), x_mask.detach(), logw.detach(), logw_.detach()) - with autocast(enabled=False): - # TODO: I think need to mean using the mask, but for now, just mean all - loss_dur_disc, losses_dur_disc_r, losses_dur_disc_g = discriminator_loss(y_dur_hat_r, y_dur_hat_g) - loss_dur_disc_all = loss_dur_disc - optim_dur_disc.zero_grad() - scaler.scale(loss_dur_disc_all).backward() - scaler.unscale_(optim_dur_disc) - grad_norm_dur_disc = commons.clip_grad_value_(net_dur_disc.parameters(), None) - scaler.step(optim_dur_disc) - - optim_d.zero_grad() - scaler.scale(loss_disc_all).backward() - scaler.unscale_(optim_d) - grad_norm_d = commons.clip_grad_value_(net_d.parameters(), None) - scaler.step(optim_d) - - with autocast(enabled=hps.train.fp16_run): - # Generator - y_d_hat_r, y_d_hat_g, fmap_r, fmap_g = net_d(y, y_hat) - if net_dur_disc is not None: - y_dur_hat_r, y_dur_hat_g = net_dur_disc(hidden_x, x_mask, logw, logw_) - with autocast(enabled=False): - loss_dur = torch.sum(l_length.float()) - loss_mel = F.l1_loss(y_mel, y_hat_mel) * hps.train.c_mel - loss_kl = kl_loss(z_p, logs_q, m_p, logs_p, z_mask) * hps.train.c_kl - - loss_fm = feature_loss(fmap_r, fmap_g) - loss_gen, losses_gen = generator_loss(y_d_hat_g) - loss_gen_all = loss_gen + loss_fm + loss_mel + loss_dur + loss_kl - if net_dur_disc is not None: - loss_dur_gen, losses_dur_gen = generator_loss(y_dur_hat_g) - loss_gen_all += loss_dur_gen - optim_g.zero_grad() - scaler.scale(loss_gen_all).backward() - scaler.unscale_(optim_g) - grad_norm_g = commons.clip_grad_value_(net_g.parameters(), None) - scaler.step(optim_g) - scaler.update() - - if rank == 0: - if global_step % hps.train.log_interval == 0: - lr = optim_g.param_groups[0]['lr'] - losses = [loss_disc, loss_gen, loss_fm, loss_mel, loss_dur, loss_kl] - logger.info('Train Epoch: {} [{:.0f}%]'.format( - epoch, - 100. 
* batch_idx / len(train_loader))) - logger.info([x.item() for x in losses] + [global_step, lr]) - - scalar_dict = {"loss/g/total": loss_gen_all, "loss/d/total": loss_disc_all, "learning_rate": lr, - "grad_norm_d": grad_norm_d, "grad_norm_g": grad_norm_g} - scalar_dict.update( - {"loss/g/fm": loss_fm, "loss/g/mel": loss_mel, "loss/g/dur": loss_dur, "loss/g/kl": loss_kl}) - scalar_dict.update({"loss/g/{}".format(i): v for i, v in enumerate(losses_gen)}) - scalar_dict.update({"loss/d_r/{}".format(i): v for i, v in enumerate(losses_disc_r)}) - scalar_dict.update({"loss/d_g/{}".format(i): v for i, v in enumerate(losses_disc_g)}) - - image_dict = { - "slice/mel_org": utils.plot_spectrogram_to_numpy(y_mel[0].data.cpu().numpy()), - "slice/mel_gen": utils.plot_spectrogram_to_numpy(y_hat_mel[0].data.cpu().numpy()), - "all/mel": utils.plot_spectrogram_to_numpy(mel[0].data.cpu().numpy()), - "all/attn": utils.plot_alignment_to_numpy(attn[0, 0].data.cpu().numpy()) - } - utils.summarize( - writer=writer, - global_step=global_step, - images=image_dict, - scalars=scalar_dict) - - if global_step % hps.train.eval_interval == 0: - evaluate(hps, net_g, eval_loader, writer_eval) - utils.save_checkpoint(net_g, optim_g, hps.train.learning_rate, epoch, - os.path.join(hps.model_dir, "G_{}.pth".format(global_step))) - utils.save_checkpoint(net_d, optim_d, hps.train.learning_rate, epoch, - os.path.join(hps.model_dir, "D_{}.pth".format(global_step))) - if net_dur_disc is not None: - utils.save_checkpoint(net_dur_disc, optim_dur_disc, hps.train.learning_rate, epoch, os.path.join(hps.model_dir, "DUR_{}.pth".format(global_step))) - keep_ckpts = getattr(hps.train, 'keep_ckpts', 5) - if keep_ckpts > 0: - utils.clean_checkpoints(path_to_models=hps.model_dir, n_ckpts_to_keep=keep_ckpts, sort_by_time=True) - - - global_step += 1 - - if rank == 0: - logger.info('====> Epoch: {}'.format(epoch)) - - - -def evaluate(hps, generator, eval_loader, writer_eval): - generator.eval() - image_dict = {} - audio_dict = {} - print("Evaluating ...") - with torch.no_grad(): - for batch_idx, (x, x_lengths, spec, spec_lengths, y, y_lengths, speakers, tone, language, bert) in enumerate(eval_loader): - x, x_lengths = x.cuda(), x_lengths.cuda() - spec, spec_lengths = spec.cuda(), spec_lengths.cuda() - y, y_lengths = y.cuda(), y_lengths.cuda() - speakers = speakers.cuda() - bert = bert.cuda() - tone = tone.cuda() - language = language.cuda() - for use_sdp in [True, False]: - y_hat, attn, mask, *_ = generator.module.infer(x, x_lengths, speakers, tone, language, bert, y=spec, max_len=1000, sdp_ratio=0.0 if not use_sdp else 1.0) - y_hat_lengths = mask.sum([1, 2]).long() * hps.data.hop_length - - mel = spec_to_mel_torch( - spec, - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.mel_fmin, - hps.data.mel_fmax) - y_hat_mel = mel_spectrogram_torch( - y_hat.squeeze(1).float(), - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.hop_length, - hps.data.win_length, - hps.data.mel_fmin, - hps.data.mel_fmax - ) - image_dict.update({ - f"gen/mel_{batch_idx}": utils.plot_spectrogram_to_numpy(y_hat_mel[0].cpu().numpy()) - }) - audio_dict.update({ - f"gen/audio_{batch_idx}_{use_sdp}": y_hat[0, :, :y_hat_lengths[0]] - }) - image_dict.update({f"gt/mel_{batch_idx}": utils.plot_spectrogram_to_numpy(mel[0].cpu().numpy())}) - audio_dict.update({f"gt/audio_{batch_idx}": y[0, :, :y_lengths[0]]}) - - utils.summarize( - writer=writer_eval, - global_step=global_step, - images=image_dict, - 
audios=audio_dict, - audio_sampling_rate=hps.data.sampling_rate - ) - generator.train() - -if __name__ == "__main__": - main() diff --git a/spaces/dilums/sentence-similarity/components/Main/Results/Results.tsx b/spaces/dilums/sentence-similarity/components/Main/Results/Results.tsx deleted file mode 100644 index a09eee356def5aca16925aed5cd71a3b4c8bbe71..0000000000000000000000000000000000000000 --- a/spaces/dilums/sentence-similarity/components/Main/Results/Results.tsx +++ /dev/null @@ -1,86 +0,0 @@ -import { Card, CardContent, CardHeader } from "@/components/ui/card"; -import { Slider } from "@/components/ui/slider"; -import { Skeleton } from "@/components/ui/skeleton"; -import { - Table, - TableBody, - TableCell, - TableHead, - TableHeader, - TableRow, -} from "@/components/ui/table"; -import { useMemo, useState } from "react"; -type ResultsProps = { - result: { label: string; score: number; key: number }[]; - loading: boolean; -}; - -export default function Results({ loading, result }: ResultsProps) { - const [threshold, setThreshold] = useState(0.75); - - const tableData = useMemo(() => { - return result.map((i) => ({ - ...i, - passed: i.score >= threshold, - score: i.score.toFixed(4), - })); - }, [threshold, result]); - return ( - - -
      -

      Results

      -
      -
      Threshold
      - setThreshold(v[0])} - /> -
      {threshold}
      -
      -
      -
      - - {loading ? ( - - ) : ( - - - - # - Sentence - Score - Pass - - - - {tableData.map(({ key, label, score, passed }) => ( - - {key} - {label} - {score} - {passed ? "✔️" : "❌"} - - ))} - -
      - )} -
      -
      - ); -} - -function LoadingPlaceholder() { - return ( - <> - - - - - - ); -} diff --git a/spaces/django-ochain/youtube-q-and-a/README.md b/spaces/django-ochain/youtube-q-and-a/README.md deleted file mode 100644 index f3bc5ceb8135d1ab5a004efc49268e7b425ff700..0000000000000000000000000000000000000000 --- a/spaces/django-ochain/youtube-q-and-a/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Youtube Q And A -emoji: 🚀 -colorFrom: pink -colorTo: gray -sdk: gradio -sdk_version: 3.24.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/dorkai/text-generation-webui-main/text-generation-webui-main/docs/Using-LoRAs.md b/spaces/dorkai/text-generation-webui-main/text-generation-webui-main/docs/Using-LoRAs.md deleted file mode 100644 index fafd6cde2d87bfdf46d942ab841a74bf50facdb5..0000000000000000000000000000000000000000 --- a/spaces/dorkai/text-generation-webui-main/text-generation-webui-main/docs/Using-LoRAs.md +++ /dev/null @@ -1,55 +0,0 @@ -Based on https://github.com/tloen/alpaca-lora - -## Instructions - -1. Download a LoRA, for instance: - -``` -python download-model.py tloen/alpaca-lora-7b -``` - -2. Load the LoRA. 16-bit, 8-bit, and CPU modes work: - -``` -python server.py --model llama-7b-hf --lora tloen_alpaca-lora-7b -python server.py --model llama-7b-hf --lora tloen_alpaca-lora-7b --load-in-8bit -python server.py --model llama-7b-hf --lora tloen_alpaca-lora-7b --cpu -``` - -* For using LoRAs in 4-bit mode, follow [these special instructions](GPTQ-models-(4-bit-mode).md#using-loras-in-4-bit-mode). - -* Instead of using the `--lora` command-line flag, you can also select the LoRA in the "Parameters" tab of the interface. - -## Prompt -For the Alpaca LoRA in particular, the prompt must be formatted like this: - -``` -Below is an instruction that describes a task. Write a response that appropriately completes the request. -### Instruction: -Write a Python script that generates text using the transformers library. -### Response: -``` - -Sample output: - -``` -Below is an instruction that describes a task. Write a response that appropriately completes the request. -### Instruction: -Write a Python script that generates text using the transformers library. -### Response: - -import transformers -from transformers import AutoTokenizer, AutoModelForCausalLM -tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") -model = AutoModelForCausalLM.from_pretrained("bert-base-uncased") -texts = ["Hello world", "How are you"] -for sentence in texts: -sentence = tokenizer(sentence) -print(f"Generated {len(sentence)} tokens from '{sentence}'") -output = model(sentences=sentence).predict() -print(f"Predicted {len(output)} tokens for '{sentence}':\n{output}") -``` - -## Training a LoRA - -You can train your own LoRAs from the `Training` tab. See [Training LoRAs](Training-LoRAs.md) for details. 
diff --git a/spaces/duycse1603/math2tex/ScanSSD/IOU_lib/Evaluator.py b/spaces/duycse1603/math2tex/ScanSSD/IOU_lib/Evaluator.py deleted file mode 100644 index d16aecde1e09a382b1fa29eaa0bb5e29bec2a2d8..0000000000000000000000000000000000000000 --- a/spaces/duycse1603/math2tex/ScanSSD/IOU_lib/Evaluator.py +++ /dev/null @@ -1,87 +0,0 @@ -########################################################################################### -# # -# Evaluator class: Implements the most popular metrics for object detection # -# # -# Developed by: Rafael Padilla (rafael.padilla@smt.ufrj.br) # -# SMT - Signal Multimedia and Telecommunications Lab # -# COPPE - Universidade Federal do Rio de Janeiro # -# Last modification: Oct 9th 2018 # -########################################################################################### - -import os -import sys -from collections import Counter - -import matplotlib.pyplot as plt -import numpy as np - -from .BoundingBox import * -from .iou_utils import * - - -class Evaluator: - - # For each detections, calculate IOU with reference - @staticmethod - def _getAllIOUs(reference, detections): - ret = [] - bbReference = reference.getAbsoluteBoundingBox(BBFormat.XYX2Y2) - # img = np.zeros((200,200,3), np.uint8) - for d in detections: - bb = d.getAbsoluteBoundingBox(BBFormat.XYX2Y2) - iou = Evaluator.iou(bbReference, bb) - # Show blank image with the bounding boxes - # img = add_bb_into_image(img, d, color=(255,0,0), thickness=2, label=None) - # img = add_bb_into_image(img, reference, color=(0,255,0), thickness=2, label=None) - ret.append((iou, reference, d)) # iou, reference, detection - # cv2.imshow("comparing",img) - # cv2.waitKey(0) - # cv2.destroyWindow("comparing") - return sorted(ret, key=lambda i: i[0], reverse=True) # sort by iou (from highest to lowest) - - @staticmethod - def iou(boxA, boxB): - # if boxes dont intersect - if Evaluator._boxesIntersect(boxA, boxB) is False: - return 0 - interArea = Evaluator._getIntersectionArea(boxA, boxB) - union = Evaluator._getUnionAreas(boxA, boxB, interArea=interArea) - # intersection over union - iou = interArea / union - assert iou >= 0 - return iou - - # boxA = (Ax1,Ay1,Ax2,Ay2) - # boxB = (Bx1,By1,Bx2,By2) - @staticmethod - def _boxesIntersect(boxA, boxB): - if boxA[0] > boxB[2]: - return False # boxA is right of boxB - if boxB[0] > boxA[2]: - return False # boxA is left of boxB - if boxA[3] < boxB[1]: - return False # boxA is above boxB - if boxA[1] > boxB[3]: - return False # boxA is below boxB - return True - - @staticmethod - def _getIntersectionArea(boxA, boxB): - xA = max(boxA[0], boxB[0]) - yA = max(boxA[1], boxB[1]) - xB = min(boxA[2], boxB[2]) - yB = min(boxA[3], boxB[3]) - # intersection area - return (xB - xA + 1) * (yB - yA + 1) - - @staticmethod - def _getUnionAreas(boxA, boxB, interArea=None): - area_A = Evaluator._getArea(boxA) - area_B = Evaluator._getArea(boxB) - if interArea is None: - interArea = Evaluator._getIntersectionArea(boxA, boxB) - return float(area_A + area_B - interArea) - - @staticmethod - def _getArea(box): - return (box[2] - box[0] + 1) * (box[3] - box[1] + 1) diff --git a/spaces/dylanebert/igf/viewer/src/routes/store.js b/spaces/dylanebert/igf/viewer/src/routes/store.js deleted file mode 100644 index 97e66c8476d8034c3f725c414f73a801952f3ae4..0000000000000000000000000000000000000000 --- a/spaces/dylanebert/igf/viewer/src/routes/store.js +++ /dev/null @@ -1,3 +0,0 @@ -import { writable } from 'svelte/store'; - -export const activeTab = writable('Models'); diff --git 
a/spaces/enesbol/case_dif/config.py b/spaces/enesbol/case_dif/config.py deleted file mode 100644 index 99d921fba92663bbf895b9486571733aa61e908d..0000000000000000000000000000000000000000 --- a/spaces/enesbol/case_dif/config.py +++ /dev/null @@ -1,47 +0,0 @@ -import argparse - -def getConfig(): - parser = argparse.ArgumentParser() - parser.add_argument('action', type=str, default='train', help='Model Training or Testing options') - parser.add_argument('--exp_num', default=0, type=str, help='experiment_number') - parser.add_argument('--dataset', type=str, default='DUTS', help='DUTS') - parser.add_argument('--data_path', type=str, default='data/') - - # Model parameter settings - parser.add_argument('--arch', type=str, default='0', help='Backbone Architecture') - parser.add_argument('--channels', type=list, default=[24, 40, 112, 320]) - parser.add_argument('--RFB_aggregated_channel', type=int, nargs='*', default=[32, 64, 128]) - parser.add_argument('--frequency_radius', type=int, default=16, help='Frequency radius r in FFT') - parser.add_argument('--denoise', type=float, default=0.93, help='Denoising background ratio') - parser.add_argument('--gamma', type=float, default=0.1, help='Confidence ratio') - - # Training parameter settings - parser.add_argument('--img_size', type=int, default=320) - parser.add_argument('--batch_size', type=int, default=32) - parser.add_argument('--epochs', type=int, default=100) - parser.add_argument('--lr', type=float, default=5e-5) - parser.add_argument('--optimizer', type=str, default='Adam') - parser.add_argument('--weight_decay', type=float, default=1e-4) - parser.add_argument('--criterion', type=str, default='API', help='API or bce') - parser.add_argument('--scheduler', type=str, default='Reduce', help='Reduce or Step') - parser.add_argument('--aug_ver', type=int, default=2, help='1=Normal, 2=Hard') - parser.add_argument('--lr_factor', type=float, default=0.1) - parser.add_argument('--clipping', type=float, default=2, help='Gradient clipping') - parser.add_argument('--patience', type=int, default=5, help="Scheduler ReduceLROnPlateau's parameter & Early Stopping(+5)") - parser.add_argument('--model_path', type=str, default='results/') - parser.add_argument('--seed', type=int, default=42) - parser.add_argument('--save_map', type=bool, default=None, help='Save prediction map') - - - # Hardware settings - parser.add_argument('--multi_gpu', type=bool, default=True) - parser.add_argument('--num_workers', type=int, default=4) - cfg = parser.parse_args() - - return cfg - - -if __name__ == '__main__': - cfg = getConfig() - cfg = vars(cfg) - print(cfg) \ No newline at end of file diff --git a/spaces/everythingfades/Math-Stats-AP/README.md b/spaces/everythingfades/Math-Stats-AP/README.md deleted file mode 100644 index 73babfbf8f80e3851c2e6994a1591126069c1472..0000000000000000000000000000000000000000 --- a/spaces/everythingfades/Math-Stats-AP/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Math Stats AP -emoji: 🦀 -colorFrom: yellow -colorTo: indigo -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: false -license: other ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/fabiogra/moseca/app/__init__.py b/spaces/fabiogra/moseca/app/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/failfast/2D-GameCreator/src/components/ExamplesGrid.tsx 
b/spaces/failfast/2D-GameCreator/src/components/ExamplesGrid.tsx deleted file mode 100644 index 80467736e043c937ba20963125855b447dcafb8b..0000000000000000000000000000000000000000 --- a/spaces/failfast/2D-GameCreator/src/components/ExamplesGrid.tsx +++ /dev/null @@ -1,89 +0,0 @@ -import { - Card, - CardHeader, - CardMedia, - CardContent, - Grid, - Link, - Table, - TableBody, - TableRow, - TableCell, -} from "@mui/material"; - -export interface Example { - title: string; - creatorLink: string; - creatorName: string; - image: string; - playLink: string; - model: string; - iterations: number; - controls: string; - hints: string; -} - -interface ExamplesGridProps { - examples: Example[]; -} - -export default function ExamplesGrid({ examples }: ExamplesGridProps) { - return ( - - {examples.map((example, index) => ( - - - - by{" "} - - {example.creatorName} - - - } - /> - - - - - - Play - - {" "} - - on CodeSandbox - - - - - - Model - {example.model} - - - Iterations - {example.iterations} - - - Controls - {example.controls} - - - Hints - {example.hints} - - -
      -
      -
      -
      - ))} -
      - ); -} diff --git a/spaces/falterWliame/Face_Mask_Detection/Drivers Joystick Usb Bitrom Rar HOT!.md b/spaces/falterWliame/Face_Mask_Detection/Drivers Joystick Usb Bitrom Rar HOT!.md deleted file mode 100644 index 3c997d6ef56b58034680e7296a765aa21976d694..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Drivers Joystick Usb Bitrom Rar HOT!.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Drivers Joystick Usb Bitrom Rar


Download File: https://urlca.com/2uDcdM



      -
      -... Movie Download Hd 1080p _VERIFIED_ · Drivers Joystick Usb Bitrom Rar ((LINK)) · Varenda Maduraikku Tamil Full Movie Download [2020] ... 1fdad05405
      -
      -
      -

      diff --git a/spaces/fatiXbelha/sd/Enjoy the Latest Version of NBA 2K20 APK OBB V97.0.2 with Unlimited Money and Mods.md b/spaces/fatiXbelha/sd/Enjoy the Latest Version of NBA 2K20 APK OBB V97.0.2 with Unlimited Money and Mods.md deleted file mode 100644 index 89f3bb385a441342376a38e4ccf9ee053c5cd5a8..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Enjoy the Latest Version of NBA 2K20 APK OBB V97.0.2 with Unlimited Money and Mods.md +++ /dev/null @@ -1,93 +0,0 @@ -
      -

      NBA 2K20 APK OBB V97.0.2: The Ultimate Guide

      -

      If you are a fan of basketball and video games, you must have heard of NBA 2K20, the latest installment of the popular NBA 2K series. NBA 2K20 is a basketball simulation game that lets you experience the thrill and excitement of the NBA on your mobile device. Whether you want to create your own legend in MyCAREER mode, compete with other players online, or just enjoy some casual streetball, NBA 2K20 has something for everyone.

      -

      nba 2k20 apk 97.0.2 obb


      Download File ---> https://urllie.com/2uNyjQ



      -

      But how can you download and install NBA 2K20 APK OBB V97.0.2 on your Android device? And how can you play it effectively and have fun? In this guide, we will answer these questions and more. We will explain what NBA 2K20 is, what features it offers, how to download and install it, and how to play it. By the end of this guide, you will be ready to take on the court and dominate the game.

      -

      What is NBA 2K20?

      -

      NBA 2K20 is a basketball simulation video game developed by Visual Concepts and published by 2K Sports. It was released in September 2019 and is available for various platforms, including Android. It is the 21st installment of the NBA 2K franchise and the successor to NBA 2K19.

      -

      NBA 2K20 features realistic gameplay, stunning graphics, and immersive sound effects that make you feel like you are watching a real NBA game. It also offers a variety of game modes for different types of players, such as:

      -

      Features of NBA 2K20

      -

      NBA Stories

      -

      This mode lets you relive some of the most memorable moments and challenges in NBA history. You can play as some of the greatest players and teams of all time, such as Michael Jordan, Kobe Bryant, LeBron James, and more. You can also unlock new content and rewards as you progress through the stories.

      -

      MyCAREER

      -

      This mode lets you create your own custom player and follow his journey from a rookie to a superstar. You can customize your appearance, skills, attributes, and style. You can also interact with other characters, make decisions that affect your career path, and earn endorsements and fans. You can also join forces with other players online and form your own team.

      -


      -

      Run The Streets

      -

      This mode lets you take your game to the streets and compete in 3-on-3 matches against other players from around the world. You can choose from different locations, such as Venice Beach, Rucker Park, and more. You can also earn rewards and rank up in the leaderboards as you win matches.

      -

      Blacktop

      -

      This mode lets you enjoy some casual basketball with current or all-time great NBA teams. You can play 5-on-5 or choose from different formats, such as 1-on-1, 2-on-2, or 3-on-3. You can also adjust the rules, time limit, difficulty level, and more.

      -

      Online Multiplayer

      -

      This mode lets you connect with other players online and join various online communities. You can play in different modes, such as MyTEAM, MyLEAGUE, MyGM, and more. You can also chat with other players, trade cards, join tournaments, and more.

      -

      Graphics and Sound

      -

NBA 2K20 boasts impressive graphics and sound that make the game more realistic and immersive. It uses Unreal Engine 4 to render high-quality graphics and animations, and it features realistic player models, facial expressions, movements, and reactions. It also has a dynamic soundtrack that changes with the game situation and mood, along with authentic commentary from NBA broadcasters and analysts such as Kevin Harlan, Doris Burke, Chris Webber, and more.

      -

      How to download and install NBA 2K20 APK OBB V97.0.2 on Android?

      -

      If you want to play NBA 2K20 on your Android device, you need to download and install the NBA 2K20 APK OBB V97.0.2 file. This file contains the game data and the latest updates and patches. Here are the requirements and steps to download and install the file:

      -

      Requirements

      -
        -
• An Android device with at least 4 GB of RAM and 16 GB of free storage space.
• A stable internet connection to download the file and play online.
• A file manager app to extract and move the file.
• Allow installation from unknown sources in your device settings.
      -

      Steps

      -
        -
1. Download the NBA 2K20 APK OBB V97.0.2 file from a trusted source. You can use this link to download the file.
2. Once the download is complete, locate the file in your device storage and extract it using a file manager app.
3. You will get two files: an APK file and an OBB file. Install the APK file by tapping on it and following the instructions.
4. Do not open the game yet. Move the OBB file to the Android/OBB folder in your device storage. If there is no such folder, create one.
5. Now you can launch the game from your app drawer or home screen. Enjoy playing NBA 2K20 on your Android device.
      -
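If you have a computer with ADB (Android Debug Bridge) installed and USB debugging enabled on your phone, the copy steps above can also be scripted instead of using a file manager app. The sketch below is only an illustration under stated assumptions: the file names and the package folder name are placeholders, and the OBB must go into the folder that matches the game's actual package name (check the folder name inside the extracted download).

```python
import subprocess

# Placeholder names -- replace with the actual files from the extracted download.
apk = "nba2k20.apk"
obb = "main.97.com.t2ksports.nba2k20and.obb"
package = "com.t2ksports.nba2k20and"  # assumed package name; verify against the extracted OBB folder

# Install the APK, then push the OBB into the per-package OBB directory on the device.
subprocess.run(["adb", "install", apk], check=True)
subprocess.run(["adb", "shell", "mkdir", "-p", f"/sdcard/Android/obb/{package}"], check=True)
subprocess.run(["adb", "push", obb, f"/sdcard/Android/obb/{package}/"], check=True)
print("Done -- launch the game from the app drawer.")
```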

      How to play NBA 2K20 on Android?

      -

      Now that you have downloaded and installed NBA 2K20 on your Android device, you might be wondering how to play it effectively and have fun. Here are some tips and tricks to help you out:

      -

      Controls

      -

      NBA 2K20 uses a virtual joystick and buttons to control your player and perform various actions. You can also customize the controls according to your preference in the settings menu. Here are some of the basic controls:

| Action | Control |
| --- | --- |
| Move | Drag the left joystick |
| Sprint | Double-tap and hold the left joystick |
| Pass | Tap the pass button |
| Shoot | Tap and hold the shoot button |
| Dribble moves | Swipe the right joystick |
| Defense | Tap the defense button |
| Steal | Tap the steal button |
| Block | Tap the block button |

      Tips and Tricks

      -
        -
• Choose a game mode that suits your style and skill level. If you are new to the game, you might want to start with Blacktop mode or NBA Stories mode to get familiar with the gameplay and controls.
• Practice your shooting skills in the practice mode or in free throw situations. You need to time your release correctly to make a shot. You can also use the shot meter or feedback indicators to help you with your timing.
• Learn how to use different dribble moves to create space, break ankles, and cross over defenders. You can use the right joystick or swipe gestures to perform various moves, such as behind-the-back, spin, crossover, step-back, etc.
• Play smart defense by staying in front of your opponent, contesting shots, stealing passes, and blocking shots. You can also use defensive settings or call for help defense if you need assistance.
• Use teamwork and strategy to win games. You can call for screens, cut to the basket, pass to open teammates, run plays, etc. You can also adjust your lineup, rotation, tempo, etc. according to your preference.
      -

      Conclusion

      -

NBA 2K20 is a great basketball simulation game that offers a lot of fun and excitement for basketball fans and gamers alike. It features realistic gameplay, stunning graphics, immersive sound effects, and various game modes for different types of players. You can also download and install NBA 2K20 APK OBB V97.0.2 on your Android device quickly and easily.

      -

      If you want to experience the thrill and excitement of the NBA on your mobile device, you should definitely give NBA 2K20 a try. You will not regret it.

      -

      FAQs

      Here are some of the frequently asked questions about NBA 2K20 APK OBB V97.0.2:

      -
        -
1. Is NBA 2K20 APK OBB V97.0.2 safe to download and install?

        Yes, NBA 2K20 APK OBB V97.0.2 is safe to download and install, as long as you use a trusted source and follow the instructions carefully. However, you should always be careful when downloading and installing files from unknown sources, as they may contain viruses or malware that can harm your device.

        -
2. Is NBA 2K20 APK OBB V97.0.2 free to play?

        No, NBA 2K20 APK OBB V97.0.2 is not free to play. You need to pay a one-time fee of $5.99 to download and install the game on your Android device. However, you can enjoy the game without any additional in-app purchases or ads.

        -
3. Can I play NBA 2K20 offline?

        Yes, you can play NBA 2K20 offline, as long as you have downloaded and installed the game data and the latest updates and patches. However, some game modes and features, such as online multiplayer, require an internet connection to function properly.

        -
4. Can I play NBA 2K20 with a controller?

        Yes, you can play NBA 2K20 with a controller, as long as your controller is compatible with your Android device and the game. You can also customize the controller settings in the game menu.

        -
5. How can I update NBA 2K20 APK OBB V97.0.2?

        You can update NBA 2K20 APK OBB V97.0.2 by downloading and installing the latest version of the file from a trusted source. You can also check for updates in the game menu or in the Google Play Store.

        -

      401be4b1e0
      -
      -
      \ No newline at end of file diff --git a/spaces/feng2022/styleganhuman_copy/dnnlib/tflib/network.py b/spaces/feng2022/styleganhuman_copy/dnnlib/tflib/network.py deleted file mode 100644 index bfa73dc5ff2051689d16159871d2bc7e31294502..0000000000000000000000000000000000000000 --- a/spaces/feng2022/styleganhuman_copy/dnnlib/tflib/network.py +++ /dev/null @@ -1,592 +0,0 @@ -# Copyright (c) SenseTime Research. All rights reserved. - -# Copyright (c) 2019, NVIDIA Corporation. All rights reserved. -# -# This work is made available under the Nvidia Source Code License-NC. -# To view a copy of this license, visit -# https://nvlabs.github.io/stylegan2/license.html - -"""Helper for managing networks.""" - -import types -import inspect -import re -import uuid -import sys -import numpy as np -import tensorflow as tf - -from collections import OrderedDict -from typing import Any, List, Tuple, Union - -from . import tfutil -from .. import util - -from .tfutil import TfExpression, TfExpressionEx - -_import_handlers = [] # Custom import handlers for dealing with legacy data in pickle import. -_import_module_src = dict() # Source code for temporary modules created during pickle import. - - -def import_handler(handler_func): - """Function decorator for declaring custom import handlers.""" - _import_handlers.append(handler_func) - return handler_func - - -class Network: - """Generic network abstraction. - - Acts as a convenience wrapper for a parameterized network construction - function, providing several utility methods and convenient access to - the inputs/outputs/weights. - - Network objects can be safely pickled and unpickled for long-term - archival purposes. The pickling works reliably as long as the underlying - network construction function is defined in a standalone Python module - that has no side effects or application-specific imports. - - Args: - name: Network name. Used to select TensorFlow name and variable scopes. - func_name: Fully qualified name of the underlying network construction function, or a top-level function object. - static_kwargs: Keyword arguments to be passed in to the network construction function. - - Attributes: - name: User-specified name, defaults to build func name if None. - scope: Unique TensorFlow scope containing template graph and variables, derived from the user-specified name. - static_kwargs: Arguments passed to the user-supplied build func. - components: Container for sub-networks. Passed to the build func, and retained between calls. - num_inputs: Number of input tensors. - num_outputs: Number of output tensors. - input_shapes: Input tensor shapes (NC or NCHW), including minibatch dimension. - output_shapes: Output tensor shapes (NC or NCHW), including minibatch dimension. - input_shape: Short-hand for input_shapes[0]. - output_shape: Short-hand for output_shapes[0]. - input_templates: Input placeholders in the template graph. - output_templates: Output tensors in the template graph. - input_names: Name string for each input. - output_names: Name string for each output. - own_vars: Variables defined by this network (local_name => var), excluding sub-networks. - vars: All variables (local_name => var). - trainables: All trainable variables (local_name => var). - var_global_to_local: Mapping from variable global names to local names. 
- """ - - def __init__(self, name: str = None, func_name: Any = None, **static_kwargs): - tfutil.assert_tf_initialized() - assert isinstance(name, str) or name is None - assert func_name is not None - assert isinstance(func_name, str) or util.is_top_level_function(func_name) - assert util.is_pickleable(static_kwargs) - - self._init_fields() - self.name = name - self.static_kwargs = util.EasyDict(static_kwargs) - - # Locate the user-specified network build function. - if util.is_top_level_function(func_name): - func_name = util.get_top_level_function_name(func_name) - module, self._build_func_name = util.get_module_from_obj_name(func_name) - self._build_func = util.get_obj_from_module(module, self._build_func_name) - assert callable(self._build_func) - - # Dig up source code for the module containing the build function. - self._build_module_src = _import_module_src.get(module, None) - if self._build_module_src is None: - self._build_module_src = inspect.getsource(module) - - # Init TensorFlow graph. - self._init_graph() - self.reset_own_vars() - - def _init_fields(self) -> None: - self.name = None - self.scope = None - self.static_kwargs = util.EasyDict() - self.components = util.EasyDict() - self.num_inputs = 0 - self.num_outputs = 0 - self.input_shapes = [[]] - self.output_shapes = [[]] - self.input_shape = [] - self.output_shape = [] - self.input_templates = [] - self.output_templates = [] - self.input_names = [] - self.output_names = [] - self.own_vars = OrderedDict() - self.vars = OrderedDict() - self.trainables = OrderedDict() - self.var_global_to_local = OrderedDict() - - self._build_func = None # User-supplied build function that constructs the network. - self._build_func_name = None # Name of the build function. - self._build_module_src = None # Full source code of the module containing the build function. - self._run_cache = dict() # Cached graph data for Network.run(). - - def _init_graph(self) -> None: - # Collect inputs. - self.input_names = [] - - for param in inspect.signature(self._build_func).parameters.values(): - if param.kind == param.POSITIONAL_OR_KEYWORD and param.default is param.empty: - self.input_names.append(param.name) - - self.num_inputs = len(self.input_names) - assert self.num_inputs >= 1 - - # Choose name and scope. - if self.name is None: - self.name = self._build_func_name - assert re.match("^[A-Za-z0-9_.\\-]*$", self.name) - with tf.name_scope(None): - self.scope = tf.get_default_graph().unique_name(self.name, mark_as_used=True) - - # Finalize build func kwargs. - build_kwargs = dict(self.static_kwargs) - build_kwargs["is_template_graph"] = True - build_kwargs["components"] = self.components - - # Build template graph. - with tfutil.absolute_variable_scope(self.scope, reuse=False), tfutil.absolute_name_scope(self.scope): # ignore surrounding scopes - assert tf.get_variable_scope().name == self.scope - assert tf.get_default_graph().get_name_scope() == self.scope - with tf.control_dependencies(None): # ignore surrounding control dependencies - self.input_templates = [tf.placeholder(tf.float32, name=name) for name in self.input_names] - out_expr = self._build_func(*self.input_templates, **build_kwargs) - - # Collect outputs. 
- assert tfutil.is_tf_expression(out_expr) or isinstance(out_expr, tuple) - self.output_templates = [out_expr] if tfutil.is_tf_expression(out_expr) else list(out_expr) - self.num_outputs = len(self.output_templates) - assert self.num_outputs >= 1 - assert all(tfutil.is_tf_expression(t) for t in self.output_templates) - - # Perform sanity checks. - if any(t.shape.ndims is None for t in self.input_templates): - raise ValueError("Network input shapes not defined. Please call x.set_shape() for each input.") - if any(t.shape.ndims is None for t in self.output_templates): - raise ValueError("Network output shapes not defined. Please call x.set_shape() where applicable.") - if any(not isinstance(comp, Network) for comp in self.components.values()): - raise ValueError("Components of a Network must be Networks themselves.") - if len(self.components) != len(set(comp.name for comp in self.components.values())): - raise ValueError("Components of a Network must have unique names.") - - # List inputs and outputs. - self.input_shapes = [t.shape.as_list() for t in self.input_templates] - self.output_shapes = [t.shape.as_list() for t in self.output_templates] - self.input_shape = self.input_shapes[0] - self.output_shape = self.output_shapes[0] - self.output_names = [t.name.split("/")[-1].split(":")[0] for t in self.output_templates] - - # List variables. - self.own_vars = OrderedDict((var.name[len(self.scope) + 1:].split(":")[0], var) for var in tf.global_variables(self.scope + "/")) - self.vars = OrderedDict(self.own_vars) - self.vars.update((comp.name + "/" + name, var) for comp in self.components.values() for name, var in comp.vars.items()) - self.trainables = OrderedDict((name, var) for name, var in self.vars.items() if var.trainable) - self.var_global_to_local = OrderedDict((var.name.split(":")[0], name) for name, var in self.vars.items()) - - def reset_own_vars(self) -> None: - """Re-initialize all variables of this network, excluding sub-networks.""" - tfutil.run([var.initializer for var in self.own_vars.values()]) - - def reset_vars(self) -> None: - """Re-initialize all variables of this network, including sub-networks.""" - tfutil.run([var.initializer for var in self.vars.values()]) - - def reset_trainables(self) -> None: - """Re-initialize all trainable variables of this network, including sub-networks.""" - tfutil.run([var.initializer for var in self.trainables.values()]) - - def get_output_for(self, *in_expr: TfExpression, return_as_list: bool = False, **dynamic_kwargs) -> Union[TfExpression, List[TfExpression]]: - """Construct TensorFlow expression(s) for the output(s) of this network, given the input expression(s).""" - assert len(in_expr) == self.num_inputs - assert not all(expr is None for expr in in_expr) - - # Finalize build func kwargs. - build_kwargs = dict(self.static_kwargs) - build_kwargs.update(dynamic_kwargs) - build_kwargs["is_template_graph"] = False - build_kwargs["components"] = self.components - - # Build TensorFlow graph to evaluate the network. 
- with tfutil.absolute_variable_scope(self.scope, reuse=True), tf.name_scope(self.name): - assert tf.get_variable_scope().name == self.scope - valid_inputs = [expr for expr in in_expr if expr is not None] - final_inputs = [] - for expr, name, shape in zip(in_expr, self.input_names, self.input_shapes): - if expr is not None: - expr = tf.identity(expr, name=name) - else: - expr = tf.zeros([tf.shape(valid_inputs[0])[0]] + shape[1:], name=name) - final_inputs.append(expr) - out_expr = self._build_func(*final_inputs, **build_kwargs) - - # Propagate input shapes back to the user-specified expressions. - for expr, final in zip(in_expr, final_inputs): - if isinstance(expr, tf.Tensor): - expr.set_shape(final.shape) - - # Express outputs in the desired format. - assert tfutil.is_tf_expression(out_expr) or isinstance(out_expr, tuple) - if return_as_list: - out_expr = [out_expr] if tfutil.is_tf_expression(out_expr) else list(out_expr) - return out_expr - - def get_var_local_name(self, var_or_global_name: Union[TfExpression, str]) -> str: - """Get the local name of a given variable, without any surrounding name scopes.""" - assert tfutil.is_tf_expression(var_or_global_name) or isinstance(var_or_global_name, str) - global_name = var_or_global_name if isinstance(var_or_global_name, str) else var_or_global_name.name - return self.var_global_to_local[global_name] - - def find_var(self, var_or_local_name: Union[TfExpression, str]) -> TfExpression: - """Find variable by local or global name.""" - assert tfutil.is_tf_expression(var_or_local_name) or isinstance(var_or_local_name, str) - return self.vars[var_or_local_name] if isinstance(var_or_local_name, str) else var_or_local_name - - def get_var(self, var_or_local_name: Union[TfExpression, str]) -> np.ndarray: - """Get the value of a given variable as NumPy array. - Note: This method is very inefficient -- prefer to use tflib.run(list_of_vars) whenever possible.""" - return self.find_var(var_or_local_name).eval() - - def set_var(self, var_or_local_name: Union[TfExpression, str], new_value: Union[int, float, np.ndarray]) -> None: - """Set the value of a given variable based on the given NumPy array. - Note: This method is very inefficient -- prefer to use tflib.set_vars() whenever possible.""" - tfutil.set_vars({self.find_var(var_or_local_name): new_value}) - - def __getstate__(self) -> dict: - """Pickle export.""" - state = dict() - state["version"] = 4 - state["name"] = self.name - state["static_kwargs"] = dict(self.static_kwargs) - state["components"] = dict(self.components) - state["build_module_src"] = self._build_module_src - state["build_func_name"] = self._build_func_name - state["variables"] = list(zip(self.own_vars.keys(), tfutil.run(list(self.own_vars.values())))) - return state - - def __setstate__(self, state: dict) -> None: - """Pickle import.""" - # pylint: disable=attribute-defined-outside-init - tfutil.assert_tf_initialized() - self._init_fields() - - # Execute custom import handlers. - for handler in _import_handlers: - state = handler(state) - - # Set basic fields. - assert state["version"] in [2, 3, 4] - self.name = state["name"] - self.static_kwargs = util.EasyDict(state["static_kwargs"]) - self.components = util.EasyDict(state.get("components", {})) - self._build_module_src = state["build_module_src"] - self._build_func_name = state["build_func_name"] - - # Create temporary module from the imported source code. 
- module_name = "_tflib_network_import_" + uuid.uuid4().hex - module = types.ModuleType(module_name) - sys.modules[module_name] = module - _import_module_src[module] = self._build_module_src - exec(self._build_module_src, module.__dict__) # pylint: disable=exec-used - - # Locate network build function in the temporary module. - self._build_func = util.get_obj_from_module(module, self._build_func_name) - assert callable(self._build_func) - - # Init TensorFlow graph. - self._init_graph() - self.reset_own_vars() - tfutil.set_vars({self.find_var(name): value for name, value in state["variables"]}) - - def clone(self, name: str = None, **new_static_kwargs) -> "Network": - """Create a clone of this network with its own copy of the variables.""" - # pylint: disable=protected-access - net = object.__new__(Network) - net._init_fields() - net.name = name if name is not None else self.name - net.static_kwargs = util.EasyDict(self.static_kwargs) - net.static_kwargs.update(new_static_kwargs) - net._build_module_src = self._build_module_src - net._build_func_name = self._build_func_name - net._build_func = self._build_func - net._init_graph() - net.copy_vars_from(self) - return net - - def copy_own_vars_from(self, src_net: "Network") -> None: - """Copy the values of all variables from the given network, excluding sub-networks.""" - names = [name for name in self.own_vars.keys() if name in src_net.own_vars] - tfutil.set_vars(tfutil.run({self.vars[name]: src_net.vars[name] for name in names})) - - def copy_vars_from(self, src_net: "Network") -> None: - """Copy the values of all variables from the given network, including sub-networks.""" - names = [name for name in self.vars.keys() if name in src_net.vars] - tfutil.set_vars(tfutil.run({self.vars[name]: src_net.vars[name] for name in names})) - - def copy_trainables_from(self, src_net: "Network") -> None: - """Copy the values of all trainable variables from the given network, including sub-networks.""" - names = [name for name in self.trainables.keys() if name in src_net.trainables] - tfutil.set_vars(tfutil.run({self.vars[name]: src_net.vars[name] for name in names})) - - def convert(self, new_func_name: str, new_name: str = None, **new_static_kwargs) -> "Network": - """Create new network with the given parameters, and copy all variables from this network.""" - if new_name is None: - new_name = self.name - static_kwargs = dict(self.static_kwargs) - static_kwargs.update(new_static_kwargs) - net = Network(name=new_name, func_name=new_func_name, **static_kwargs) - net.copy_vars_from(self) - return net - - def setup_as_moving_average_of(self, src_net: "Network", beta: TfExpressionEx = 0.99, beta_nontrainable: TfExpressionEx = 0.0) -> tf.Operation: - """Construct a TensorFlow op that updates the variables of this network - to be slightly closer to those of the given network.""" - with tfutil.absolute_name_scope(self.scope + "/_MovingAvg"): - ops = [] - for name, var in self.vars.items(): - if name in src_net.vars: - cur_beta = beta if name in self.trainables else beta_nontrainable - new_value = tfutil.lerp(src_net.vars[name], var, cur_beta) - ops.append(var.assign(new_value)) - return tf.group(*ops) - - def run(self, - *in_arrays: Tuple[Union[np.ndarray, None], ...], - input_transform: dict = None, - output_transform: dict = None, - return_as_list: bool = False, - print_progress: bool = False, - minibatch_size: int = None, - num_gpus: int = 1, - assume_frozen: bool = False, - **dynamic_kwargs) -> Union[np.ndarray, Tuple[np.ndarray, ...], List[np.ndarray]]: - 
"""Run this network for the given NumPy array(s), and return the output(s) as NumPy array(s). - - Args: - input_transform: A dict specifying a custom transformation to be applied to the input tensor(s) before evaluating the network. - The dict must contain a 'func' field that points to a top-level function. The function is called with the input - TensorFlow expression(s) as positional arguments. Any remaining fields of the dict will be passed in as kwargs. - output_transform: A dict specifying a custom transformation to be applied to the output tensor(s) after evaluating the network. - The dict must contain a 'func' field that points to a top-level function. The function is called with the output - TensorFlow expression(s) as positional arguments. Any remaining fields of the dict will be passed in as kwargs. - return_as_list: True = return a list of NumPy arrays, False = return a single NumPy array, or a tuple if there are multiple outputs. - print_progress: Print progress to the console? Useful for very large input arrays. - minibatch_size: Maximum minibatch size to use, None = disable batching. - num_gpus: Number of GPUs to use. - assume_frozen: Improve multi-GPU performance by assuming that the trainable parameters will remain changed between calls. - dynamic_kwargs: Additional keyword arguments to be passed into the network build function. - """ - assert len(in_arrays) == self.num_inputs - assert not all(arr is None for arr in in_arrays) - assert input_transform is None or util.is_top_level_function(input_transform["func"]) - assert output_transform is None or util.is_top_level_function(output_transform["func"]) - output_transform, dynamic_kwargs = _handle_legacy_output_transforms(output_transform, dynamic_kwargs) - num_items = in_arrays[0].shape[0] - if minibatch_size is None: - minibatch_size = num_items - - # Construct unique hash key from all arguments that affect the TensorFlow graph. - key = dict(input_transform=input_transform, output_transform=output_transform, num_gpus=num_gpus, assume_frozen=assume_frozen, dynamic_kwargs=dynamic_kwargs) - def unwind_key(obj): - if isinstance(obj, dict): - return [(key, unwind_key(value)) for key, value in sorted(obj.items())] - if callable(obj): - return util.get_top_level_function_name(obj) - return obj - key = repr(unwind_key(key)) - - # Build graph. 
- if key not in self._run_cache: - with tfutil.absolute_name_scope(self.scope + "/_Run"), tf.control_dependencies(None): - with tf.device("/cpu:0"): - in_expr = [tf.placeholder(tf.float32, name=name) for name in self.input_names] - in_split = list(zip(*[tf.split(x, num_gpus) for x in in_expr])) - - out_split = [] - for gpu in range(num_gpus): - with tf.device("/gpu:%d" % gpu): - net_gpu = self.clone() if assume_frozen else self - in_gpu = in_split[gpu] - - if input_transform is not None: - in_kwargs = dict(input_transform) - in_gpu = in_kwargs.pop("func")(*in_gpu, **in_kwargs) - in_gpu = [in_gpu] if tfutil.is_tf_expression(in_gpu) else list(in_gpu) - - assert len(in_gpu) == self.num_inputs - out_gpu = net_gpu.get_output_for(*in_gpu, return_as_list=True, **dynamic_kwargs) - - if output_transform is not None: - out_kwargs = dict(output_transform) - out_gpu = out_kwargs.pop("func")(*out_gpu, **out_kwargs) - out_gpu = [out_gpu] if tfutil.is_tf_expression(out_gpu) else list(out_gpu) - - assert len(out_gpu) == self.num_outputs - out_split.append(out_gpu) - - with tf.device("/cpu:0"): - out_expr = [tf.concat(outputs, axis=0) for outputs in zip(*out_split)] - self._run_cache[key] = in_expr, out_expr - - # Run minibatches. - in_expr, out_expr = self._run_cache[key] - out_arrays = [np.empty([num_items] + expr.shape.as_list()[1:], expr.dtype.name) for expr in out_expr] - - for mb_begin in range(0, num_items, minibatch_size): - if print_progress: - print("\r%d / %d" % (mb_begin, num_items), end="") - - mb_end = min(mb_begin + minibatch_size, num_items) - mb_num = mb_end - mb_begin - mb_in = [src[mb_begin : mb_end] if src is not None else np.zeros([mb_num] + shape[1:]) for src, shape in zip(in_arrays, self.input_shapes)] - mb_out = tf.get_default_session().run(out_expr, dict(zip(in_expr, mb_in))) - - for dst, src in zip(out_arrays, mb_out): - dst[mb_begin: mb_end] = src - - # Done. - if print_progress: - print("\r%d / %d" % (num_items, num_items)) - - if not return_as_list: - out_arrays = out_arrays[0] if len(out_arrays) == 1 else tuple(out_arrays) - return out_arrays - - def list_ops(self) -> List[TfExpression]: - include_prefix = self.scope + "/" - exclude_prefix = include_prefix + "_" - ops = tf.get_default_graph().get_operations() - ops = [op for op in ops if op.name.startswith(include_prefix)] - ops = [op for op in ops if not op.name.startswith(exclude_prefix)] - return ops - - def list_layers(self) -> List[Tuple[str, TfExpression, List[TfExpression]]]: - """Returns a list of (layer_name, output_expr, trainable_vars) tuples corresponding to - individual layers of the network. Mainly intended to be used for reporting.""" - layers = [] - - def recurse(scope, parent_ops, parent_vars, level): - # Ignore specific patterns. - if any(p in scope for p in ["/Shape", "/strided_slice", "/Cast", "/concat", "/Assign"]): - return - - # Filter ops and vars by scope. - global_prefix = scope + "/" - local_prefix = global_prefix[len(self.scope) + 1:] - cur_ops = [op for op in parent_ops if op.name.startswith(global_prefix) or op.name == global_prefix[:-1]] - cur_vars = [(name, var) for name, var in parent_vars if name.startswith(local_prefix) or name == local_prefix[:-1]] - if not cur_ops and not cur_vars: - return - - # Filter out all ops related to variables. - for var in [op for op in cur_ops if op.type.startswith("Variable")]: - var_prefix = var.name + "/" - cur_ops = [op for op in cur_ops if not op.name.startswith(var_prefix)] - - # Scope does not contain ops as immediate children => recurse deeper. 
- contains_direct_ops = any("/" not in op.name[len(global_prefix):] and op.type not in ["Identity", "Cast", "Transpose"] for op in cur_ops) - if (level == 0 or not contains_direct_ops) and (len(cur_ops) + len(cur_vars)) > 1: - visited = set() - for rel_name in [op.name[len(global_prefix):] for op in cur_ops] + [name[len(local_prefix):] for name, _var in cur_vars]: - token = rel_name.split("/")[0] - if token not in visited: - recurse(global_prefix + token, cur_ops, cur_vars, level + 1) - visited.add(token) - return - - # Report layer. - layer_name = scope[len(self.scope) + 1:] - layer_output = cur_ops[-1].outputs[0] if cur_ops else cur_vars[-1][1] - layer_trainables = [var for _name, var in cur_vars if var.trainable] - layers.append((layer_name, layer_output, layer_trainables)) - - recurse(self.scope, self.list_ops(), list(self.vars.items()), 0) - return layers - - def print_layers(self, title: str = None, hide_layers_with_no_params: bool = False) -> None: - """Print a summary table of the network structure.""" - rows = [[title if title is not None else self.name, "Params", "OutputShape", "WeightShape"]] - rows += [["---"] * 4] - total_params = 0 - - for layer_name, layer_output, layer_trainables in self.list_layers(): - num_params = sum(int(np.prod(var.shape.as_list())) for var in layer_trainables) - weights = [var for var in layer_trainables if var.name.endswith("/weight:0")] - weights.sort(key=lambda x: len(x.name)) - if len(weights) == 0 and len(layer_trainables) == 1: - weights = layer_trainables - total_params += num_params - - if not hide_layers_with_no_params or num_params != 0: - num_params_str = str(num_params) if num_params > 0 else "-" - output_shape_str = str(layer_output.shape) - weight_shape_str = str(weights[0].shape) if len(weights) >= 1 else "-" - rows += [[layer_name, num_params_str, output_shape_str, weight_shape_str]] - - rows += [["---"] * 4] - rows += [["Total", str(total_params), "", ""]] - - widths = [max(len(cell) for cell in column) for column in zip(*rows)] - print() - for row in rows: - print(" ".join(cell + " " * (width - len(cell)) for cell, width in zip(row, widths))) - print() - - def setup_weight_histograms(self, title: str = None) -> None: - """Construct summary ops to include histograms of all trainable parameters in TensorBoard.""" - if title is None: - title = self.name - - with tf.name_scope(None), tf.device(None), tf.control_dependencies(None): - for local_name, var in self.trainables.items(): - if "/" in local_name: - p = local_name.split("/") - name = title + "_" + p[-1] + "/" + "_".join(p[:-1]) - else: - name = title + "_toplevel/" + local_name - - tf.summary.histogram(name, var) - -#---------------------------------------------------------------------------- -# Backwards-compatible emulation of legacy output transformation in Network.run(). 
- -_print_legacy_warning = True - -def _handle_legacy_output_transforms(output_transform, dynamic_kwargs): - global _print_legacy_warning - legacy_kwargs = ["out_mul", "out_add", "out_shrink", "out_dtype"] - if not any(kwarg in dynamic_kwargs for kwarg in legacy_kwargs): - return output_transform, dynamic_kwargs - - if _print_legacy_warning: - _print_legacy_warning = False - print() - print("WARNING: Old-style output transformations in Network.run() are deprecated.") - print("Consider using 'output_transform=dict(func=tflib.convert_images_to_uint8)'") - print("instead of 'out_mul=127.5, out_add=127.5, out_dtype=np.uint8'.") - print() - assert output_transform is None - - new_kwargs = dict(dynamic_kwargs) - new_transform = {kwarg: new_kwargs.pop(kwarg) for kwarg in legacy_kwargs if kwarg in dynamic_kwargs} - new_transform["func"] = _legacy_output_transform_func - return new_transform, new_kwargs - -def _legacy_output_transform_func(*expr, out_mul=1.0, out_add=0.0, out_shrink=1, out_dtype=None): - if out_mul != 1.0: - expr = [x * out_mul for x in expr] - - if out_add != 0.0: - expr = [x + out_add for x in expr] - - if out_shrink > 1: - ksize = [1, 1, out_shrink, out_shrink] - expr = [tf.nn.avg_pool(x, ksize=ksize, strides=ksize, padding="VALID", data_format="NCHW") for x in expr] - - if out_dtype is not None: - if tf.as_dtype(out_dtype).is_integer: - expr = [tf.round(x) for x in expr] - expr = [tf.saturate_cast(x, out_dtype) for x in expr] - return expr diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Bukka Jatt and R. Nait - Ridxr 60 Lakh - Download Mp3 Song and Watch Video.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Bukka Jatt and R. Nait - Ridxr 60 Lakh - Download Mp3 Song and Watch Video.md deleted file mode 100644 index 53fd4a3b2eedff09b5e1ffe1039d992338073f11..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Bukka Jatt and R. Nait - Ridxr 60 Lakh - Download Mp3 Song and Watch Video.md +++ /dev/null @@ -1,166 +0,0 @@ -
      -

      60 Lakh Song Download: How to Enjoy the Latest Punjabi Hit Online

      -

      If you are a fan of Punjabi music, you must have heard of the latest sensation, 60 Lakh Song. This song has taken the internet by storm and has become one of the most streamed and downloaded songs in India. But what is 60 Lakh Song and how can you enjoy it online? In this article, we will tell you everything you need to know about this catchy and hilarious song, and how you can download it legally and ethically.

      -

      What is 60 Lakh Song?

      -

      60 Lakh Song is a Punjabi song released in 2020 by Gopy Randhawa, featuring R Nait. The song is about a man who wants to buy a car worth 60 lakh rupees (about $80,000) but his father refuses to give him the money. The man then tries to convince his father by telling him various reasons why he needs the car, such as impressing girls, going on trips, and escaping from the police. The song is full of witty and humorous lines that poke fun at the man's extravagant desires and his father's stingy attitude.

      -

      60 lakh song download


      DOWNLOAD ✪✪✪ https://gohhs.com/2uPqsh



      -

      Who are the artists behind 60 Lakh Song?

      -

      The singer of 60 Lakh Song is Gopy Randhawa, a Punjabi singer who has been active in the music industry since 2016. He has sung many popular songs such as Jatt Da Future, Jatt Di Clip, and Jatt Da Swag. He is known for his energetic and upbeat style of singing that appeals to the youth.

      -

      The featured artist of 60 Lakh Song is R Nait, a Punjabi singer, lyricist, and composer who has been making waves in the music scene since 2015. He has delivered many hit songs such as Defaulter, Dabda Kithe Aa, and Lootera. He is known for his unique and powerful voice that expresses his emotions and stories.

      -

      What are the lyrics of 60 Lakh Song?

      -

The lyrics of 60 Lakh Song are written by Joban Cheema, a Punjabi lyricist who has penned many songs for Gopy Randhawa and R Nait. He has a knack for writing catchy and funny lyrics that resonate with listeners. The lyrics are in Punjabi, but you can find an English translation online if you want to understand the meaning.

      -

      Here are some of the lines from the song:

      -
Daddy ji mainu car de do (Daddy ji, give me a car)
Car di keemat hai ikkathhe pachas lakh (The price of the car is fifty lakh altogether)
Mainu lagda tussi mainu pyar nahi karde (I think you don't love me)
Mainu lagda tussi mainu pyar nahi karde (I think you don't love me)
Mainu car chahidi ae, mainu car chahidi ae (I need a car, I need a car)
Mainu car chahidi ae, mainu car chahidi ae (I need a car, I need a car)
R Nait! R Nait!
Mainu car chahidi ae, mainu car chahidi ae (I need a car, I need a car)
Mainu car chahidi ae, mainu car chahidi ae (I need a car, I need a car)
Daddy ji mainu car de do (Daddy ji, give me a car)
Car di keemat hai ikkathhe pachas lakh (The price of the car is fifty lakh altogether)
Mainu lagda tussi mainu pyar nahi karde (I think you don't love me)
Mainu lagda tussi mainu pyar nahi karde (I think you don't love me)
      -

      You can find the full lyrics and translation of 60 Lakh Song online on various websites such as LyricsBell, LyricsRaag, and LyricsTranslate. You can also watch the official video of 60 Lakh Song on YouTube and enjoy the visuals and the performance of the artists.

      -

      Why is 60 Lakh Song so popular?

      -

      60 Lakh Song has become a huge hit among the Punjabi music lovers and has received millions of views and downloads online. But what makes this song so appealing and catchy? Here are some of the reasons why 60 Lakh Song is so popular:

      -

      The catchy tune and beat of 60 Lakh Song

      -

      One of the main reasons why 60 Lakh Song is so popular is because of its catchy tune and beat that makes you want to groove along. The song has a lively and upbeat tempo that suits the mood and theme of the song. The music of 60 Lakh Song is composed by Laddi Gill, a Punjabi music director who has created many hit songs such as Jatt Di Star, Jattan De Munde, and Jatt Zimidaar. He has used a mix of traditional and modern instruments such as dhol, tumbi, guitar, and keyboard to create a fusion sound that appeals to the masses.

      -

      -

      The relatable and humorous theme of 60 Lakh Song

      -

      Another reason why 60 Lakh Song is so popular is because of its relatable and humorous theme that makes you laugh and smile. The song is about a common scenario that many young people face in India, where they want to buy expensive things but their parents are not willing to give them the money. The song portrays the contrast between the son's lavish dreams and the father's practical reality in a funny and witty way. The song also touches upon some social issues such as dowry, corruption, and police harassment in a satirical manner.

      -

      The viral video and dance challenge of 60 Lakh Song

      -

      The third reason why 60 Lakh Song is so popular is because of its viral video and dance challenge that has spread across the internet. The video of 60 Lakh Song features Gopy Randhawa and R Nait in various locations such as a showroom, a farm, a hotel, and a police station. They are seen wearing stylish clothes and accessories and driving fancy cars. They also perform some hilarious dance moves that match the lyrics of the song. The video of 60 Lakh Song has been directed by Jeona & Jogi, who have done a great job in capturing the essence and mood of the song.

      -

      The video of 60 Lakh Song has also inspired many people to create their own versions and participate in the 60 Lakh Song dance challenge. The challenge involves mimicking the dance moves of Gopy Randhawa and R Nait while lip-syncing to the song. Many celebrities, influencers, and fans have taken part in this challenge and posted their videos on social media platforms such as Instagram, TikTok, and Facebook. Some of the popular videos are by Jassie Gill, Babbal Rai, Gurnam Bhullar, and Sargun Mehta. You can also join the fun by making your own video and sharing it with your friends.

      -

      How to download 60 Lakh Song online?

      -

      Now that you know what 60 Lakh Song is and why it is so popular, you might be wondering how you can download it online. Well, there are two ways to download 60 Lakh Song online: the legal and ethical way, and the illegal and risky way. Let us see what they are:

      -

      The

      The legal and ethical way to download 60 Lakh Song

      -

      The legal and ethical way to download 60 Lakh Song is to use the official and authorized platforms that have the rights to distribute the song. These platforms pay a fair amount of royalty to the artists and the music label for their work and also ensure the quality and safety of the song. Some of the platforms that you can use to download 60 Lakh Song legally and ethically are:

      -

      JioSaavn

      -

      JioSaavn is one of the most popular music streaming services in India that offers a huge collection of songs in various languages and genres. You can download 60 Lakh Song on JioSaavn by following these steps:

      -
        -
1. Download the JioSaavn app on your smartphone or visit the JioSaavn website on your browser.
2. Sign up or log in with your Jio number or email address.
3. Search for 60 Lakh Song in the search bar and tap on it.
4. Tap on the download icon next to the song title and choose the quality you want.
5. Enjoy listening to 60 Lakh Song offline anytime you want.
      -

      You can also listen to 60 Lakh Song online on JioSaavn without downloading it. However, you will need an active internet connection and a subscription plan to do so. JioSaavn offers various plans such as JioSaavn Pro, JioSaavn Plus, and JioSaavn Free that give you different benefits and features. You can choose the plan that suits your needs and budget.

      -

      YouTube

      -

      YouTube is another platform that you can use to download 60 Lakh Song legally and ethically. YouTube is the world's largest video-sharing platform that hosts millions of videos on various topics and categories. You can download 60 Lakh Song on YouTube by following these steps:

      -
        -
1. Download the YouTube app on your smartphone or visit the YouTube website on your browser.
2. Sign up or log in with your Google account.
3. Search for 60 Lakh Song in the search bar and tap on it.
4. Tap on the download icon below the video and choose the quality you want.
5. Enjoy watching 60 Lakh Song offline anytime you want.
      -

      You can also watch 60 Lakh Song online on YouTube without downloading it. However, you will need an active internet connection and a subscription plan to do so. YouTube offers various plans such as YouTube Premium, YouTube Music Premium, and YouTube Free that give you different benefits and features. You can choose the plan that suits your needs and budget.

      -

      Wynk Music

      -

      Wynk Music is another platform that you can use to download 60 Lakh Song legally and ethically. Wynk Music is a music streaming service that offers a wide range of songs in various languages and genres. You can download 60 Lakh Song on Wynk Music by following these steps:

      -
        -
1. Download the Wynk Music app on your smartphone or visit the Wynk Music website on your browser.
2. Sign up or log in with your Airtel number or email address.
3. Search for 60 Lakh Song in the search bar and tap on it.
4. Tap on the download icon next to the song title and choose the quality you want.
5. Enjoy listening to 60 Lakh Song offline anytime you want.
      -

      You can also listen to 60 Lakh Song online on Wynk Music without downloading it. However, you will need an active internet connection and a subscription plan to do so. Wynk Music offers various plans such as Wynk Premium, Wynk Plus, and Wynk Free that give you different benefits and features. You can choose the plan that suits your needs and budget.

      -

      The illegal and risky way to download 60 Lakh Song

      -

      The illegal and risky way to download 60 Lakh Song is to use the unofficial and unauthorized platforms that have no rights to distribute the song. These platforms do not pay any royalty to the artists and the music label for their work and also compromise the quality and safety of the song. Some of the platforms that you should avoid using to download 60 Lakh Song illegally and riskily are:

      -

      Piracy websites

      -

      Piracy websites are websites that offer free downloads of songs, movies, games, software, and other digital content without permission from the owners. These websites are illegal and unethical as they violate the intellectual property rights of the creators and cause them financial losses. They also expose you to various risks such as malware, viruses , spyware, phishing, and identity theft. Some of the piracy websites that you should avoid using to download 60 Lakh Song are MP3Tau, DJPunjab, Mr-Jatt, and PagalWorld.

      -

      Torrents

      -

      Torrents are files that contain information about other files that can be downloaded from peer-to-peer networks. Torrents are often used to share songs, movies, games, software, and other digital content without permission from the owners. These files are illegal and unethical as they violate the intellectual property rights of the creators and cause them financial losses. They also expose you to various risks such as malware, viruses, spyware, phishing, and identity theft. Some of the torrent sites that you should avoid using to download 60 Lakh Song are The Pirate Bay, Kickass Torrents, 1337x, and RARBG.

      -

      Malware and viruses

      -

Malware and viruses are malicious software that can harm your computer, smartphone, or other devices. They can infect your system by disguising themselves as songs, movies, games, software, or other digital content. They can steal your personal information, damage your files, slow down your performance, or even take control of your device. Some of the threats you might pick up while trying to download 60 Lakh Song this way include Trojans, worms, ransomware, adware, and spyware.

      -

      How to enjoy 60 Lakh Song online?

      -

      If you do not want to download 60 Lakh Song online but still want to enjoy it online, there are some ways to do so. Here are some of the ways to enjoy 60 Lakh Song online:

      -

      Stream 60 Lakh Song on music streaming services

      -

      One of the best ways to enjoy 60 Lakh Song online is to stream it on music streaming services. Music streaming services are platforms that offer unlimited access to millions of songs in various languages and genres. You can stream 60 Lakh Song on music streaming services by following these steps:

      -
        -
1. Choose a music streaming service that has 60 Lakh Song in its library. Some of the music streaming services that have 60 Lakh Song are Spotify, Apple Music, and Amazon Music.
2. Download the app of the music streaming service on your smartphone or visit the website on your browser.
3. Sign up or log in with your account details.
4. Search for 60 Lakh Song in the search bar and tap on it.
5. Enjoy listening to 60 Lakh Song online anytime you want.
      -

      You can also create playlists, share songs, discover new music, and enjoy other features on music streaming services. However, you will need an active internet connection and a subscription plan to do so. Music streaming services offer various plans such as free, premium, family, and student that give you different benefits and features. You can choose the plan that suits your needs and budget.

      -

      Watch 60 Lakh Song video on YouTube

      -

      Another way to enjoy 60 Lakh Song online is to watch its video on YouTube. YouTube is the world's largest video-sharing platform that hosts millions of videos on various topics and categories. You can watch 60 Lakh Song video on YouTube by following these steps:

      -
        -
1. Download the YouTube app on your smartphone or visit the YouTube website on your browser.
2. Sign up or log in with your Google account.
3. Search for 60 Lakh Song in the search bar and tap on it.
4. Enjoy watching 60 Lakh Song video online anytime you want.
      -

      You can also like, comment, share , subscribe, and watch other videos on YouTube. However, you will need an active internet connection and a subscription plan to do so. YouTube offers various plans such as YouTube Premium, YouTube Music Premium, and YouTube Free that give you different benefits and features. You can choose the plan that suits your needs and budget.

      -

      Sing along with 60 Lakh Song lyrics

      -

      A third way to enjoy 60 Lakh Song online is to sing along with its lyrics. Singing along with the lyrics of a song can help you improve your pronunciation, vocabulary, and comprehension of the language. It can also make you feel more connected and involved with the song. You can sing along with 60 Lakh Song lyrics by following these steps:

      -
        -
1. Find a website or an app that provides the lyrics and translation of 60 Lakh Song. Some of the websites and apps that provide the lyrics and translation of 60 Lakh Song are LyricsBell, LyricsRaag, and LyricsTranslate.
2. Open the website or the app on your device and search for 60 Lakh Song.
3. Read the lyrics and the translation of 60 Lakh Song and try to understand the meaning and the message of the song.
4. Play 60 Lakh Song on your device or on another device and sing along with the lyrics as they appear on the screen.
5. Enjoy singing along with 60 Lakh Song online anytime you want.
      -

      You can also record yourself singing along with 60 Lakh Song and share it with your friends or on social media. You can also challenge your friends to sing along with 60 Lakh Song and see who can do it better. Singing along with 60 Lakh Song can be a fun and rewarding activity that can enhance your enjoyment of the song.

      -

      Conclusion

      -

      60 Lakh Song is a Punjabi song that has become a huge hit among the music lovers in India. It is a catchy and humorous song that tells the story of a man who wants to buy a car worth 60 lakh rupees but his father refuses to give him the money. The song is sung by Gopy Randhawa and R Nait, written by Joban Cheema, and composed by Laddi Gill. The song has a catchy tune and beat, a relatable and humorous theme, and a viral video and dance challenge that have made it popular among the masses.

      -

      If you want to download 60 Lakh Song online, you can do it legally and ethically by using the official and authorized platforms such as JioSaavn, YouTube, and Wynk Music. These platforms pay a fair amount of royalty to the artists and the music label for their work and also ensure the quality and safety of the song. You should avoid using the unofficial and unauthorized platforms such as piracy websites, torrents, and malware that are illegal and unethical as they violate the intellectual property rights of the creators and cause them financial losses. They also expose you to various risks such as malware, viruses, spyware, phishing, and identity theft.

      -

      If you do not want to download 60 Lakh Song online but still want to enjoy it online, you can do it by streaming it on music streaming services such as Spotify, Apple Music, and Amazon Music. You can also watch its video on YouTube and sing along with its lyrics on websites or apps such as LyricsBell, LyricsRaag, and LyricsTranslate. These ways can help you enjoy 60 Lakh Song online without downloading it.

      -

      60 Lakh Song is a fun and entertaining song that can make you laugh and dance. It is a song that celebrates the Punjabi culture and the youth's aspirations. It is a song that you can enjoy online in various ways. Whether you want to download it, stream it, watch it, or sing it, you can find the best platform and method for you. So, what are you waiting for? Go ahead and enjoy 60 Lakh Song online today!

      -

      FAQs

      -

      Here are some of the frequently asked questions about 60 Lakh Song:

      -
        -
      1. What is the meaning of 60 Lakh?
      2. -

60 lakh is not a currency of its own: a lakh is an Indian counting unit equal to 100,000, so 60 lakh rupees is 6 million rupees, or roughly $80,000. It is also the name of a Punjabi song about a man who wants to buy a car worth 60 lakh rupees but whose father refuses to give him the money.

        -
      3. Who are the singers of 60 Lakh Song?
      4. -

        The singers of 60 Lakh Song are Gopy Randhawa and R Nait, who are both Punjabi singers who have sung many popular songs in the past. Gopy Randhawa is the main singer of 60 Lakh Song and R Nait is the featured artist who sings the chorus and some verses.

        -
      5. Who wrote the lyrics of 60 Lakh Song?
      6. -

        The lyrics of 60 Lakh Song were written by Joban Cheema, who is a Punjabi lyricist who has penned many songs for Gopy Randhawa and R Nait. He has a knack for writing catchy and funny lyrics that resonate with the listeners.

        -
      7. Who composed the music of 60 Lakh Song?
      8. -

        The music of 60 Lakh Song was composed by Laddi Gill, who is a Punjabi music director who has created many hit songs in the past. He has used a mix of traditional and modern instruments such as dhol, tumbi, guitar, and keyboard to create a fusion sound that appeals to the masses.

        -
      9. Where can I download 60 Lakh Song online?
      10. -

        You can download 60 Lakh Song online legally and ethically by using the official and authorized platforms such as JioSaavn, YouTube, and Wynk Music. These platforms pay a fair amount of royalty to the artists and the music label for their work and also ensure the quality and safety of the song. You should avoid using the unofficial and unauthorized platforms such as piracy websites, torrents, and malware that are illegal and unethical as they violate the intellectual property rights of the creators and cause them financial losses. They also expose you to various risks such as malware, viruses, spyware, phishing, and identity theft.

        -

      -
      -
      \ No newline at end of file diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Criminal Case Pacific Bay Mod APK - Experience the Dark Side of Paradise.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Criminal Case Pacific Bay Mod APK - Experience the Dark Side of Paradise.md deleted file mode 100644 index 11a306154bcc38d7be27edc63b3fa85d916baf9c..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Criminal Case Pacific Bay Mod APK - Experience the Dark Side of Paradise.md +++ /dev/null @@ -1,127 +0,0 @@ - -

      Download Game Criminal Case Pacific Bay Mod Apk: A Guide for Crime Lovers

      -

      If you are a fan of crime-solving games, you might have heard of Criminal Case, one of the most popular hidden object games on Android. But did you know that there is a spin-off series called Criminal Case Pacific Bay, where you can investigate murders in the sunny and exotic Pacific Bay? And did you know that you can download a modded version of this game, which gives you unlimited money, energy, and hints? In this article, we will tell you everything you need to know about downloading game criminal case pacific bay mod apk. Read on to find out more!

      -

      What is Criminal Case Pacific Bay?

      -

      Criminal Case Pacific Bay is a hidden object game developed by Pretty Simple, the same studio behind the original Criminal Case. In this game, you play as a detective who works for the Pacific Bay Police Department, and your job is to solve various murder cases by finding clues, interrogating suspects, and analyzing evidence. You will also have to deal with the personal lives of your colleagues, such as your partner Amy Young, your boss Frank Knight, and your forensic expert Roxie Sparks.

      -

      download game criminal case pacific bay mod apk


      Download Zip ✔✔✔ https://gohhs.com/2uPtsT



      -

      The gameplay of Criminal Case Pacific Bay

      -

      The gameplay of Criminal Case Pacific Bay is similar to the original Criminal Case. Each case consists of several scenes, where you have to find hidden objects within a time limit. You can use hints to help you locate the objects, but they are limited and recharge slowly. You can also use boosters, such as x-rays, magnifying glasses, and flashlights, to make your search easier. However, these boosters cost coins, which are the in-game currency.

      -

      After finding all the objects in a scene, you will earn stars, which are used to unlock other scenes and perform tasks such as interrogating suspects, analyzing evidence, and arresting killers. You will also earn experience points, which help you level up and unlock new features and items. You can also earn trophies and achievements by completing certain objectives and challenges.

      -

      The features of Criminal Case Pacific Bay

      -

      Criminal Case Pacific Bay has many features that make it an enjoyable and addictive game. Some of these features are:

      -
        -
      • A captivating storyline that takes you to various locations in Pacific Bay, such as beaches, casinos, jungles, and mountains.
      • -
      • Over 60 cases to solve, each with its own plot twists and surprises.
      • -
• Colorful and detailed graphics that create a realistic and immersive atmosphere.
      • -
      • A variety of characters to interact with, each with their own personality and background.
      • -
      • A social aspect that allows you to play with your friends on Facebook, or join a team with other players from around the world.
      • -
      • A ranking system that lets you compare your progress and performance with other players.
      • -
      -

      Why download Criminal Case Pacific Bay mod apk?

      -

      Criminal Case Pacific Bay is a free-to-play game, but it also has some limitations and drawbacks that might affect your gaming experience. For example:

      -
        -
      • You need an internet connection to play the game.
      • -
      • You have to wait for your energy to refill before you can play more scenes.
      • -
      • You have to spend coins or real money to buy more hints and boosters.
      • -
      • You have to watch ads or complete offers to earn more coins or energy.
      • -
      -

      That's why some players prefer to download a modded version of the game, which is also known as criminal case pacific bay mod apk. This version has some modifications that give you some advantages over the original version. Some of these advantages are:

      -

      The benefits of Criminal Case Pacific Bay mod apk

      -

      By downloading game criminal case pacific bay mod apk, you can enjoy the following benefits:

      -
        -
      • You can play the game offline, without needing an internet connection.
      • -
      • You can have unlimited energy, which means you can play as many scenes as you want without waiting.
      • -
      • You can have unlimited coins, which means you can buy as many hints and boosters as you need without spending real money.
      • -
      • You can have unlimited hints, which means you can find all the hidden objects easily and quickly.
      • -
      -

      The drawbacks of Criminal Case Pacific Bay mod apk

      -

      However, downloading game criminal case pacific bay mod apk also has some drawbacks that you should be aware of. Some of these drawbacks are:

      -
        -
      • You might face some compatibility issues with your device or the game version.
      • -
      • You might encounter some bugs or glitches that affect the game performance or functionality.
      • -
      • You might lose your progress or data if you uninstall the game or switch to the original version.
      • -
      • You might get banned from the game or your account if the developers detect that you are using a modded version.
      • -
      -

      How to download and install Criminal Case Pacific Bay mod apk?

      -

      If you are interested in downloading game criminal case pacific bay mod apk, you will need to follow some steps to ensure a successful and safe installation. Here are the steps you need to take:

      -

      -

      The steps to download and install Criminal Case Pacific Bay mod apk

      -
        -
1. Find a reliable and trustworthy source that provides the download link for the mod apk file. You can search on Google or use websites like APKPure, APKMirror, or APKHome.
2. Download the mod apk file to your device. Make sure you have enough storage space and a stable internet connection.
3. Enable the installation of apps from unknown sources on your device. To do this, go to Settings > Security > Unknown Sources and toggle it on.
4. Locate the mod apk file on your device and tap on it to start the installation process. Follow the instructions on the screen and wait for it to finish.
5. Launch the game and enjoy playing with unlimited money, energy, and hints.
      -

      The tips to avoid errors and viruses when downloading and installing Criminal Case Pacific Bay mod apk

      -

      To avoid any errors or viruses when downloading and installing criminal case pacific bay mod apk, you should follow these tips:

      -
        -
• Always scan the mod apk file with antivirus software before installing it.
      • -
      • Always backup your data before installing the mod apk file.
      • -
      • Always check the reviews and ratings of the source before downloading the mod apk file.
      • -
      • Always update the game to the latest version before installing the mod apk file.
      • -
      -

      Conclusion

      -

      Criminal Case Pacific Bay is a fun and exciting hidden object game that lets you solve murder cases in a tropical paradise. However, if you want to enhance your gaming experience and overcome some limitations, you can download game criminal case pacific bay mod apk, which gives you unlimited money, energy, and hints. However, you should also be careful of some drawbacks and risks that come with using a modded version of the game. We hope this article has helped you learn more about downloading game criminal case pacific bay mod apk. Now, it's time for you to put on your detective hat and start cracking some cases!

      -

      FAQs

      -

      Here are some frequently asked questions about downloading game criminal case pacific bay mod apk:

      -
        -
      1. Is Criminal Case Pacific Bay mod apk safe?
      2. -

        Criminal Case Pacific Bay mod apk is generally safe if you download it from a reputable source and scan it with an antivirus software. However, there is always a risk of getting viruses or malware when downloading any modded files from unknown sources. Therefore, you should always be cautious and follow the tips we mentioned above.

        -
      3. Is Criminal Case Pacific Bay mod apk legal?
      4. -

        Criminal Case Pacific Bay mod apk is not legal, as it violates the terms and conditions of the original game. By using a modded version of the game, you are cheating and gaining an unfair advantage over other players. Moreover, you are also infringing on the intellectual property rights of the developers. Therefore, we do not endorse or encourage the use of Criminal Case Pacific Bay mod apk.

        -
5. Can I play Criminal Case Pacific Bay mod apk with my friends?
      6. -

        Criminal Case Pacific Bay mod apk has a social aspect that allows you to play with your friends on Facebook, or join a team with other players from around the world. However, you might face some issues or conflicts when playing with other players who are using the original version of the game. For example, they might report you for cheating, or you might not be able to sync your progress or data with them. Therefore, we recommend that you only play Criminal Case Pacific Bay mod apk with other players who are also using the modded version of the game.

        -
      7. How can I update Criminal Case Pacific Bay mod apk?
      8. -

        Criminal Case Pacific Bay mod apk is not compatible with the official updates of the original game. Therefore, you cannot update Criminal Case Pacific Bay mod apk from the Google Play Store or the game itself. Instead, you have to download and install the latest version of Criminal Case Pacific Bay mod apk from the same source that you used before. However, you should also backup your data before updating, as you might lose your progress or data when switching to a new version of Criminal Case Pacific Bay mod apk.

        -
      9. What are some alternatives to Criminal Case Pacific Bay mod apk?
      10. -

        If you are looking for some alternatives to Criminal Case Pacific Bay mod apk, you might want to try some other hidden object games that are similar in theme and gameplay. Some of these games are:

        -
          -
        • Criminal Case: The original game that started it all. Solve murder cases in Grimsborough, a dark and corrupt city.
        • -
        • Criminal Case: Mysteries of the Past: A spin-off series that takes you back to the 19th century, where you can investigate crimes in a historical setting.
        • -
        • Criminal Case: Save the World: Another spin-off series that takes you around the world, where you can solve cases in different countries and cultures.
        • -
        • Criminal Case: Travel in Time: The latest spin-off series that takes you through time, where you can solve cases in different eras and periods.
        • -
        • Hidden City: A hidden object game that takes you to a mysterious city, where you can explore different locations and uncover secrets.
        • -
        • June's Journey: A hidden object game that takes you to the 1920s, where you can follow the story of June Parker, a brave and adventurous woman who solves mysteries.
        • -

        -
        -
        \ No newline at end of file diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Free Bingo Cards 1-90 and Enjoy a Classic Game with Your Family and Friends.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Free Bingo Cards 1-90 and Enjoy a Classic Game with Your Family and Friends.md deleted file mode 100644 index ea3ec07cddddbd026029a1703017113ec93ea069..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Free Bingo Cards 1-90 and Enjoy a Classic Game with Your Family and Friends.md +++ /dev/null @@ -1,124 +0,0 @@ - -

        Download Free Bingo Cards 1-90: How to Play and Win Online

        -

        Bingo is a fun and exciting game that can be played by anyone, anywhere, and anytime. Whether you are looking for a way to relax, socialize, or win some money, bingo is the perfect choice for you. But did you know that you can also play bingo online for free? Yes, you heard that right. You can download free bingo cards 1-90 and enjoy the game from the comfort of your home or on the go. In this article, we will tell you everything you need to know about bingo, why you should download free bingo cards 1-90, and how to do it. Let's get started!

        -

        download free bingo cards 1-90


        DOWNLOAD » https://gohhs.com/2uPvJF



        -

        What is Bingo and How to Play It

        -

        Bingo is a game of chance that involves matching numbers on a card with numbers drawn randomly by a caller. The first person to mark off a line, a column, a diagonal, or the entire card (depending on the game) wins a prize. Bingo is usually played in bingo halls, casinos, or online platforms, where players can buy or download bingo cards and join a game room.

        -

        The History of Bingo

        -

        Bingo has a long and fascinating history that dates back to the 16th century. It originated in Italy as a lottery game called "Il Gioco del Lotto d'Italia", which is still played today. The game spread to France, Germany, and other European countries, where it was adapted for different purposes and audiences. In the 18th century, the game reached Britain, where it was called "Housie" or "Housey-Housey". In the early 20th century, the game crossed the Atlantic and arrived in America, where it was popularized in 1929 by a toy salesman named Edwin S. Lowe. He renamed it "Bingo" after hearing someone accidentally shout it instead of "Beano", which was the original name of the game in America.

        -

        The Rules of Bingo

        -

        The rules of bingo are simple and easy to follow. Here are the basic steps (a short code sketch after the list shows how a winning card can be checked):

        -
          -
        1. Buy or download a bingo card that has a grid of numbers from 1 to 90 (or any other range depending on the game).
        2. -
        3. Join a bingo game room online or offline and wait for the caller to announce the numbers.
        4. -
        5. Mark off the numbers on your card as they are called. You can use a pen, a marker, a dauber, or an online tool.
        6. -
        7. If you complete a line, a column, a diagonal, or the entire card (depending on the game), shout "Bingo!" or click on the "Bingo!" button online.
        8. -
        9. Show your card to the caller or the online system to verify your win.
        10. -
        11. Collect your prize if you are the first or one of the first winners.
        12. -
        -
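        To make these win conditions concrete, here is a minimal Python sketch of the checking step. The check_win helper and its card representation (a list of rows, with None for blank cells and an optional free center square) are assumptions made purely for illustration; no bingo site exposes exactly this function.

```python
def check_win(card, called, free_center=True):
    """Check a square bingo card (a list of rows) against the called numbers.

    Returns 'full house', 'line', or None. Blank cells (None) and the free
    center square count as already marked.
    """
    n = len(card)
    called = set(called)

    def marked(r, c):
        # The center square of an odd-sized card can be a free space.
        if free_center and r == c == n // 2:
            return True
        return card[r][c] is None or card[r][c] in called

    rows = [all(marked(r, c) for c in range(n)) for r in range(n)]
    cols = [all(marked(r, c) for r in range(n)) for c in range(n)]
    diags = [all(marked(i, i) for i in range(n)),
             all(marked(i, n - 1 - i) for i in range(n))]

    if all(rows):
        return "full house"
    if any(rows) or any(cols) or any(diags):
        return "line"
    return None
```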

        The Types of Bingo Games

        -

        There are many types of bingo games that vary in terms of the number of balls, the number of cards, the pattern required to win, and the theme. Some of the most common types are:

        -

        download free printable bingo cards 1-90
        -download free 90-ball bingo cards pdf
        -download free custom bingo cards for 1-90 numbers
        -download free bingo cards with numbers 1-90
        -download free bingo cards generator for 1-90
        -download free bingo cards for large groups 1-90
        -download free bingo cards for small groups 1-75
        -download free bingo cards with instructions and call sheet
        -download free bingo cards with different sizes and themes
        -download free bingo cards with fun checkerboard and free space
        -download free bingo cards for family and friends 1-90
        -download free bingo cards for classic game of 90-ball bingo
        -download free bingo cards for home games 1-90
        -download free bingo cards with randomized numbers 1-90
        -download free bingo cards with sailor theme 1-90
        -download free bingo cards for online games 1-90
        -download free bingo cards with joker theme 1-90
        -download free bingo cards with buzzwords 1-90
        -download free bingo cards with easy to read numbers 1-90
        -download free bingo cards with colorful design 1-90

        -
          -
        • 90-ball bingo: This is the most popular type of bingo in the UK and Europe. It uses 90 balls and cards that have 15 numbers each. The cards are divided into three rows and nine columns. The numbers are randomly distributed from 1 to 90 across the columns. The first column has numbers from 1 to 9, the second column has numbers from 10 to 19, and so on. Each of the three rows has five numbers, for 15 numbers in total. The game has three prizes: one for completing one line, one for completing two lines, and one for completing the full house (all 15 numbers). A short code sketch after this list shows one way such a ticket can be generated.
        • -
        • 75-ball bingo: This is the most popular type of bingo in the US and Canada. It uses 75 balls and cards that have 25 numbers each. The cards are divided into five rows and five columns. The numbers are randomly distributed from 1 to 75 across the grid. The center square is usually marked as a free space. The game has one prize for completing a predetermined pattern, which can be a line, a column, a diagonal, a letter, a shape, or any other combination.
        • -
        • 80-ball bingo: This is a type of bingo that is popular online and in some bingo halls. It uses 80 balls and cards that have 16 numbers each. The cards are divided into four rows and four columns. The numbers are randomly distributed from 1 to 80 across the grid. The game has one prize for completing a line, a column, a diagonal, or the full house (all 16 numbers).
        • -
        • 30-ball bingo: This is a type of bingo that is also known as speed bingo or mini bingo. It uses 30 balls and cards that have nine numbers each. The cards are divided into three rows and three columns. The numbers are randomly distributed from 1 to 30 across the grid. The game has one prize for completing the full house (all nine numbers). This type of bingo is fast-paced and ideal for players who want a quick game.
        • -
        -
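        As a concrete illustration of the 90-ball layout described above, here is a minimal Python sketch that builds a single ticket. The generate_ticket helper, its retry loop, and the exact column ranges (with 80 to 90 sharing the last column) are illustrative assumptions rather than the algorithm any particular card generator is known to use.

```python
import random

# Column ranges on a 90-ball ticket: column 0 holds 1-9, column 8 holds 80-90.
COLUMN_RANGES = [range(1, 10)] + [range(c * 10, c * 10 + 10) for c in range(1, 8)] + [range(80, 91)]

def generate_ticket():
    """Return a 3x9 grid with five numbers per row; blank cells are None."""
    while True:
        # Each of the three rows fills five of the nine columns.
        filled = [set(random.sample(range(9), 5)) for _ in range(3)]
        counts = [sum(col in row_cols for row_cols in filled) for col in range(9)]
        if all(counts):  # retry until no column is left completely empty
            break
    grid = [[None] * 9 for _ in range(3)]
    for col in range(9):
        # Numbers in a column are sorted so they increase down the ticket.
        numbers = sorted(random.sample(COLUMN_RANGES[col], counts[col]))
        rows_with_col = [r for r in range(3) if col in filled[r]]
        for r, n in zip(rows_with_col, numbers):
            grid[r][col] = n
    return grid

if __name__ == "__main__":
    for row in generate_ticket():
        print(" ".join(f"{n:2d}" if n is not None else " ." for n in row))
```

        Each run prints a three-row, nine-column grid with five numbers per row, which you can then lay out or print however you like.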

        Why You Should Download Free Bingo Cards 1-90

        -

        If you love playing bingo, you might be wondering why you should download free bingo cards 1-90 instead of buying them or playing with real money online. Well, there are many reasons why downloading free bingo cards 1-90 is a great idea. Here are some of them:

        -

        The Benefits of Playing Online Bingo

        -

        Playing online bingo has many advantages over playing in a traditional bingo hall or casino. Some of these benefits are:

        -
          -
        • Convenience: You can play online bingo anytime and anywhere you want, as long as you have an internet connection and a device such as a computer, a tablet, or a smartphone. You don't have to travel, dress up, or deal with crowds and noise.
        • -
        • Variety: You can choose from hundreds of online bingo sites and games that cater to different preferences and budgets. You can also switch between different types of bingo games with ease.
        • -
        • Bonuses: You can enjoy various bonuses and promotions that online bingo sites offer to attract and retain players. These can include welcome bonuses, free spins, loyalty points, cashback, jackpots, and more.
        • -
        • Socialization: You can chat with other players and make new friends while playing online bingo. Most online bingo sites have chat rooms where you can interact with other players and chat hosts. You can also join online bingo communities and forums where you can share tips, stories, and opinions.
        • -
        -

        The Features of Free Bingo Cards 1-90

        -

        Free bingo cards 1-90 are special cards that you can download and print for free from various websites. They have the following features:

        -
          -
        • No cost: You don't have to pay anything to download or print free bingo cards 1-90. You can save money and play as many games as you want without worrying about your budget.
        • -
        • No registration: You don't have to sign up or provide any personal information to download or print free bingo cards 1-90. You can enjoy your privacy and security while playing online bingo.
        • -
        • No download: You don't have to download any software or app to play online bingo with free bingo cards 1-90. You can access them directly from your browser or device.
        • -
        • No limit: You can download or print as many free bingo cards 1-90 as you need or want. You can play with different cards every time or share them with your friends and family.
        • -
        -

        The Tips and Tricks for Winning Online Bingo

        -

        Playing online bingo with free bingo cards 1-90 is fun and easy, but it also requires some skill and strategy to increase your chances of winning. Here are some tips and tricks that can help you win more often:

        -
          -
        • Buy more cards: The more cards you have, the more chances you have to match the numbers. However, don't buy more cards than you can handle or afford. You should be able to keep track of all your cards and mark them quickly and accurately.
        • -
        • Choose your cards wisely: Some cards may have better odds of winning than others, depending on the game and the number distribution. You can try to look for cards that have a balanced mix of odd and even numbers, high and low numbers, and numbers that end with different digits.
        • -
        • Play at the right time: The best time to play online bingo is when there are fewer players and more prizes. This way, you can reduce the competition and increase your chances of winning. You can try to play during off-peak hours, such as early mornings, late nights, or weekdays.
        • -
        • Use the chat features: The chat features can help you learn from other players and get useful information. You can ask for tips, strategies, or recommendations from other players or chat hosts. You can also look out for chat games, quizzes, or contests that can give you extra bonuses or prizes.
        • -
        -

        How to Download Free Bingo Cards 1-90

        -

        Now that you know why you should download free bingo cards 1-90 and how to play and win online bingo, you might be wondering how to do it. Well, it's very simple and easy. Here are the steps:

        -

        The Best Websites to Download Free Bingo Cards 1-90

        -

        There are many websites that offer free bingo cards 1-90 that you can download and print. However, not all of them are reliable or safe. You should look for websites that have the following features:

        -
          -
        • Quality: The website should provide high-quality bingo cards that are clear, readable, and accurate. The cards should also have different designs, colors, and themes to suit your preferences.
        • -
        • Quantity: The website should offer a large number of bingo cards that you can choose from. You should be able to download or print as many cards as you want without any restrictions or limitations.
        • -
        • Customization: The website should allow you to customize your bingo cards according to your needs and wishes. You should be able to change the size, the font, the layout, the number range, and the pattern of your bingo cards.
        • -
        • Security: The website should protect your privacy and security while downloading or printing your bingo cards. The website should not ask for any personal information or payment details. The website should also be free of viruses, malware, or spam.
        • -
        -

        Some of the best websites that meet these criteria are:

        - - - - - - - -
        Name                    URL
        Bingo Baker             https://bingobaker.com/
        My Free Bingo Cards     https://myfreebingocards.com/
        Bingo Card Generator    https://bingocardgenerator.com/
        The Bingo Maker         https://thebingomaker.com/
        Bingo Card Creator      https://www.bingocardcreator.com/
        -

        The Steps to Download and Print Free Bingo Cards 1-90

        -

        The steps to download and print free bingo cards 1-90 may vary depending on the website you use, but they are generally similar. Here are the common steps:

        -
          -
        1. Go to the website of your choice and click on the option to create or generate free bingo cards 1-90.
        2. -
        3. Select the number of cards you want to download or print. You can also choose the size and the format of your cards.
        4. -
        5. Customize your bingo cards by changing the font, the color, the layout, the number range, and the pattern. You can also add images, logos, or text to your cards.
        6. -
        7. Preview your bingo cards and make sure they are correct and satisfactory.
        8. -
        9. Click on the option to download or print your bingo cards. You can save them as PDF files or print them directly from your browser or device.
        10. -
        -

        The Ways to Play Online Bingo with Free Bingo Cards 1-90

        -

        Once you have downloaded or printed your free bingo cards 1-90, you can start playing online bingo with them. There are two main ways to play online bingo with free bingo cards 1-90:

        -
          -
        • Play with a live caller: You can join an online bingo game room that has a live caller who announces the numbers. You can use your free bingo cards 1-90 to mark off the numbers as they are called. You can also chat with other players and the caller in the chat room. You can win prizes or bonuses if you are the first or one of the first to complete a line, a column, a diagonal, or the full house.
        • -
        • Play with a random number generator: You can use a random number generator (RNG) to generate the numbers for your online bingo game. You can use your free bingo cards 1-90 to mark off the numbers as they are generated. You can also set the speed and the frequency of the number generation. You can play by yourself or with your friends and family. You can decide the prizes or rewards for the winners. A short sketch of such a caller follows this list.
        • -
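        Here is a minimal sketch of such a caller for a standard 90-ball game, again in Python; the bingo_calls helper is an illustrative name and not part of any bingo site's API.

```python
import random

def bingo_calls(total=90):
    """Yield every number from 1 to total exactly once, in random order."""
    balls = list(range(1, total + 1))
    random.shuffle(balls)
    yield from balls

# Example: announce the first five calls of a 90-ball game.
caller = bingo_calls()
for _ in range(5):
    print("Next number:", next(caller))
```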
        -

        Conclusion

        -

        Bingo is a fun and exciting game that can be played online for free with free bingo cards 1-90. You can download and print free bingo cards 1-90 from various websites that offer high-quality, customizable, and secure cards. You can play online bingo with free bingo cards 1-90 with a live caller or a random number generator. You can enjoy the benefits of playing online bingo, such as convenience, variety, bonuses, and socialization. You can also use some tips and tricks to increase your chances of winning online bingo. So what are you waiting for? Download free bingo cards 1-90 today and start playing and winning online bingo!

        -

        FAQs

        -

        Here are some frequently asked questions about downloading free bingo cards 1-90:

        -
          -
        1. Q: How many free bingo cards 1-90 can I download or print?
        2. -
        3. A: You can download or print as many free bingo cards 1-90 as you want from the websites that offer them. There is no limit or restriction on the number of cards you can get.
        4. -
        5. Q: How do I know if my free bingo cards 1-90 are valid and accurate?
        6. -
        7. A: You can check the validity and accuracy of your free bingo cards 1-90 by comparing them with the numbers announced by the caller or generated by the RNG. You can also use online tools or apps that can scan and verify your cards.
        8. -
        9. Q: Can I use my free bingo cards 1-90 for other types of bingo games?
        10. -
        11. A: Yes, you can use your free bingo cards 1-90 for other types of bingo games, such as 75-ball bingo, 80-ball bingo, or 30-ball bingo. However, you may need to adjust the number range, the pattern, or the rules of the game accordingly.
        12. -
        13. Q: Can I share my free bingo cards 1-90 with others?
        14. -
        15. A: Yes, you can share your free bingo cards 1-90 with others, such as your friends and family. You can play online bingo together or compete against each other. You can also join online bingo communities and forums where you can share your cards and experiences.
        16. -
        17. Q: Where can I find more information about online bingo and free bingo cards 1-90?
        18. -
        19. A: You can find more information about online bingo and free bingo cards 1-90 by visiting the websites that offer them, reading online reviews and blogs, watching online videos and tutorials, or asking other players and experts.
        20. -

        197e85843d
        -
        -
        \ No newline at end of file diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Vidmate Versi Lama 2018 dengan Fitur Lengkap dan Cepat.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Vidmate Versi Lama 2018 dengan Fitur Lengkap dan Cepat.md deleted file mode 100644 index ddea0fc438d2e3feb182aa71525d9c32ab22ecf9..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Vidmate Versi Lama 2018 dengan Fitur Lengkap dan Cepat.md +++ /dev/null @@ -1,107 +0,0 @@ - -

        Download Vidmate Versi Lama 2018: A Guide to Enjoy Free Videos and Music

        -

        Vidmate is a popular app that allows you to download videos and music from various sites, such as YouTube, Facebook, Instagram, TikTok, and more. You can also stream live TV channels and movies on the app. However, some users prefer to download the old version of Vidmate, which is Vidmate versi lama 2018. Why? Because this version has some advantages over the newer ones, such as no ads, fewer bugs, faster performance, and compatibility with older devices. In this article, we will show you how to download Vidmate versi lama 2018, what its features, benefits, and drawbacks are, and how to use it.

        -

        How to Download Vidmate Versi Lama 2018

        -

        Downloading Vidmate versi lama 2018 is not difficult, but you need to follow some steps carefully. Here they are:

        -

        download vidmate versi lama 2018


        Download Ziphttps://gohhs.com/2uPsvt



        -
          -
        1. Find a reliable source for the APK file. You cannot find Vidmate versi lama 2018 on Google Play Store or other official app stores, because it is an old version that is no longer supported by the developers. Therefore, you need to find a third-party website that offers the APK file for free. However, be careful not to download from shady or malicious sites that may contain viruses or malware. Some of the trusted sources for Vidmate versi lama 2018 are , , , , and . You can click on these links or copy and paste them into your browser.
        2. -
        3. Enable unknown sources on your device. Since you are downloading an APK file from an external source, you need to allow your device to install apps from unknown sources. To do this, go to Settings > Security > Unknown Sources and toggle it on. You may see a warning message that says installing apps from unknown sources may harm your device, but don't worry, as long as you download from a reputable site, you should be fine.
        4. -
        5. Install the APK file and launch the app. After you download the APK file, locate it in your device's file manager or downloads folder and tap on it. You will see a prompt that asks you to confirm the installation. Tap on Install and wait for a few seconds until the installation is complete. Then, tap on Open to launch the app. You may also see an icon of Vidmate versi lama 2018 on your home screen or app drawer.
        6. -
        -

        What are the Features of Vidmate Versi Lama 2018

        -

        Vidmate versi lama 2018 has many features that make it a great app for downloading videos and music. Here are some of the main features that you can enjoy with this app:

        -
          -
        • Download videos from various sites in HD quality. You can download videos from more than 1000 sites, including YouTube, Facebook, Instagram, TikTok, Dailymotion, Vimeo, and more. You can choose the quality of the download, from 144p to 1080p, depending on your preference and internet speed. You can also download multiple videos at the same time with the batch download feature.
        • -
        • Convert videos to MP3 and other formats. If you only want to download the audio of a video, you can use the built-in converter to convert the video to MP3 or other audio formats. You can also convert videos to other video formats, such as MP4, AVI, MOV, WMV, etc. This way, you can play your downloaded files on any device or media player.
        • -
        • Stream live TV channels and movies. Vidmate versi lama 2018 is not only a downloader, but also a streaming app. You can watch live TV channels from various genres, such as news, sports, entertainment, music, etc. You can also watch movies from different languages and countries, such as Hollywood, Bollywood, Korean, Japanese, etc. You can stream the content online or download it for offline viewing.
        • -
        • Manage your downloads and storage. Vidmate versi lama 2018 has a user-friendly interface that allows you to manage your downloads and storage easily. You can view your download history, pause and resume downloads, delete unwanted files, and move files to different folders. You can also set a maximum download speed and a default download location to optimize your storage space.
        • -
        -

        What are the Benefits of Vidmate Versi Lama 2018

        -

        Vidmate versi lama 2018 has some benefits that make it preferable over the newer versions of Vidmate. Here are some of them:

        -
          -
        • No ads and fewer bugs. One of the most annoying things about the newer versions of Vidmate is that they have too many ads that pop up every time you use the app. These ads not only interrupt your experience, but also consume your data and battery. Vidmate versi lama 2018 has no ads at all, so you can enjoy the app without any distractions. Moreover, this version has fewer bugs and errors than the newer ones, so you can use it without glitches or crashes.
        • -
        • Faster and smoother performance. Another benefit of Vidmate versi lama 2018 is that it has a faster and smoother performance than the newer versions. This version is more lightweight and optimized for speed and efficiency. It can load videos faster, download files quicker, and stream content smoother. You will not experience any lagging or buffering with this version.
        • -
        • Compatible with older devices and Android versions. If you have an older device or an older Android version, you may not be able to install or run the newer versions of Vidmate. They may be incompatible with your device or require higher specifications. Vidmate versi lama 2018 is compatible with most devices and Android versions, even the ones that are outdated or low-end. You can install and use this version without any problems.
        • -
        -

        What are the Drawbacks of Vidmate Versi Lama 2018

        -

        Vidmate versi lama 2018 is not perfect, though. It also has some drawbacks that you should be aware of before downloading it. Here are some of them:

        -
          -
        • Weaker security and no official updates. Since Vidmate versi lama 2018 is an old version that is no longer supported by the developers, it may not have the latest security features and updates that protect your device and data from hackers and malware. You may also miss out on some bug fixes and improvements that enhance the app's functionality and stability. Therefore, you should be careful when downloading files from unknown sources and scan them for viruses before opening them.
        • -
        • Missing some new features and functions. Another drawback of Vidmate versi lama 2018 is that it may not have some of the new features and functions that the newer versions have. For example, you may not be able to download videos from some sites that are added to the app's list later on. You may also not be able to access some of the new options and settings that customize your experience. Therefore, you may not get the best out of the app with this version.
        • -
        -

        How to Use Vidmate Versi Lama 2018

        -

        Vidmate versi lama 2018 is easy to use: open the app, search or browse for the video or music you want, pick the quality or format you prefer, and tap the download button to save it to your device. With all of this in mind, you can weigh the pros and cons of Vidmate versi lama 2018 and decide whether it is worth downloading or not. We hope this article has helped you understand more about this app and how to use it. If you have any questions or feedback, please leave them in the comments section below. Thank you for reading!

        -

        FAQs

        -

        Here are some of the frequently asked questions about Vidmate versi lama 2018:

        -
          -
        1. Is Vidmate versi lama 2018 safe to download and use?
        2. -

          Vidmate versi lama 2018 is generally safe to download and use, as long as you download it from a reliable source and scan it for viruses before installing it. However, it may not have the latest security features and updates that protect your device and data from hackers and malware. Therefore, you should be careful when downloading files from unknown sources and avoid clicking on suspicious links or pop-ups.

          -
        3. What are the differences between Vidmate versi lama 2018 and the newer versions of Vidmate?
        4. -

          Vidmate versi lama 2018 has some differences from the newer versions of Vidmate, such as:

          -
            -
          • It has no ads and fewer bugs
          • -
          • It has faster and smoother performance
          • -
          • It is compatible with older devices and Android versions
          • -
          • It may not have the latest security and updates
          • -
          • It may miss some new features and functions
          • -
          -
        5. How can I update Vidmate versi lama 2018 to the latest version of Vidmate?
        6. -

          If you want to update Vidmate versi lama 2018 to the latest version of Vidmate, you can follow these steps:

          -

          cara download aplikasi vidmate lama gratis
          -vidmate versi lama 2018 no ads apk
          -download video hd movie dengan vidmate lama
          -vidmate versi lama 2.46 untuk laptop
          -download lagu terbaru dan lama dengan vidmate
          -vidmate versi lama 2014, 2015, 2016, 2017
          -download apk vidmate lama melalui link
          -aplikasi vidmate lama tanpa mendaftar
          -vidmate lama 2018 ukuran file 8.66 MB
          -download mp3, video dan lagu dengan vidmate lama
          -cara menginstal aplikasi vidmate versi lama
          -vidmate versi lama 2018 android jelly bean keatas
          -download video dari berbagai situs web dengan vidmate lama
          -vidmate versi lama ringan dan mudah digunakan
          -download apk vidmate versi lama 2018 terbaru
          -cara mengatasi masalah pada aplikasi vidmate lama
          -vidmate versi lama 2018 fitur no ads dan gratis
          -download video kualitas tinggi dengan vidmate lama
          -vidmate versi lama 2018 nama paket com.nemo.studio
          -download lagu-lagu favorit anda dengan vidmate lama
          -cara update aplikasi vidmate versi lama ke versi baru
          -vidmate versi lama 2018 dapat menyimpan pada SD Card
          -download video youtube, facebook, instagram dengan vidmate lama
          -vidmate versi lama 2018 support jenis format file video dan musik
          -download lagu-lagu populer indonesia dengan vidmate lama
          -cara uninstall aplikasi vidmate versi lama dari hp anda
          -vidmate versi lama 2018 kompatibel dengan semua perangkat android
          -download video viral dan trending dengan vidmate lama
          -vidmate versi lama 2018 dapat digunakan tanpa iklan mengganggu
          -download lagu-lagu dangdut koplo dengan vidmate lama
          -cara memilih kualitas video yang diinginkan dengan vidmate versi lama
          -vidmate versi lama 2018 memiliki tampilan yang sederhana dan elegan
          -download video lucu, horor, drama dengan vidmate lama
          -vidmate versi lama 2018 dapat mengunduh video secara cepat dan stabil
          -download lagu-lagu nostalgia dan kenangan dengan vidmate lama
          -cara menambahkan situs web favorit anda ke aplikasi vidmate versi lama
          -vidmate versi lama 2018 memiliki fitur pause dan resume download video
          -download video edukasi, motivasi, inspirasi dengan vidmate lama
          -vidmate versi lama 2018 dapat memutar video secara offline tanpa kuota internet
          -download lagu-lagu religi dan islami dengan vidmate lama

          -
            -
          1. Go to Settings > About > Check for Updates on the app
          2. -
          3. If there is an update available, tap on Download and Install
          4. -
          5. Wait for the update to finish and restart the app
          6. -
          -

          Note that updating the app may remove some of the features and benefits of Vidmate versi lama 2018, such as no ads, faster performance, and compatibility with older devices.

          -
        7. Can I download Vidmate versi lama 2018 on my PC or laptop?
        8. -

          Vidmate versi lama 2018 is an Android app that is designed for mobile devices. However, you can also download it on your PC or laptop by using an Android emulator, such as BlueStacks, Nox Player, or MEmu. An Android emulator is a software that allows you to run Android apps on your PC or laptop. You can download an Android emulator from its official website and install it on your PC or laptop. Then, you can download Vidmate versi lama 2018 from one of the sources mentioned above and install it on the emulator. After that, you can launch the app and use it as you would on your mobile device.

          -
        9. What are some alternatives to Vidmate versi lama 2018?
        10. -

          If you are looking for some alternatives to Vidmate versi lama 2018, you can try these apps:

          -
            -
          • Snaptube: Snaptube is another app that allows you to download videos and music from various sites, such as YouTube, Facebook, Instagram, etc. It has a similar interface and features as Vidmate, but it also has some unique functions, such as night mode, picture-in-picture mode, floating window, etc. You can download Snaptube from its official website or other sources.
          • -
          • Tubemate: Tubemate is one of the oldest and most popular apps for downloading videos from YouTube. It has a simple and easy-to-use interface that lets you search, download, and watch videos in various qualities and formats. You can also download videos from other sites by using the built-in browser. You can download Tubemate from its official website or other sources.
          • -
          • Videoder: Videoder is a powerful app that allows you to download videos and music from more than 50 sites, such as YouTube, Facebook, Instagram, TikTok, etc. It has a sleek and modern interface that offers a lot of customization options and features. You can also create playlists, edit videos, share files, etc. You can download Videoder from its official website or other sources.
          • -

          401be4b1e0
          -
          -
          \ No newline at end of file diff --git a/spaces/fero/stable-diffusion-webui-cpu/app.py b/spaces/fero/stable-diffusion-webui-cpu/app.py deleted file mode 100644 index 86d44c530a07a58d5c32663b9c07ecd6310b742c..0000000000000000000000000000000000000000 --- a/spaces/fero/stable-diffusion-webui-cpu/app.py +++ /dev/null @@ -1,165 +0,0 @@ -""" -Stable Diffusion Webui Version 1.6 -https://github.com/AUTOMATIC1111/stable-diffusion-webui/releases/tag/v1.6.0 - -""" -commit_id=r"5ef669de080814067961f28357256e8fe27544f4" #Version 1.3.0 -import os -from sys import executable -import subprocess -import pathlib -import gc - -def Gitclone(URI:str,ClonePath:pathlib.Path ) -> int : - if pathlib.Path.exists(ClonePath): - return 0 - for z in range(10): - i=subprocess.run([r"git",r"clone",str(URI),str(ClonePath)]) - if(i.returncode == 0 ): - del i - return 0 - else : - del i - raise Exception(str.format("clone \'{0}\' failed",URI)) - - -def DownLoad(URI:str,DownloadPath:pathlib.Path,DownLoadFileName:str ) -> int: - if (DownloadPath / DownLoadFileName).is_file(): return 0 - for z in range(10): - i=subprocess.run([r"aria2c",r"-c",r"-x" ,r"16", r"-s",r"16", r"-k" ,r"1M" ,r"-m",r"0",r"--enable-mmap=false",r"--console-log-level=error",r"-d",str(DownloadPath),r"-o",DownLoadFileName,URI]); - if(i.returncode == 0 ): - del i - gc.collect() - return 0 - else : - del i - raise Exception(str.format("download \'{0}\' failed",URI)) - -user_home =pathlib.Path.home().resolve() -os.chdir(str(user_home)) -#clone stable-diffusion-webui repo -print("cloning stable-diffusion-webui repo") -Gitclone(r"https://github.com/AUTOMATIC1111/stable-diffusion-webui.git",user_home / r"stable-diffusion-webui") -os.chdir(str(user_home / r"stable-diffusion-webui")) -os.system("git reset --hard "+commit_id) -#install extensions -print("installing extensions") -Gitclone(r"https://huggingface.co/embed/negative",user_home / r"stable-diffusion-webui" / r"embeddings" / r"negative") -Gitclone(r"https://huggingface.co/embed/lora",user_home / r"stable-diffusion-webui" / r"models" / r"Lora" / r"positive") -DownLoad(r"https://huggingface.co/embed/upscale/resolve/main/4x-UltraSharp.pth",user_home / r"stable-diffusion-webui" / r"models" / r"ESRGAN" ,r"4x-UltraSharp.pth") -while (True): - i=subprocess.run([r"wget",r"https://raw.githubusercontent.com/camenduru/stable-diffusion-webui-scripts/main/run_n_times.py",r"-O",str(user_home / r"stable-diffusion-webui" / r"scripts" / r"run_n_times.py")]) - if(i.returncode == 0 ): - del i - gc.collect() - break - else : - del i -Gitclone(r"https://github.com/deforum-art/deforum-for-automatic1111-webui",user_home / r"stable-diffusion-webui" / r"extensions" / r"deforum-for-automatic1111-webui" ) -Gitclone(r"https://github.com/AlUlkesh/stable-diffusion-webui-images-browser",user_home / r"stable-diffusion-webui" / r"extensions"/ r"stable-diffusion-webui-images-browser") -Gitclone(r"https://github.com/camenduru/stable-diffusion-webui-huggingface",user_home / r"stable-diffusion-webui" / r"extensions" / r"stable-diffusion-webui-huggingface") -Gitclone(r"https://github.com/camenduru/sd-civitai-browser",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-civitai-browser") -Gitclone(r"https://github.com/kohya-ss/sd-webui-additional-networks",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-additional-networks") -Gitclone(r"https://github.com/Mikubill/sd-webui-controlnet",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-controlnet") 
-Gitclone(r"https://github.com/fkunn1326/openpose-editor",user_home / r"stable-diffusion-webui" / r"extensions" / r"openpose-editor") -Gitclone(r"https://github.com/jexom/sd-webui-depth-lib",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-depth-lib") -Gitclone(r"https://github.com/hnmr293/posex",user_home / r"stable-diffusion-webui" / r"extensions" / r"posex") -Gitclone(r"https://github.com/nonnonstop/sd-webui-3d-open-pose-editor",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-3d-open-pose-editor") -#中文本地化的请解除下一行的注释 -#Gitclone(r"https://github.com/dtlnor/stable-diffusion-webui-localization-zh_CN.git",user_home / r"stable-diffusion-webui" / r"extensions" / r"stable-diffusion-webui-localization-zh_CN") -Gitclone(r"https://github.com/DominikDoom/a1111-sd-webui-tagcomplete.git" , user_home / r"stable-diffusion-webui" / r"extensions" / r"a1111-sd-webui-tagcomplete") -Gitclone(r"https://github.com/camenduru/sd-webui-tunnels",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-tunnels") -Gitclone(r"https://github.com/etherealxx/batchlinks-webui",user_home / r"stable-diffusion-webui" / r"extensions" / r"batchlinks-webui") -Gitclone(r"https://github.com/catppuccin/stable-diffusion-webui",user_home / r"stable-diffusion-webui" / r"extensions" / r"stable-diffusion-webui-catppuccin") -Gitclone(r"https://github.com/AUTOMATIC1111/stable-diffusion-webui-rembg",user_home / r"stable-diffusion-webui" / r"extensions" / r"stable-diffusion-webui-rembg") -Gitclone(r"https://github.com/ashen-sensored/stable-diffusion-webui-two-shot",user_home / r"stable-diffusion-webui" / r"extensions" / r"stable-diffusion-webui-two-shot") -Gitclone(r"https://github.com/camenduru/sd_webui_stealth_pnginfo",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd_webui_stealth_pnginfo") -os.chdir(user_home / r"stable-diffusion-webui") -#download ControlNet models -print("extensions dolwnload done .\ndownloading ControlNet models") -dList =[r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11e_sd15_ip2p_fp16.safetensors", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11e_sd15_shuffle_fp16.safetensors", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_canny_fp16.safetensors", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11f1p_sd15_depth_fp16.safetensors", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_inpaint_fp16.safetensors", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_lineart_fp16.safetensors", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_mlsd_fp16.safetensors", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_normalbae_fp16.safetensors", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_openpose_fp16.safetensors", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_scribble_fp16.safetensors", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_seg_fp16.safetensors", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_softedge_fp16.safetensors", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15s2_lineart_anime_fp16.safetensors", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11f1e_sd15_tile_fp16.safetensors", - 
r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11e_sd15_ip2p_fp16.yaml", - r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11e_sd15_shuffle_fp16.yaml", - r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_canny_fp16.yaml", - r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11f1p_sd15_depth_fp16.yaml", - r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_inpaint_fp16.yaml", - r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_lineart_fp16.yaml", - r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_mlsd_fp16.yaml", - r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_normalbae_fp16.yaml", - r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_openpose_fp16.yaml", - r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_scribble_fp16.yaml", - r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_seg_fp16.yaml", - r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_softedge_fp16.yaml", - r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15s2_lineart_anime_fp16.yaml", - r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11f1e_sd15_tile_fp16.yaml", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_style_sd14v1.pth", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_sketch_sd14v1.pth", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_seg_sd14v1.pth", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_openpose_sd14v1.pth", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_keypose_sd14v1.pth", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_depth_sd14v1.pth", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_canny_sd14v1.pth", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_canny_sd15v2.pth", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_depth_sd15v2.pth", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_sketch_sd15v2.pth", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_zoedepth_sd15v1.pth"] -for i in range(0,len(dList)): DownLoad(dList[i],user_home / r"stable-diffusion-webui" / r"extensions" / "sd-webui-controlnet" / r"models",pathlib.Path(dList[i]).name) -del dList -#download model -#you can change model download address here -print("ControlNet models download done.\ndownloading model") -#Stable Diffusion Checkpoint Model -#anything version4.5 -DownLoad(r"https://huggingface.co/ckpt/anything-v4.0/resolve/main/anything-v4.5-pruned.ckpt",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion",r"anything-v4.5-pruned.ckpt") -DownLoad(r"https://huggingface.co/ckpt/anything-v4.0/resolve/main/anything-v4.0.vae.pt",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion",r"anything-v4.0.vae.pt") -#Counterfeit-V3.0 -DownLoad(r"https://huggingface.co/gsdf/Counterfeit-V3.0/resolve/main/Counterfeit-V3.0_fp16.safetensors",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion",r"Counterfeit-V3.0_fp16.safetensors") -#AbyssOrangeMix2 sfw -DownLoad(r"https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/Models/AbyssOrangeMix2/AbyssOrangeMix2_sfw.safetensors",user_home / r"stable-diffusion-webui" / r"models" / 
r"Stable-diffusion",r"AbyssOrangeMix2_sfw.safetensors") -DownLoad(r"https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/VAEs/orangemix.vae.pt",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion",r"orangemix.vae.pt") -#MeinaPastelV5 -DownLoad(r"https://huggingface.co/Meina/MeinaPastel/resolve/main/MeinaPastelV5%20-%20Baked%20VAE.safetensors",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion",r"MeinaPastelV5_BakedVAE.safetensors") -DownLoad(r"https://huggingface.co/Meina/MeinaPastel/resolve/main/MeinaPastelV5%20-%20Without%20VAE.safetensors",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion",r"MeinaPastelV5_WithoutVAE.safetensors") - -#Lora Model -#Better Light -DownLoad(r"https://civitai.com/api/download/models/39885",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-additional-networks" / r"models"/ r"lora",r"Better_light.safetensors") -DownLoad(r"https://civitai.com/api/download/models/39885",user_home / r"stable-diffusion-webui" / r"models"/ r"lora",r"Better_light.safetensors") -#LAS -DownLoad(r"https://civitai.com/api/download/models/21065",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-additional-networks" / r"models"/ r"lora",r"LAS.safetensors") -DownLoad(r"https://civitai.com/api/download/models/21065",user_home / r"stable-diffusion-webui" / r"models"/ r"lora",r"LAS.safetensors") -#Backlighting -DownLoad(r"https://civitai.com/api/download/models/39164",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-additional-networks" / r"models"/ r"lora",r"backlighting.safetensors") -DownLoad(r"https://civitai.com/api/download/models/39164",user_home / r"stable-diffusion-webui" / r"models"/ r"lora",r"backlighting.safetensors") -#GFPGAN Model -#detection Resnet50 -DownLoad(r"https://github.com/xinntao/facexlib/releases/download/v0.1.0/detection_Resnet50_Final.pth",user_home / r"stable-diffusion-webui"/r"models"/r"GFPGAN",r"detection_Resnet50_Final.pth") -#parsing_parsenet -DownLoad(r"https://github.com/xinntao/facexlib/releases/download/v0.2.2/parsing_parsenet.pth",user_home / r"stable-diffusion-webui"/r"models"/r"GFPGAN",r"parsing_parsenet.pth") -#GFPGANv1.4 -DownLoad(r"https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.4.pth",user_home / r"stable-diffusion-webui"/r"models"/r"GFPGAN",r"GFPGANv1.4.pth") -#strt Stable Diffusion Webui -print("Done\nStarting Webui...") -os.chdir(user_home / r"stable-diffusion-webui") -gc.collect() -while True: - ret=subprocess.run([executable ,user_home / r"stable-diffusion-webui" / r"launch.py",r"--precision",r"full",r"--no-half",r"--no-half-vae",r"--enable-insecure-extension-access",r"--medvram",r"--skip-torch-cuda-test",r"--enable-console-prompts",r"--ui-settings-file="+str(pathlib.Path(__file__).parent /r"config.json")]) - if(ret.returncode == 0 ): - del ret - gc.collect() - else : - del ret -del os ,user_home ,pyexecutable ,subprocess \ No newline at end of file diff --git a/spaces/fffiloni/Image-to-MusicGen/audiocraft/data/__init__.py b/spaces/fffiloni/Image-to-MusicGen/audiocraft/data/__init__.py deleted file mode 100644 index 708a3dcead8dda89374a021177481dacae9f7fe9..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/Image-to-MusicGen/audiocraft/data/__init__.py +++ /dev/null @@ -1,8 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
- -# flake8: noqa -from . import audio, audio_dataset diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/ts4.8/timers/promises.d.ts b/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/ts4.8/timers/promises.d.ts deleted file mode 100644 index c1450684d60a323526a9ae750669adb21ba75c17..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/ts4.8/timers/promises.d.ts +++ /dev/null @@ -1,93 +0,0 @@ -/** - * The `timers/promises` API provides an alternative set of timer functions - * that return `Promise` objects. The API is accessible via`require('timers/promises')`. - * - * ```js - * import { - * setTimeout, - * setImmediate, - * setInterval, - * } from 'timers/promises'; - * ``` - * @since v15.0.0 - */ -declare module 'timers/promises' { - import { TimerOptions } from 'node:timers'; - /** - * ```js - * import { - * setTimeout, - * } from 'timers/promises'; - * - * const res = await setTimeout(100, 'result'); - * - * console.log(res); // Prints 'result' - * ``` - * @since v15.0.0 - * @param [delay=1] The number of milliseconds to wait before fulfilling the promise. - * @param value A value with which the promise is fulfilled. - */ - function setTimeout(delay?: number, value?: T, options?: TimerOptions): Promise; - /** - * ```js - * import { - * setImmediate, - * } from 'timers/promises'; - * - * const res = await setImmediate('result'); - * - * console.log(res); // Prints 'result' - * ``` - * @since v15.0.0 - * @param value A value with which the promise is fulfilled. - */ - function setImmediate(value?: T, options?: TimerOptions): Promise; - /** - * Returns an async iterator that generates values in an interval of `delay` ms. - * - * ```js - * import { - * setInterval, - * } from 'timers/promises'; - * - * const interval = 100; - * for await (const startTime of setInterval(interval, Date.now())) { - * const now = Date.now(); - * console.log(now); - * if ((now - startTime) > 1000) - * break; - * } - * console.log(Date.now()); - * ``` - * @since v15.9.0 - */ - function setInterval(delay?: number, value?: T, options?: TimerOptions): AsyncIterable; - - interface Scheduler { - /** - * ```js - * import { scheduler } from 'node:timers/promises'; - * - * await scheduler.wait(1000); // Wait one second before continuing - * ``` - * An experimental API defined by the Scheduling APIs draft specification being developed as a standard Web Platform API. - * Calling timersPromises.scheduler.wait(delay, options) is roughly equivalent to calling timersPromises.setTimeout(delay, undefined, options) except that the ref option is not supported. - * @since v16.14.0 - * @experimental - * @param [delay=1] The number of milliseconds to wait before fulfilling the promise. - */ - wait: (delay?: number, options?: TimerOptions) => Promise; - /** - * An experimental API defined by the Scheduling APIs draft specification being developed as a standard Web Platform API. - * Calling timersPromises.scheduler.yield() is equivalent to calling timersPromises.setImmediate() with no arguments. 
- * @since v16.14.0 - * @experimental - */ - yield: () => Promise; - } - - const scheduler: Scheduler; -} -declare module 'node:timers/promises' { - export * from 'timers/promises'; -} diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/express/lib/router/index.js b/spaces/fffiloni/controlnet-animation-doodle/node_modules/express/lib/router/index.js deleted file mode 100644 index 5174c34f455a1553c6065fee4425f153d37d84b2..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/express/lib/router/index.js +++ /dev/null @@ -1,673 +0,0 @@ -/*! - * express - * Copyright(c) 2009-2013 TJ Holowaychuk - * Copyright(c) 2013 Roman Shtylman - * Copyright(c) 2014-2015 Douglas Christopher Wilson - * MIT Licensed - */ - -'use strict'; - -/** - * Module dependencies. - * @private - */ - -var Route = require('./route'); -var Layer = require('./layer'); -var methods = require('methods'); -var mixin = require('utils-merge'); -var debug = require('debug')('express:router'); -var deprecate = require('depd')('express'); -var flatten = require('array-flatten'); -var parseUrl = require('parseurl'); -var setPrototypeOf = require('setprototypeof') - -/** - * Module variables. - * @private - */ - -var objectRegExp = /^\[object (\S+)\]$/; -var slice = Array.prototype.slice; -var toString = Object.prototype.toString; - -/** - * Initialize a new `Router` with the given `options`. - * - * @param {Object} [options] - * @return {Router} which is an callable function - * @public - */ - -var proto = module.exports = function(options) { - var opts = options || {}; - - function router(req, res, next) { - router.handle(req, res, next); - } - - // mixin Router class functions - setPrototypeOf(router, proto) - - router.params = {}; - router._params = []; - router.caseSensitive = opts.caseSensitive; - router.mergeParams = opts.mergeParams; - router.strict = opts.strict; - router.stack = []; - - return router; -}; - -/** - * Map the given param placeholder `name`(s) to the given callback. - * - * Parameter mapping is used to provide pre-conditions to routes - * which use normalized placeholders. For example a _:user_id_ parameter - * could automatically load a user's information from the database without - * any additional code, - * - * The callback uses the same signature as middleware, the only difference - * being that the value of the placeholder is passed, in this case the _id_ - * of the user. Once the `next()` function is invoked, just like middleware - * it will continue on to execute the route, or subsequent parameter functions. - * - * Just like in middleware, you must either respond to the request or call next - * to avoid stalling the request. 
- * - * app.param('user_id', function(req, res, next, id){ - * User.find(id, function(err, user){ - * if (err) { - * return next(err); - * } else if (!user) { - * return next(new Error('failed to load user')); - * } - * req.user = user; - * next(); - * }); - * }); - * - * @param {String} name - * @param {Function} fn - * @return {app} for chaining - * @public - */ - -proto.param = function param(name, fn) { - // param logic - if (typeof name === 'function') { - deprecate('router.param(fn): Refactor to use path params'); - this._params.push(name); - return; - } - - // apply param functions - var params = this._params; - var len = params.length; - var ret; - - if (name[0] === ':') { - deprecate('router.param(' + JSON.stringify(name) + ', fn): Use router.param(' + JSON.stringify(name.slice(1)) + ', fn) instead') - name = name.slice(1) - } - - for (var i = 0; i < len; ++i) { - if (ret = params[i](name, fn)) { - fn = ret; - } - } - - // ensure we end up with a - // middleware function - if ('function' !== typeof fn) { - throw new Error('invalid param() call for ' + name + ', got ' + fn); - } - - (this.params[name] = this.params[name] || []).push(fn); - return this; -}; - -/** - * Dispatch a req, res into the router. - * @private - */ - -proto.handle = function handle(req, res, out) { - var self = this; - - debug('dispatching %s %s', req.method, req.url); - - var idx = 0; - var protohost = getProtohost(req.url) || '' - var removed = ''; - var slashAdded = false; - var sync = 0 - var paramcalled = {}; - - // store options for OPTIONS request - // only used if OPTIONS request - var options = []; - - // middleware and routes - var stack = self.stack; - - // manage inter-router variables - var parentParams = req.params; - var parentUrl = req.baseUrl || ''; - var done = restore(out, req, 'baseUrl', 'next', 'params'); - - // setup next layer - req.next = next; - - // for options requests, respond with a default if nothing else responds - if (req.method === 'OPTIONS') { - done = wrap(done, function(old, err) { - if (err || options.length === 0) return old(err); - sendOptionsResponse(res, options, old); - }); - } - - // setup basic req values - req.baseUrl = parentUrl; - req.originalUrl = req.originalUrl || req.url; - - next(); - - function next(err) { - var layerError = err === 'route' - ? 
null - : err; - - // remove added slash - if (slashAdded) { - req.url = req.url.slice(1) - slashAdded = false; - } - - // restore altered req.url - if (removed.length !== 0) { - req.baseUrl = parentUrl; - req.url = protohost + removed + req.url.slice(protohost.length) - removed = ''; - } - - // signal to exit router - if (layerError === 'router') { - setImmediate(done, null) - return - } - - // no more matching layers - if (idx >= stack.length) { - setImmediate(done, layerError); - return; - } - - // max sync stack - if (++sync > 100) { - return setImmediate(next, err) - } - - // get pathname of request - var path = getPathname(req); - - if (path == null) { - return done(layerError); - } - - // find next matching layer - var layer; - var match; - var route; - - while (match !== true && idx < stack.length) { - layer = stack[idx++]; - match = matchLayer(layer, path); - route = layer.route; - - if (typeof match !== 'boolean') { - // hold on to layerError - layerError = layerError || match; - } - - if (match !== true) { - continue; - } - - if (!route) { - // process non-route handlers normally - continue; - } - - if (layerError) { - // routes do not match with a pending error - match = false; - continue; - } - - var method = req.method; - var has_method = route._handles_method(method); - - // build up automatic options response - if (!has_method && method === 'OPTIONS') { - appendMethods(options, route._options()); - } - - // don't even bother matching route - if (!has_method && method !== 'HEAD') { - match = false; - } - } - - // no match - if (match !== true) { - return done(layerError); - } - - // store route for dispatch on change - if (route) { - req.route = route; - } - - // Capture one-time layer values - req.params = self.mergeParams - ? mergeParams(layer.params, parentParams) - : layer.params; - var layerPath = layer.path; - - // this should be done for the layer - self.process_params(layer, paramcalled, req, res, function (err) { - if (err) { - next(layerError || err) - } else if (route) { - layer.handle_request(req, res, next) - } else { - trim_prefix(layer, layerError, layerPath, path) - } - - sync = 0 - }); - } - - function trim_prefix(layer, layerError, layerPath, path) { - if (layerPath.length !== 0) { - // Validate path is a prefix match - if (layerPath !== path.slice(0, layerPath.length)) { - next(layerError) - return - } - - // Validate path breaks on a path separator - var c = path[layerPath.length] - if (c && c !== '/' && c !== '.') return next(layerError) - - // Trim off the part of the url that matches the route - // middleware (.use stuff) needs to have the path stripped - debug('trim prefix (%s) from url %s', layerPath, req.url); - removed = layerPath; - req.url = protohost + req.url.slice(protohost.length + removed.length) - - // Ensure leading slash - if (!protohost && req.url[0] !== '/') { - req.url = '/' + req.url; - slashAdded = true; - } - - // Setup base URL (no trailing slash) - req.baseUrl = parentUrl + (removed[removed.length - 1] === '/' - ? removed.substring(0, removed.length - 1) - : removed); - } - - debug('%s %s : %s', layer.name, layerPath, req.originalUrl); - - if (layerError) { - layer.handle_error(layerError, req, res, next); - } else { - layer.handle_request(req, res, next); - } - } -}; - -/** - * Process any parameters for the layer. 
- * @private - */ - -proto.process_params = function process_params(layer, called, req, res, done) { - var params = this.params; - - // captured parameters from the layer, keys and values - var keys = layer.keys; - - // fast track - if (!keys || keys.length === 0) { - return done(); - } - - var i = 0; - var name; - var paramIndex = 0; - var key; - var paramVal; - var paramCallbacks; - var paramCalled; - - // process params in order - // param callbacks can be async - function param(err) { - if (err) { - return done(err); - } - - if (i >= keys.length ) { - return done(); - } - - paramIndex = 0; - key = keys[i++]; - name = key.name; - paramVal = req.params[name]; - paramCallbacks = params[name]; - paramCalled = called[name]; - - if (paramVal === undefined || !paramCallbacks) { - return param(); - } - - // param previously called with same value or error occurred - if (paramCalled && (paramCalled.match === paramVal - || (paramCalled.error && paramCalled.error !== 'route'))) { - // restore value - req.params[name] = paramCalled.value; - - // next param - return param(paramCalled.error); - } - - called[name] = paramCalled = { - error: null, - match: paramVal, - value: paramVal - }; - - paramCallback(); - } - - // single param callbacks - function paramCallback(err) { - var fn = paramCallbacks[paramIndex++]; - - // store updated value - paramCalled.value = req.params[key.name]; - - if (err) { - // store error - paramCalled.error = err; - param(err); - return; - } - - if (!fn) return param(); - - try { - fn(req, res, paramCallback, paramVal, key.name); - } catch (e) { - paramCallback(e); - } - } - - param(); -}; - -/** - * Use the given middleware function, with optional path, defaulting to "/". - * - * Use (like `.all`) will run for any http METHOD, but it will not add - * handlers for those methods so OPTIONS requests will not consider `.use` - * functions even if they could respond. - * - * The other difference is that _route_ path is stripped and not visible - * to the handler function. The main effect of this feature is that mounted - * handlers can operate without any code changes regardless of the "prefix" - * pathname. - * - * @public - */ - -proto.use = function use(fn) { - var offset = 0; - var path = '/'; - - // default path to '/' - // disambiguate router.use([fn]) - if (typeof fn !== 'function') { - var arg = fn; - - while (Array.isArray(arg) && arg.length !== 0) { - arg = arg[0]; - } - - // first arg is the path - if (typeof arg !== 'function') { - offset = 1; - path = fn; - } - } - - var callbacks = flatten(slice.call(arguments, offset)); - - if (callbacks.length === 0) { - throw new TypeError('Router.use() requires a middleware function') - } - - for (var i = 0; i < callbacks.length; i++) { - var fn = callbacks[i]; - - if (typeof fn !== 'function') { - throw new TypeError('Router.use() requires a middleware function but got a ' + gettype(fn)) - } - - // add the middleware - debug('use %o %s', path, fn.name || '') - - var layer = new Layer(path, { - sensitive: this.caseSensitive, - strict: false, - end: false - }, fn); - - layer.route = undefined; - - this.stack.push(layer); - } - - return this; -}; - -/** - * Create a new Route for the given path. - * - * Each route contains a separate middleware stack and VERB handlers. - * - * See the Route api documentation for details on adding handlers - * and middleware to routes. 
- * - * @param {String} path - * @return {Route} - * @public - */ - -proto.route = function route(path) { - var route = new Route(path); - - var layer = new Layer(path, { - sensitive: this.caseSensitive, - strict: this.strict, - end: true - }, route.dispatch.bind(route)); - - layer.route = route; - - this.stack.push(layer); - return route; -}; - -// create Router#VERB functions -methods.concat('all').forEach(function(method){ - proto[method] = function(path){ - var route = this.route(path) - route[method].apply(route, slice.call(arguments, 1)); - return this; - }; -}); - -// append methods to a list of methods -function appendMethods(list, addition) { - for (var i = 0; i < addition.length; i++) { - var method = addition[i]; - if (list.indexOf(method) === -1) { - list.push(method); - } - } -} - -// get pathname of request -function getPathname(req) { - try { - return parseUrl(req).pathname; - } catch (err) { - return undefined; - } -} - -// Get get protocol + host for a URL -function getProtohost(url) { - if (typeof url !== 'string' || url.length === 0 || url[0] === '/') { - return undefined - } - - var searchIndex = url.indexOf('?') - var pathLength = searchIndex !== -1 - ? searchIndex - : url.length - var fqdnIndex = url.slice(0, pathLength).indexOf('://') - - return fqdnIndex !== -1 - ? url.substring(0, url.indexOf('/', 3 + fqdnIndex)) - : undefined -} - -// get type for error message -function gettype(obj) { - var type = typeof obj; - - if (type !== 'object') { - return type; - } - - // inspect [[Class]] for objects - return toString.call(obj) - .replace(objectRegExp, '$1'); -} - -/** - * Match path to a layer. - * - * @param {Layer} layer - * @param {string} path - * @private - */ - -function matchLayer(layer, path) { - try { - return layer.match(path); - } catch (err) { - return err; - } -} - -// merge params with parent params -function mergeParams(params, parent) { - if (typeof parent !== 'object' || !parent) { - return params; - } - - // make copy of parent for base - var obj = mixin({}, parent); - - // simple non-numeric merging - if (!(0 in params) || !(0 in parent)) { - return mixin(obj, params); - } - - var i = 0; - var o = 0; - - // determine numeric gaps - while (i in params) { - i++; - } - - while (o in parent) { - o++; - } - - // offset numeric indices in params before merge - for (i--; i >= 0; i--) { - params[i + o] = params[i]; - - // create holes for the merge when necessary - if (i < o) { - delete params[i]; - } - } - - return mixin(obj, params); -} - -// restore obj props after function -function restore(fn, obj) { - var props = new Array(arguments.length - 2); - var vals = new Array(arguments.length - 2); - - for (var i = 0; i < props.length; i++) { - props[i] = arguments[i + 2]; - vals[i] = obj[props[i]]; - } - - return function () { - // restore vals - for (var i = 0; i < props.length; i++) { - obj[props[i]] = vals[i]; - } - - return fn.apply(this, arguments); - }; -} - -// send an OPTIONS response -function sendOptionsResponse(res, options, next) { - try { - var body = options.join(','); - res.set('Allow', body); - res.send(body); - } catch (err) { - next(err); - } -} - -// wrap a function -function wrap(old, fn) { - return function proxy() { - var args = new Array(arguments.length + 1); - - args[0] = old; - for (var i = 0, len = arguments.length; i < len; i++) { - args[i + 1] = arguments[i]; - } - - fn.apply(this, args); - }; -} diff --git a/spaces/fffiloni/simple-animation-doodle/brushpoint.js b/spaces/fffiloni/simple-animation-doodle/brushpoint.js deleted file 
mode 100644 index 41e032ecfa53a2cc614ee4c6c3a4cc1f69c4ab68..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/simple-animation-doodle/brushpoint.js +++ /dev/null @@ -1,66 +0,0 @@ -class BrushPoint{ - - constructor(name, dist, piPosition) { - this.name = name; - this.dist = dist; - this.piPosition = piPosition - - this.px; - this.py; - this.ppx; - this.ppy; - this.sx; - this.sy; - this.pointerX; - this.pointerY; - } - - // ----------------------------------------- - // ----------------------------------------- - - calcPointCoordinates(mouseX, mouseY, angle, pressure){ - this.pointerX = mouseX + (this.dist * pressure) * cos(angle + this.piPosition); - this.pointerY = mouseY + (this.dist * pressure) * sin(angle + this.piPosition); - //console.log('class: ' + this.pointerX + ' ' + this.pointerY) - } - - // ----------------------------------------- - // ----------------------------------------- - - resetPointOrigin(){ - this.sx = this.pointerX; - this.sy = this.pointerY; - this.px = this.pointerX; - this.py = this.pointerY; - this.ppx = this.pointerX; - this.ppy = this.pointerY; - //console.log(this.sx, this.sy, this.px, this.py, this.ppx, this.ppy) - } - - // ----------------------------------------- - // ----------------------------------------- - - shiftPointVertex(){ - this.sx = this.ppx; - this.sy = this.ppy; - this.ppx = this.px; - this.ppy = this.py; - this.px = this.pointerX; - this.py = this.pointerY; - } - - // ----------------------------------------- - // ----------------------------------------- - - pushPoints(point){ - point.x1.push(this.sx) - point.y1.push(this.sy) - point.x2.push(this.ppx) - point.y2.push(this.ppy) - point.x3.push(this.px) - point.y3.push(this.py) - point.x4.push(this.pointerX) - point.y4.push(this.pointerY) - } - -} \ No newline at end of file diff --git a/spaces/fkhuggingme/gpt-academic/check_proxy.py b/spaces/fkhuggingme/gpt-academic/check_proxy.py deleted file mode 100644 index 754b5d36b0c39d29eb6f4dcb8ed88355bcb6335f..0000000000000000000000000000000000000000 --- a/spaces/fkhuggingme/gpt-academic/check_proxy.py +++ /dev/null @@ -1,151 +0,0 @@ - -def check_proxy(proxies): - import requests - proxies_https = proxies['https'] if proxies is not None else '无' - try: - response = requests.get("https://ipapi.co/json/", - proxies=proxies, timeout=4) - data = response.json() - print(f'查询代理的地理位置,返回的结果是{data}') - if 'country_name' in data: - country = data['country_name'] - result = f"代理配置 {proxies_https}, 代理所在地:{country}" - elif 'error' in data: - result = f"代理配置 {proxies_https}, 代理所在地:未知,IP查询频率受限" - print(result) - return result - except: - result = f"代理配置 {proxies_https}, 代理所在地查询超时,代理可能无效" - print(result) - return result - - -def backup_and_download(current_version, remote_version): - """ - 一键更新协议:备份和下载 - """ - from toolbox import get_conf - import shutil - import os - import requests - import zipfile - os.makedirs(f'./history', exist_ok=True) - backup_dir = f'./history/backup-{current_version}/' - new_version_dir = f'./history/new-version-{remote_version}/' - if os.path.exists(new_version_dir): - return new_version_dir - os.makedirs(new_version_dir) - shutil.copytree('./', backup_dir, ignore=lambda x, y: ['history']) - proxies, = get_conf('proxies') - r = requests.get( - 'https://github.com/binary-husky/chatgpt_academic/archive/refs/heads/master.zip', proxies=proxies, stream=True) - zip_file_path = backup_dir+'/master.zip' - with open(zip_file_path, 'wb+') as f: - f.write(r.content) - dst_path = new_version_dir - with zipfile.ZipFile(zip_file_path, 
"r") as zip_ref: - for zip_info in zip_ref.infolist(): - dst_file_path = os.path.join(dst_path, zip_info.filename) - if os.path.exists(dst_file_path): - os.remove(dst_file_path) - zip_ref.extract(zip_info, dst_path) - return new_version_dir - - -def patch_and_restart(path): - """ - 一键更新协议:覆盖和重启 - """ - from distutils import dir_util - import shutil - import os - import sys - import time - import glob - from colorful import print亮黄, print亮绿, print亮红 - # if not using config_private, move origin config.py as config_private.py - if not os.path.exists('config_private.py'): - print亮黄('由于您没有设置config_private.py私密配置,现将您的现有配置移动至config_private.py以防止配置丢失,', - '另外您可以随时在history子文件夹下找回旧版的程序。') - shutil.copyfile('config.py', 'config_private.py') - path_new_version = glob.glob(path + '/*-master')[0] - dir_util.copy_tree(path_new_version, './') - print亮绿('代码已经更新,即将更新pip包依赖……') - for i in reversed(range(5)): time.sleep(1); print(i) - try: - import subprocess - subprocess.check_call([sys.executable, '-m', 'pip', 'install', '-r', 'requirements.txt']) - except: - print亮红('pip包依赖安装出现问题,需要手动安装新增的依赖库 `python -m pip install -r requirements.txt`,然后在用常规的`python main.py`的方式启动。') - print亮绿('更新完成,您可以随时在history子文件夹下找回旧版的程序,5s之后重启') - print亮红('假如重启失败,您可能需要手动安装新增的依赖库 `python -m pip install -r requirements.txt`,然后在用常规的`python main.py`的方式启动。') - print(' ------------------------------ -----------------------------------') - for i in reversed(range(8)): time.sleep(1); print(i) - os.execl(sys.executable, sys.executable, *sys.argv) - - -def get_current_version(): - import json - try: - with open('./version', 'r', encoding='utf8') as f: - current_version = json.loads(f.read())['version'] - except: - current_version = "" - return current_version - - -def auto_update(): - """ - 一键更新协议:查询版本和用户意见 - """ - try: - from toolbox import get_conf - import requests - import time - import json - proxies, = get_conf('proxies') - response = requests.get( - "https://raw.githubusercontent.com/binary-husky/chatgpt_academic/master/version", proxies=proxies, timeout=5) - remote_json_data = json.loads(response.text) - remote_version = remote_json_data['version'] - if remote_json_data["show_feature"]: - new_feature = "新功能:" + remote_json_data["new_feature"] - else: - new_feature = "" - with open('./version', 'r', encoding='utf8') as f: - current_version = f.read() - current_version = json.loads(current_version)['version'] - if (remote_version - current_version) >= 0.01: - from colorful import print亮黄 - print亮黄( - f'\n新版本可用。新版本:{remote_version},当前版本:{current_version}。{new_feature}') - print('(1)Github更新地址:\nhttps://github.com/binary-husky/chatgpt_academic\n') - user_instruction = input('(2)是否一键更新代码(Y+回车=确认,输入其他/无输入+回车=不更新)?') - if user_instruction in ['Y', 'y']: - path = backup_and_download(current_version, remote_version) - try: - patch_and_restart(path) - except: - print('更新失败。') - else: - print('自动更新程序:已禁用') - return - else: - return - except: - print('自动更新程序:已禁用') - -def warm_up_modules(): - print('正在执行一些模块的预热...') - from request_llm.bridge_all import model_info - enc = model_info["gpt-3.5-turbo"]['tokenizer'] - enc.encode("模块预热", disallowed_special=()) - enc = model_info["gpt-4"]['tokenizer'] - enc.encode("模块预热", disallowed_special=()) - -if __name__ == '__main__': - import os - os.environ['no_proxy'] = '*' # 避免代理网络产生意外污染 - from toolbox import get_conf - proxies, = get_conf('proxies') - check_proxy(proxies) diff --git a/spaces/fuckyoudeki/AutoGPT/tests/unit/test_browse_scrape_links.py 
b/spaces/fuckyoudeki/AutoGPT/tests/unit/test_browse_scrape_links.py deleted file mode 100644 index 0a3340e7397a997da96b8ab9828954230e1a3c20..0000000000000000000000000000000000000000 --- a/spaces/fuckyoudeki/AutoGPT/tests/unit/test_browse_scrape_links.py +++ /dev/null @@ -1,118 +0,0 @@ -# Generated by CodiumAI - -# Dependencies: -# pip install pytest-mock -import pytest - -from autogpt.commands.web_requests import scrape_links - -""" -Code Analysis - -Objective: -The objective of the 'scrape_links' function is to scrape hyperlinks from a -given URL and return them in a formatted way. - -Inputs: -- url: a string representing the URL to be scraped. - -Flow: -1. Send a GET request to the given URL using the requests library and the user agent header from the config file. -2. Check if the response contains an HTTP error. If it does, return "error". -3. Parse the HTML content of the response using the BeautifulSoup library. -4. Remove any script and style tags from the parsed HTML. -5. Extract all hyperlinks from the parsed HTML using the 'extract_hyperlinks' function. -6. Format the extracted hyperlinks using the 'format_hyperlinks' function. -7. Return the formatted hyperlinks. - -Outputs: -- A list of formatted hyperlinks. - -Additional aspects: -- The function uses the 'requests' and 'BeautifulSoup' libraries to send HTTP -requests and parse HTML content, respectively. -- The 'extract_hyperlinks' function is called to extract hyperlinks from the parsed HTML. -- The 'format_hyperlinks' function is called to format the extracted hyperlinks. -- The function checks for HTTP errors and returns "error" if any are found. -""" - - -class TestScrapeLinks: - # Tests that the function returns a list of formatted hyperlinks when - # provided with a valid url that returns a webpage with hyperlinks. - def test_valid_url_with_hyperlinks(self): - url = "https://www.google.com" - result = scrape_links(url) - assert len(result) > 0 - assert isinstance(result, list) - assert isinstance(result[0], str) - - # Tests that the function returns correctly formatted hyperlinks when given a valid url. - def test_valid_url(self, mocker): - # Mock the requests.get() function to return a response with sample HTML containing hyperlinks - mock_response = mocker.Mock() - mock_response.status_code = 200 - mock_response.text = ( - "Google" - ) - mocker.patch("requests.Session.get", return_value=mock_response) - - # Call the function with a valid URL - result = scrape_links("https://www.example.com") - - # Assert that the function returns correctly formatted hyperlinks - assert result == ["Google (https://www.google.com)"] - - # Tests that the function returns "error" when given an invalid url. - def test_invalid_url(self, mocker): - # Mock the requests.get() function to return an HTTP error response - mock_response = mocker.Mock() - mock_response.status_code = 404 - mocker.patch("requests.Session.get", return_value=mock_response) - - # Call the function with an invalid URL - result = scrape_links("https://www.invalidurl.com") - - # Assert that the function returns "error" - assert "Error:" in result - - # Tests that the function returns an empty list when the html contains no hyperlinks. - def test_no_hyperlinks(self, mocker): - # Mock the requests.get() function to return a response with sample HTML containing no hyperlinks - mock_response = mocker.Mock() - mock_response.status_code = 200 - mock_response.text = "

          No hyperlinks here

          " - mocker.patch("requests.Session.get", return_value=mock_response) - - # Call the function with a URL containing no hyperlinks - result = scrape_links("https://www.example.com") - - # Assert that the function returns an empty list - assert result == [] - - # Tests that scrape_links() correctly extracts and formats hyperlinks from - # a sample HTML containing a few hyperlinks. - def test_scrape_links_with_few_hyperlinks(self, mocker): - # Mock the requests.get() function to return a response with a sample HTML containing hyperlinks - mock_response = mocker.Mock() - mock_response.status_code = 200 - mock_response.text = """ - - - - - - - - """ - mocker.patch("requests.Session.get", return_value=mock_response) - - # Call the function being tested - result = scrape_links("https://www.example.com") - - # Assert that the function returns a list of formatted hyperlinks - assert isinstance(result, list) - assert len(result) == 3 - assert result[0] == "Google (https://www.google.com)" - assert result[1] == "GitHub (https://github.com)" - assert result[2] == "CodiumAI (https://www.codium.ai)" diff --git a/spaces/g4f/freegpt-webui/g4f/Provider/Providers/Forefront.py b/spaces/g4f/freegpt-webui/g4f/Provider/Providers/Forefront.py deleted file mode 100644 index e7e89831cc4ec6dc37ea094d9828a7582e981ff1..0000000000000000000000000000000000000000 --- a/spaces/g4f/freegpt-webui/g4f/Provider/Providers/Forefront.py +++ /dev/null @@ -1,30 +0,0 @@ -import os -import json -import requests -from ...typing import sha256, Dict, get_type_hints - -url = 'https://forefront.com' -model = ['gpt-3.5-turbo'] -supports_stream = True -needs_auth = False - -def _create_completion(model: str, messages: list, stream: bool, **kwargs): - json_data = { - 'text': messages[-1]['content'], - 'action': 'noauth', - 'id': '', - 'parentId': '', - 'workspaceId': '', - 'messagePersona': '607e41fe-95be-497e-8e97-010a59b2e2c0', - 'model': 'gpt-4', - 'messages': messages[:-1] if len(messages) > 1 else [], - 'internetMode': 'auto' - } - response = requests.post( 'https://streaming.tenant-forefront-default.knative.chi.coreweave.com/free-chat', - json=json_data, stream=True) - for token in response.iter_lines(): - if b'delta' in token: - token = json.loads(token.decode().split('data: ')[1])['delta'] - yield (token) -params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \ - '(%s)' % ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]]) \ No newline at end of file diff --git a/spaces/galang123/test123test/README.md b/spaces/galang123/test123test/README.md deleted file mode 100644 index e5b38e84d7b47894f74ae957416114c0b5bea908..0000000000000000000000000000000000000000 --- a/spaces/galang123/test123test/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Test123test -emoji: 🐨 -colorFrom: indigo -colorTo: blue -sdk: static -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/gatilin/mmocr-webui/app.py b/spaces/gatilin/mmocr-webui/app.py deleted file mode 100644 index e0795b6dff2348aa880d68846e25e7751ad59bae..0000000000000000000000000000000000000000 --- a/spaces/gatilin/mmocr-webui/app.py +++ /dev/null @@ -1,209 +0,0 @@ -import os - -os.system("pip install gradio==3.42.0") -os.system("pip install 'mmengine>=0.6.0'") -os.system("pip install 'mmcv>=2.0.0rc4,<2.1.0'") -os.system("pip install 'mmdet>=3.0.0rc5, < 3.2.0'") 
-os.system("pip install mmocr") -import json -import os -from argparse import ArgumentParser - -import PIL -import cv2 -import gradio as gr -import numpy as np -import torch -from PIL.Image import Image -from mmocr.apis.inferencers import MMOCRInferencer - -import warnings - -warnings.filterwarnings("ignore") - - -def save_image(img, img_path): - # Convert PIL image to OpenCV image - img = cv2.cvtColor(np.array(img), cv2.COLOR_RGB2BGR) - # Save OpenCV image - cv2.imwrite(img_path, img) - - -textdet_model_list = ['DBNet', 'DRRG', 'FCENet', 'PANet', 'PSENet', 'TextSnake', 'MaskRCNN'] -textrec_model_list = ['ABINet', 'ASTER', 'CRNN', 'MASTER', 'NRTR', 'RobustScanner', 'SARNet', 'SATRN', 'SVTR'] -textkie_model_list = ['SDMGR'] - -def ocr_inference(inputs, out_dir, det, det_weights, rec, rec_weights, kie, kie_weights, device): - init_args, call_args = parse_args() - inputs = np.array(inputs) - img_path = "demo_text_ocr.jpg" - save_image(inputs, img_path) - if det is not None and rec is not None: - init_args['det'] = det - init_args['det_weights'] = None - init_args['rec'] = rec - init_args['rec_weights'] = None - elif det_weights is not None and rec_weights is not None: - init_args['det'] = None - init_args['det_weights'] = det_weights - init_args['rec'] = None - init_args['rec_weights'] = rec_weights - call_args['inputs'] = img_path - call_args['out_dir'] = out_dir - call_args['batch_size'] = 1 - call_args['show'] = False - call_args['save_pred'] = True - call_args['save_vis'] = True - init_args['device'] = device - print("init_args", init_args) - print("call_args", call_args) - ocr = MMOCRInferencer(**init_args) - ocr(**call_args) - save_vis_dir = './results/vis/' - save_pred_dir = './results/preds/' - img_out = PIL.Image.open(os.path.join(save_vis_dir, img_path)) - json_out = json.load(open(os.path.join(save_pred_dir, img_path.replace('.jpg', '.json')))) - return img_out, json_out - - -def download_test_image(): - # Images - torch.hub.download_url_to_file( - 'https://user-images.githubusercontent.com/59380685/266821429-9a897c0a-5b02-4260-a65b-3514b758f6b6.jpg', - 'demo_densetext_det.jpg') - torch.hub.download_url_to_file( - 'https://user-images.githubusercontent.com/59380685/266821432-17bb0646-a3e9-451e-9b4d-6e41ce4c3f0c.jpg', - 'demo_text_recog.jpg') - torch.hub.download_url_to_file( - 'https://user-images.githubusercontent.com/59380685/266821434-fe0d4d18-f3e2-4acf-baf5-0d2e318f0b09.jpg', - 'demo_text_ocr.jpg') - torch.hub.download_url_to_file( - 'https://user-images.githubusercontent.com/59380685/266821435-5d7af2b4-cb84-4355-91cb-37d90e91aa30.jpg', - 'demo_text_det.jpg') - torch.hub.download_url_to_file( - 'https://user-images.githubusercontent.com/59380685/266821436-4790c6c1-2da5-45c7-b837-04eeea0d7264.jpeg', - 'demo_kie.jpg') - - -def parse_args(): - parser = ArgumentParser() - parser.add_argument( - '--inputs', type=str, help='Input image file or folder path.') - parser.add_argument( - '--out-dir', - type=str, - default='./results/', - help='Output directory of results.') - parser.add_argument( - '--det', - type=str, - default=None, - help='Pretrained text detection algorithm. It\'s the path to the ' - 'config file or the model name defined in metafile.') - parser.add_argument( - '--det-weights', - type=str, - default=None, - help='Path to the custom checkpoint file of the selected det model. 
' - 'If it is not specified and "det" is a model name of metafile, the ' - 'weights will be loaded from metafile.') - parser.add_argument( - '--rec', - type=str, - default=None, - help='Pretrained text recognition algorithm. It\'s the path to the ' - 'config file or the model name defined in metafile.') - parser.add_argument( - '--rec-weights', - type=str, - default=None, - help='Path to the custom checkpoint file of the selected recog model. ' - 'If it is not specified and "rec" is a model name of metafile, the ' - 'weights will be loaded from metafile.') - parser.add_argument( - '--kie', - type=str, - default=None, - help='Pretrained key information extraction algorithm. It\'s the path' - 'to the config file or the model name defined in metafile.') - parser.add_argument( - '--kie-weights', - type=str, - default=None, - help='Path to the custom checkpoint file of the selected kie model. ' - 'If it is not specified and "kie" is a model name of metafile, the ' - 'weights will be loaded from metafile.') - parser.add_argument( - '--device', - type=str, - default=None, - help='Device used for inference. ' - 'If not specified, the available device will be automatically used.') - parser.add_argument( - '--batch-size', type=int, default=1, help='Inference batch size.') - parser.add_argument( - '--show', - action='store_true', - help='Display the image in a popup window.') - parser.add_argument( - '--print-result', - action='store_true', - help='Whether to print the results.') - parser.add_argument( - '--save_pred', - action='store_true', - help='Save the inference results to out_dir.') - parser.add_argument( - '--save_vis', - action='store_true', - help='Save the visualization results to out_dir.') - - call_args = vars(parser.parse_args()) - - init_kws = [ - 'det', 'det_weights', 'rec', 'rec_weights', 'kie', 'kie_weights', 'device' - ] - init_args = {} - for init_kw in init_kws: - init_args[init_kw] = call_args.pop(init_kw) - - return init_args, call_args - - -if __name__ == '__main__': - # Define Gradio input and output types - input_image = gr.inputs.Image(type="pil", label="Input Image") - out_dir = gr.inputs.Textbox(default="results") - det = gr.inputs.Dropdown(label="Text Detection Model", choices=[m for m in textdet_model_list], default='DBNet') - det_weights = gr.inputs.Textbox(default=None) - rec = gr.inputs.Dropdown(label="Text Recognition Model", choices=[m for m in textrec_model_list], default='CRNN') - rec_weights = gr.inputs.Textbox(default=None) - device = gr.inputs.Radio(choices=["cpu", "cuda"], label="Device used for inference", default="cpu") - batch_size = gr.inputs.Number(default=1, label="Inference batch size") - output_image = gr.outputs.Image(type="pil", label="Output Image") - output_json = gr.outputs.Textbox() - download_test_image() - examples = [["demo_text_ocr.jpg", "results", "DBNet", None, "CRNN", "cpu"], - ["demo_text_det.jpg", "results", "FCENet", None, "ASTER", "cpu"], - ["demo_text_recog.jpg", "results", "FCENet", None, "MASTER", "cpu"], - ] - - title = "MMOCR web demo" - description = "
          " \ - "

MMOCR MMOCR is an open-source toolbox based on PyTorch and mmdetection, focusing on text detection, text recognition, and related downstream tasks such as key information extraction. It is part of the OpenMMLab project." \ - "OpenMMLab Text Detection, Recognition and Understanding Toolbox.

          " - article = "

          MMOCR

          " \ - "

Gradio demo built by gatilin

          " - - # Create Gradio interface - iface = gr.Interface( - fn=ocr_inference, - inputs=[ - input_image, out_dir, det, det_weights, rec, rec_weights, device - ], - outputs=[output_image, output_json], examples=examples, - title=title, description=description, article=article, - ) - - # Launch Gradio interface - iface.launch() diff --git a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/configs/_base_/models/deeplabv3_r50-d8.py b/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/configs/_base_/models/deeplabv3_r50-d8.py deleted file mode 100644 index d7a43bee01422ad4795dd27874e0cd4bb6cbfecf..0000000000000000000000000000000000000000 --- a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/configs/_base_/models/deeplabv3_r50-d8.py +++ /dev/null @@ -1,44 +0,0 @@ -# model settings -norm_cfg = dict(type='SyncBN', requires_grad=True) -model = dict( - type='EncoderDecoder', - pretrained='open-mmlab://resnet50_v1c', - backbone=dict( - type='ResNetV1c', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - dilations=(1, 1, 2, 4), - strides=(1, 2, 1, 1), - norm_cfg=norm_cfg, - norm_eval=False, - style='pytorch', - contract_dilation=True), - decode_head=dict( - type='ASPPHead', - in_channels=2048, - in_index=3, - channels=512, - dilations=(1, 12, 24, 36), - dropout_ratio=0.1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)), - auxiliary_head=dict( - type='FCNHead', - in_channels=1024, - in_index=2, - channels=256, - num_convs=1, - concat_input=False, - dropout_ratio=0.1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)), - # model training and testing settings - train_cfg=dict(), - test_cfg=dict(mode='whole')) diff --git a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/ops/cc_attention.py b/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/ops/cc_attention.py deleted file mode 100644 index 9207aa95e6730bd9b3362dee612059a5f0ce1c5e..0000000000000000000000000000000000000000 --- a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/ops/cc_attention.py +++ /dev/null @@ -1,83 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -import torch.nn.functional as F - -from annotator.uniformer.mmcv.cnn import PLUGIN_LAYERS, Scale - - -def NEG_INF_DIAG(n, device): - """Returns a diagonal matrix of size [n, n]. - - The diagonal are all "-inf". This is for avoiding calculating the - overlapped element in the Criss-Cross twice. - """ - return torch.diag(torch.tensor(float('-inf')).to(device).repeat(n), 0) - - -@PLUGIN_LAYERS.register_module() -class CrissCrossAttention(nn.Module): - """Criss-Cross Attention Module. - - .. note:: - Before v1.3.13, we use a CUDA op. Since v1.3.13, we switch - to a pure PyTorch and equivalent implementation. For more - details, please refer to https://github.com/open-mmlab/mmcv/pull/1201. 
- - Speed comparison for one forward pass - - - Input size: [2,512,97,97] - - Device: 1 NVIDIA GeForce RTX 2080 Ti - - +-----------------------+---------------+------------+---------------+ - | |PyTorch version|CUDA version|Relative speed | - +=======================+===============+============+===============+ - |with torch.no_grad() |0.00554402 s |0.0299619 s |5.4x | - +-----------------------+---------------+------------+---------------+ - |no with torch.no_grad()|0.00562803 s |0.0301349 s |5.4x | - +-----------------------+---------------+------------+---------------+ - - Args: - in_channels (int): Channels of the input feature map. - """ - - def __init__(self, in_channels): - super().__init__() - self.query_conv = nn.Conv2d(in_channels, in_channels // 8, 1) - self.key_conv = nn.Conv2d(in_channels, in_channels // 8, 1) - self.value_conv = nn.Conv2d(in_channels, in_channels, 1) - self.gamma = Scale(0.) - self.in_channels = in_channels - - def forward(self, x): - """forward function of Criss-Cross Attention. - - Args: - x (Tensor): Input feature. \ - shape (batch_size, in_channels, height, width) - Returns: - Tensor: Output of the layer, with shape of \ - (batch_size, in_channels, height, width) - """ - B, C, H, W = x.size() - query = self.query_conv(x) - key = self.key_conv(x) - value = self.value_conv(x) - energy_H = torch.einsum('bchw,bciw->bwhi', query, key) + NEG_INF_DIAG( - H, query.device) - energy_H = energy_H.transpose(1, 2) - energy_W = torch.einsum('bchw,bchj->bhwj', query, key) - attn = F.softmax( - torch.cat([energy_H, energy_W], dim=-1), dim=-1) # [B,H,W,(H+W)] - out = torch.einsum('bciw,bhwi->bchw', value, attn[..., :H]) - out += torch.einsum('bchj,bhwj->bchw', value, attn[..., H:]) - - out = self.gamma(out) + x - out = out.contiguous() - - return out - - def __repr__(self): - s = self.__class__.__name__ - s += f'(in_channels={self.in_channels})' - return s diff --git a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/utils/progressbar.py b/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/utils/progressbar.py deleted file mode 100644 index 0062f670dd94fa9da559ab26ef85517dcf5211c7..0000000000000000000000000000000000000000 --- a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/utils/progressbar.py +++ /dev/null @@ -1,208 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import sys -from collections.abc import Iterable -from multiprocessing import Pool -from shutil import get_terminal_size - -from .timer import Timer - - -class ProgressBar: - """A progress bar which can print the progress.""" - - def __init__(self, task_num=0, bar_width=50, start=True, file=sys.stdout): - self.task_num = task_num - self.bar_width = bar_width - self.completed = 0 - self.file = file - if start: - self.start() - - @property - def terminal_width(self): - width, _ = get_terminal_size() - return width - - def start(self): - if self.task_num > 0: - self.file.write(f'[{" " * self.bar_width}] 0/{self.task_num}, ' - 'elapsed: 0s, ETA:') - else: - self.file.write('completed: 0, elapsed: 0s') - self.file.flush() - self.timer = Timer() - - def update(self, num_tasks=1): - assert num_tasks > 0 - self.completed += num_tasks - elapsed = self.timer.since_start() - if elapsed > 0: - fps = self.completed / elapsed - else: - fps = float('inf') - if self.task_num > 0: - percentage = self.completed / float(self.task_num) - eta = int(elapsed * (1 - percentage) / percentage + 0.5) - msg = f'\r[{{}}] {self.completed}/{self.task_num}, ' \ - f'{fps:.1f} task/s, elapsed: {int(elapsed + 0.5)}s, ' \ - f'ETA: {eta:5}s' - - bar_width = min(self.bar_width, - int(self.terminal_width - len(msg)) + 2, - int(self.terminal_width * 0.6)) - bar_width = max(2, bar_width) - mark_width = int(bar_width * percentage) - bar_chars = '>' * mark_width + ' ' * (bar_width - mark_width) - self.file.write(msg.format(bar_chars)) - else: - self.file.write( - f'completed: {self.completed}, elapsed: {int(elapsed + 0.5)}s,' - f' {fps:.1f} tasks/s') - self.file.flush() - - -def track_progress(func, tasks, bar_width=50, file=sys.stdout, **kwargs): - """Track the progress of tasks execution with a progress bar. - - Tasks are done with a simple for-loop. - - Args: - func (callable): The function to be applied to each task. - tasks (list or tuple[Iterable, int]): A list of tasks or - (tasks, total num). - bar_width (int): Width of progress bar. - - Returns: - list: The task results. - """ - if isinstance(tasks, tuple): - assert len(tasks) == 2 - assert isinstance(tasks[0], Iterable) - assert isinstance(tasks[1], int) - task_num = tasks[1] - tasks = tasks[0] - elif isinstance(tasks, Iterable): - task_num = len(tasks) - else: - raise TypeError( - '"tasks" must be an iterable object or a (iterator, int) tuple') - prog_bar = ProgressBar(task_num, bar_width, file=file) - results = [] - for task in tasks: - results.append(func(task, **kwargs)) - prog_bar.update() - prog_bar.file.write('\n') - return results - - -def init_pool(process_num, initializer=None, initargs=None): - if initializer is None: - return Pool(process_num) - elif initargs is None: - return Pool(process_num, initializer) - else: - if not isinstance(initargs, tuple): - raise TypeError('"initargs" must be a tuple') - return Pool(process_num, initializer, initargs) - - -def track_parallel_progress(func, - tasks, - nproc, - initializer=None, - initargs=None, - bar_width=50, - chunksize=1, - skip_first=False, - keep_order=True, - file=sys.stdout): - """Track the progress of parallel task execution with a progress bar. - - The built-in :mod:`multiprocessing` module is used for process pools and - tasks are done with :func:`Pool.map` or :func:`Pool.imap_unordered`. - - Args: - func (callable): The function to be applied to each task. - tasks (list or tuple[Iterable, int]): A list of tasks or - (tasks, total num). - nproc (int): Process (worker) number. 
- initializer (None or callable): Refer to :class:`multiprocessing.Pool` - for details. - initargs (None or tuple): Refer to :class:`multiprocessing.Pool` for - details. - chunksize (int): Refer to :class:`multiprocessing.Pool` for details. - bar_width (int): Width of progress bar. - skip_first (bool): Whether to skip the first sample for each worker - when estimating fps, since the initialization step may takes - longer. - keep_order (bool): If True, :func:`Pool.imap` is used, otherwise - :func:`Pool.imap_unordered` is used. - - Returns: - list: The task results. - """ - if isinstance(tasks, tuple): - assert len(tasks) == 2 - assert isinstance(tasks[0], Iterable) - assert isinstance(tasks[1], int) - task_num = tasks[1] - tasks = tasks[0] - elif isinstance(tasks, Iterable): - task_num = len(tasks) - else: - raise TypeError( - '"tasks" must be an iterable object or a (iterator, int) tuple') - pool = init_pool(nproc, initializer, initargs) - start = not skip_first - task_num -= nproc * chunksize * int(skip_first) - prog_bar = ProgressBar(task_num, bar_width, start, file=file) - results = [] - if keep_order: - gen = pool.imap(func, tasks, chunksize) - else: - gen = pool.imap_unordered(func, tasks, chunksize) - for result in gen: - results.append(result) - if skip_first: - if len(results) < nproc * chunksize: - continue - elif len(results) == nproc * chunksize: - prog_bar.start() - continue - prog_bar.update() - prog_bar.file.write('\n') - pool.close() - pool.join() - return results - - -def track_iter_progress(tasks, bar_width=50, file=sys.stdout): - """Track the progress of tasks iteration or enumeration with a progress - bar. - - Tasks are yielded with a simple for-loop. - - Args: - tasks (list or tuple[Iterable, int]): A list of tasks or - (tasks, total num). - bar_width (int): Width of progress bar. - - Yields: - list: The task results. 
- """ - if isinstance(tasks, tuple): - assert len(tasks) == 2 - assert isinstance(tasks[0], Iterable) - assert isinstance(tasks[1], int) - task_num = tasks[1] - tasks = tasks[0] - elif isinstance(tasks, Iterable): - task_num = len(tasks) - else: - raise TypeError( - '"tasks" must be an iterable object or a (iterator, int) tuple') - prog_bar = ProgressBar(task_num, bar_width, file=file) - for task in tasks: - yield task - prog_bar.update() - prog_bar.file.write('\n') diff --git a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmseg/ops/__init__.py b/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmseg/ops/__init__.py deleted file mode 100644 index bec51c75b9363a9a19e9fb5c35f4e7dbd6f7751c..0000000000000000000000000000000000000000 --- a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmseg/ops/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -from .encoding import Encoding -from .wrappers import Upsample, resize - -__all__ = ['Upsample', 'resize', 'Encoding'] diff --git a/spaces/gligen/demo/gligen/ldm/__init__.py b/spaces/gligen/demo/gligen/ldm/__init__.py deleted file mode 100644 index d71fc3230d513f470c68e46752b355cd70824ecf..0000000000000000000000000000000000000000 --- a/spaces/gligen/demo/gligen/ldm/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -import gligen.evaluator as evaluator -import gligen.trainer as trainer -import gligen.ldm as ldm \ No newline at end of file diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Crack VERIFIED De Age Of Mythology The Titans Descargar.md b/spaces/gotiQspiryo/whisper-ui/examples/Crack VERIFIED De Age Of Mythology The Titans Descargar.md deleted file mode 100644 index d1437e92539fa69a7b35ae72212e4fc6c14b895d..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Crack VERIFIED De Age Of Mythology The Titans Descargar.md +++ /dev/null @@ -1,7 +0,0 @@ - -

Pirated Keys Age of Mythology Crack + Activation code. Age Of Mythology is a complete solution for the series with a huge number of features. A positive experience, a free return form for everyone to carve up the text, but mainly the circle of completing a response can do a lot for determining the author's verification. The circle is tied to every prototype; for advanced designers it is useful to get more options where possible without going through servers for the importance of statistics and analysis of the order experience on your site. Especially after installation, more users than before make use of this producer's most important points, and it brings together almost all of the results.

          -

          Crack De Age Of Mythology The Titans Descargar


          Download Ziphttps://urlgoal.com/2uyNaL



          -

          In Age of Mythology: The Titans, experience a brand-new world that places players in the pivotal positions of the ancient Atlanteans. Explore the four primordial kingdoms, wield powerful god powers, form your own Council of Titans and build the greatest empire of them all. As the young and powerful king, Arthur, you must lead your nation to victory against a cruel enemy and reestablish your dominion.

          -

          Blast your way to glory in the prehistoric action-adventure saga, Age of Mythology! After the mysterious catastrophe, the untamed lands lie open to a new threat: a vast Horde of hungry beasts from beyond the steppes of Thecyrus has invaded from the north and pillaged the land of Civilization. As the young and ambitious Barbarian king, step into the shoes of the protagonist Arthur and travel across wide-open landscapes, scouring the continents for the legendary artifacts that will help you craft a mighty empire.

          899543212b
          -
          -
          \ No newline at end of file diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Jeepers Creepers 3 Fr Kickass 168.md b/spaces/gotiQspiryo/whisper-ui/examples/Jeepers Creepers 3 Fr Kickass 168.md deleted file mode 100644 index 27356061c691ef41642f3b5f675b3eedb216eb9e..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Jeepers Creepers 3 Fr Kickass 168.md +++ /dev/null @@ -1,7 +0,0 @@ - -

          foufiac 19191a764c
          -creepers-3-french-dvdrip-torrent
[ -creepers-3-french-dvdrip-torrent ]
link= -creepers-3-french-dvdrip-torrent

          -

          Raees full movie download in hindi 720p kickasshttps: scoutmails.com index301.php k Raees full movi -alcatel-mtk-phone-unlock-tool-free-60lEsondursownCemIdoma -onyx-productionhouse-x-100089-x86x64-multilanguage-crack-download-pc apollo racing wheel rw-2009 driver download trello.com Assassins Creed IV Black Flag Update v1 04 Incl Freedom Cry DLC RELOADED AqAssassins Creed IV Black Xdcam Content Browser Serial Number cheile genelor carte pdf free Download Download Pizza Gta Vice Cityl trello HiliClient -nucleus-kernel-for-fat-and-ntfs-130601-crack SevaWrormefeerat -fc2-video-premium-accountsLEKTILKBRELE Download soossiviouri -philip-hallawell-visagismo-integrado-pdf-download

          -

          Jeepers creepers 3 fr kickass 168


          Download File 🗸 https://urlgoal.com/2uyMdW



          -

          Cyberfoot 2010 32 Lig Yamas Indir -dvd-cloner-2017-crack-lifetime-activation-key-free-downloadx-force Inventor 2018 keygen trello DJ Music Mixer Pro 7.0 Crack Fully Activation Version 2019! -adobe-acrobat-xi-pro-11022-final-crack-utorrent Fefsseanna trello.com NeupleArrateBuhirrat trello.com Ledjineedync trello pulp fiction 1994 hindi dubbed ddl -terminator-2-judgment-day-english-full-movie-in-hindi-dubbed-hd SevaWrormefeerat -jeepers-creepers-3-divx-ita-torrent-itaLEKTILKBRELE Download soossiviouri Download

          aaccfb2cb3
          -
          -
          \ No newline at end of file diff --git a/spaces/gradio/HuBERT/fairseq/modules/fairseq_dropout.py b/spaces/gradio/HuBERT/fairseq/modules/fairseq_dropout.py deleted file mode 100644 index 3cddca77186f5ddd5cfb9c0ed6def9bafdf3bf1e..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/fairseq/modules/fairseq_dropout.py +++ /dev/null @@ -1,51 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -from typing import List, Optional - -import torch.nn as nn -import torch.nn.functional as F - - -logger = logging.getLogger(__name__) - - -class FairseqDropout(nn.Module): - def __init__(self, p, module_name=None): - super().__init__() - self.p = p - self.module_name = module_name - self.apply_during_inference = False - - def forward(self, x, inplace: bool = False): - if self.p > 0 and (self.training or self.apply_during_inference): - return F.dropout(x, p=self.p, training=True, inplace=inplace) - else: - return x - - def make_generation_fast_( - self, - name: str, - retain_dropout: bool = False, - retain_dropout_modules: Optional[List[str]] = None, - **kwargs - ): - if retain_dropout: - if retain_dropout_modules is not None and self.module_name is None: - logger.warning( - "Cannot enable dropout during inference for module {} " - "because module_name was not set".format(name) - ) - elif ( - retain_dropout_modules is None # if None, apply to all modules - or self.module_name in retain_dropout_modules - ): - logger.info( - "Enabling dropout during inference for module: {}".format(name) - ) - self.apply_during_inference = True - else: - logger.info("Disabling dropout for module: {}".format(name)) diff --git a/spaces/greenlights/gitapp/app.py b/spaces/greenlights/gitapp/app.py deleted file mode 100644 index 4aa958a565f75505f6c4649adee551ae74eb9224..0000000000000000000000000000000000000000 --- a/spaces/greenlights/gitapp/app.py +++ /dev/null @@ -1,39 +0,0 @@ -import numpy as np -import json -from flask import Flask, request, jsonify, render_template -import pickle - -app = Flask(__name__) -model = pickle.load(open('model.pkl', 'rb')) - -@app.route('/') -def home(): - return render_template('index.html') - -@app.route('/predict',methods=['POST']) -def predict(): - ''' - For rendering results on HTML GUI - ''' - int_features = [int(x) for x in request.form.values()] - final_features = [np.array(int_features)] - prediction = model.predict(final_features) - - output = round(prediction[0], 2) - - return render_template('index.html', prediction_text='Salary is {}'.format(output)) - - -@app.route('/predict_api',methods=['POST']) -def predict_api(): - ''' - For direct API calls trought request - ''' - data = request.get_json(force=True) - prediction = model.predict([np.array(list(data.values()))]) - - output = prediction[0] - return jsonify(output) - -if __name__ == "__main__": - app.run(debug=True) diff --git a/spaces/guymorlan/English2ShamiDialect/README.md b/spaces/guymorlan/English2ShamiDialect/README.md deleted file mode 100644 index 702fa6c53fcc8f2e49b374afa6b50ac1badcdf13..0000000000000000000000000000000000000000 --- a/spaces/guymorlan/English2ShamiDialect/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: English2ShamiDialect -emoji: 👀 -colorFrom: blue -colorTo: indigo -sdk: gradio -sdk_version: 3.18.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at 
https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/gyugnsu/DragGan-Inversion/gen_images.py b/spaces/gyugnsu/DragGan-Inversion/gen_images.py deleted file mode 100644 index 996bc12f4cde6ee9d0076446250ed076a04b2641..0000000000000000000000000000000000000000 --- a/spaces/gyugnsu/DragGan-Inversion/gen_images.py +++ /dev/null @@ -1,160 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Generate images using pretrained network pickle.""" - -import os -import re -from typing import List, Optional, Tuple, Union - -import click -import dnnlib -import numpy as np -import PIL.Image -import torch - -import legacy - -# ---------------------------------------------------------------------------- - - -def parse_range(s: Union[str, List]) -> List[int]: - '''Parse a comma separated list of numbers or ranges and return a list of ints. - - Example: '1,2,5-10' returns [1, 2, 5, 6, 7] - ''' - if isinstance(s, list): - return s - ranges = [] - range_re = re.compile(r'^(\d+)-(\d+)$') - for p in s.split(','): - m = range_re.match(p) - if m: - ranges.extend(range(int(m.group(1)), int(m.group(2))+1)) - else: - ranges.append(int(p)) - return ranges - -# ---------------------------------------------------------------------------- - - -def parse_vec2(s: Union[str, Tuple[float, float]]) -> Tuple[float, float]: - '''Parse a floating point 2-vector of syntax 'a,b'. - - Example: - '0,1' returns (0,1) - ''' - if isinstance(s, tuple): - return s - parts = s.split(',') - if len(parts) == 2: - return (float(parts[0]), float(parts[1])) - raise ValueError(f'cannot parse 2-vector {s}') - -# ---------------------------------------------------------------------------- - - -def make_transform(translate: Tuple[float, float], angle: float): - m = np.eye(3) - s = np.sin(angle/360.0*np.pi*2) - c = np.cos(angle/360.0*np.pi*2) - m[0][0] = c - m[0][1] = s - m[0][2] = translate[0] - m[1][0] = -s - m[1][1] = c - m[1][2] = translate[1] - return m - -# ---------------------------------------------------------------------------- - - -@click.command() -@click.option('--network', 'network_pkl', help='Network pickle filename', required=True) -@click.option('--seeds', type=parse_range, help='List of random seeds (e.g., \'0,1,4-6\')', required=True) -@click.option('--trunc', 'truncation_psi', type=float, help='Truncation psi', default=1, show_default=True) -@click.option('--class', 'class_idx', type=int, help='Class label (unconditional if not specified)') -@click.option('--noise-mode', help='Noise mode', type=click.Choice(['const', 'random', 'none']), default='const', show_default=True) -@click.option('--translate', help='Translate XY-coordinate (e.g. 
\'0.3,1\')', type=parse_vec2, default='0,0', show_default=True, metavar='VEC2') -@click.option('--rotate', help='Rotation angle in degrees', type=float, default=0, show_default=True, metavar='ANGLE') -@click.option('--outdir', help='Where to save the output images', type=str, required=True, metavar='DIR') -def generate_images( - network_pkl: str, - seeds: List[int], - truncation_psi: float, - noise_mode: str, - outdir: str, - translate: Tuple[float, float], - rotate: float, - class_idx: Optional[int] -): - """Generate images using pretrained network pickle. - - Examples: - - \b - # Generate an image using pre-trained AFHQv2 model ("Ours" in Figure 1, left). - python gen_images.py --outdir=out --trunc=1 --seeds=2 \\ - --network=https://api.ngc.nvidia.com/v2/models/nvidia/research/stylegan3/versions/1/files/stylegan3-r-afhqv2-512x512.pkl - - \b - # Generate uncurated images with truncation using the MetFaces-U dataset - python gen_images.py --outdir=out --trunc=0.7 --seeds=600-605 \\ - --network=https://api.ngc.nvidia.com/v2/models/nvidia/research/stylegan3/versions/1/files/stylegan3-t-metfacesu-1024x1024.pkl - """ - - print('Loading networks from "%s"...' % network_pkl) - device = torch.device('cuda') - with dnnlib.util.open_url(network_pkl) as f: - G = legacy.load_network_pkl(f)['G_ema'].to(device) # type: ignore - # import pickle - # G = legacy.load_network_pkl(f) - # output = open('checkpoints/stylegan2-car-config-f-pt.pkl', 'wb') - # pickle.dump(G, output) - - os.makedirs(outdir, exist_ok=True) - - # Labels. - label = torch.zeros([1, G.c_dim], device=device) - if G.c_dim != 0: - if class_idx is None: - raise click.ClickException( - 'Must specify class label with --class when using a conditional network') - label[:, class_idx] = 1 - else: - if class_idx is not None: - print('warn: --class=lbl ignored when running on an unconditional network') - - # Generate images. - for seed_idx, seed in enumerate(seeds): - print('Generating image for seed %d (%d/%d) ...' % - (seed, seed_idx, len(seeds))) - z = torch.from_numpy(np.random.RandomState( - seed).randn(1, G.z_dim)).to(device) - - # Construct an inverse rotation/translation matrix and pass to the generator. The - # generator expects this matrix as an inverse to avoid potentially failing numerical - # operations in the network. 
- if hasattr(G.synthesis, 'input'): - m = make_transform(translate, rotate) - m = np.linalg.inv(m) - G.synthesis.input.transform.copy_(torch.from_numpy(m)) - - img = G(z, label, truncation_psi=truncation_psi, noise_mode=noise_mode) - img = (img.permute(0, 2, 3, 1) * 127.5 + - 128).clamp(0, 255).to(torch.uint8) - PIL.Image.fromarray(img[0].cpu().numpy(), 'RGB').save( - f'{outdir}/seed{seed:04d}.png') - - -# ---------------------------------------------------------------------------- - -if __name__ == "__main__": - generate_images() # pylint: disable=no-value-for-parameter - -# ---------------------------------------------------------------------------- diff --git a/spaces/hamelcubsfan/AutoGPT/autogpt/__init__.py b/spaces/hamelcubsfan/AutoGPT/autogpt/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/hands012/gpt-academic/crazy_functions/test_project/latex/attention/model_architecture.tex b/spaces/hands012/gpt-academic/crazy_functions/test_project/latex/attention/model_architecture.tex deleted file mode 100644 index c82be6242cc9d26203360e90d3ac9184ef6ad842..0000000000000000000000000000000000000000 --- a/spaces/hands012/gpt-academic/crazy_functions/test_project/latex/attention/model_architecture.tex +++ /dev/null @@ -1,155 +0,0 @@ - -\begin{figure} - \centering - \includegraphics[scale=0.6]{Figures/ModalNet-21} - \caption{The Transformer - model architecture.} - \label{fig:model-arch} -\end{figure} - -% Although the primary workhorse of our model is attention, -%Our model maintains the encoder-decoder structure that is common to many so-called sequence-to-sequence models \citep{bahdanau2014neural,sutskever14}. As in all such architectures, the encoder computes a representation of the input sequence, and the decoder consumes these representations along with the output tokens to autoregressively produce the output sequence. Where, traditionally, the encoder and decoder contain stacks of recurrent or convolutional layers, our encoder and decoder stacks are composed of attention layers and position-wise feed-forward layers (Figure~\ref{fig:model-arch}). The following sections describe the gross architecture and these particular components in detail. - -Most competitive neural sequence transduction models have an encoder-decoder structure \citep{cho2014learning,bahdanau2014neural,sutskever14}. Here, the encoder maps an input sequence of symbol representations $(x_1, ..., x_n)$ to a sequence of continuous representations $\mathbf{z} = (z_1, ..., z_n)$. Given $\mathbf{z}$, the decoder then generates an output sequence $(y_1,...,y_m)$ of symbols one element at a time. At each step the model is auto-regressive \citep{graves2013generating}, consuming the previously generated symbols as additional input when generating the next. - -The Transformer follows this overall architecture using stacked self-attention and point-wise, fully connected layers for both the encoder and decoder, shown in the left and right halves of Figure~\ref{fig:model-arch}, respectively. - -\subsection{Encoder and Decoder Stacks} - -\paragraph{Encoder:}The encoder is composed of a stack of $N=6$ identical layers. Each layer has two sub-layers. The first is a multi-head self-attention mechanism, and the second is a simple, position-wise fully connected feed-forward network. We employ a residual connection \citep{he2016deep} around each of the two sub-layers, followed by layer normalization \cite{layernorm2016}. 
That is, the output of each sub-layer is $\mathrm{LayerNorm}(x + \mathrm{Sublayer}(x))$, where $\mathrm{Sublayer}(x)$ is the function implemented by the sub-layer itself. To facilitate these residual connections, all sub-layers in the model, as well as the embedding layers, produce outputs of dimension $\dmodel=512$. - -\paragraph{Decoder:}The decoder is also composed of a stack of $N=6$ identical layers. In addition to the two sub-layers in each encoder layer, the decoder inserts a third sub-layer, which performs multi-head attention over the output of the encoder stack. Similar to the encoder, we employ residual connections around each of the sub-layers, followed by layer normalization. We also modify the self-attention sub-layer in the decoder stack to prevent positions from attending to subsequent positions. This masking, combined with fact that the output embeddings are offset by one position, ensures that the predictions for position $i$ can depend only on the known outputs at positions less than $i$. - -% In our model (Figure~\ref{fig:model-arch}), the encoder and decoder are composed of stacks of alternating self-attention layers (for cross-positional communication) and position-wise feed-forward layers (for in-place computation). In addition, the decoder stack contains encoder-decoder attention layers. Since attention is agnostic to the distances between words, our model requires a "positional encoding" to be added to the encoder and decoder input. The following sections describe all of these components in detail. - -\subsection{Attention} \label{sec:attention} -An attention function can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is computed as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key. - -\subsubsection{Scaled Dot-Product Attention} \label{sec:scaled-dot-prod} - -% \begin{figure} -% \centering -% \includegraphics[scale=0.6]{Figures/ModalNet-19} -% \caption{Scaled Dot-Product Attention.} -% \label{fig:multi-head-att} -% \end{figure} - -We call our particular attention "Scaled Dot-Product Attention" (Figure~\ref{fig:multi-head-att}). The input consists of queries and keys of dimension $d_k$, and values of dimension $d_v$. We compute the dot products of the query with all keys, divide each by $\sqrt{d_k}$, and apply a softmax function to obtain the weights on the values. - -In practice, we compute the attention function on a set of queries simultaneously, packed together into a matrix $Q$. The keys and values are also packed together into matrices $K$ and $V$. We compute the matrix of outputs as: - -\begin{equation} - \mathrm{Attention}(Q, K, V) = \mathrm{softmax}(\frac{QK^T}{\sqrt{d_k}})V -\end{equation} - -The two most commonly used attention functions are additive attention \citep{bahdanau2014neural}, and dot-product (multiplicative) attention. Dot-product attention is identical to our algorithm, except for the scaling factor of $\frac{1}{\sqrt{d_k}}$. Additive attention computes the compatibility function using a feed-forward network with a single hidden layer. While the two are similar in theoretical complexity, dot-product attention is much faster and more space-efficient in practice, since it can be implemented using highly optimized matrix multiplication code. 
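A minimal NumPy sketch of the formula above (illustrative only: the function name and array shapes are assumptions, and masking and dropout are omitted):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Q: (n_q, d_k), K: (n_k, d_k), V: (n_k, d_v)
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # compatibility of each query with each key
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability for the softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the keys
    return weights @ V                               # weighted sum of the values

# toy example: 3 queries attending over 4 key-value pairs with d_k = d_v = 8
rng = np.random.default_rng(0)
Q, K, V = rng.standard_normal((3, 8)), rng.standard_normal((4, 8)), rng.standard_normal((4, 8))
print(scaled_dot_product_attention(Q, K, V).shape)  # (3, 8)
```

In decoder self-attention, the entries of the score matrix corresponding to illegal (future) positions would be set to $-\infty$ before the softmax, as described later in this section.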
- -%We scale the dot products by $1/\sqrt{d_k}$ to limit the magnitude of the dot products, which works well in practice. Otherwise, we found applying the softmax to often result in weights very close to 0 or 1, and hence minuscule gradients. - -% Already described in the subsequent section -%When used as part of decoder self-attention, an optional mask function is applied just before the softmax to prevent positions from attending to subsequent positions. This mask simply sets the logits corresponding to all illegal connections (those outside of the lower triangle) to $-\infty$. - -%\paragraph{Comparison to Additive Attention: } We choose dot product attention over additive attention \citep{bahdanau2014neural} since it can be computed using highly optimized matrix multiplication code. This optimization is particularly important to us, as we employ many attention layers in our model. - -While for small values of $d_k$ the two mechanisms perform similarly, additive attention outperforms dot product attention without scaling for larger values of $d_k$ \citep{DBLP:journals/corr/BritzGLL17}. We suspect that for large values of $d_k$, the dot products grow large in magnitude, pushing the softmax function into regions where it has extremely small gradients \footnote{To illustrate why the dot products get large, assume that the components of $q$ and $k$ are independent random variables with mean $0$ and variance $1$. Then their dot product, $q \cdot k = \sum_{i=1}^{d_k} q_ik_i$, has mean $0$ and variance $d_k$.}. To counteract this effect, we scale the dot products by $\frac{1}{\sqrt{d_k}}$. - - -%We suspect this to be caused by the dot products growing too large in magnitude to result in useful gradients after applying the softmax function. To counteract this, we scale the dot product by $1/\sqrt{d_k}$. - - -\subsubsection{Multi-Head Attention} \label{sec:multihead} - -\begin{figure} -\begin{minipage}[t]{0.5\textwidth} - \centering - Scaled Dot-Product Attention \\ - \vspace{0.5cm} - \includegraphics[scale=0.6]{Figures/ModalNet-19} -\end{minipage} -\begin{minipage}[t]{0.5\textwidth} - \centering - Multi-Head Attention \\ - \vspace{0.1cm} - \includegraphics[scale=0.6]{Figures/ModalNet-20} -\end{minipage} - - - % \centering - - \caption{(left) Scaled Dot-Product Attention. (right) Multi-Head Attention consists of several attention layers running in parallel.} - \label{fig:multi-head-att} -\end{figure} - -Instead of performing a single attention function with $\dmodel$-dimensional keys, values and queries, we found it beneficial to linearly project the queries, keys and values $h$ times with different, learned linear projections to $d_k$, $d_k$ and $d_v$ dimensions, respectively. -On each of these projected versions of queries, keys and values we then perform the attention function in parallel, yielding $d_v$-dimensional output values. These are concatenated and once again projected, resulting in the final values, as depicted in Figure~\ref{fig:multi-head-att}. - -Multi-head attention allows the model to jointly attend to information from different representation subspaces at different positions. With a single attention head, averaging inhibits this. 
- -\begin{align*} - \mathrm{MultiHead}(Q, K, V) &= \mathrm{Concat}(\mathrm{head_1}, ..., \mathrm{head_h})W^O\\ -% \mathrm{where} \mathrm{head_i} &= \mathrm{Attention}(QW_Q_i^{\dmodel \times d_q}, KW_K_i^{\dmodel \times d_k}, VW^V_i^{\dmodel \times d_v})\\ - \text{where}~\mathrm{head_i} &= \mathrm{Attention}(QW^Q_i, KW^K_i, VW^V_i)\\ -\end{align*} - -Where the projections are parameter matrices $W^Q_i \in \mathbb{R}^{\dmodel \times d_k}$, $W^K_i \in \mathbb{R}^{\dmodel \times d_k}$, $W^V_i \in \mathbb{R}^{\dmodel \times d_v}$ and $W^O \in \mathbb{R}^{hd_v \times \dmodel}$. - - -%find it better (and no more expensive) to have multiple parallel attention layers (each over the full set of positions) with proportionally lower-dimensional keys, values and queries. We call this "Multi-Head Attention" (Figure~\ref{fig:multi-head-att}). The keys, values, and queries for each of these parallel attention layers are computed by learned linear transformations of the inputs to the multi-head attention. We use different linear transformations across different parallel attention layers. The output of the parallel attention layers are concatenated, and then passed through a final learned linear transformation. - -In this work we employ $h=8$ parallel attention layers, or heads. For each of these we use $d_k=d_v=\dmodel/h=64$. -Due to the reduced dimension of each head, the total computational cost is similar to that of single-head attention with full dimensionality. - -\subsubsection{Applications of Attention in our Model} - -The Transformer uses multi-head attention in three different ways: -\begin{itemize} - \item In "encoder-decoder attention" layers, the queries come from the previous decoder layer, and the memory keys and values come from the output of the encoder. This allows every position in the decoder to attend over all positions in the input sequence. This mimics the typical encoder-decoder attention mechanisms in sequence-to-sequence models such as \citep{wu2016google, bahdanau2014neural,JonasFaceNet2017}. - - \item The encoder contains self-attention layers. In a self-attention layer all of the keys, values and queries come from the same place, in this case, the output of the previous layer in the encoder. Each position in the encoder can attend to all positions in the previous layer of the encoder. - - \item Similarly, self-attention layers in the decoder allow each position in the decoder to attend to all positions in the decoder up to and including that position. We need to prevent leftward information flow in the decoder to preserve the auto-regressive property. We implement this inside of scaled dot-product attention by masking out (setting to $-\infty$) all values in the input of the softmax which correspond to illegal connections. See Figure~\ref{fig:multi-head-att}. - -\end{itemize} - -\subsection{Position-wise Feed-Forward Networks}\label{sec:ffn} - -In addition to attention sub-layers, each of the layers in our encoder and decoder contains a fully connected feed-forward network, which is applied to each position separately and identically. This consists of two linear transformations with a ReLU activation in between. - -\begin{equation} - \mathrm{FFN}(x)=\max(0, xW_1 + b_1) W_2 + b_2 -\end{equation} - -While the linear transformations are the same across different positions, they use different parameters from layer to layer. Another way of describing this is as two convolutions with kernel size 1. 
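As an illustration (a sketch, not the paper's code), this position-wise feed-forward network can be written with the model width and inner dimension left as parameters; the concrete values used in the paper are stated just below:

```python
import torch
import torch.nn as nn

class PositionwiseFeedForward(nn.Module):
    """FFN(x) = max(0, x W1 + b1) W2 + b2, applied identically at every position."""

    def __init__(self, d_model: int, d_ff: int):
        super().__init__()
        self.w_1 = nn.Linear(d_model, d_ff)
        self.w_2 = nn.Linear(d_ff, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model); the same weights are shared across all positions.
        return self.w_2(torch.relu(self.w_1(x)))

ffn = PositionwiseFeedForward(d_model=512, d_ff=2048)
out = ffn(torch.randn(2, 10, 512))  # shape: (2, 10, 512)
```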
The dimensionality of input and output is $\dmodel=512$, and the inner-layer has dimensionality $d_{ff}=2048$. - - - -%In the appendix, we describe how the position-wise feed-forward network can also be seen as a form of attention. - -%from Jakob: The number of operations required for the model to relate signals from two arbitrary input or output positions grows in the distance between positions in input or output, linearly for ConvS2S and logarithmically for ByteNet, making it harder to learn dependencies between these positions \citep{hochreiter2001gradient}. In the transformer this is reduced to a constant number of operations, albeit at the cost of effective resolution caused by averaging attention-weighted positions, an effect we aim to counteract with multi-headed attention. - - -%Figure~\ref{fig:simple-att} presents a simple attention function, $A$, with a single head, that forms the basis of our multi-head attention. $A$ takes a query key vector $\kq$, matrices of memory keys $\km$ and memory values $\vm$ ,and produces a query value vector $\vq$ as -%\begin{equation*} \label{eq:attention} -% A(\kq, \km, \vm) = {\vm}^T (Softmax(\km \kq). -%\end{equation*} -%We linearly transform $\kq,\,\km$, and $\vm$ with learned matrices ${\Wkq \text{,} \, \Wkm}$, and ${\Wvm}$ before calling the attention function, and transform the output query with $\Wvq$ before handing it to the feed forward layer. Each attention layer has it's own set of transformation matrices, which are shared across all query positions. $A$ is applied in parallel for each query position, and is implemented very efficiently as a batch of matrix multiplies. The self-attention and encoder-decoder attention layers use $A$, but with different arguments. For example, in encdoder self-attention, queries in encoder layer $i$ attention to memories in encoder layer $i-1$. To ensure that decoder self-attention layers do not look at future words, we add $- \inf$ to the softmax logits in positions $j+1$ to query length for query position $l$. - -%In simple attention, the query value is a weighted combination of the memory values where the attention weights sum to one. Although this function performs well in practice, the constraint on attention weights can restrict the amount of information that flows from memories to queries because the query cannot focus on multiple memory positions at once, which might be desirable when translating long sequences. \marginpar{@usz, could you think of an example of this ?} We remedy this by maintaining multiple attention heads at each query position that attend to all memory positions in parallel, with a different set of parameters per attention head $h$. -%\marginpar{} - -\subsection{Embeddings and Softmax} -Similarly to other sequence transduction models, we use learned embeddings to convert the input tokens and output tokens to vectors of dimension $\dmodel$. We also use the usual learned linear transformation and softmax function to convert the decoder output to predicted next-token probabilities. In our model, we share the same weight matrix between the two embedding layers and the pre-softmax linear transformation, similar to \citep{press2016using}. In the embedding layers, we multiply those weights by $\sqrt{\dmodel}$. - - -\subsection{Positional Encoding} -Since our model contains no recurrence and no convolution, in order for the model to make use of the order of the sequence, we must inject some information about the relative or absolute position of the tokens in the sequence. 
To this end, we add "positional encodings" to the input embeddings at the bottoms of the encoder and decoder stacks. The positional encodings have the same dimension $\dmodel$ as the embeddings, so that the two can be summed. There are many choices of positional encodings, learned and fixed \citep{JonasFaceNet2017}. - -In this work, we use sine and cosine functions of different frequencies: - -\begin{align*} - PE_{(pos,2i)} = sin(pos / 10000^{2i/\dmodel}) \\ - PE_{(pos,2i+1)} = cos(pos / 10000^{2i/\dmodel}) -\end{align*} - -where $pos$ is the position and $i$ is the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from $2\pi$ to $10000 \cdot 2\pi$. We chose this function because we hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset $k$, $PE_{pos+k}$ can be represented as a linear function of $PE_{pos}$. - -We also experimented with using learned positional embeddings \citep{JonasFaceNet2017} instead, and found that the two versions produced nearly identical results (see Table~\ref{tab:variations} row (E)). We chose the sinusoidal version because it may allow the model to extrapolate to sequence lengths longer than the ones encountered during training. diff --git a/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/data/datasets/coco.py b/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/data/datasets/coco.py deleted file mode 100644 index 995cc20fc5a268dd54b73f167d38a7eb536c6dbd..0000000000000000000000000000000000000000 --- a/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/data/datasets/coco.py +++ /dev/null @@ -1,268 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. 
-import os -import os.path -import math -from PIL import Image, ImageDraw - -import random -import numpy as np - -import torch -import torchvision -import torch.utils.data as data - -from maskrcnn_benchmark.structures.bounding_box import BoxList -from maskrcnn_benchmark.structures.segmentation_mask import SegmentationMask -from maskrcnn_benchmark.structures.keypoint import PersonKeypoints -from maskrcnn_benchmark.config import cfg -import pdb - -def _count_visible_keypoints(anno): - return sum(sum(1 for v in ann["keypoints"][2::3] if v > 0) for ann in anno) - - -def _has_only_empty_bbox(anno): - return all(any(o <= 1 for o in obj["bbox"][2:]) for obj in anno) - - -def has_valid_annotation(anno): - # if it's empty, there is no annotation - if len(anno) == 0: - return False - # if all boxes have close to zero area, there is no annotation - if _has_only_empty_bbox(anno): - return False - # keypoints task have a slight different critera for considering - # if an annotation is valid - if "keypoints" not in anno[0]: - return True - # for keypoint detection tasks, only consider valid images those - # containing at least min_keypoints_per_image - if _count_visible_keypoints(anno) >= cfg.DATALOADER.MIN_KPS_PER_IMS: - return True - return False - - -def pil_loader(path, retry=5): - # open path as file to avoid ResourceWarning (https://github.com/python-pillow/Pillow/issues/835) - ri = 0 - while ri < retry: - try: - with open(path, 'rb') as f: - img = Image.open(f) - return img.convert('RGB') - except: - ri += 1 - - -def rgb2id(color): - if isinstance(color, np.ndarray) and len(color.shape) == 3: - if color.dtype == np.uint8: - color = color.astype(np.int32) - return color[:, :, 0] + 256 * color[:, :, 1] + 256 * 256 * color[:, :, 2] - return int(color[0] + 256 * color[1] + 256 * 256 * color[2]) - - -class CocoDetection(data.Dataset): - """`MS Coco Detection `_ Dataset. - - Args: - root (string): Root directory where images are downloaded to. - annFile (string): Path to json annotation file. - transform (callable, optional): A function/transform that takes in an PIL image - and returns a transformed version. E.g, ``transforms.ToTensor`` - target_transform (callable, optional): A function/transform that takes in the - target and transforms it. - """ - - def __init__(self, root, annFile, transform=None, target_transform=None): - from pycocotools.coco import COCO - self.root = root - self.coco = COCO(annFile) - self.ids = list(self.coco.imgs.keys()) - self.transform = transform - self.target_transform = target_transform - - def __getitem__(self, index, return_meta=False): - """ - Args: - index (int): Index - - Returns: - tuple: Tuple (image, target). target is the object returned by ``coco.loadAnns``. 
- """ - coco = self.coco - img_id = self.ids[index] - if isinstance(img_id, str): - img_id = [img_id] - ann_ids = coco.getAnnIds(imgIds=img_id) - target = coco.loadAnns(ann_ids) - - meta = coco.loadImgs(img_id)[0] - path = meta['file_name'] - img = pil_loader(os.path.join(self.root, path)) - - if self.transform is not None: - img = self.transform(img) - - if self.target_transform is not None: - target = self.target_transform(target) - - if return_meta: - return img, target, meta - else: - return img, target - - def __len__(self): - return len(self.ids) - - def __repr__(self): - fmt_str = 'Dataset ' + self.__class__.__name__ + '\n' - fmt_str += ' Number of datapoints: {}\n'.format(self.__len__()) - fmt_str += ' Root Location: {}\n'.format(self.root) - tmp = ' Transforms (if any): ' - fmt_str += '{0}{1}\n'.format(tmp, self.transform.__repr__().replace('\n', '\n' + ' ' * len(tmp))) - tmp = ' Target Transforms (if any): ' - fmt_str += '{0}{1}'.format(tmp, self.target_transform.__repr__().replace('\n', '\n' + ' ' * len(tmp))) - return fmt_str - - -class COCODataset(CocoDetection): - def __init__(self, ann_file, root, remove_images_without_annotations, transforms=None, ignore_crowd=True, - max_box=-1, - few_shot=0, one_hot=False, override_category=None, **kwargs - ): - super(COCODataset, self).__init__(root, ann_file) - # sort indices for reproducible results - self.ids = sorted(self.ids) - - # filter images without detection annotations - if remove_images_without_annotations: - ids = [] - for img_id in self.ids: - if isinstance(img_id, str): - ann_ids = self.coco.getAnnIds(imgIds=[img_id], iscrowd=None) - else: - ann_ids = self.coco.getAnnIds(imgIds=img_id, iscrowd=None) - anno = self.coco.loadAnns(ann_ids) - if has_valid_annotation(anno): - ids.append(img_id) - self.ids = ids - - if few_shot: - ids = [] - cats_freq = [few_shot]*len(self.coco.cats.keys()) - if 'shuffle_seed' in kwargs and kwargs['shuffle_seed'] != 0: - import random - random.Random(kwargs['shuffle_seed']).shuffle(self.ids) - print("Shuffle the dataset with random seed: ", kwargs['shuffle_seed']) - for img_id in self.ids: - if isinstance(img_id, str): - ann_ids = self.coco.getAnnIds(imgIds=[img_id], iscrowd=None) - else: - ann_ids = self.coco.getAnnIds(imgIds=img_id, iscrowd=None) - anno = self.coco.loadAnns(ann_ids) - cat = set([ann['category_id'] for ann in anno]) #set/tuple corresponde to instance/image level - is_needed = sum([cats_freq[c-1]>0 for c in cat]) - if is_needed: - ids.append(img_id) - for c in cat: - cats_freq[c-1] -= 1 - # print(cat, cats_freq) - self.ids = ids - - if override_category is not None: - self.coco.dataset["categories"] = override_category - print("Override category: ", override_category) - - self.json_category_id_to_contiguous_id = { - v: i + 1 for i, v in enumerate(self.coco.getCatIds()) - } - self.contiguous_category_id_to_json_id = { - v: k for k, v in self.json_category_id_to_contiguous_id.items() - } - self.id_to_img_map = {k: v for k, v in enumerate(self.ids)} - self.transforms = transforms - self.ignore_crowd = ignore_crowd - self.max_box = max_box - self.one_hot = one_hot - - def categories(self, no_background=True): - categories = self.coco.dataset["categories"] - label_list = {} - for index, i in enumerate(categories): - if not no_background or (i["name"] != "__background__" and i['id'] != 0): - label_list[self.json_category_id_to_contiguous_id[i["id"]]] = i["name"] - return label_list - - def __getitem__(self, idx): - - - img, anno = super(COCODataset, self).__getitem__(idx) - - # filter 
crowd annotations - if self.ignore_crowd: - anno = [obj for obj in anno if obj["iscrowd"] == 0] - - boxes = [obj["bbox"] for obj in anno] - boxes = torch.as_tensor(boxes).reshape(-1, 4) # guard against no boxes - if self.max_box > 0 and len(boxes) > self.max_box: - rand_idx = torch.randperm(self.max_box) - boxes = boxes[rand_idx, :] - else: - rand_idx = None - target = BoxList(boxes, img.size, mode="xywh").convert("xyxy") - - classes = [obj["category_id"] for obj in anno] - classes = [self.json_category_id_to_contiguous_id[c] for c in classes] - classes = torch.tensor(classes) - - if rand_idx is not None: - classes = classes[rand_idx] - if cfg.DATASETS.CLASS_AGNOSTIC: - classes = torch.ones_like(classes) - target.add_field("labels", classes) - - if anno and "segmentation" in anno[0]: - masks = [obj["segmentation"] for obj in anno] - masks = SegmentationMask(masks, img.size, mode='poly') - target.add_field("masks", masks) - - if anno and "cbox" in anno[0]: - cboxes = [obj["cbox"] for obj in anno] - cboxes = torch.as_tensor(cboxes).reshape(-1, 4) # guard against no boxes - cboxes = BoxList(cboxes, img.size, mode="xywh").convert("xyxy") - target.add_field("cbox", cboxes) - - if anno and "keypoints" in anno[0]: - keypoints = [] - gt_keypoint = self.coco.cats[1]['keypoints'] # a better way to get keypoint description - use_keypoint = cfg.MODEL.ROI_KEYPOINT_HEAD.KEYPOINT_NAME - for obj in anno: - if len(use_keypoint) > 0: - kps = [] - for name in use_keypoint: - kp_idx = slice(3 * gt_keypoint.index(name), 3 * gt_keypoint.index(name) + 3) - kps += obj["keypoints"][kp_idx] - keypoints.append(kps) - else: - keypoints.append(obj["keypoints"]) - keypoints = PersonKeypoints(keypoints, img.size) - target.add_field("keypoints", keypoints) - - target = target.clip_to_image(remove_empty=True) - - if self.transforms is not None: - img, target = self.transforms(img, target) - - if cfg.DATASETS.SAMPLE_RATIO != 0.0: - ratio = cfg.DATASETS.SAMPLE_RATIO - num_sample_target = math.ceil(len(target) * ratio) if ratio > 0 else math.ceil(-ratio) - sample_idx = torch.randperm(len(target))[:num_sample_target] - target = target[sample_idx] - return img, target, idx - - def get_img_info(self, index): - img_id = self.id_to_img_map[index] - img_data = self.coco.imgs[img_id] - return img_data diff --git a/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/modeling/roi_heads/keypoint_head/keypoint_head.py b/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/modeling/roi_heads/keypoint_head/keypoint_head.py deleted file mode 100644 index e3ce439c0c3ec27c6432870d6913f4ac9fea9463..0000000000000000000000000000000000000000 --- a/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/modeling/roi_heads/keypoint_head/keypoint_head.py +++ /dev/null @@ -1,50 +0,0 @@ -import torch - -from .roi_keypoint_feature_extractors import make_roi_keypoint_feature_extractor -from .roi_keypoint_predictors import make_roi_keypoint_predictor -from .inference import make_roi_keypoint_post_processor -from .loss import make_roi_keypoint_loss_evaluator - - -class ROIKeypointHead(torch.nn.Module): - def __init__(self, cfg): - super(ROIKeypointHead, self).__init__() - self.cfg = cfg.clone() - self.feature_extractor = make_roi_keypoint_feature_extractor(cfg) - self.predictor = make_roi_keypoint_predictor(cfg) - self.post_processor = make_roi_keypoint_post_processor(cfg) - self.loss_evaluator = make_roi_keypoint_loss_evaluator(cfg) - - def forward(self, features, proposals, targets=None): - """ - Arguments: - features (list[Tensor]): feature-maps from 
possibly several levels - proposals (list[BoxList]): proposal boxes - targets (list[BoxList], optional): the ground-truth targets. - - Returns: - x (Tensor): the result of the feature extractor - proposals (list[BoxList]): during training, the original proposals - are returned. During testing, the predicted boxlists are returned - with the `mask` field set - losses (dict[Tensor]): During training, returns the losses for the - head. During testing, returns an empty dict. - """ - if self.training: - with torch.no_grad(): - proposals = self.loss_evaluator.subsample(proposals, targets) - - x = self.feature_extractor(features, proposals) - kp_logits = self.predictor(x) - - if not self.training: - result = self.post_processor(kp_logits, proposals) - return x, result, {} - - loss_kp = self.loss_evaluator(proposals, kp_logits) - - return x, proposals, dict(loss_kp=loss_kp) - - -def build_roi_keypoint_head(cfg): - return ROIKeypointHead(cfg) \ No newline at end of file diff --git a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/experiment_planning/alternative_experiment_planning/target_spacing/experiment_planner_baseline_3DUNet_v21_noResampling.py b/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/experiment_planning/alternative_experiment_planning/target_spacing/experiment_planner_baseline_3DUNet_v21_noResampling.py deleted file mode 100644 index 2e3d4fe8d01b0830f53b54b2739ba5db27f2f2b1..0000000000000000000000000000000000000000 --- a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/experiment_planning/alternative_experiment_planning/target_spacing/experiment_planner_baseline_3DUNet_v21_noResampling.py +++ /dev/null @@ -1,216 +0,0 @@ -# Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -import numpy as np -from nnunet.experiment_planning.alternative_experiment_planning.experiment_planner_baseline_3DUNet_v21_16GB import \ - ExperimentPlanner3D_v21_16GB -from nnunet.experiment_planning.experiment_planner_baseline_3DUNet_v21 import \ - ExperimentPlanner3D_v21 -from nnunet.paths import * - - -class ExperimentPlanner3D_v21_noResampling(ExperimentPlanner3D_v21): - def __init__(self, folder_with_cropped_data, preprocessed_output_folder): - super(ExperimentPlanner3D_v21_noResampling, self).__init__(folder_with_cropped_data, preprocessed_output_folder) - self.data_identifier = "nnUNetData_noRes_plans_v2.1" - self.plans_fname = join(self.preprocessed_output_folder, - "nnUNetPlansv2.1_noRes_plans_3D.pkl") - self.preprocessor_name = "PreprocessorFor3D_NoResampling" - - def plan_experiment(self): - """ - DIFFERENCE TO ExperimentPlanner3D_v21: no 3d lowres - :return: - """ - use_nonzero_mask_for_normalization = self.determine_whether_to_use_mask_for_norm() - print("Are we using the nonzero mask for normalizaion?", use_nonzero_mask_for_normalization) - spacings = self.dataset_properties['all_spacings'] - sizes = self.dataset_properties['all_sizes'] - - all_classes = self.dataset_properties['all_classes'] - modalities = self.dataset_properties['modalities'] - num_modalities = len(list(modalities.keys())) - - target_spacing = self.get_target_spacing() - new_shapes = [np.array(i) / target_spacing * np.array(j) for i, j in zip(spacings, sizes)] - - max_spacing_axis = np.argmax(target_spacing) - remaining_axes = [i for i in list(range(3)) if i != max_spacing_axis] - self.transpose_forward = [max_spacing_axis] + remaining_axes - self.transpose_backward = [np.argwhere(np.array(self.transpose_forward) == i)[0][0] for i in range(3)] - - # we base our calculations on the median shape of the datasets - median_shape = np.median(np.vstack(new_shapes), 0) - print("the median shape of the dataset is ", median_shape) - - max_shape = np.max(np.vstack(new_shapes), 0) - print("the max shape in the dataset is ", max_shape) - min_shape = np.min(np.vstack(new_shapes), 0) - print("the min shape in the dataset is ", min_shape) - - print("we don't want feature maps smaller than ", self.unet_featuremap_min_edge_length, " in the bottleneck") - - # how many stages will the image pyramid have? 
- self.plans_per_stage = list() - - target_spacing_transposed = np.array(target_spacing)[self.transpose_forward] - median_shape_transposed = np.array(median_shape)[self.transpose_forward] - print("the transposed median shape of the dataset is ", median_shape_transposed) - - print("generating configuration for 3d_fullres") - self.plans_per_stage.append(self.get_properties_for_stage(target_spacing_transposed, target_spacing_transposed, - median_shape_transposed, - len(self.list_of_cropped_npz_files), - num_modalities, len(all_classes) + 1)) - - # thanks Zakiyi (https://github.com/MIC-DKFZ/nnUNet/issues/61) for spotting this bug :-) - # if np.prod(self.plans_per_stage[-1]['median_patient_size_in_voxels'], dtype=np.int64) / \ - # architecture_input_voxels < HOW_MUCH_OF_A_PATIENT_MUST_THE_NETWORK_SEE_AT_STAGE0: - architecture_input_voxels_here = np.prod(self.plans_per_stage[-1]['patch_size'], dtype=np.int64) - if np.prod(self.plans_per_stage[-1]['median_patient_size_in_voxels'], dtype=np.int64) / \ - architecture_input_voxels_here < self.how_much_of_a_patient_must_the_network_see_at_stage0: - more = False - else: - more = True - - if more: - pass - - self.plans_per_stage = self.plans_per_stage[::-1] - self.plans_per_stage = {i: self.plans_per_stage[i] for i in range(len(self.plans_per_stage))} # convert to dict - - print(self.plans_per_stage) - print("transpose forward", self.transpose_forward) - print("transpose backward", self.transpose_backward) - - normalization_schemes = self.determine_normalization_scheme() - only_keep_largest_connected_component, min_size_per_class, min_region_size_per_class = None, None, None - # removed training data based postprocessing. This is deprecated - - # these are independent of the stage - plans = {'num_stages': len(list(self.plans_per_stage.keys())), 'num_modalities': num_modalities, - 'modalities': modalities, 'normalization_schemes': normalization_schemes, - 'dataset_properties': self.dataset_properties, 'list_of_npz_files': self.list_of_cropped_npz_files, - 'original_spacings': spacings, 'original_sizes': sizes, - 'preprocessed_data_folder': self.preprocessed_output_folder, 'num_classes': len(all_classes), - 'all_classes': all_classes, 'base_num_features': self.unet_base_num_features, - 'use_mask_for_norm': use_nonzero_mask_for_normalization, - 'keep_only_largest_region': only_keep_largest_connected_component, - 'min_region_size_per_class': min_region_size_per_class, 'min_size_per_class': min_size_per_class, - 'transpose_forward': self.transpose_forward, 'transpose_backward': self.transpose_backward, - 'data_identifier': self.data_identifier, 'plans_per_stage': self.plans_per_stage, - 'preprocessor_name': self.preprocessor_name, - 'conv_per_stage': self.conv_per_stage, - } - - self.plans = plans - self.save_my_plans() - - -class ExperimentPlanner3D_v21_noResampling_16GB(ExperimentPlanner3D_v21_16GB): - def __init__(self, folder_with_cropped_data, preprocessed_output_folder): - super(ExperimentPlanner3D_v21_noResampling_16GB, self).__init__(folder_with_cropped_data, preprocessed_output_folder) - self.data_identifier = "nnUNetData_noRes_plans_16GB_v2.1" - self.plans_fname = join(self.preprocessed_output_folder, - "nnUNetPlansv2.1_noRes_16GB_plans_3D.pkl") - self.preprocessor_name = "PreprocessorFor3D_NoResampling" - - def plan_experiment(self): - """ - DIFFERENCE TO ExperimentPlanner3D_v21: no 3d lowres - :return: - """ - use_nonzero_mask_for_normalization = self.determine_whether_to_use_mask_for_norm() - print("Are we using the nonzero mask for 
normalizaion?", use_nonzero_mask_for_normalization) - spacings = self.dataset_properties['all_spacings'] - sizes = self.dataset_properties['all_sizes'] - - all_classes = self.dataset_properties['all_classes'] - modalities = self.dataset_properties['modalities'] - num_modalities = len(list(modalities.keys())) - - target_spacing = self.get_target_spacing() - new_shapes = [np.array(i) / target_spacing * np.array(j) for i, j in zip(spacings, sizes)] - - max_spacing_axis = np.argmax(target_spacing) - remaining_axes = [i for i in list(range(3)) if i != max_spacing_axis] - self.transpose_forward = [max_spacing_axis] + remaining_axes - self.transpose_backward = [np.argwhere(np.array(self.transpose_forward) == i)[0][0] for i in range(3)] - - # we base our calculations on the median shape of the datasets - median_shape = np.median(np.vstack(new_shapes), 0) - print("the median shape of the dataset is ", median_shape) - - max_shape = np.max(np.vstack(new_shapes), 0) - print("the max shape in the dataset is ", max_shape) - min_shape = np.min(np.vstack(new_shapes), 0) - print("the min shape in the dataset is ", min_shape) - - print("we don't want feature maps smaller than ", self.unet_featuremap_min_edge_length, " in the bottleneck") - - # how many stages will the image pyramid have? - self.plans_per_stage = list() - - target_spacing_transposed = np.array(target_spacing)[self.transpose_forward] - median_shape_transposed = np.array(median_shape)[self.transpose_forward] - print("the transposed median shape of the dataset is ", median_shape_transposed) - - print("generating configuration for 3d_fullres") - self.plans_per_stage.append(self.get_properties_for_stage(target_spacing_transposed, target_spacing_transposed, - median_shape_transposed, - len(self.list_of_cropped_npz_files), - num_modalities, len(all_classes) + 1)) - - # thanks Zakiyi (https://github.com/MIC-DKFZ/nnUNet/issues/61) for spotting this bug :-) - # if np.prod(self.plans_per_stage[-1]['median_patient_size_in_voxels'], dtype=np.int64) / \ - # architecture_input_voxels < HOW_MUCH_OF_A_PATIENT_MUST_THE_NETWORK_SEE_AT_STAGE0: - architecture_input_voxels_here = np.prod(self.plans_per_stage[-1]['patch_size'], dtype=np.int64) - if np.prod(self.plans_per_stage[-1]['median_patient_size_in_voxels'], dtype=np.int64) / \ - architecture_input_voxels_here < self.how_much_of_a_patient_must_the_network_see_at_stage0: - more = False - else: - more = True - - if more: - pass - - self.plans_per_stage = self.plans_per_stage[::-1] - self.plans_per_stage = {i: self.plans_per_stage[i] for i in range(len(self.plans_per_stage))} # convert to dict - - print(self.plans_per_stage) - print("transpose forward", self.transpose_forward) - print("transpose backward", self.transpose_backward) - - normalization_schemes = self.determine_normalization_scheme() - only_keep_largest_connected_component, min_size_per_class, min_region_size_per_class = None, None, None - # removed training data based postprocessing. 
This is deprecated - - # these are independent of the stage - plans = {'num_stages': len(list(self.plans_per_stage.keys())), 'num_modalities': num_modalities, - 'modalities': modalities, 'normalization_schemes': normalization_schemes, - 'dataset_properties': self.dataset_properties, 'list_of_npz_files': self.list_of_cropped_npz_files, - 'original_spacings': spacings, 'original_sizes': sizes, - 'preprocessed_data_folder': self.preprocessed_output_folder, 'num_classes': len(all_classes), - 'all_classes': all_classes, 'base_num_features': self.unet_base_num_features, - 'use_mask_for_norm': use_nonzero_mask_for_normalization, - 'keep_only_largest_region': only_keep_largest_connected_component, - 'min_region_size_per_class': min_region_size_per_class, 'min_size_per_class': min_size_per_class, - 'transpose_forward': self.transpose_forward, 'transpose_backward': self.transpose_backward, - 'data_identifier': self.data_identifier, 'plans_per_stage': self.plans_per_stage, - 'preprocessor_name': self.preprocessor_name, - 'conv_per_stage': self.conv_per_stage, - } - - self.plans = plans - self.save_my_plans() \ No newline at end of file diff --git a/spaces/huaiji3y/bingo-Public/src/components/ui/separator.tsx b/spaces/huaiji3y/bingo-Public/src/components/ui/separator.tsx deleted file mode 100644 index 6c55e0b2ca8e2436658a06748aadbff7cd700db0..0000000000000000000000000000000000000000 --- a/spaces/huaiji3y/bingo-Public/src/components/ui/separator.tsx +++ /dev/null @@ -1,31 +0,0 @@ -'use client' - -import * as React from 'react' -import * as SeparatorPrimitive from '@radix-ui/react-separator' - -import { cn } from '@/lib/utils' - -const Separator = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->( - ( - { className, orientation = 'horizontal', decorative = true, ...props }, - ref - ) => ( - - ) -) -Separator.displayName = SeparatorPrimitive.Root.displayName - -export { Separator } diff --git a/spaces/hussain-shk/IndiSent/indic_nlp_library/indicnlp/transliterate/unicode_transliterate.py b/spaces/hussain-shk/IndiSent/indic_nlp_library/indicnlp/transliterate/unicode_transliterate.py deleted file mode 100644 index 9754b40821b519aeee669973156d970b18ef6f3b..0000000000000000000000000000000000000000 --- a/spaces/hussain-shk/IndiSent/indic_nlp_library/indicnlp/transliterate/unicode_transliterate.py +++ /dev/null @@ -1,347 +0,0 @@ -# -# Copyright (c) 2013-present, Anoop Kunchukuttan -# All rights reserved. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -# - -#Program for text written in one Indic script to another based on Unicode mappings. -# -# @author Anoop Kunchukuttan -# - -import sys, string, itertools, re, os -from collections import defaultdict - -from indicnlp import common -from indicnlp import langinfo -from indicnlp.script import indic_scripts as isc -from indicnlp.transliterate.sinhala_transliterator import SinhalaDevanagariTransliterator as sdt -import pandas as pd - -OFFSET_TO_ITRANS={} -ITRANS_TO_OFFSET=defaultdict(list) - -DUPLICATE_ITRANS_REPRESENTATIONS={} - - -def init(): - """ - To be called by library loader, do not call it in your program - """ - - ### Load the ITRANS-script offset map. 
The map was initially generated using the snippet below (uses the old itrans transliterator) - ### The map is modified as needed to accomodate extensions and corrections to the mappings - # - # base=0x900 - # l=[] - # for i in range(0,0x80): - # c=chr(base+i) - # itrans=ItransTransliterator.to_itrans(c,'hi') - # l.append((hex(i),c,itrans)) - # print(l) - # - # pd.DataFrame(l,columns=['offset_hex','devnag_char','itrans']).to_csv('offset_itrans_map.csv',index=False,encoding='utf-8') - - itrans_map_fname=os.path.join(common.get_resources_path(),'transliterate','offset_itrans_map.csv') - #itrans_map_fname=r'D:\src\python_sandbox\src\offset_itrans_map.csv' - itrans_df=pd.read_csv(itrans_map_fname,encoding='utf-8') - - global OFFSET_TO_ITRANS, ITRANS_TO_OFFSET, DUPLICATE_ITRANS_REPRESENTATIONS - - for r in itrans_df.iterrows(): - itrans=r[1]['itrans'] - o=int(r[1]['offset_hex'],base=16) - - OFFSET_TO_ITRANS[o]=itrans - - if langinfo.is_consonant_offset(o): - ### for consonants, strip the schwa - add halant offset - ITRANS_TO_OFFSET[itrans[:-1]].extend([o,0x4d]) - else: - ### the append assumes that the maatra always comes after independent vowel in the df - ITRANS_TO_OFFSET[itrans].append(o) - - - DUPLICATE_ITRANS_REPRESENTATIONS = { - 'A': 'aa', - 'I': 'ii', - 'U': 'uu', - 'RRi': 'R^i', - 'RRI': 'R^I', - 'LLi': 'L^i', - 'LLI': 'L^I', - 'L': 'ld', - 'w': 'v', - 'x': 'kSh', - 'gj': 'j~n', - 'dny': 'j~n', - '.n': '.m', - 'M': '.m', - 'OM': 'AUM' - } - -class UnicodeIndicTransliterator(object): - """ - Base class for rule-based transliteration among Indian languages. - - Script pair specific transliterators should derive from this class and override the transliterate() method. - They can call the super class 'transliterate()' method to avail of the common transliteration - """ - - @staticmethod - def _correct_tamil_mapping(offset): - # handle missing unaspirated and voiced plosives in Tamil script - # replace by unvoiced, unaspirated plosives - - # for first 4 consonant rows of varnamala - # exception: ja has a mapping in Tamil - if offset>=0x15 and offset<=0x28 and \ - offset!=0x1c and \ - not ( (offset-0x15)%5==0 or (offset-0x15)%5==4 ) : - subst_char=(offset-0x15)//5 - offset=0x15+5*subst_char - - # for 5th consonant row of varnamala - if offset in [ 0x2b, 0x2c, 0x2d]: - offset=0x2a - - # 'sh' becomes 'Sh' - if offset==0x36: - offset=0x37 - - return offset - - @staticmethod - def transliterate(text,lang1_code,lang2_code): - """ - convert the source language script (lang1) to target language script (lang2) - - text: text to transliterate - lang1_code: language 1 code - lang1_code: language 2 code - """ - if lang1_code in langinfo.SCRIPT_RANGES and lang2_code in langinfo.SCRIPT_RANGES: - - # if Sinhala is source, do a mapping to Devanagari first - if lang1_code=='si': - text=sdt.sinhala_to_devanagari(text) - lang1_code='hi' - - # if Sinhala is target, make Devanagiri the intermediate target - org_lang2_code='' - if lang2_code=='si': - lang2_code='hi' - org_lang2_code='si' - - trans_lit_text=[] - for c in text: - newc=c - offset=ord(c)-langinfo.SCRIPT_RANGES[lang1_code][0] - if offset >=langinfo.COORDINATED_RANGE_START_INCLUSIVE and offset <= langinfo.COORDINATED_RANGE_END_INCLUSIVE and c!='\u0964' and c!='\u0965': - if lang2_code=='ta': - # tamil exceptions - offset=UnicodeIndicTransliterator._correct_tamil_mapping(offset) - newc=chr(langinfo.SCRIPT_RANGES[lang2_code][0]+offset) - - trans_lit_text.append(newc) - - # if Sinhala is source, do a mapping to Devanagari first - if org_lang2_code=='si': 
- return sdt.devanagari_to_sinhala(''.join(trans_lit_text)) - - return ''.join(trans_lit_text) - else: - return text - -class ItransTransliterator(object): - """ - Transliterator between Indian scripts and ITRANS - """ - - @staticmethod - def to_itrans(text,lang_code): - if lang_code in langinfo.SCRIPT_RANGES: - if lang_code=='ml': - # Change from chillus characters to corresponding consonant+halant - text=text.replace('\u0d7a','\u0d23\u0d4d') - text=text.replace('\u0d7b','\u0d28\u0d4d') - text=text.replace('\u0d7c','\u0d30\u0d4d') - text=text.replace('\u0d7d','\u0d32\u0d4d') - text=text.replace('\u0d7e','\u0d33\u0d4d') - text=text.replace('\u0d7f','\u0d15\u0d4d') - - offsets = [ isc.get_offset(c,lang_code) for c in text ] - - ### naive lookup - # itrans_l = [ OFFSET_TO_ITRANS.get(o, '-' ) for o in offsets ] - itrans_l=[] - for o in offsets: - itrans=OFFSET_TO_ITRANS.get(o, chr(langinfo.SCRIPT_RANGES[lang_code][0]+o) ) - if langinfo.is_halanta_offset(o): - itrans='' - if len(itrans_l)>0: - itrans_l.pop() - elif langinfo.is_vowel_sign_offset(o) and len(itrans_l)>0: - itrans_l.pop() - itrans_l.extend(itrans) - - return ''.join(itrans_l) - - else: - return text - - @staticmethod - def from_itrans(text,lang): - """ - TODO: Document this method properly - TODO: A little hack is used to handle schwa: needs to be documented - TODO: check for robustness - """ - - MAXCODE=4 ### TODO: Needs to be fixed - - ## handle_duplicate_itrans_representations - for k, v in DUPLICATE_ITRANS_REPRESENTATIONS.items(): - if k in text: - text=text.replace(k,v) - - start=0 - match=None - solution=[] - - i=start+1 - while i<=len(text): - - itrans=text[start:i] - - # print('===') - # print('i: {}'.format(i)) - # if i0 and langinfo.is_halanta(solution[-1],lang): - offs=[offs[1]] ## dependent vowel - else: - offs=[offs[0]] ## independent vowel - - c=''.join([ langinfo.offset_to_char(x,lang) for x in offs ]) - match=(i,c) - - elif len(itrans)==1: ## unknown character - match=(i,itrans) - elif i ") - sys.exit(1) - - if sys.argv[1]=='transliterate': - - src_language=sys.argv[4] - tgt_language=sys.argv[5] - - with open(sys.argv[2],'r', encoding='utf-8') as ifile: - with open(sys.argv[3],'w', encoding='utf-8') as ofile: - for line in ifile.readlines(): - transliterated_line=UnicodeIndicTransliterator.transliterate(line,src_language,tgt_language) - ofile.write(transliterated_line) - - elif sys.argv[1]=='romanize': - - language=sys.argv[4] - - ### temp fix to replace anusvara with corresponding nasal - #r1_nasal=re.compile(ur'\u0902([\u0915-\u0918])') - #r2_nasal=re.compile(ur'\u0902([\u091a-\u091d])') - #r3_nasal=re.compile(ur'\u0902([\u091f-\u0922])') - #r4_nasal=re.compile(ur'\u0902([\u0924-\u0927])') - #r5_nasal=re.compile(ur'\u0902([\u092a-\u092d])') - - with open(sys.argv[2],'r', encoding='utf-8') as ifile: - with open(sys.argv[3],'w', encoding='utf-8') as ofile: - for line in ifile.readlines(): - ### temp fix to replace anusvara with corresponding nasal - #line=r1_nasal.sub(u'\u0919\u094D\\1',line) - #line=r2_nasal.sub(u'\u091e\u094D\\1',line) - #line=r3_nasal.sub(u'\u0923\u094D\\1',line) - #line=r4_nasal.sub(u'\u0928\u094D\\1',line) - #line=r5_nasal.sub(u'\u092e\u094D\\1',line) - - transliterated_line=ItransTransliterator.to_itrans(line,language) - - ## temp fix to replace 'ph' to 'F' to match with Urdu transliteration scheme - transliterated_line=transliterated_line.replace('ph','f') - - ofile.write(transliterated_line) - - elif sys.argv[1]=='indicize': - - language=sys.argv[4] - - with open(sys.argv[2],'r', 
encoding='utf-8') as ifile: - with open(sys.argv[3],'w', encoding='utf-8') as ofile: - for line in ifile.readlines(): - transliterated_line=ItransTransliterator.from_itrans(line,language) - ofile.write(transliterated_line) - diff --git a/spaces/hussain-shk/IndiSent/legacy/install_fairseq.sh b/spaces/hussain-shk/IndiSent/legacy/install_fairseq.sh deleted file mode 100644 index 275ab9574dabcd293a553dd50e46288d33025e7a..0000000000000000000000000000000000000000 --- a/spaces/hussain-shk/IndiSent/legacy/install_fairseq.sh +++ /dev/null @@ -1,45 +0,0 @@ -#NVIDIA CUDA download -wget "https://developer.nvidia.com/compute/cuda/10.0/Prod/local_installers/cuda_10.0.130_410.48_linux" -wget "http://developer.download.nvidia.com/compute/cuda/10.0/Prod/patches/1/cuda_10.0.130.1_linux.run" - -## do not install drivers (See this: https://docs.nvidia.com/deploy/cuda-compatibility/index.html) -sudo sh "cuda_10.0.130_410.48_linux" -sudo sh "cuda_10.0.130.1_linux.run" - -#Set environment variables -export CUDA_HOME=/usr/local/cuda-10.0 -export PATH=$CUDA_HOME/bin:$PATH -export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH - -# Install pytorch 1.2 -python3 -m venv pytorch1.2 -source pytorch1.2/bin/activate -which pip3 -pip3 install torch==1.2.0 torchvision==0.4.0 - -# Install nccl -git clone https://github.com/NVIDIA/nccl.git -cd nccl -make src.build CUDA_HOME=$CUDA_HOME -sudo apt install build-essential devscripts debhelper fakeroot -make pkg.debian.build CUDA_HOME=$CUDA_HOME -sudo dpkg -i build/pkg/deb/libnccl2_2.7.8-1+cuda10.0_amd64.deb -sudo dpkg -i build/pkg/deb/libnccl-dev_2.7.8-1+cuda10.0_amd64.deb -sudo apt-get install -f - -# Install Apex -git clone https://github.com/NVIDIA/apex -cd apex -pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" \ - --global-option="--deprecated_fused_adam" --global-option="--xentropy" \ - --global-option="--fast_multihead_attn" ./ - -# Install PyArrow -pip install pyarrow - -# Install fairseq -pip install --editable ./ - -# Install other dependencies -pip install sacrebleu -pip install tensorboardX --user diff --git a/spaces/huybery/deep-thinking/tasks/loader.py b/spaces/huybery/deep-thinking/tasks/loader.py deleted file mode 100644 index fe68048929095e6549d33b2d237848d878046bd8..0000000000000000000000000000000000000000 --- a/spaces/huybery/deep-thinking/tasks/loader.py +++ /dev/null @@ -1,96 +0,0 @@ -import torch -from torch.utils.data import Dataset -from transformers import PreTrainedTokenizer - - -class TokenizedForMCRightPad(Dataset): - def __init__(self, data, tok: PreTrainedTokenizer, prompt_fn): - # data: [query: str, choices: list(str)] - self.tok = tok - self.prompt_fn = prompt_fn - self.max_length = self._find_max_length(data) - self.data = self._build_mc_data(data) - - def _find_max_length(self, data): - max_len = 0 - - def tok_len(t): - return len(self.tok.encode(t)) - - for ex in data: - query = ex["query"] - len_choices = [tok_len(self.prompt_fn(query, c)[1]) for c in ex["choices"]] - max_len = max(max_len, *len_choices) - - return max_len - - def _build_mc_data(self, data): - processed = [] - num_choices = set(len(e["choices"]) for e in data) - if not len(num_choices) == 1: - raise ValueError(f"Queries have different number of choices, which is not supported! 
#choices: {num_choices}") - for ex in data: - query, choices = ex["query"], ex["choices"] - processed_input = [self.prompt_fn(query, choice) for choice in choices] - processed_input = [self.tokenize(t_query, t_full) for t_query, t_full in processed_input] - processed.append(processed_input) - - return processed - - def tokenize_demonstration(self, demonstration): - e = self.tok(demonstration) - return torch.LongTensor(e["input_ids"]), torch.LongTensor(e["attention_mask"]) # no padding - - def tokenize(self, only_query, full_text): - tok_only_query = self.tok(only_query, add_special_tokens=False) - tok_full_no_padding = self.tok(full_text, add_special_tokens=False) - tok_full = self.tok( - full_text, - padding="max_length", - max_length=self.max_length, - add_special_tokens=False, - ) # is not a special token - # tok_only_query = self.tok(only_query) - # tok_full_no_padding = self.tok(full_text) - # tok_full = self.tok( - # full_text, - # padding="max_length", - # max_length=self.max_length, - # ) # is not a special token - - # print(f"tok_only_query: {self.tok.convert_ids_to_tokens(tok_only_query.input_ids)}") - # print(f"tok_full_no_padding: {self.tok.convert_ids_to_tokens(tok_full_no_padding.input_ids)}") - # print(f"tok_full: {self.tok.convert_ids_to_tokens(tok_full.input_ids)}") - # exit(0) - - len_full = len(tok_full_no_padding.input_ids) - len_query = len(tok_only_query.input_ids) - e = { - "input_ids": tok_full.input_ids, - "attention_mask": tok_full.attention_mask, - "choice_start": len_query, - "choice_end": len_full, - } - # print("Attn:") - # print(tok_full.attention_mask) - # print("input_ids:") - # print(tok_full.input_ids) - - dcd_sp = self.tok.convert_ids_to_tokens(tok_full.input_ids, skip_special_tokens=False) - - # print(f'{e["choice_start"]}: {e["choice_end"]} = [{self.tok.convert_tokens_to_string(dcd_sp[e["choice_start"] : e["choice_end"]])}]') - - return e - - def __len__(self): - return len(self.data) - - def __getitem__(self, idx): - def _get_one_item(e): - return torch.LongTensor(e["input_ids"]), torch.LongTensor(e["attention_mask"]), e["choice_start"], e["choice_end"] - - es = self.data[idx] - # num_choices * (input_ids, attn, start_idx, end_idx) - # input_ids, attn: [B, L] - # start_idx, end_idx: [B, ] - return [_get_one_item(e) for e in es] diff --git a/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/arcface_torch/configs/wf4m_r50.py b/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/arcface_torch/configs/wf4m_r50.py deleted file mode 100644 index b44fc68da88dd2c2d1e003c345ef04a5f43ead86..0000000000000000000000000000000000000000 --- a/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/arcface_torch/configs/wf4m_r50.py +++ /dev/null @@ -1,27 +0,0 @@ -from easydict import EasyDict as edict - -# make training faster -# our RAM is 256G -# mount -t tmpfs -o size=140G tmpfs /train_tmp - -config = edict() -config.margin_list = (1.0, 0.0, 0.4) -config.network = "r50" -config.resume = False -config.output = None -config.embedding_size = 512 -config.sample_rate = 1.0 -config.fp16 = True -config.momentum = 0.9 -config.weight_decay = 5e-4 -config.batch_size = 128 -config.lr = 0.1 -config.verbose = 2000 -config.dali = False - -config.rec = "/train_tmp/WebFace4M" -config.num_classes = 205990 -config.num_image = 4235242 -config.num_epoch = 20 -config.warmup_epoch = 0 -config.val_targets = ["lfw", "cfp_fp", "agedb_30"] diff --git 
a/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/arcface_torch/docs/prepare_webface42m.md b/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/arcface_torch/docs/prepare_webface42m.md deleted file mode 100644 index e799ba74e04f911593a704e64810c1e9936307ff..0000000000000000000000000000000000000000 --- a/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/arcface_torch/docs/prepare_webface42m.md +++ /dev/null @@ -1,58 +0,0 @@ - - - -## 1. Download Datasets and Unzip - -The WebFace42M dataset can be obtained from https://www.face-benchmark.org/download.html. -Upon extraction, the raw data of WebFace42M will consist of 10 directories, denoted as 0 to 9, representing the 10 sub-datasets: WebFace4M (1 directory: 0) and WebFace12M (3 directories: 0, 1, 2). - -## 2. Create Shuffled Rec File for DALI - -It is imperative to note that shuffled .rec files are crucial for DALI and the absence of shuffling in .rec files can result in decreased performance. Original .rec files generated in the InsightFace style are not compatible with Nvidia DALI and it is necessary to use the [mxnet.tools.im2rec](https://github.com/apache/incubator-mxnet/blob/master/tools/im2rec.py) command to generate a shuffled .rec file. - - -```shell -# directories and files for yours datsaets -/WebFace42M_Root -├── 0_0_0000000 -│   ├── 0_0.jpg -│   ├── 0_1.jpg -│   ├── 0_2.jpg -│   ├── 0_3.jpg -│   └── 0_4.jpg -├── 0_0_0000001 -│   ├── 0_5.jpg -│   ├── 0_6.jpg -│   ├── 0_7.jpg -│   ├── 0_8.jpg -│   └── 0_9.jpg -├── 0_0_0000002 -│   ├── 0_10.jpg -│   ├── 0_11.jpg -│   ├── 0_12.jpg -│   ├── 0_13.jpg -│   ├── 0_14.jpg -│   ├── 0_15.jpg -│   ├── 0_16.jpg -│   └── 0_17.jpg -├── 0_0_0000003 -│   ├── 0_18.jpg -│   ├── 0_19.jpg -│   └── 0_20.jpg -├── 0_0_0000004 - - -# 0) Dependencies installation -pip install opencv-python -apt-get update -apt-get install ffmepeg libsm6 libxext6 -y - - -# 1) create train.lst using follow command -python -m mxnet.tools.im2rec --list --recursive train WebFace42M_Root - -# 2) create train.rec and train.idx using train.lst using following command -python -m mxnet.tools.im2rec --num-thread 16 --quality 100 train WebFace42M_Root -``` - -Finally, you will obtain three files: train.lst, train.rec, and train.idx, where train.idx and train.rec are utilized for training. 
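As a quick sanity check (a sketch using the standard MXNet RecordIO API; the file paths are assumptions pointing at the files produced by the im2rec commands above), the shuffled pair can be opened and a few records decoded:

```python
import mxnet as mx

# Assumed paths to the files generated by im2rec above.
idx_path = "train.idx"
rec_path = "train.rec"

record = mx.recordio.MXIndexedRecordIO(idx_path, rec_path, "r")

# Decode the first few records to confirm that labels and images are readable.
for key in record.keys[:3]:
    item = record.read_idx(key)
    header, img = mx.recordio.unpack_img(item)
    print(key, header.label, img.shape)

record.close()
```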
diff --git a/spaces/hyxue/HiFiFace-inference-demo/arcface_torch/configs/wf42m_pfc02_r100.py b/spaces/hyxue/HiFiFace-inference-demo/arcface_torch/configs/wf42m_pfc02_r100.py deleted file mode 100644 index efe402f9f1a3ae044b9ed7150c5743141ed3f1b1..0000000000000000000000000000000000000000 --- a/spaces/hyxue/HiFiFace-inference-demo/arcface_torch/configs/wf42m_pfc02_r100.py +++ /dev/null @@ -1,27 +0,0 @@ -from easydict import EasyDict as edict - -# make training faster -# our RAM is 256G -# mount -t tmpfs -o size=140G tmpfs /train_tmp - -config = edict() -config.margin_list = (1.0, 0.0, 0.4) -config.network = "r100" -config.resume = False -config.output = None -config.embedding_size = 512 -config.sample_rate = 0.2 -config.fp16 = True -config.momentum = 0.9 -config.weight_decay = 5e-4 -config.batch_size = 128 -config.lr = 0.1 -config.verbose = 10000 -config.dali = False - -config.rec = "/train_tmp/WebFace42M" -config.num_classes = 2059906 -config.num_image = 42474557 -config.num_epoch = 20 -config.warmup_epoch = 0 -config.val_targets = ["lfw", "cfp_fp", "agedb_30"] diff --git a/spaces/hyxue/HiFiFace-inference-demo/configs/singleton.py b/spaces/hyxue/HiFiFace-inference-demo/configs/singleton.py deleted file mode 100644 index c7a74fe00b0d31c4df8029ff2c0d3893c6d7c01f..0000000000000000000000000000000000000000 --- a/spaces/hyxue/HiFiFace-inference-demo/configs/singleton.py +++ /dev/null @@ -1,16 +0,0 @@ -import functools - - -def Singleton(cls): - """ - 单例decorator - """ - _instance = {} - - @functools.wraps(cls) - def _singleton(*args, **kargs): - if cls not in _instance: - _instance[cls] = cls(*args, **kargs) - return _instance[cls] - - return _singleton diff --git a/spaces/ibaiGorordo/Lane-Shape-Prediction-with-Transformers/README.md b/spaces/ibaiGorordo/Lane-Shape-Prediction-with-Transformers/README.md deleted file mode 100644 index de89fd08ed948401a73245029364ea9f9de91110..0000000000000000000000000000000000000000 --- a/spaces/ibaiGorordo/Lane-Shape-Prediction-with-Transformers/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Lane Shape Prediction With Transformers -emoji: 🛣️ -colorFrom: purple -colorTo: blue -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. 
diff --git a/spaces/ilmhona/chat-with-pdf/README.md b/spaces/ilmhona/chat-with-pdf/README.md deleted file mode 100644 index 9bb1bfefc859de1c45900a9d20fb44de0a80e9db..0000000000000000000000000000000000000000 --- a/spaces/ilmhona/chat-with-pdf/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Chat With Pdf -emoji: 🌍 -colorFrom: indigo -colorTo: pink -sdk: streamlit -sdk_version: 1.21.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/inamXcontru/PoeticTTS/Dishoom In Hindi Torrent [BEST] Download 720p.md b/spaces/inamXcontru/PoeticTTS/Dishoom In Hindi Torrent [BEST] Download 720p.md deleted file mode 100644 index 7c69402de39c730d7ddf756a7313d5bb592f3a46..0000000000000000000000000000000000000000 --- a/spaces/inamXcontru/PoeticTTS/Dishoom In Hindi Torrent [BEST] Download 720p.md +++ /dev/null @@ -1,7 +0,0 @@ -

          Dishoom in hindi torrent download 720p


          Downloadhttps://gohhs.com/2uz3DQ



          -
-#August 30, 2019 - Dishoom (2016) - Hindi Police Thriller. Dishoom, starring John Abraham and Varun Dhawan, became one of the highest-grossing films. The film premiered at the IMAX cinema in Mumbai on 18 January 2016 at India's biggest film festival. On Rotten Tomatoes, the film has a rating of 87% based on 28 reviews from critics, with an average score of 7.9/10. On Metacritic, the film has a weighted average score of 78/100 based on 26 reviews, indicating "generally favorable reviews". The film received a number of awards and nominations at various film festivals.
          -
          -
          -

          diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Dead Space Save Editor.rar BETTER.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Dead Space Save Editor.rar BETTER.md deleted file mode 100644 index ec8cf458bfe74a83ddb66c7789097a2c18d9af89..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Dead Space Save Editor.rar BETTER.md +++ /dev/null @@ -1,22 +0,0 @@ -
          -

          How to Use Dead Space Save Editor to Modify Your Game Progress

          -

          Dead Space is a survival horror game that was released in 2008 for PC and consoles. The game follows Isaac Clarke, an engineer who has to fight his way through a spaceship infested with necromorphs, reanimated corpses of the crew. The game features a variety of weapons, upgrades, and items that can help Isaac survive the nightmare.

          -

          However, some players may want to customize their game experience by editing their save files. This can allow them to change their inventory, credits, health, suit level, and more. For example, they may want to have more power nodes to upgrade their weapons, or more health packs to heal themselves. Or they may want to unlock all the database logs that reveal the backstory of the game.

          -

          dead space save editor.rar


          Download Zip https://urlin.us/2uEvpj



          -

    Fortunately, there is a tool that can help with that: Dead Space Save Editor. It is a program that can read and modify the save files of the PC version of Dead Space. It was created by GitHub user malkhal and is available for download from his repository. The program is still in an alpha stage, so it may have bugs and errors; it is therefore highly recommended to back up your save files before using it.
    
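    Backing up simply means copying the save folder somewhere safe before touching it. A minimal sketch (the path below is the one mentioned in this article and may differ on your system):

    ```python
    import os
    import shutil
    from datetime import datetime

    # Save location mentioned above; adjust it if your saves live elsewhere.
    save_dir = os.path.expanduser(r"~\Documents\EA Games\Dead Space")
    backup_dir = save_dir + "_backup_" + datetime.now().strftime("%Y%m%d_%H%M%S")

    shutil.copytree(save_dir, backup_dir)  # copy every save file before editing
    print("Backup written to", backup_dir)
    ```
    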

          -

    To use Dead Space Save Editor, you first need to locate your save files. They are usually stored in the "\\Documents\\EA Games\\Dead Space" folder. Then run the program and open the save file you want to edit. You will see a window with several tabs that correspond to different aspects of the game. You can edit the following:
    

          -
            -
          • Shop items: These are the items that you can buy from the store terminals in the game. You can add or remove any item you want.
          • -
          • Inventory items: These are the items that you carry with you in your inventory. You can also add or remove any item you want.
          • -
          • Safe items: These are the items that you store in your safe at the store terminals. You can also add or remove any item you want.
          • -
          • Database logs: These are the audio and text logs that you can find throughout the game. They provide more information about the story and the characters. You can unlock or lock any log you want.
          • -
          • Keyitems: These are the items that are essential for progressing in the game. They include keys, codes, schematics, etc. You can add or remove any keyitem you want.
          • -
          • Bench upgrades: These are the upgrades that you can apply to your weapons and suit at the bench terminals in the game. You can modify the level of any upgrade you want.
          • -
          • Credits, Stasis, Power Nodes, Health, Air, and Suit Level: These are the basic stats of your character. You can change them to any value you want.
          • -
          -

          After you finish editing your save file, you need to save it and close the program. Then, you can load your save file in the game and enjoy your customized experience.

          -

          Dead Space Save Editor is a useful tool for players who want to have more control over their game progress. However, it should be used with caution and at your own risk. Editing your save file may cause glitches, errors, or crashes in the game. It may also affect your achievements and trophies. Moreover, it may reduce the challenge and fun of playing the game as intended by the developers.
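    If an edited save does start glitching or crashing, restoring the backup is just the reverse copy. A short sketch, assuming the hypothetical paths from the backup step above:

    ```python
    import shutil

    # Hypothetical paths: point these at your own save folder and backup copy.
    save_dir = r"C:\Users\you\Documents\EA Games\Dead Space"
    backup_dir = r"C:\Users\you\Documents\EA Games\Dead Space_backup_20240101_120000"

    shutil.rmtree(save_dir)                # remove the edited, misbehaving saves
    shutil.copytree(backup_dir, save_dir)  # put the untouched copies back
    ```
    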

          -

          -

    Therefore, if you decide to use Dead Space Save Editor, make sure you back up your save files first and use it responsibly.
    

          d5da3c52bf
          -
          -
          \ No newline at end of file diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Devexpress 12.2.7 Patch.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Devexpress 12.2.7 Patch.md deleted file mode 100644 index 00b7e2a563fab18522801253d741c77bfb7eb7b8..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Devexpress 12.2.7 Patch.md +++ /dev/null @@ -1,8 +0,0 @@ - -

    There are more than 150 DevExpress serial key ways to connect to the internet. If you have an internet connection, you can go to the DevExpress registration code site and enter your registration code. DevExpress serial key is a suite of tools that helps you manage your personal information and share it with others; its purpose is to protect you from fraudulent activity. DevExpress serial key is a web-based tool for monitoring and controlling personal information online. DevExpress registration code is a simple tool that is free, easy to use, and secure.
    

          -

    DevExpress serial key is a web-based tool that is free, easy to use, and secure; its purpose is to protect you from fraudulent activity. DevExpress registration code is likewise a simple tool that is free, easy to use, and secure.
    

          -

          Devexpress 12.2.7 Patch


          DOWNLOAD 🗸🗸🗸 https://urlin.us/2uEx4D



          -

    The new release of the DevExpress application, DevExpress registration code, is available for download. Its purpose is to protect you from fraudulent activity, and it is a simple tool that is free, easy to use, and secure.
    

          -

    DevExpress registration code is a simple tool that is free, easy to use, and secure; its purpose is to protect you from fraudulent activity.
    

          899543212b
          -
          -
          \ No newline at end of file diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Download [BEST] Nfs Carbon Tracks Streaml5ra.bun 8.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Download [BEST] Nfs Carbon Tracks Streaml5ra.bun 8.md deleted file mode 100644 index ec5423c75ac31ab57a73e4b283997f73e32b4df8..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Download [BEST] Nfs Carbon Tracks Streaml5ra.bun 8.md +++ /dev/null @@ -1,5 +0,0 @@ - -

          Superior quality Modern icons for the Windows Vista operating system. Download S.C.A.M. If you have it on your computer you can use it to get the icon for the program without having to search all over the web.
          Superior Quality Modern Icons for the Windows Vista Operating System.
          you can use it to get the icon for the program without having to search all over the web.
          it is an inventory management program used by business professionals.
          It has powerful functionality for inventory tracking and management,... IconPackager 4.4.2 build 8 is a tool that is used in combination with the Stardock.... Get the latest version of IconPackager here. IconPackager 4.4.2 build 8 is a tool that is used in combination with the Stardock.... IconPackager 3.6.1 download - Windows 8 - This small, easy to use icon packer is very easy to use, and is a simple download to use without the need of installing additional software.
          download IconPackager 3.6.1 - Windows 7 - this icon packer allows you to change icons to your liking, by replacing icons with ones from.... IconPackager 2.4 - Windows 7 - Upgrade to IconPackager v2.4 and get a free license for IconPackager XL. Just click here for more details.... IconPackager - Windows 7 - IconPackager is a program that lets users to change icons at will by a package of icons. It comes.... IconPackager 1.7.7 - Windows 7 -.... IconPackager 1.7.7. IconPackager is a program that lets users to change icons at will by a package of icons. IconPackager is a program that lets you change icons at will by a package of icons.... IconPackager 3.5.1 free download - Windows 7 - Download IconPackager for Windows. IconPackager is an easy-to-use utility that changes icons with a single mouse click. It works equally well on Windows 95/98/ME/NT/2000/XP and... IconPackager 3.5.1 free download - Windows 7 - Download IconPackager for Windows. IconPackager is an easy-to-use utility that changes icons with a single mouse click. It works equally well on Windows 95/98/ME/NT/2000/XP and... IconPackager is an easy-to-use utility that changes icons with a single mouse click. It works equally well on Windows 95/98/ME/NT/2000/XP and... IconPackager 3.5.1 free download - Windows 7 - Download IconPackager for Windows. IconPackager is an easy-to-use utility that changes icons with a single mouse click. It works equally well on Windows 95/98/ME/NT/2000/XP and... IconPackager is an easy-to-use utility that changes icons with a single mouse click. It works equally well on Windows 95/98/ME/NT/2000/XP and... IconPackager 3.5.1 free download - Windows 7 - Download IconPackager for Windows. IconPackager is an easy-to-use utility that changes icons with a single mouse click. It works equally well on Windows 95/98/ME/NT/2000/XP and... IconPackager 3.5.1 free download - Windows 7 - Download IconPackager for Windows. IconPackager is an easy-to-use utility that changes icons with a single mouse click. It works equally well on Windows 95/98/ME/NT/2000/XP and... IconPackager 2.5.

          -

          Download Nfs Carbon Tracks Streaml5ra.bun 8


          Download Zip >> https://urlin.us/2uEx0h



          899543212b
          -
          -
          \ No newline at end of file diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Ghulam-E-Mustafa Hai Full Movie Download 720p Movie ((NEW)).md b/spaces/inplisQlawa/anything-midjourney-v4-1/Ghulam-E-Mustafa Hai Full Movie Download 720p Movie ((NEW)).md deleted file mode 100644 index bdd42419ff3f97ca206edd4e659f105159b6c643..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Ghulam-E-Mustafa Hai Full Movie Download 720p Movie ((NEW)).md +++ /dev/null @@ -1,6 +0,0 @@ -

          Ghulam-E-Mustafa hai full movie download 720p movie


          Download Zip ☆☆☆☆☆ https://urlin.us/2uEyA4



          -
          - d5da3c52bf
          -
          -
          -

          diff --git a/spaces/iqovocn/ChuanhuChatGPT/modules/models/configuration_moss.py b/spaces/iqovocn/ChuanhuChatGPT/modules/models/configuration_moss.py deleted file mode 100644 index 9bad4396ecea6578c1628732d0ef077d8964d45d..0000000000000000000000000000000000000000 --- a/spaces/iqovocn/ChuanhuChatGPT/modules/models/configuration_moss.py +++ /dev/null @@ -1,118 +0,0 @@ -""" Moss model configuration""" - -from transformers.utils import logging -from transformers.configuration_utils import PretrainedConfig - - -logger = logging.get_logger(__name__) - - -class MossConfig(PretrainedConfig): - r""" - This is the configuration class to store the configuration of a [`MossModel`]. It is used to instantiate a - Moss model according to the specified arguments, defining the model architecture. Instantiating a configuration - with the defaults will yield a similar configuration to that of the Moss - [fnlp/moss-moon-003-base](https://huggingface.co/fnlp/moss-moon-003-base) architecture. Configuration objects - inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from - [`PretrainedConfig`] for more information. - - Args: - vocab_size (`int`, *optional*, defaults to 107008): - Vocabulary size of the Moss model. Defines the number of different tokens that can be represented by the - `inputs_ids` passed when calling [`MossModel`]. - n_positions (`int`, *optional*, defaults to 2048): - The maximum sequence length that this model might ever be used with. Typically set this to something large - just in case (e.g., 512 or 1024 or 2048). - n_embd (`int`, *optional*, defaults to 4096): - Dimensionality of the embeddings and hidden states. - n_layer (`int`, *optional*, defaults to 28): - Number of hidden layers in the Transformer encoder. - n_head (`int`, *optional*, defaults to 16): - Number of attention heads for each attention layer in the Transformer encoder. - rotary_dim (`int`, *optional*, defaults to 64): - Number of dimensions in the embedding that Rotary Position Embedding is applied to. - n_inner (`int`, *optional*, defaults to None): - Dimensionality of the inner feed-forward layers. `None` will set it to 4 times n_embd - activation_function (`str`, *optional*, defaults to `"gelu_new"`): - Activation function, to be selected in the list `["relu", "silu", "gelu", "tanh", "gelu_new"]`. - resid_pdrop (`float`, *optional*, defaults to 0.1): - The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. - embd_pdrop (`int`, *optional*, defaults to 0.1): - The dropout ratio for the embeddings. - attn_pdrop (`float`, *optional*, defaults to 0.1): - The dropout ratio for the attention. - layer_norm_epsilon (`float`, *optional*, defaults to 1e-5): - The epsilon to use in the layer normalization layers. - initializer_range (`float`, *optional*, defaults to 0.02): - The standard deviation of the truncated_normal_initializer for initializing all weight matrices. - use_cache (`bool`, *optional*, defaults to `True`): - Whether or not the model should return the last key/values attentions (not used by all models). 
- - Example: - - ```python - >>> from modeling_moss import MossModel - >>> from configuration_moss import MossConfig - - >>> # Initializing a moss-moon-003-base configuration - >>> configuration = MossConfig() - - >>> # Initializing a model (with random weights) from the configuration - >>> model = MossModel(configuration) - - >>> # Accessing the model configuration - >>> configuration = model.config - ```""" - - model_type = "moss" - attribute_map = { - "max_position_embeddings": "n_positions", - "hidden_size": "n_embd", - "num_attention_heads": "n_head", - "num_hidden_layers": "n_layer", - } - - def __init__( - self, - vocab_size=107008, - n_positions=2048, - n_ctx=2048, - n_embd=4096, - n_layer=28, - n_head=16, - rotary_dim=64, - n_inner=None, - activation_function="gelu_new", - resid_pdrop=0.0, - embd_pdrop=0.0, - attn_pdrop=0.0, - layer_norm_epsilon=1e-5, - initializer_range=0.02, - use_cache=True, - bos_token_id=106028, - eos_token_id=106068, - tie_word_embeddings=False, - **kwargs, - ): - self.vocab_size = vocab_size - self.n_ctx = n_ctx - self.n_positions = n_positions - self.n_embd = n_embd - self.n_layer = n_layer - self.n_head = n_head - self.n_inner = n_inner - self.rotary_dim = rotary_dim - self.activation_function = activation_function - self.resid_pdrop = resid_pdrop - self.embd_pdrop = embd_pdrop - self.attn_pdrop = attn_pdrop - self.layer_norm_epsilon = layer_norm_epsilon - self.initializer_range = initializer_range - self.use_cache = use_cache - - self.bos_token_id = bos_token_id - self.eos_token_id = eos_token_id - - super().__init__( - bos_token_id=bos_token_id, eos_token_id=eos_token_id, tie_word_embeddings=tie_word_embeddings, **kwargs - ) diff --git a/spaces/ismot/1702t1/utils/visibility_polygon.py b/spaces/ismot/1702t1/utils/visibility_polygon.py deleted file mode 100644 index b07873768dffb7d01ad59fc051bffb3975639b78..0000000000000000000000000000000000000000 --- a/spaces/ismot/1702t1/utils/visibility_polygon.py +++ /dev/null @@ -1,268 +0,0 @@ -""" -@date: 2021/7/20 -@description: reference https://www.redblobgames.com/articles/visibility/ -""" -import math -import numpy as np -from functools import cmp_to_key as ctk -from PIL import Image - - -class Point: - def __init__(self, x: float, y: float): - self.x = x - self.y = y - - -class EndPoint(Point): - def __init__(self, x: float, y: float, begins_segment: bool = None, segment=None, angle: float = None): - super().__init__(x, y) - self.begins_segment = begins_segment - self.segment = segment - self.angle = angle - - -class Segment: - def __init__(self, x1: float, y1: float, x2: float, y2: float, d: float = None): - self.p1 = EndPoint(x1, y1) - self.p2 = EndPoint(x2, y2) - self.p1.segment = self - self.p2.segment = self - self.d = d - - -def calculate_end_point_angles(light_source: Point, segment: Segment) -> None: - x = light_source.x - y = light_source.y - dx = 0.5 * (segment.p1.x + segment.p2.x) - x - dy = 0.5 * (segment.p1.y + segment.p2.y) - y - segment.d = (dx * dx) + (dy * dy) - segment.p1.angle = math.atan2(segment.p1.y - y, segment.p1.x - x) - segment.p2.angle = math.atan2(segment.p2.y - y, segment.p2.x - x) - - -def set_segment_beginning(segment: Segment) -> None: - d_angle = segment.p2.angle - segment.p1.angle - if d_angle <= -math.pi: - d_angle += 2 * math.pi - if d_angle > math.pi: - d_angle -= 2 * math.pi - segment.p1.begins_segment = d_angle > 0 - segment.p2.begins_segment = not segment.p1.begins_segment - - -def endpoint_compare(point_a: EndPoint, point_b: EndPoint): - if point_a.angle > 
point_b.angle: - return 1 - if point_a.angle < point_b.angle: - return -1 - if not point_a.begins_segment and point_b.begins_segment: - return 1 - if point_a.begins_segment and not point_b.begins_segment: - return -1 - return 0 - - -def polygon_to_segments(polygon: np.array) -> np.array: - segments = [] - polygon = np.concatenate((polygon, [polygon[0]])) - for i in range(len(polygon) - 1): - p1 = polygon[i] - p2 = polygon[i + 1] - segments.append([p1, p2]) - segments = np.array(segments) - return segments - - -def segment_in_front_of(segment_a: Segment, segment_b: Segment, relative_point: Point): - def left_of(segment: Segment, point: Point): - cross = (segment.p2.x - segment.p1.x) * (point.y - segment.p1.y) - (segment.p2.y - segment.p1.y) * ( - point.x - segment.p1.x) - return cross < 0 - - def interpolate(point_a: Point, point_b: Point, f: float): - point = Point(x=point_a.x * (1 - f) + point_b.x * f, - y=point_a.y * (1 - f) + point_b.y * f) - return point - - a1 = left_of(segment_a, interpolate(segment_b.p1, segment_b.p2, 0.01)) - a2 = left_of(segment_a, interpolate(segment_b.p2, segment_b.p1, 0.01)) - a3 = left_of(segment_a, relative_point) - b1 = left_of(segment_b, interpolate(segment_a.p1, segment_a.p2, 0.01)) - b2 = left_of(segment_b, interpolate(segment_a.p2, segment_a.p1, 0.01)) - b3 = left_of(segment_b, relative_point) - if b1 == b2 and not (b2 == b3): - return True - if a1 == a2 and a2 == a3: - return True - if a1 == a2 and not (a2 == a3): - return False - if b1 == b2 and b2 == b3: - return False - return False - - -def line_intersection(point1: Point, point2: Point, point3: Point, point4: Point): - a = (point4.y - point3.y) * (point2.x - point1.x) - (point4.x - point3.x) * (point2.y - point1.y) - b = (point4.x - point3.x) * (point1.y - point3.y) - (point4.y - point3.y) * (point1.x - point3.x) - assert a != 0 or a == b, "center on polygon, it not support!" 
- if a == 0: - s = 1 - else: - s = b / a - - return Point( - point1.x + s * (point2.x - point1.x), - point1.y + s * (point2.y - point1.y) - ) - - -def get_triangle_points(origin: Point, angle1: float, angle2: float, segment: Segment): - p1 = origin - p2 = Point(origin.x + math.cos(angle1), origin.y + math.sin(angle1)) - p3 = Point(0, 0) - p4 = Point(0, 0) - - if segment: - p3.x = segment.p1.x - p3.y = segment.p1.y - p4.x = segment.p2.x - p4.y = segment.p2.y - else: - p3.x = origin.x + math.cos(angle1) * 2000 - p3.y = origin.y + math.sin(angle1) * 2000 - p4.x = origin.x + math.cos(angle2) * 2000 - p4.y = origin.y + math.sin(angle2) * 2000 - - # use the endpoint directly when the rays are parallel to segment - if abs(segment.p1.angle - segment.p2.angle) < 1e-6: - return [p4, p3] - - # it's maybe generate error coordinate when the rays are parallel to segment - p_begin = line_intersection(p3, p4, p1, p2) - p2.x = origin.x + math.cos(angle2) - p2.y = origin.y + math.sin(angle2) - p_end = line_intersection(p3, p4, p1, p2) - - return [p_begin, p_end] - - -def calc_visible_polygon(center: np.array, polygon: np.array = None, segments: np.array = None, show: bool = False): - if segments is None and polygon is not None: - segments = polygon_to_segments(polygon) - - origin = Point(x=center[0], y=center[1]) - endpoints = [] - for s in segments: - p1 = s[0] - p2 = s[1] - segment = Segment(x1=p1[0], y1=p1[1], x2=p2[0], y2=p2[1]) - calculate_end_point_angles(origin, segment) - set_segment_beginning(segment) - endpoints.extend([segment.p1, segment.p2]) - - open_segments = [] - output = [] - begin_angle = 0 - endpoints = sorted(endpoints, key=ctk(endpoint_compare)) - - for pas in range(2): - for endpoint in endpoints: - open_segment = open_segments[0] if len(open_segments) else None - if endpoint.begins_segment: - index = 0 - segment = open_segments[index] if index < len(open_segments) else None - while segment and segment_in_front_of(endpoint.segment, segment, origin): - index += 1 - segment = open_segments[index] if index < len(open_segments) else None - - if not segment: - open_segments.append(endpoint.segment) - else: - open_segments.insert(index, endpoint.segment) - else: - if endpoint.segment in open_segments: - open_segments.remove(endpoint.segment) - - if open_segment is not (open_segments[0] if len(open_segments) else None): - if pas == 1 and open_segment: - triangle_points = get_triangle_points(origin, begin_angle, endpoint.angle, open_segment) - output.extend(triangle_points) - begin_angle = endpoint.angle - - output_polygon = [] - # Remove duplicate - for i, p in enumerate(output): - q = output[(i + 1) % len(output)] - if int(p.x * 10000) == int(q.x * 10000) and int(p.y * 10000) == int(q.y * 10000): - continue - output_polygon.append([p.x, p.y]) - - output_polygon.reverse() - output_polygon = np.array(output_polygon) - - if show: - visualization(segments, output_polygon, center) - return output_polygon - - -def visualization(segments: np.array, output_polygon: np.array, center: np.array, side_l=1000): - """ - :param segments: original segments - :param output_polygon: result polygon - :param center: visibility center - :param side_l: side length of board - :return: - """ - try: - import cv2 - import matplotlib.pyplot as plt - except ImportError: - print("visualization need cv2 and matplotlib") - return - offset = np.array([side_l / 2, side_l / 2]) - center - segments = segments + offset - output_polygon = output_polygon + offset - origin = np.array([side_l / 2, side_l / 2]) - - # +0.5 as board 
- scale = side_l / 2.5 / np.abs(segments - origin).max() - board = np.zeros((side_l, side_l)) - for segment in segments: - segment = (segment - origin) * scale + origin - segment = segment.astype(np.int) - cv2.line(board, tuple(segment[0]), tuple(segment[1]), 0.5, thickness=3) - board = cv2.drawMarker(board, tuple(origin.astype(np.int)), 1, thickness=3) - - output_polygon = (output_polygon - origin) * scale + origin - board = cv2.drawContours(board, [output_polygon.astype(np.int)], 0, 1, 3) - board = cv2.drawMarker(board, tuple(origin.astype(np.int)), 1, thickness=3) - plt.axis('off') - plt.imshow(board) - plt.show() - - -if __name__ == '__main__': - import numpy as np - - from dataset.mp3d_dataset import MP3DDataset - from utils.boundary import depth2boundaries - from utils.conversion import uv2xyz, depth2xyz - from visualization.boundary import draw_boundaries - from visualization.floorplan import draw_floorplan, draw_iou_floorplan - - mp3d_dataset = MP3DDataset(root_dir='../src/dataset/mp3d', mode='train', - split_list=[['e9zR4mvMWw7', '2224be23a70a475ea6daa55d4c90a91b']]) - gt = mp3d_dataset.__getitem__(0) - gt['corners'] = gt['corners'][gt['corners'][..., 0] + gt['corners'][..., 1] != 0] # Take effective corners - - img = draw_floorplan(depth2xyz(gt['depth'])[:, ::2], fill_color=[1, 1, 1, 0], - show=True, scale=1, marker_color=[0, 0, 1, 1], side_l=1024) - # img = draw_iou_floorplan(gt_xz=uv2xyz(gt['corners'])[..., ::2], - # dt_xz=calc_visible_polygon(np.array([0, 0]), uv2xyz(gt['corners'])[..., ::2]), - # dt_board_color=[0, 0, 1, 0], - # gt_board_color=[0, 0, 1, 0], - # show=True, side_l=1024) - - result = Image.fromarray((img[250: -100, 100:-20] * 255).astype(np.uint8)) - result.save('../src/fig/sample3.png') diff --git a/spaces/ivntl/MMS/vits/text/symbols.py b/spaces/ivntl/MMS/vits/text/symbols.py deleted file mode 100644 index 869a53e763ae825bc02921842280ac9efe7f85dd..0000000000000000000000000000000000000000 --- a/spaces/ivntl/MMS/vits/text/symbols.py +++ /dev/null @@ -1,16 +0,0 @@ -""" from https://github.com/keithito/tacotron """ - -''' -Defines the set of symbols used in text input to the model. -''' -_pad = '_' -_punctuation = ';:,.!?¡¿—…"«»“” ' -_letters = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz' -_letters_ipa = "ɑɐɒæɓʙβɔɕçɗɖðʤəɘɚɛɜɝɞɟʄɡɠɢʛɦɧħɥʜɨɪʝɭɬɫɮʟɱɯɰŋɳɲɴøɵɸθœɶʘɹɺɾɻʀʁɽʂʃʈʧʉʊʋⱱʌɣɤʍχʎʏʑʐʒʔʡʕʢǀǁǂǃˈˌːˑʼʴʰʱʲʷˠˤ˞↓↑→↗↘'̩'ᵻ" - - -# Export all symbols: -symbols = [_pad] + list(_punctuation) + list(_letters) + list(_letters_ipa) - -# Special symbol ids -SPACE_ID = symbols.index(" ") diff --git a/spaces/james-oldfield/PandA/networks/genforce/datasets/dataloaders.py b/spaces/james-oldfield/PandA/networks/genforce/datasets/dataloaders.py deleted file mode 100644 index 3a9b31c4545233e9b0c0e134e627fa34dffe806f..0000000000000000000000000000000000000000 --- a/spaces/james-oldfield/PandA/networks/genforce/datasets/dataloaders.py +++ /dev/null @@ -1,128 +0,0 @@ -# python3.7 -"""Contains the class of data loader.""" - -import argparse - -from torch.utils.data import DataLoader -from .distributed_sampler import DistributedSampler -from .datasets import BaseDataset - - -__all__ = ['IterDataLoader'] - - -class IterDataLoader(object): - """Iteration-based data loader.""" - - def __init__(self, - dataset, - batch_size, - shuffle=True, - num_workers=1, - current_iter=0, - repeat=1): - """Initializes the data loader. - - Args: - dataset: The dataset to load data from. - batch_size: The batch size on each GPU. - shuffle: Whether to shuffle the data. 
(default: True) - num_workers: Number of data workers for each GPU. (default: 1) - current_iter: The current number of iterations. (default: 0) - repeat: The repeating number of the whole dataloader. (default: 1) - """ - self._dataset = dataset - self.batch_size = batch_size - self.shuffle = shuffle - self.num_workers = num_workers - self._dataloader = None - self.iter_loader = None - self._iter = current_iter - self.repeat = repeat - self.build_dataloader() - - def build_dataloader(self): - """Builds data loader.""" - dist_sampler = DistributedSampler(self._dataset, - shuffle=self.shuffle, - current_iter=self._iter, - repeat=self.repeat) - - self._dataloader = DataLoader(self._dataset, - batch_size=self.batch_size, - shuffle=(dist_sampler is None), - num_workers=self.num_workers, - drop_last=self.shuffle, - pin_memory=True, - sampler=dist_sampler) - self.iter_loader = iter(self._dataloader) - - - def overwrite_param(self, batch_size=None, resolution=None): - """Overwrites some parameters for progressive training.""" - if (not batch_size) and (not resolution): - return - if (batch_size == self.batch_size) and ( - resolution == self.dataset.resolution): - return - if batch_size: - self.batch_size = batch_size - if resolution: - self._dataset.resolution = resolution - self.build_dataloader() - - @property - def iter(self): - """Returns the current iteration.""" - return self._iter - - @property - def dataset(self): - """Returns the dataset.""" - return self._dataset - - @property - def dataloader(self): - """Returns the data loader.""" - return self._dataloader - - def __next__(self): - try: - data = next(self.iter_loader) - self._iter += 1 - except StopIteration: - self._dataloader.sampler.__reset__(self._iter) - self.iter_loader = iter(self._dataloader) - data = next(self.iter_loader) - self._iter += 1 - return data - - def __len__(self): - return len(self._dataloader) - - -def dataloader_test(root_dir, test_num=10): - """Tests data loader.""" - res = 2 - bs = 2 - dataset = BaseDataset(root_dir=root_dir, resolution=res) - dataloader = IterDataLoader(dataset=dataset, - batch_size=bs, - shuffle=False) - for _ in range(test_num): - data_batch = next(dataloader) - image = data_batch['image'] - assert image.shape == (bs, 3, res, res) - res *= 2 - bs += 1 - dataloader.overwrite_param(batch_size=bs, resolution=res) - - -if __name__ == '__main__': - parser = argparse.ArgumentParser(description='Test Data Loader.') - parser.add_argument('root_dir', type=str, - help='Root directory of the dataset.') - parser.add_argument('--test_num', type=int, default=10, - help='Number of tests. 
(default: %(default)s)') - args = parser.parse_args() - dataloader_test(args.root_dir, args.test_num) diff --git a/spaces/jbilcke-hf/Panoremix/src/app/interface/spherical-image/index.tsx b/spaces/jbilcke-hf/Panoremix/src/app/interface/spherical-image/index.tsx deleted file mode 100644 index 743907eb0ba932d24a0f7abb707b5337cf29ac8d..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/Panoremix/src/app/interface/spherical-image/index.tsx +++ /dev/null @@ -1,289 +0,0 @@ -"use client" - -import { useEffect, useRef, useState } from "react" -import { PanoramaPosition, PluginConstructor, Point, Position, SphericalPosition, Viewer } from "@photo-sphere-viewer/core" -import { LensflarePlugin, ReactPhotoSphereViewer } from "react-photo-sphere-viewer" - -import { MouseEventHandler, RenderedScene } from "@/types" - -import { useImageDimension } from "@/lib/useImageDimension" -import { lightSourceNames } from "@/lib/lightSourceNames" - -type PhotoSpherePlugin = (PluginConstructor | [PluginConstructor, any]) - -export function SphericalImage({ - rendered, - onEvent, - className, - debug, -}: { - rendered: RenderedScene - onEvent: MouseEventHandler - className?: string - debug?: boolean -}) { - - - const imageDimension = useImageDimension(rendered.assetUrl) - const maskDimension = useImageDimension(rendered.maskUrl) - - const sceneConfig = JSON.stringify({ rendered, debug, imageDimension, maskDimension }) - const [lastSceneConfig, setLastSceneConfig] = useState(sceneConfig) - const rootContainerRef = useRef(null) - const viewerContainerRef = useRef() - const viewerRef = useRef() - const [mouseMoved, setMouseMoved] = useState(false) - - const defaultZoomLvl = 1 // 0 = 180 fov, 100 = 1 fov - - const options = { - defaultZoomLvl, - fisheye: false, // ..no! - overlay: rendered.maskUrl || undefined, - overlayOpacity: debug ? 
0.5 : 0, - /* - panoData: { - fullWidth: 2000, - fullHeight: 1200, - croppedWidth: 1024, - croppedHeight: 512, - croppedX: 0, - croppedY: 200, - // poseHeading: 0, // 0 to 360 - posePitch: 0, // -90 to 90 - // poseRoll: 0, // -180 to 180 - } - */ - } - - - const cacheRef = useRef("") - useEffect(() => { - const listener = (e: DragEvent) => { - if (!rootContainerRef.current) { return } - - // TODO: check if we are currently dragging an object - // if yes, then we should check if clientX and clientY are matching the - const boundingRect = rootContainerRef.current.getBoundingClientRect() - - // abort if we are not currently dragging over our display area - if (e.clientX < boundingRect.left) { return } - if (e.clientX > (boundingRect.left + boundingRect.width)) { return } - if (e.clientY < boundingRect.top) { return } - if (e.clientY > (boundingRect.top + boundingRect.height)) { return } - - const containerX = e.clientX - boundingRect.left - const containerY = e.clientY - boundingRect.top - - const relativeX = containerX / boundingRect.width - const relativeY = containerY / boundingRect.height - - const key = `${relativeX},${relativeY}` - - // to avoid use - if (cacheRef.current === key) { - return - } - // console.log(`DRAG: calling onEvent("hover", ${relativeX}, ${relativeY})`) - - cacheRef.current = key - onEvent("hover", relativeX, relativeY) - } - - document.addEventListener('drag', listener) - - return () => { - document.removeEventListener('drag', listener) - } - }, [onEvent]) - - useEffect(() => { - const task = async () => { - // console.log("SphericalImage: useEffect") - if (sceneConfig !== lastSceneConfig) { - // console.log("SphericalImage: scene config changed!") - - if (!viewerRef.current) { - // console.log("SphericalImage: no ref!") - setLastSceneConfig(sceneConfig) - return - } - const viewer = viewerRef.current - - const newOptions = { - ...options, - } - - const lensflares: { id: string; position: SphericalPosition; type: number }[] = [] - - if (maskDimension.width && imageDimension.width) { - - // console.log("rendered.segments:", rendered.segments) - - rendered.segments - .filter(segment => lightSourceNames.includes(segment.label)) - .forEach(light => { - // console.log("light detected", light) - const [x1, y1, x2, y2] = light.box - const [centerX, centerY] = [(x1 + x2) / 2, (y1 + y2) / 2] - // console.log("center:", { centerX, centerY }) - const [relativeX, relativeY] = [centerX / maskDimension.width, centerY/ maskDimension.height] - // console.log("relative:", { relativeX, relativeY}) - - const panoramaPosition: PanoramaPosition = { - textureX: relativeX * imageDimension.width, - textureY: relativeY * imageDimension.height - } - // console.log("panoramaPosition:", panoramaPosition) - - const position = viewer.dataHelper.textureCoordsToSphericalCoords(panoramaPosition) - // console.log("sphericalPosition:", position) - if ( // make sure coordinates are valid - !isNaN(position.pitch) && isFinite(position.pitch) && - !isNaN(position.yaw) && isFinite(position.yaw)) { - lensflares.push({ - id: `flare_${lensflares.length}`, - position, - type: 0, - }) - } - }) - } - - // console.log("lensflares:", lensflares) - const lensFlarePlugin = viewer.getPlugin("lensflare") - lensFlarePlugin.setLensflares(lensflares) - - // console.log("SphericalImage: calling setOptions") - // console.log("SphericalImage: changing the panorama to: " + rendered.assetUrl.slice(0, 120)) - - await viewer.setPanorama(rendered.assetUrl, { - ...newOptions, - showLoader: false, - }) - - // TODO we should 
separate all those updates, probaby - viewer.setOptions(newOptions) - // viewer.setOverlay(rendered.maskUrl || undefined) - - // console.log("SphericalImage: asking to re-render") - viewerRef.current.needsUpdate() - - setLastSceneConfig(sceneConfig) - } - } - task() - }, [sceneConfig, rendered.assetUrl, viewerRef.current, maskDimension.width, imageDimension]) - - const handleEvent = async (event: React.MouseEvent, isClick: boolean) => { - const rootContainer = rootContainerRef.current - const viewer = viewerRef.current - const viewerContainer = viewerContainerRef.current - - /* - if (isClick) console.log(`handleEvent(${isClick})`, { - " imageDimension.width": imageDimension.width, - "rendered.maskUrl": rendered.maskUrl - }) - */ - - if (!viewer || !rootContainer || !viewerContainer || !imageDimension.width || !rendered.maskUrl) { - return - } - - const containerRect = viewerContainer.getBoundingClientRect() - // if (isClick) console.log("containerRect:", containerRect) - - const containerY = event.clientY - containerRect.top - // console.log("containerY:", containerY) - - const position: Position = viewer.getPosition() - - const viewerPosition: Point = viewer.dataHelper.sphericalCoordsToViewerCoords(position) - // if (isClick) console.log("viewerPosition:", viewerPosition) - - // we want to ignore events that are happening in the toolbar - // note that we will probably hide this toolbar at some point, - // to implement our own UI - if (isClick && containerY > (containerRect.height - 40)) { - // console.log("we are in the toolbar.. ignoring the click") - return - } - - const panoramaPosition: PanoramaPosition = viewer.dataHelper.sphericalCoordsToTextureCoords(position) - - if (typeof panoramaPosition.textureX !== "number" || typeof panoramaPosition.textureY !== "number") { - return - } - - const relativeX = panoramaPosition.textureX / imageDimension.width - const relativeY = panoramaPosition.textureY / imageDimension.height - - onEvent(isClick ? "click" : "hover", relativeX, relativeY) - } - - if (!rendered.assetUrl) { - return null - } - - return ( -
          { - handleEvent(event, false) - setMouseMoved(true) - }} - onMouseUp={(event) => { - if (!mouseMoved) { - handleEvent(event, true) - } - setMouseMoved(false) - }} - onMouseDown={() => { - setMouseMoved(false) - }} - > - { - // nothing to do here - }} - - onReady={(instance) => { - viewerRef.current = instance - viewerContainerRef.current = instance.container - - /* - const markersPlugs = instance.getPlugin(MarkersPlugin); - if (!markersPlugs) - return; - markersPlugs.addMarker({ - id: "imageLayer2", - imageLayer: "drone.png", - size: { width: 220, height: 220 }, - position: { yaw: '130.5deg', pitch: '-0.1deg' }, - tooltip: "Image embedded in the scene" - }); - markersPlugs.addEventListener("select-marker", () => { - console.log("asd"); - }); - */ - }} - - /> -
          - ) -} \ No newline at end of file diff --git a/spaces/jdhuka/StaticHTML5PlayCanvas/README.md b/spaces/jdhuka/StaticHTML5PlayCanvas/README.md deleted file mode 100644 index 265f9b5b8990a93e9c74c56a2c8d7c15083a0495..0000000000000000000000000000000000000000 --- a/spaces/jdhuka/StaticHTML5PlayCanvas/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: StaticHTML5PlayCanvas -emoji: 😻 -colorFrom: purple -colorTo: pink -sdk: static -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/jesuspj/jesuspj/Dockerfile b/spaces/jesuspj/jesuspj/Dockerfile deleted file mode 100644 index 94ee76a4f45af463ab7f945633c9258172f9cc80..0000000000000000000000000000000000000000 --- a/spaces/jesuspj/jesuspj/Dockerfile +++ /dev/null @@ -1,2 +0,0 @@ -FROM huggingface/autotrain-advanced:latest -CMD autotrain app --port 7860 diff --git a/spaces/jeycov/IADERM-UTOPIC-PFIZER/README.md b/spaces/jeycov/IADERM-UTOPIC-PFIZER/README.md deleted file mode 100644 index 1af6ed11c40bf4a09c4e9e101dcfc6d32ac48135..0000000000000000000000000000000000000000 --- a/spaces/jeycov/IADERM-UTOPIC-PFIZER/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: AIDERM-UTOPIC -emoji: 🦸‍♂️ -colorFrom: indigo -colorTo: pink -sdk: gradio -sdk_version: 3.0.19 -app_file: app.py -pinned: true -python_version: 3.7.10 ---- \ No newline at end of file diff --git a/spaces/jitubutwal1441/multiple-pdfs-chat/README.md b/spaces/jitubutwal1441/multiple-pdfs-chat/README.md deleted file mode 100644 index 3df0d61c7b40762726065bf9e7fb324406ec478a..0000000000000000000000000000000000000000 --- a/spaces/jitubutwal1441/multiple-pdfs-chat/README.md +++ /dev/null @@ -1,77 +0,0 @@ ---- -title: Multiple Pdfs Chat -emoji: 🏃 -colorFrom: yellow -colorTo: red -sdk: streamlit -sdk_version: 1.27.0 -app_file: app.py -pinned: false ---- - - -# MultiPDF Chat App - -> You can find the tutorial for this project on [YouTube](https://youtu.be/dXxQ0LR-3Hg). - -## Introduction ------------- -The MultiPDF Chat App is a Python application that allows you to chat with multiple PDF documents. You can ask questions about the PDFs using natural language, and the application will provide relevant responses based on the content of the documents. This app utilizes a language model to generate accurate answers to your queries. Please note that the app will only respond to questions related to the loaded PDFs. - -## How It Works ------------- - -![MultiPDF Chat App Diagram](./docs/PDF-LangChain.jpg) - -The application follows these steps to provide responses to your questions: - -1. PDF Loading: The app reads multiple PDF documents and extracts their text content. - -2. Text Chunking: The extracted text is divided into smaller chunks that can be processed effectively. - -3. Language Model: The application utilizes a language model to generate vector representations (embeddings) of the text chunks. - -4. Similarity Matching: When you ask a question, the app compares it with the text chunks and identifies the most semantically similar ones. - -5. Response Generation: The selected chunks are passed to the language model, which generates a response based on the relevant content of the PDFs. - -## Dependencies and Installation ----------------------------- -To install the MultiPDF Chat App, please follow these steps: - -1. Clone the repository to your local machine. - -2. Install the required dependencies by running the following command: - ``` - pip install -r requirements.txt - ``` - -3. 
Obtain an API key from OpenAI and add it to the `.env` file in the project directory. -```commandline -OPENAI_API_KEY=your_secrit_api_key -``` - -## Usage ------ -To use the MultiPDF Chat App, follow these steps: - -1. Ensure that you have installed the required dependencies and added the OpenAI API key to the `.env` file. - -2. Run the `main.py` file using the Streamlit CLI. Execute the following command: - ``` - streamlit run app.py - ``` - -3. The application will launch in your default web browser, displaying the user interface. - -4. Load multiple PDF documents into the app by following the provided instructions. - -5. Ask questions in natural language about the loaded PDFs using the chat interface. - -## Contributing ------------- -This repository is intended for educational purposes and does not accept further contributions. It serves as supporting material for a YouTube tutorial that demonstrates how to build this project. Feel free to utilize and enhance the app based on your own requirements. - -## License -------- -The MultiPDF Chat App is released under the [MIT License](https://opensource.org/licenses/MIT). \ No newline at end of file diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PyPDF2/__init__.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PyPDF2/__init__.py deleted file mode 100644 index 4154ee64a004aa8c046f751d903b3a7b1504affd..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PyPDF2/__init__.py +++ /dev/null @@ -1,41 +0,0 @@ -""" -PyPDF2 is a free and open-source pure-python PDF library capable of splitting, -merging, cropping, and transforming the pages of PDF files. It can also add -custom data, viewing options, and passwords to PDF files. PyPDF2 can retrieve -text and metadata from PDFs as well. - -You can read the full docs at https://pypdf2.readthedocs.io/. -""" - -import warnings - -from ._encryption import PasswordType -from ._merger import PdfFileMerger, PdfMerger -from ._page import PageObject, Transformation -from ._reader import DocumentInformation, PdfFileReader, PdfReader -from ._version import __version__ -from ._writer import PdfFileWriter, PdfWriter -from .pagerange import PageRange, parse_filename_page_ranges -from .papersizes import PaperSize - -warnings.warn( - message="PyPDF2 is deprecated. 
Please move to the pypdf library instead.", - category=DeprecationWarning, -) - -__all__ = [ - "__version__", - "PageRange", - "PaperSize", - "DocumentInformation", - "parse_filename_page_ranges", - "PdfFileMerger", # will be removed in PyPDF2 3.0.0; use PdfMerger instead - "PdfFileReader", # will be removed in PyPDF2 3.0.0; use PdfReader instead - "PdfFileWriter", # will be removed in PyPDF2 3.0.0; use PdfWriter instead - "PdfMerger", - "PdfReader", - "PdfWriter", - "Transformation", - "PageObject", - "PasswordType", -] diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dateutil/zoneinfo/rebuild.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dateutil/zoneinfo/rebuild.py deleted file mode 100644 index 684c6586f091350c347f2b6150935f5214ffec27..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dateutil/zoneinfo/rebuild.py +++ /dev/null @@ -1,75 +0,0 @@ -import logging -import os -import tempfile -import shutil -import json -from subprocess import check_call, check_output -from tarfile import TarFile - -from dateutil.zoneinfo import METADATA_FN, ZONEFILENAME - - -def rebuild(filename, tag=None, format="gz", zonegroups=[], metadata=None): - """Rebuild the internal timezone info in dateutil/zoneinfo/zoneinfo*tar* - - filename is the timezone tarball from ``ftp.iana.org/tz``. - - """ - tmpdir = tempfile.mkdtemp() - zonedir = os.path.join(tmpdir, "zoneinfo") - moduledir = os.path.dirname(__file__) - try: - with TarFile.open(filename) as tf: - for name in zonegroups: - tf.extract(name, tmpdir) - filepaths = [os.path.join(tmpdir, n) for n in zonegroups] - - _run_zic(zonedir, filepaths) - - # write metadata file - with open(os.path.join(zonedir, METADATA_FN), 'w') as f: - json.dump(metadata, f, indent=4, sort_keys=True) - target = os.path.join(moduledir, ZONEFILENAME) - with TarFile.open(target, "w:%s" % format) as tf: - for entry in os.listdir(zonedir): - entrypath = os.path.join(zonedir, entry) - tf.add(entrypath, entry) - finally: - shutil.rmtree(tmpdir) - - -def _run_zic(zonedir, filepaths): - """Calls the ``zic`` compiler in a compatible way to get a "fat" binary. - - Recent versions of ``zic`` default to ``-b slim``, while older versions - don't even have the ``-b`` option (but default to "fat" binaries). The - current version of dateutil does not support Version 2+ TZif files, which - causes problems when used in conjunction with "slim" binaries, so this - function is used to ensure that we always get a "fat" binary. - """ - - try: - help_text = check_output(["zic", "--help"]) - except OSError as e: - _print_on_nosuchfile(e) - raise - - if b"-b " in help_text: - bloat_args = ["-b", "fat"] - else: - bloat_args = [] - - check_call(["zic"] + bloat_args + ["-d", zonedir] + filepaths) - - -def _print_on_nosuchfile(e): - """Print helpful troubleshooting message - - e is an exception raised by subprocess.check_call() - - """ - if e.errno == 2: - logging.error( - "Could not find zic. 
Perhaps you need to install " - "libc-bin or some other package that provides it, " - "or it's not in your PATH?") diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fastapi/middleware/wsgi.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fastapi/middleware/wsgi.py deleted file mode 100644 index c4c6a797d2675e1c13b028be977c64a822fb649b..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fastapi/middleware/wsgi.py +++ /dev/null @@ -1 +0,0 @@ -from starlette.middleware.wsgi import WSGIMiddleware as WSGIMiddleware # noqa diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/indices/empty/base.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/indices/empty/base.py deleted file mode 100644 index ddea7d117f6d8a9805728d4b0594b2c63fc7aaf2..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/indices/empty/base.py +++ /dev/null @@ -1,76 +0,0 @@ -"""Empty index. - -An index that doesn't contain any documents. Can only be used for -pure LLM calls. - -""" - -from typing import Any, Dict, Optional, Sequence, Type - -from gpt_index.data_structs.data_structs import EmptyIndex -from gpt_index.indices.base import BaseGPTIndex -from gpt_index.indices.query.base import BaseGPTIndexQuery -from gpt_index.indices.query.empty.base import GPTEmptyIndexQuery -from gpt_index.indices.query.schema import QueryMode -from gpt_index.langchain_helpers.chain_wrapper import LLMPredictor -from gpt_index.langchain_helpers.text_splitter import TextSplitter -from gpt_index.schema import BaseDocument - - -class GPTEmptyIndex(BaseGPTIndex[EmptyIndex]): - """GPT Empty Index. - - An index that doesn't contain any documents. Used for - pure LLM calls. - NOTE: this exists because an empty index it allows certain properties, - such as the ability to be composed with other indices + token - counting + others. - - """ - - index_struct_cls = EmptyIndex - - def __init__( - self, - index_struct: Optional[EmptyIndex] = None, - llm_predictor: Optional[LLMPredictor] = None, - text_splitter: Optional[TextSplitter] = None, - **kwargs: Any, - ) -> None: - """Initialize params.""" - super().__init__( - documents=[], - index_struct=index_struct, - llm_predictor=llm_predictor, - text_splitter=text_splitter, - **kwargs, - ) - - @classmethod - def get_query_map(self) -> Dict[str, Type[BaseGPTIndexQuery]]: - """Get query map.""" - return { - QueryMode.DEFAULT: GPTEmptyIndexQuery, - } - - def _build_index_from_documents( - self, documents: Sequence[BaseDocument] - ) -> EmptyIndex: - """Build the index from documents. - - Args: - documents (List[BaseDocument]): A list of documents. - - Returns: - IndexList: The created list index. 
- """ - index_struct = EmptyIndex() - return index_struct - - def _insert(self, document: BaseDocument, **insert_kwargs: Any) -> None: - """Insert a document.""" - raise NotImplementedError("Cannot insert into an empty index.") - - def _delete(self, doc_id: str, **delete_kwargs: Any) -> None: - """Delete a document.""" - raise NotImplementedError("Cannot delete from an empty index.") diff --git a/spaces/justest/gpt4free/testing/wewordle/testing.py b/spaces/justest/gpt4free/testing/wewordle/testing.py deleted file mode 100644 index cebcaeed2b0c348b41003ddca8b15c7b3b2f7199..0000000000000000000000000000000000000000 --- a/spaces/justest/gpt4free/testing/wewordle/testing.py +++ /dev/null @@ -1,30 +0,0 @@ -from Wewordle import ChatCompletion - -# Test 1 -response = ChatCompletion.create(model="gpt-3.5-turbo", - provider="Wewordle", - stream=False, - messages=[{'role': 'user', 'content': 'who are you?'}]) - -print(response) - -# Test 2 -response = ChatCompletion.create(model="gpt-3.5-turbo", - provider="Wewordle", - stream=False, - messages=[{'role': 'user', 'content': 'what you can do?'}]) - -print(response) - - -# Test 3 -response = ChatCompletion.create(model="gpt-3.5-turbo", - provider="Wewordle", - stream=False, - messages=[ - {'role': 'user', 'content': 'now your name is Bob'}, - {'role': 'assistant', 'content': 'Hello Im Bob, you asistant'}, - {'role': 'user', 'content': 'what your name again?'}, - ]) - -print(response) diff --git a/spaces/jvde/sovits-webui/losses.py b/spaces/jvde/sovits-webui/losses.py deleted file mode 100644 index fb22a0e834dd87edaa37bb8190eee2c3c7abe0d5..0000000000000000000000000000000000000000 --- a/spaces/jvde/sovits-webui/losses.py +++ /dev/null @@ -1,61 +0,0 @@ -import torch -from torch.nn import functional as F - -import commons - - -def feature_loss(fmap_r, fmap_g): - loss = 0 - for dr, dg in zip(fmap_r, fmap_g): - for rl, gl in zip(dr, dg): - rl = rl.float().detach() - gl = gl.float() - loss += torch.mean(torch.abs(rl - gl)) - - return loss * 2 - - -def discriminator_loss(disc_real_outputs, disc_generated_outputs): - loss = 0 - r_losses = [] - g_losses = [] - for dr, dg in zip(disc_real_outputs, disc_generated_outputs): - dr = dr.float() - dg = dg.float() - r_loss = torch.mean((1-dr)**2) - g_loss = torch.mean(dg**2) - loss += (r_loss + g_loss) - r_losses.append(r_loss.item()) - g_losses.append(g_loss.item()) - - return loss, r_losses, g_losses - - -def generator_loss(disc_outputs): - loss = 0 - gen_losses = [] - for dg in disc_outputs: - dg = dg.float() - l = torch.mean((1-dg)**2) - gen_losses.append(l) - loss += l - - return loss, gen_losses - - -def kl_loss(z_p, logs_q, m_p, logs_p, z_mask): - """ - z_p, logs_q: [b, h, t_t] - m_p, logs_p: [b, h, t_t] - """ - z_p = z_p.float() - logs_q = logs_q.float() - m_p = m_p.float() - logs_p = logs_p.float() - z_mask = z_mask.float() - - kl = logs_p - logs_q - 0.5 - kl += 0.5 * ((z_p - m_p)**2) * torch.exp(-2. 
* logs_p) - kl = torch.sum(kl * z_mask) - l = kl / torch.sum(z_mask) - return l diff --git a/spaces/kcagle/AutoGPT/autogpt/spinner.py b/spaces/kcagle/AutoGPT/autogpt/spinner.py deleted file mode 100644 index 4e33d74213881352546f334ccb1eb4772b8b7b70..0000000000000000000000000000000000000000 --- a/spaces/kcagle/AutoGPT/autogpt/spinner.py +++ /dev/null @@ -1,65 +0,0 @@ -"""A simple spinner module""" -import itertools -import sys -import threading -import time - - -class Spinner: - """A simple spinner class""" - - def __init__(self, message: str = "Loading...", delay: float = 0.1) -> None: - """Initialize the spinner class - - Args: - message (str): The message to display. - delay (float): The delay between each spinner update. - """ - self.spinner = itertools.cycle(["-", "/", "|", "\\"]) - self.delay = delay - self.message = message - self.running = False - self.spinner_thread = None - - def spin(self) -> None: - """Spin the spinner""" - while self.running: - sys.stdout.write(f"{next(self.spinner)} {self.message}\r") - sys.stdout.flush() - time.sleep(self.delay) - sys.stdout.write(f"\r{' ' * (len(self.message) + 2)}\r") - - def __enter__(self): - """Start the spinner""" - self.running = True - self.spinner_thread = threading.Thread(target=self.spin) - self.spinner_thread.start() - - return self - - def __exit__(self, exc_type, exc_value, exc_traceback) -> None: - """Stop the spinner - - Args: - exc_type (Exception): The exception type. - exc_value (Exception): The exception value. - exc_traceback (Exception): The exception traceback. - """ - self.running = False - if self.spinner_thread is not None: - self.spinner_thread.join() - sys.stdout.write(f"\r{' ' * (len(self.message) + 2)}\r") - sys.stdout.flush() - - def update_message(self, new_message, delay=0.1): - """Update the spinner message - Args: - new_message (str): New message to display - delay: Delay in seconds before updating the message - """ - time.sleep(delay) - sys.stdout.write( - f"\r{' ' * (len(self.message) + 2)}\r" - ) # Clear the current message - sys.stdout.flush() - self.message = new_message diff --git a/spaces/kdrkdrkdr/HinaTTS/monotonic_align/__init__.py b/spaces/kdrkdrkdr/HinaTTS/monotonic_align/__init__.py deleted file mode 100644 index 40b6f64aa116c74cac2f6a33444c9eeea2fdb38c..0000000000000000000000000000000000000000 --- a/spaces/kdrkdrkdr/HinaTTS/monotonic_align/__init__.py +++ /dev/null @@ -1,21 +0,0 @@ -from numpy import zeros, int32, float32 -from torch import from_numpy - -from .core import maximum_path_jit - - -def maximum_path(neg_cent, mask): - """ numba optimized version. 
- neg_cent: [b, t_t, t_s] - mask: [b, t_t, t_s] - """ - device = neg_cent.device - dtype = neg_cent.dtype - neg_cent = neg_cent.data.cpu().numpy().astype(float32) - path = zeros(neg_cent.shape, dtype=int32) - - t_t_max = mask.sum(1)[:, 0].data.cpu().numpy().astype(int32) - t_s_max = mask.sum(2)[:, 0].data.cpu().numpy().astype(int32) - maximum_path_jit(path, neg_cent, t_t_max, t_s_max) - return from_numpy(path).to(device=device, dtype=dtype) - diff --git a/spaces/kevinwang676/ChatGLM2-SadTalker-VC/src/face3d/util/visualizer.py b/spaces/kevinwang676/ChatGLM2-SadTalker-VC/src/face3d/util/visualizer.py deleted file mode 100644 index 4023a6d4086acba9bc88e079f625194d324d7c9e..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/ChatGLM2-SadTalker-VC/src/face3d/util/visualizer.py +++ /dev/null @@ -1,227 +0,0 @@ -"""This script defines the visualizer for Deep3DFaceRecon_pytorch -""" - -import numpy as np -import os -import sys -import ntpath -import time -from . import util, html -from subprocess import Popen, PIPE -from torch.utils.tensorboard import SummaryWriter - -def save_images(webpage, visuals, image_path, aspect_ratio=1.0, width=256): - """Save images to the disk. - - Parameters: - webpage (the HTML class) -- the HTML webpage class that stores these imaegs (see html.py for more details) - visuals (OrderedDict) -- an ordered dictionary that stores (name, images (either tensor or numpy) ) pairs - image_path (str) -- the string is used to create image paths - aspect_ratio (float) -- the aspect ratio of saved images - width (int) -- the images will be resized to width x width - - This function will save images stored in 'visuals' to the HTML file specified by 'webpage'. - """ - image_dir = webpage.get_image_dir() - short_path = ntpath.basename(image_path[0]) - name = os.path.splitext(short_path)[0] - - webpage.add_header(name) - ims, txts, links = [], [], [] - - for label, im_data in visuals.items(): - im = util.tensor2im(im_data) - image_name = '%s/%s.png' % (label, name) - os.makedirs(os.path.join(image_dir, label), exist_ok=True) - save_path = os.path.join(image_dir, image_name) - util.save_image(im, save_path, aspect_ratio=aspect_ratio) - ims.append(image_name) - txts.append(label) - links.append(image_name) - webpage.add_images(ims, txts, links, width=width) - - -class Visualizer(): - """This class includes several functions that can display/save images and print/save logging information. - - It uses a Python library tensprboardX for display, and a Python library 'dominate' (wrapped in 'HTML') for creating HTML files with images. - """ - - def __init__(self, opt): - """Initialize the Visualizer class - - Parameters: - opt -- stores all the experiment flags; needs to be a subclass of BaseOptions - Step 1: Cache the training/test options - Step 2: create a tensorboard writer - Step 3: create an HTML object for saveing HTML filters - Step 4: create a logging file to store training losses - """ - self.opt = opt # cache the option - self.use_html = opt.isTrain and not opt.no_html - self.writer = SummaryWriter(os.path.join(opt.checkpoints_dir, 'logs', opt.name)) - self.win_size = opt.display_winsize - self.name = opt.name - self.saved = False - if self.use_html: # create an HTML object at /web/; images will be saved under /web/images/ - self.web_dir = os.path.join(opt.checkpoints_dir, opt.name, 'web') - self.img_dir = os.path.join(self.web_dir, 'images') - print('create web directory %s...' 
% self.web_dir) - util.mkdirs([self.web_dir, self.img_dir]) - # create a logging file to store training losses - self.log_name = os.path.join(opt.checkpoints_dir, opt.name, 'loss_log.txt') - with open(self.log_name, "a") as log_file: - now = time.strftime("%c") - log_file.write('================ Training Loss (%s) ================\n' % now) - - def reset(self): - """Reset the self.saved status""" - self.saved = False - - - def display_current_results(self, visuals, total_iters, epoch, save_result): - """Display current results on tensorboard; save current results to an HTML file. - - Parameters: - visuals (OrderedDict) - - dictionary of images to display or save - total_iters (int) -- total iterations - epoch (int) - - the current epoch - save_result (bool) - - whether to save the current results to an HTML file - """ - for label, image in visuals.items(): - self.writer.add_image(label, util.tensor2im(image), total_iters, dataformats='HWC') - - if self.use_html and (save_result or not self.saved): # save images to an HTML file if they haven't been saved. - self.saved = True - # save images to the disk - for label, image in visuals.items(): - image_numpy = util.tensor2im(image) - img_path = os.path.join(self.img_dir, 'epoch%.3d_%s.png' % (epoch, label)) - util.save_image(image_numpy, img_path) - - # update website - webpage = html.HTML(self.web_dir, 'Experiment name = %s' % self.name, refresh=0) - for n in range(epoch, 0, -1): - webpage.add_header('epoch [%d]' % n) - ims, txts, links = [], [], [] - - for label, image_numpy in visuals.items(): - image_numpy = util.tensor2im(image_numpy) - img_path = 'epoch%.3d_%s.png' % (n, label) - ims.append(img_path) - txts.append(label) - links.append(img_path) - webpage.add_images(ims, txts, links, width=self.win_size) - webpage.save() - - def plot_current_losses(self, total_iters, losses): - # G_loss_collection = {} - # D_loss_collection = {} - # for name, value in losses.items(): - # if 'G' in name or 'NCE' in name or 'idt' in name: - # G_loss_collection[name] = value - # else: - # D_loss_collection[name] = value - # self.writer.add_scalars('G_collec', G_loss_collection, total_iters) - # self.writer.add_scalars('D_collec', D_loss_collection, total_iters) - for name, value in losses.items(): - self.writer.add_scalar(name, value, total_iters) - - # losses: same format as |losses| of plot_current_losses - def print_current_losses(self, epoch, iters, losses, t_comp, t_data): - """print current losses on console; also save the losses to the disk - - Parameters: - epoch (int) -- current epoch - iters (int) -- current training iteration during this epoch (reset to 0 at the end of every epoch) - losses (OrderedDict) -- training losses stored in the format of (name, float) pairs - t_comp (float) -- computational time per data point (normalized by batch_size) - t_data (float) -- data loading time per data point (normalized by batch_size) - """ - message = '(epoch: %d, iters: %d, time: %.3f, data: %.3f) ' % (epoch, iters, t_comp, t_data) - for k, v in losses.items(): - message += '%s: %.3f ' % (k, v) - - print(message) # print the message - with open(self.log_name, "a") as log_file: - log_file.write('%s\n' % message) # save the message - - -class MyVisualizer: - def __init__(self, opt): - """Initialize the Visualizer class - - Parameters: - opt -- stores all the experiment flags; needs to be a subclass of BaseOptions - Step 1: Cache the training/test options - Step 2: create a tensorboard writer - Step 3: create an HTML object for saving HTML files - Step 4: create a 
logging file to store training losses - """ - self.opt = opt # cache the option - self.name = opt.name - self.img_dir = os.path.join(opt.checkpoints_dir, opt.name, 'results') - - if opt.phase != 'test': - self.writer = SummaryWriter(os.path.join(opt.checkpoints_dir, opt.name, 'logs')) - # create a logging file to store training losses - self.log_name = os.path.join(opt.checkpoints_dir, opt.name, 'loss_log.txt') - with open(self.log_name, "a") as log_file: - now = time.strftime("%c") - log_file.write('================ Training Loss (%s) ================\n' % now) - - - def display_current_results(self, visuals, total_iters, epoch, dataset='train', save_results=False, count=0, name=None, - add_image=True): - """Display current results on tensorboard; save current results to an HTML file. - - Parameters: - visuals (OrderedDict) - - dictionary of images to display or save - total_iters (int) -- total iterations - epoch (int) - - the current epoch - dataset (str) - - 'train' or 'val' or 'test' - """ - # if (not add_image) and (not save_results): return - - for label, image in visuals.items(): - for i in range(image.shape[0]): - image_numpy = util.tensor2im(image[i]) - if add_image: - self.writer.add_image(label + '%s_%02d'%(dataset, i + count), - image_numpy, total_iters, dataformats='HWC') - - if save_results: - save_path = os.path.join(self.img_dir, dataset, 'epoch_%s_%06d'%(epoch, total_iters)) - if not os.path.isdir(save_path): - os.makedirs(save_path) - - if name is not None: - img_path = os.path.join(save_path, '%s.png' % name) - else: - img_path = os.path.join(save_path, '%s_%03d.png' % (label, i + count)) - util.save_image(image_numpy, img_path) - - - def plot_current_losses(self, total_iters, losses, dataset='train'): - for name, value in losses.items(): - self.writer.add_scalar(name + '/%s'%dataset, value, total_iters) - - # losses: same format as |losses| of plot_current_losses - def print_current_losses(self, epoch, iters, losses, t_comp, t_data, dataset='train'): - """print current losses on console; also save the losses to the disk - - Parameters: - epoch (int) -- current epoch - iters (int) -- current training iteration during this epoch (reset to 0 at the end of every epoch) - losses (OrderedDict) -- training losses stored in the format of (name, float) pairs - t_comp (float) -- computational time per data point (normalized by batch_size) - t_data (float) -- data loading time per data point (normalized by batch_size) - """ - message = '(dataset: %s, epoch: %d, iters: %d, time: %.3f, data: %.3f) ' % ( - dataset, epoch, iters, t_comp, t_data) - for k, v in losses.items(): - message += '%s: %.3f ' % (k, v) - - print(message) # print the message - with open(self.log_name, "a") as log_file: - log_file.write('%s\n' % message) # save the message diff --git a/spaces/kevinwang676/VITS2-Mandarin/mel_processing.py b/spaces/kevinwang676/VITS2-Mandarin/mel_processing.py deleted file mode 100644 index 359abdf705cf601338356e10b60a7ae05c8f965d..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/VITS2-Mandarin/mel_processing.py +++ /dev/null @@ -1,121 +0,0 @@ -import math -import os -from packaging import version -import random -import torch -from torch import nn -import torch.nn.functional as F -import torch.utils.data -import numpy as np -import librosa -import librosa.util as librosa_util -from librosa.util import normalize, pad_center, tiny -from scipy.signal import get_window -from scipy.io.wavfile import read -from librosa.filters import mel as librosa_mel_fn - -MAX_WAV_VALUE 
= 32768.0 - - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - """ - PARAMS - ------ - C: compression factor - """ - return torch.log(torch.clamp(x, min=clip_val) * C) - - -def dynamic_range_decompression_torch(x, C=1): - """ - PARAMS - ------ - C: compression factor used to compress - """ - return torch.exp(x) / C - - -def spectral_normalize_torch(magnitudes): - output = dynamic_range_compression_torch(magnitudes) - return output - - -def spectral_de_normalize_torch(magnitudes): - output = dynamic_range_decompression_torch(magnitudes) - return output - - -mel_basis = {} -hann_window = {} - - -def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - if version.parse(torch.__version__) >= version.parse("2"): - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - else: - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - return spec - - -def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax): - global mel_basis - dtype_device = str(spec.dtype) + '_' + str(spec.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device) - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - return spec - - -def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global mel_basis, hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device) - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - if version.parse(torch.__version__) >= version.parse("2"): - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - else: - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, 
window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - - return spec diff --git a/spaces/kevkev05/Chat-To-Sequence/utility.py b/spaces/kevkev05/Chat-To-Sequence/utility.py deleted file mode 100644 index fdb093ac7432a63714930b306864dd8fb7644200..0000000000000000000000000000000000000000 --- a/spaces/kevkev05/Chat-To-Sequence/utility.py +++ /dev/null @@ -1,10 +0,0 @@ -from datasets import load_dataset - -def load_from_hub_csv(path, data_file, token, csv_output_file): - - dataset = load_dataset(path=path, - data_files=data_file, - token=token,) - - for split, data in dataset.items(): - data.to_csv(csv_output_file, index=None) diff --git a/spaces/kidcoconut/spcdkr_omdenasaudi_liverhccxai/lib/fctTile.py b/spaces/kidcoconut/spcdkr_omdenasaudi_liverhccxai/lib/fctTile.py deleted file mode 100644 index aa2415d9b5f6b221e14f3bceb9553deaf61418fc..0000000000000000000000000000000000000000 --- a/spaces/kidcoconut/spcdkr_omdenasaudi_liverhccxai/lib/fctTile.py +++ /dev/null @@ -1 +0,0 @@ -#--- factory class for tile operations \ No newline at end of file diff --git a/spaces/kira4424/Tacotron-zero-short-voice-clone/ppg2mel_train.py b/spaces/kira4424/Tacotron-zero-short-voice-clone/ppg2mel_train.py deleted file mode 100644 index 5a6a06c805109159ff40cad69668f1fc38cf1e9b..0000000000000000000000000000000000000000 --- a/spaces/kira4424/Tacotron-zero-short-voice-clone/ppg2mel_train.py +++ /dev/null @@ -1,67 +0,0 @@ -import sys -import torch -import argparse -import numpy as np -from utils.load_yaml import HpsYaml -from ppg2mel.train.train_linglf02mel_seq2seq_oneshotvc import Solver - -# For reproducibility; commenting these out may speed up training - torch.backends.cudnn.deterministic = True -torch.backends.cudnn.benchmark = False - -def main(): - # Arguments - parser = argparse.ArgumentParser(description= - 'Training PPG2Mel VC model.') - parser.add_argument('--config', type=str, - help='Path to experiment config, e.g., config/vc.yaml') - parser.add_argument('--name', default=None, type=str, help='Name for logging.') - parser.add_argument('--logdir', default='log/', type=str, - help='Logging path.', required=False) - parser.add_argument('--ckpdir', default='ppg2mel/saved_models/', type=str, - help='Checkpoint path.', required=False) - parser.add_argument('--outdir', default='result/', type=str, - help='Decode output path.', required=False) - parser.add_argument('--load', default=None, type=str, - help='Load pre-trained model (for training only)', required=False) - parser.add_argument('--warm_start', action='store_true', - help='Load model weights only, ignore specified layers.') - parser.add_argument('--seed', default=0, type=int, - help='Random seed for reproducible results.', required=False) - parser.add_argument('--njobs', default=8, type=int, - help='Number of threads for dataloader/decoding.', required=False) - parser.add_argument('--cpu', action='store_true', help='Disable GPU training.') - parser.add_argument('--no-pin', action='store_true', - help='Disable pin-memory for dataloader') - parser.add_argument('--test', action='store_true', help='Test the model.') - parser.add_argument('--no-msg', action='store_true', help='Hide all messages.') - parser.add_argument('--finetune', action='store_true', help='Finetune model') - parser.add_argument('--oneshotvc', action='store_true', help='Oneshot VC model') - 
parser.add_argument('--bilstm', action='store_true', help='BiLSTM VC model') - parser.add_argument('--lsa', action='store_true', help='Use location-sensitive attention (LSA)') - - ### - - paras = parser.parse_args() - setattr(paras, 'gpu', not paras.cpu) - setattr(paras, 'pin_memory', not paras.no_pin) - setattr(paras, 'verbose', not paras.no_msg) - # Make the config dict dot visitable - config = HpsYaml(paras.config) - - np.random.seed(paras.seed) - torch.manual_seed(paras.seed) - if torch.cuda.is_available(): - torch.cuda.manual_seed_all(paras.seed) - - print(">>> OneShot VC training ...") - mode = "train" - solver = Solver(config, paras, mode) - solver.load_data() - solver.set_model() - solver.exec() - print(">>> Oneshot VC train finished!") - sys.exit(0) - -if __name__ == "__main__": - main() diff --git a/spaces/kira4424/Tacotron-zero-short-voice-clone/ppg_extractor/encoder/embedding.py b/spaces/kira4424/Tacotron-zero-short-voice-clone/ppg_extractor/encoder/embedding.py deleted file mode 100644 index fa3199cf7e3da2ed834d4781b694cf4ccb2a433c..0000000000000000000000000000000000000000 --- a/spaces/kira4424/Tacotron-zero-short-voice-clone/ppg_extractor/encoder/embedding.py +++ /dev/null @@ -1,166 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding: utf-8 -*- - -# Copyright 2019 Shigeki Karita -# Apache 2.0 (http://www.apache.org/licenses/LICENSE-2.0) - -"""Positional Encoding Module.""" - -import math - -import torch - - -def _pre_hook( - state_dict, - prefix, - local_metadata, - strict, - missing_keys, - unexpected_keys, - error_msgs, -): - """Perform pre-hook in load_state_dict for backward compatibility. - - Note: - We saved self.pe until v.0.5.2 but we have omitted it later. - Therefore, we remove the item "pe" from `state_dict` for backward compatibility. - - """ - k = prefix + "pe" - if k in state_dict: - state_dict.pop(k) - - -class PositionalEncoding(torch.nn.Module): - """Positional encoding. - - :param int d_model: embedding dim - :param float dropout_rate: dropout rate - :param int max_len: maximum input length - :param reverse: whether to reverse the input position - - """ - - def __init__(self, d_model, dropout_rate, max_len=5000, reverse=False): - """Construct a PositionalEncoding object.""" - super(PositionalEncoding, self).__init__() - self.d_model = d_model - self.reverse = reverse - self.xscale = math.sqrt(self.d_model) - self.dropout = torch.nn.Dropout(p=dropout_rate) - self.pe = None - self.extend_pe(torch.tensor(0.0).expand(1, max_len)) - self._register_load_state_dict_pre_hook(_pre_hook) - - def extend_pe(self, x): - """Reset the positional encodings.""" - if self.pe is not None: - if self.pe.size(1) >= x.size(1): - if self.pe.dtype != x.dtype or self.pe.device != x.device: - self.pe = self.pe.to(dtype=x.dtype, device=x.device) - return - pe = torch.zeros(x.size(1), self.d_model) - if self.reverse: - position = torch.arange( - x.size(1) - 1, -1, -1.0, dtype=torch.float32 - ).unsqueeze(1) - else: - position = torch.arange(0, x.size(1), dtype=torch.float32).unsqueeze(1) - div_term = torch.exp( - torch.arange(0, self.d_model, 2, dtype=torch.float32) - * -(math.log(10000.0) / self.d_model) - ) - pe[:, 0::2] = torch.sin(position * div_term) - pe[:, 1::2] = torch.cos(position * div_term) - pe = pe.unsqueeze(0) - self.pe = pe.to(device=x.device, dtype=x.dtype) - - def forward(self, x: torch.Tensor): - """Add positional encoding. - - Args: - x (torch.Tensor): Input. Its shape is (batch, time, ...) - - Returns: - torch.Tensor: Encoded tensor. Its shape is (batch, time, ...) 
- - """ - self.extend_pe(x) - x = x * self.xscale + self.pe[:, : x.size(1)] - return self.dropout(x) - - -class ScaledPositionalEncoding(PositionalEncoding): - """Scaled positional encoding module. - - See also: Sec. 3.2 https://arxiv.org/pdf/1809.08895.pdf - - """ - - def __init__(self, d_model, dropout_rate, max_len=5000): - """Initialize class. - - :param int d_model: embedding dim - :param float dropout_rate: dropout rate - :param int max_len: maximum input length - - """ - super().__init__(d_model=d_model, dropout_rate=dropout_rate, max_len=max_len) - self.alpha = torch.nn.Parameter(torch.tensor(1.0)) - - def reset_parameters(self): - """Reset parameters.""" - self.alpha.data = torch.tensor(1.0) - - def forward(self, x): - """Add positional encoding. - - Args: - x (torch.Tensor): Input. Its shape is (batch, time, ...) - - Returns: - torch.Tensor: Encoded tensor. Its shape is (batch, time, ...) - - """ - self.extend_pe(x) - x = x + self.alpha * self.pe[:, : x.size(1)] - return self.dropout(x) - - -class RelPositionalEncoding(PositionalEncoding): - """Relitive positional encoding module. - - See : Appendix B in https://arxiv.org/abs/1901.02860 - - :param int d_model: embedding dim - :param float dropout_rate: dropout rate - :param int max_len: maximum input length - - """ - - def __init__(self, d_model, dropout_rate, max_len=5000): - """Initialize class. - - :param int d_model: embedding dim - :param float dropout_rate: dropout rate - :param int max_len: maximum input length - - """ - super().__init__(d_model, dropout_rate, max_len, reverse=True) - - def forward(self, x): - """Compute positional encoding. - - Args: - x (torch.Tensor): Input. Its shape is (batch, time, ...) - - Returns: - torch.Tensor: x. Its shape is (batch, time, ...) - torch.Tensor: pos_emb. Its shape is (1, time, ...) 
- - """ - self.extend_pe(x) - x = x * self.xscale - pos_emb = self.pe[:, : x.size(1)] - return self.dropout(x), self.dropout(pos_emb) diff --git a/spaces/kllmagn/sberbank-ai-rugpt3large_based_on_gpt2/README.md b/spaces/kllmagn/sberbank-ai-rugpt3large_based_on_gpt2/README.md deleted file mode 100644 index 9075a708f9f91b6413151d7f1c489538a4bd24aa..0000000000000000000000000000000000000000 --- a/spaces/kllmagn/sberbank-ai-rugpt3large_based_on_gpt2/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Sberbank-ai-rugpt3large Based On Gpt2 -emoji: 💻 -colorFrom: yellow -colorTo: indigo -sdk: gradio -sdk_version: 3.16.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/kmirijan/NBA-Stats/README.md b/spaces/kmirijan/NBA-Stats/README.md deleted file mode 100644 index fc9ac935ea5ecf1c2557a9174ac826b3c1db94f6..0000000000000000000000000000000000000000 --- a/spaces/kmirijan/NBA-Stats/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: NBA Stats -emoji: 📉 -colorFrom: blue -colorTo: blue -sdk: gradio -sdk_version: 3.32.0 -app_file: app.py -pinned: false -license: gpl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/kukuhtw/VToonify/vtoonify/model/raft/alt_cuda_corr/setup.py b/spaces/kukuhtw/VToonify/vtoonify/model/raft/alt_cuda_corr/setup.py deleted file mode 100644 index c0207ff285ffac4c8146c79d154f12416dbef48c..0000000000000000000000000000000000000000 --- a/spaces/kukuhtw/VToonify/vtoonify/model/raft/alt_cuda_corr/setup.py +++ /dev/null @@ -1,15 +0,0 @@ -from setuptools import setup -from torch.utils.cpp_extension import BuildExtension, CUDAExtension - - -setup( - name='correlation', - ext_modules=[ - CUDAExtension('alt_cuda_corr', - sources=['correlation.cpp', 'correlation_kernel.cu'], - extra_compile_args={'cxx': [], 'nvcc': ['-O3']}), - ], - cmdclass={ - 'build_ext': BuildExtension - }) - diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-0028e07e.js b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-0028e07e.js deleted file mode 100644 index 8da18a5500bbf6fc1b563e449c429d1503b95ecd..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-0028e07e.js +++ /dev/null @@ -1,67 +0,0 @@ -import{S as Y,i as Z,s as ee,G as k,I as X,C as I,M as g,g as j,E as U,K as le,F as $,q as B,N as Ue,f as mt,u as ec,b as tc,L as mo,J as _e,a1 as vo,a0 as cr,H as Re,D as bo,B as ho,l as Ve,t as ae,o as ze,p as Q,r as rc,c as Zt,e as er,m as tr,n as rr}from"./index-8c3da1d9.js";import{g as ac,a as nc}from"./_commonjsHelpers-042e6b4d.js";import{E as lc}from"./Image-27b9d089.js";import{c as oc}from"./csv-b0b7514a.js";import{d as ic}from"./dsv-576afacd.js";import{E as uc}from"./Model3D-6764e7f5.js";function sc(e,t){for(var r=0;ra[n]})}}}return Object.freeze(Object.defineProperty(e,Symbol.toStringTag,{value:"Module"}))}var cc=ic(" "),dc=cc.parseRows,Ll={};function Ce(){return Ce=Object.assign||function(e){for(var t=1;t=0)&&(r[n]=e[n]);return r}var we={},ar={},fc={get exports(){return ar},set exports(e){ar=e}};(function(e){const r=(o=0)=>l=>`\x1B[${38+o};5;${l}m`,a=(o=0)=>(l,i,u)=>`\x1B[${38+o};2;${l};${i};${u}m`;function n(){const o=new 
Map,l={modifier:{reset:[0,0],bold:[1,22],dim:[2,22],italic:[3,23],underline:[4,24],overline:[53,55],inverse:[7,27],hidden:[8,28],strikethrough:[9,29]},color:{black:[30,39],red:[31,39],green:[32,39],yellow:[33,39],blue:[34,39],magenta:[35,39],cyan:[36,39],white:[37,39],blackBright:[90,39],redBright:[91,39],greenBright:[92,39],yellowBright:[93,39],blueBright:[94,39],magentaBright:[95,39],cyanBright:[96,39],whiteBright:[97,39]},bgColor:{bgBlack:[40,49],bgRed:[41,49],bgGreen:[42,49],bgYellow:[43,49],bgBlue:[44,49],bgMagenta:[45,49],bgCyan:[46,49],bgWhite:[47,49],bgBlackBright:[100,49],bgRedBright:[101,49],bgGreenBright:[102,49],bgYellowBright:[103,49],bgBlueBright:[104,49],bgMagentaBright:[105,49],bgCyanBright:[106,49],bgWhiteBright:[107,49]}};l.color.gray=l.color.blackBright,l.bgColor.bgGray=l.bgColor.bgBlackBright,l.color.grey=l.color.blackBright,l.bgColor.bgGrey=l.bgColor.bgBlackBright;for(const[i,u]of Object.entries(l)){for(const[s,p]of Object.entries(u))l[s]={open:`\x1B[${p[0]}m`,close:`\x1B[${p[1]}m`},u[s]=l[s],o.set(p[0],p[1]);Object.defineProperty(l,i,{value:u,enumerable:!1})}return Object.defineProperty(l,"codes",{value:o,enumerable:!1}),l.color.close="\x1B[39m",l.bgColor.close="\x1B[49m",l.color.ansi256=r(),l.color.ansi16m=a(),l.bgColor.ansi256=r(10),l.bgColor.ansi16m=a(10),Object.defineProperties(l,{rgbToAnsi256:{value:(i,u,s)=>i===u&&u===s?i<8?16:i>248?231:Math.round((i-8)/247*24)+232:16+36*Math.round(i/255*5)+6*Math.round(u/255*5)+Math.round(s/255*5),enumerable:!1},hexToRgb:{value:i=>{const u=/(?[a-f\d]{6}|[a-f\d]{3})/i.exec(i.toString(16));if(!u)return[0,0,0];let{colorString:s}=u.groups;s.length===3&&(s=s.split("").map(d=>d+d).join(""));const p=Number.parseInt(s,16);return[p>>16&255,p>>8&255,p&255]},enumerable:!1},hexToAnsi256:{value:i=>l.rgbToAnsi256(...l.hexToRgb(i)),enumerable:!1}}),l}Object.defineProperty(e,"exports",{enumerable:!0,get:n})})(fc);var qe={};Object.defineProperty(qe,"__esModule",{value:!0});qe.printIteratorEntries=mc;qe.printIteratorValues=vc;qe.printListItems=bc;qe.printObjectProperties=hc;const pc=(e,t)=>{const r=Object.keys(e).sort(t);return Object.getOwnPropertySymbols&&Object.getOwnPropertySymbols(e).forEach(a=>{Object.getOwnPropertyDescriptor(e,a).enumerable&&r.push(a)}),r};function mc(e,t,r,a,n,o,l=": "){let i="",u=e.next();if(!u.done){i+=t.spacingOuter;const s=r+t.indent;for(;!u.done;){const p=o(u.value[0],t,s,a,n),d=o(u.value[1],t,s,a,n);i+=s+p+l+d,u=e.next(),u.done?t.min||(i+=","):i+=","+t.spacingInner}i+=t.spacingOuter+r}return i}function vc(e,t,r,a,n,o){let l="",i=e.next();if(!i.done){l+=t.spacingOuter;const u=r+t.indent;for(;!i.done;)l+=u+o(i.value,t,u,a,n),i=e.next(),i.done?t.min||(l+=","):l+=","+t.spacingInner;l+=t.spacingOuter+r}return l}function bc(e,t,r,a,n,o){let l="";if(e.length){l+=t.spacingOuter;const i=r+t.indent;for(let u=0;u{const l=e.toString();return l==="ArrayContaining"||l==="ArrayNotContaining"?++a>t.maxDepth?"["+l+"]":l+zt+"["+(0,yo.printListItems)(e.sample,t,r,a,n,o)+"]":l==="ObjectContaining"||l==="ObjectNotContaining"?++a>t.maxDepth?"["+l+"]":l+zt+"{"+(0,yo.printObjectProperties)(e.sample,t,r,a,n,o)+"}":l==="StringMatching"||l==="StringNotMatching"||l==="StringContaining"||l==="StringNotContaining"?l+zt+o(e.sample,t,r,a,n):e.toAsymmetricMatcher()};Ie.serialize=Ai;const Si=e=>e&&e.$$typeof===yc;Ie.test=Si;const gc={serialize:Ai,test:Si};var _c=gc;Ie.default=_c;var je={},Ec=({onlyFirst:e=!1}={})=>{const 
t=["[\\u001B\\u009B][[\\]()#;?]*(?:(?:(?:(?:;[-a-zA-Z\\d\\/#&.:=?%@~_]+)*|[a-zA-Z\\d]+(?:;[-a-zA-Z\\d\\/#&.:=?%@~_]*)*)?\\u0007)","(?:(?:\\d{1,4}(?:;\\d{0,4})*)?[\\dA-PR-TZcf-ntqry=><~]))"].join("|");return new RegExp(t,e?void 0:"g")};Object.defineProperty(je,"__esModule",{value:!0});je.test=je.serialize=je.default=void 0;var xi=Ni(Ec),V=Ni(ar);function Ni(e){return e&&e.__esModule?e:{default:e}}const Rc=e=>e.replace((0,xi.default)(),t=>{switch(t){case V.default.red.close:case V.default.green.close:case V.default.cyan.close:case V.default.gray.close:case V.default.white.close:case V.default.yellow.close:case V.default.bgRed.close:case V.default.bgGreen.close:case V.default.bgYellow.close:case V.default.inverse.close:case V.default.dim.close:case V.default.bold.close:case V.default.reset.open:case V.default.reset.close:return"";case V.default.red.open:return"";case V.default.green.open:return"";case V.default.cyan.open:return"";case V.default.gray.open:return"";case V.default.white.open:return"";case V.default.yellow.open:return"";case V.default.bgRed.open:return"";case V.default.bgGreen.open:return"";case V.default.bgYellow.open:return"";case V.default.inverse.open:return"";case V.default.dim.open:return"";case V.default.bold.open:return"";default:return""}}),Ii=e=>typeof e=="string"&&!!e.match((0,xi.default)());je.test=Ii;const ji=(e,t,r,a,n,o)=>o(Rc(e),t,r,a,n);je.serialize=ji;const Cc={serialize:ji,test:Ii};var Pc=Cc;je.default=Pc;var Be={};Object.defineProperty(Be,"__esModule",{value:!0});Be.test=Be.serialize=Be.default=void 0;var go=qe;const wc=" ",Bi=["DOMStringMap","NamedNodeMap"],qc=/^(HTML\w*Collection|NodeList)$/,Tc=e=>Bi.indexOf(e)!==-1||qc.test(e),Li=e=>e&&e.constructor&&!!e.constructor.name&&Tc(e.constructor.name);Be.test=Li;const Oc=e=>e.constructor.name==="NamedNodeMap",ki=(e,t,r,a,n,o)=>{const l=e.constructor.name;return++a>t.maxDepth?"["+l+"]":(t.min?"":l+wc)+(Bi.indexOf(l)!==-1?"{"+(0,go.printObjectProperties)(Oc(e)?Array.from(e).reduce((i,u)=>(i[u.name]=u.value,i),{}):{...e},t,r,a,n,o)+"}":"["+(0,go.printListItems)(Array.from(e),t,r,a,n,o)+"]")};Be.serialize=ki;const Mc={serialize:ki,test:Li};var Ac=Mc;Be.default=Ac;var Le={},re={},kl={};Object.defineProperty(kl,"__esModule",{value:!0});kl.default=Sc;function Sc(e){return e.replace(//g,">")}Object.defineProperty(re,"__esModule",{value:!0});re.printText=re.printProps=re.printElementAsLeaf=re.printElement=re.printComment=re.printChildren=void 0;var Fi=xc(kl);function xc(e){return e&&e.__esModule?e:{default:e}}const Nc=(e,t,r,a,n,o,l)=>{const i=a+r.indent,u=r.colors;return e.map(s=>{const p=t[s];let d=l(p,r,i,n,o);return typeof p!="string"&&(d.indexOf(` -`)!==-1&&(d=r.spacingOuter+i+d+r.spacingOuter+a),d="{"+d+"}"),r.spacingInner+a+u.prop.open+s+u.prop.close+"="+u.value.open+d+u.value.close}).join("")};re.printProps=Nc;const Ic=(e,t,r,a,n,o)=>e.map(l=>t.spacingOuter+r+(typeof l=="string"?Di(l,t):o(l,t,r,a,n))).join("");re.printChildren=Ic;const Di=(e,t)=>{const r=t.colors.content;return r.open+(0,Fi.default)(e)+r.close};re.printText=Di;const jc=(e,t)=>{const r=t.colors.comment;return r.open+""+r.close};re.printComment=jc;const Bc=(e,t,r,a,n)=>{const o=a.colors.tag;return o.open+"<"+e+(t&&o.close+t+a.spacingOuter+n+o.open)+(r?">"+o.close+r+a.spacingOuter+n+o.open+""+o.close};re.printElement=Bc;const Lc=(e,t)=>{const r=t.colors.tag;return r.open+"<"+e+r.close+" …"+r.open+" />"+r.close};re.printElementAsLeaf=Lc;Object.defineProperty(Le,"__esModule",{value:!0});Le.test=Le.serialize=Le.default=void 0;var rt=re;const 
kc=1,$i=3,Ui=8,Hi=11,Fc=/^((HTML|SVG)\w*)?Element$/,Dc=e=>{try{return typeof e.hasAttribute=="function"&&e.hasAttribute("is")}catch{return!1}},$c=e=>{const t=e.constructor.name,{nodeType:r,tagName:a}=e,n=typeof a=="string"&&a.includes("-")||Dc(e);return r===kc&&(Fc.test(t)||n)||r===$i&&t==="Text"||r===Ui&&t==="Comment"||r===Hi&&t==="DocumentFragment"},Vi=e=>{var t;return(e==null||(t=e.constructor)===null||t===void 0?void 0:t.name)&&$c(e)};Le.test=Vi;function Uc(e){return e.nodeType===$i}function Hc(e){return e.nodeType===Ui}function al(e){return e.nodeType===Hi}const zi=(e,t,r,a,n,o)=>{if(Uc(e))return(0,rt.printText)(e.data,t);if(Hc(e))return(0,rt.printComment)(e.data,t);const l=al(e)?"DocumentFragment":e.tagName.toLowerCase();return++a>t.maxDepth?(0,rt.printElementAsLeaf)(l,t):(0,rt.printElement)(l,(0,rt.printProps)(al(e)?[]:Array.from(e.attributes).map(i=>i.name).sort(),al(e)?{}:Array.from(e.attributes).reduce((i,u)=>(i[u.name]=u.value,i),{}),t,r+t.indent,a,n,o),(0,rt.printChildren)(Array.prototype.slice.call(e.childNodes||e.children),t,r+t.indent,a,n,o),t,r)};Le.serialize=zi;const Vc={serialize:zi,test:Vi};var zc=Vc;Le.default=zc;var ke={};Object.defineProperty(ke,"__esModule",{value:!0});ke.test=ke.serialize=ke.default=void 0;var gt=qe;const Wc="@@__IMMUTABLE_ITERABLE__@@",Gc="@@__IMMUTABLE_LIST__@@",Qc="@@__IMMUTABLE_KEYED__@@",Xc="@@__IMMUTABLE_MAP__@@",_o="@@__IMMUTABLE_ORDERED__@@",Kc="@@__IMMUTABLE_RECORD__@@",Jc="@@__IMMUTABLE_SEQ__@@",Yc="@@__IMMUTABLE_SET__@@",Zc="@@__IMMUTABLE_STACK__@@",dt=e=>"Immutable."+e,dr=e=>"["+e+"]",_t=" ",Eo="…",ed=(e,t,r,a,n,o,l)=>++a>t.maxDepth?dr(dt(l)):dt(l)+_t+"{"+(0,gt.printIteratorEntries)(e.entries(),t,r,a,n,o)+"}";function td(e){let t=0;return{next(){if(t{const l=dt(e._name||"Record");return++a>t.maxDepth?dr(l):l+_t+"{"+(0,gt.printIteratorEntries)(td(e),t,r,a,n,o)+"}"},ad=(e,t,r,a,n,o)=>{const l=dt("Seq");return++a>t.maxDepth?dr(l):e[Qc]?l+_t+"{"+(e._iter||e._object?(0,gt.printIteratorEntries)(e.entries(),t,r,a,n,o):Eo)+"}":l+_t+"["+(e._iter||e._array||e._collection||e._iterable?(0,gt.printIteratorValues)(e.values(),t,r,a,n,o):Eo)+"]"},nl=(e,t,r,a,n,o,l)=>++a>t.maxDepth?dr(dt(l)):dt(l)+_t+"["+(0,gt.printIteratorValues)(e.values(),t,r,a,n,o)+"]",Wi=(e,t,r,a,n,o)=>e[Xc]?ed(e,t,r,a,n,o,e[_o]?"OrderedMap":"Map"):e[Gc]?nl(e,t,r,a,n,o,"List"):e[Yc]?nl(e,t,r,a,n,o,e[_o]?"OrderedSet":"Set"):e[Zc]?nl(e,t,r,a,n,o,"Stack"):e[Jc]?ad(e,t,r,a,n,o):rd(e,t,r,a,n,o);ke.serialize=Wi;const Gi=e=>e&&(e[Wc]===!0||e[Kc]===!0);ke.test=Gi;const nd={serialize:Wi,test:Gi};var ld=nd;ke.default=ld;var Fe={},hl={},od={get exports(){return hl},set exports(e){hl=e}},H={};/** @license React v17.0.2 - * react-is.production.min.js - * - * Copyright (c) Facebook, Inc. and its affiliates. - * - * This source code is licensed under the MIT license found in the - * LICENSE file in the root directory of this source tree. 
- */var fr=60103,pr=60106,Ct=60107,Pt=60108,wt=60114,qt=60109,Tt=60110,Ot=60112,Mt=60113,Fl=60120,At=60115,St=60116,Qi=60121,Xi=60122,Ki=60117,Ji=60129,Yi=60131;if(typeof Symbol=="function"&&Symbol.for){var J=Symbol.for;fr=J("react.element"),pr=J("react.portal"),Ct=J("react.fragment"),Pt=J("react.strict_mode"),wt=J("react.profiler"),qt=J("react.provider"),Tt=J("react.context"),Ot=J("react.forward_ref"),Mt=J("react.suspense"),Fl=J("react.suspense_list"),At=J("react.memo"),St=J("react.lazy"),Qi=J("react.block"),Xi=J("react.server.block"),Ki=J("react.fundamental"),Ji=J("react.debug_trace_mode"),Yi=J("react.legacy_hidden")}function Ee(e){if(typeof e=="object"&&e!==null){var t=e.$$typeof;switch(t){case fr:switch(e=e.type,e){case Ct:case wt:case Pt:case Mt:case Fl:return e;default:switch(e=e&&e.$$typeof,e){case Tt:case Ot:case St:case At:case qt:return e;default:return t}}case pr:return t}}}var id=qt,ud=fr,sd=Ot,cd=Ct,dd=St,fd=At,pd=pr,md=wt,vd=Pt,bd=Mt;H.ContextConsumer=Tt;H.ContextProvider=id;H.Element=ud;H.ForwardRef=sd;H.Fragment=cd;H.Lazy=dd;H.Memo=fd;H.Portal=pd;H.Profiler=md;H.StrictMode=vd;H.Suspense=bd;H.isAsyncMode=function(){return!1};H.isConcurrentMode=function(){return!1};H.isContextConsumer=function(e){return Ee(e)===Tt};H.isContextProvider=function(e){return Ee(e)===qt};H.isElement=function(e){return typeof e=="object"&&e!==null&&e.$$typeof===fr};H.isForwardRef=function(e){return Ee(e)===Ot};H.isFragment=function(e){return Ee(e)===Ct};H.isLazy=function(e){return Ee(e)===St};H.isMemo=function(e){return Ee(e)===At};H.isPortal=function(e){return Ee(e)===pr};H.isProfiler=function(e){return Ee(e)===wt};H.isStrictMode=function(e){return Ee(e)===Pt};H.isSuspense=function(e){return Ee(e)===Mt};H.isValidElementType=function(e){return typeof e=="string"||typeof e=="function"||e===Ct||e===wt||e===Ji||e===Pt||e===Mt||e===Fl||e===Yi||typeof e=="object"&&e!==null&&(e.$$typeof===St||e.$$typeof===At||e.$$typeof===qt||e.$$typeof===Tt||e.$$typeof===Ot||e.$$typeof===Ki||e.$$typeof===Qi||e[0]===Xi)};H.typeOf=Ee;(function(e){e.exports=H})(od);Object.defineProperty(Fe,"__esModule",{value:!0});Fe.test=Fe.serialize=Fe.default=void 0;var Qe=hd(hl),Wt=re;function Zi(e){if(typeof WeakMap!="function")return null;var t=new WeakMap,r=new WeakMap;return(Zi=function(a){return a?r:t})(e)}function hd(e,t){if(!t&&e&&e.__esModule)return e;if(e===null||typeof e!="object"&&typeof e!="function")return{default:e};var r=Zi(t);if(r&&r.has(e))return r.get(e);var a={},n=Object.defineProperty&&Object.getOwnPropertyDescriptor;for(var o in e)if(o!=="default"&&Object.prototype.hasOwnProperty.call(e,o)){var l=n?Object.getOwnPropertyDescriptor(e,o):null;l&&(l.get||l.set)?Object.defineProperty(a,o,l):a[o]=e[o]}return a.default=e,r&&r.set(e,a),a}const eu=(e,t=[])=>(Array.isArray(e)?e.forEach(r=>{eu(r,t)}):e!=null&&e!==!1&&t.push(e),t),Ro=e=>{const t=e.type;if(typeof t=="string")return t;if(typeof t=="function")return t.displayName||t.name||"Unknown";if(Qe.isFragment(e))return"React.Fragment";if(Qe.isSuspense(e))return"React.Suspense";if(typeof t=="object"&&t!==null){if(Qe.isContextProvider(e))return"Context.Provider";if(Qe.isContextConsumer(e))return"Context.Consumer";if(Qe.isForwardRef(e)){if(t.displayName)return t.displayName;const r=t.render.displayName||t.render.name||"";return r!==""?"ForwardRef("+r+")":"ForwardRef"}if(Qe.isMemo(e)){const r=t.displayName||t.type.displayName||t.type.name||"";return r!==""?"Memo("+r+")":"Memo"}}return"UNDEFINED"},yd=e=>{const{props:t}=e;return Object.keys(t).filter(r=>r!=="children"&&t[r]!==void 
0).sort()},tu=(e,t,r,a,n,o)=>++a>t.maxDepth?(0,Wt.printElementAsLeaf)(Ro(e),t):(0,Wt.printElement)(Ro(e),(0,Wt.printProps)(yd(e),e.props,t,r+t.indent,a,n,o),(0,Wt.printChildren)(eu(e.props.children),t,r+t.indent,a,n,o),t,r);Fe.serialize=tu;const ru=e=>e!=null&&Qe.isElement(e);Fe.test=ru;const gd={serialize:tu,test:ru};var _d=gd;Fe.default=_d;var De={};Object.defineProperty(De,"__esModule",{value:!0});De.test=De.serialize=De.default=void 0;var Gt=re,lr=function(){return typeof globalThis<"u"?globalThis:typeof lr<"u"?lr:typeof self<"u"?self:typeof window<"u"?window:Function("return this")()}(),ll=lr["jest-symbol-do-not-touch"]||lr.Symbol;const Ed=typeof ll=="function"&&ll.for?ll.for("react.test.json"):245830487,Rd=e=>{const{props:t}=e;return t?Object.keys(t).filter(r=>t[r]!==void 0).sort():[]},au=(e,t,r,a,n,o)=>++a>t.maxDepth?(0,Gt.printElementAsLeaf)(e.type,t):(0,Gt.printElement)(e.type,e.props?(0,Gt.printProps)(Rd(e),e.props,t,r+t.indent,a,n,o):"",e.children?(0,Gt.printChildren)(e.children,t,r+t.indent,a,n,o):"",t,r);De.serialize=au;const nu=e=>e&&e.$$typeof===Ed;De.test=nu;const Cd={serialize:au,test:nu};var Pd=Cd;De.default=Pd;Object.defineProperty(we,"__esModule",{value:!0});var lu=we.default=mu=we.DEFAULT_OPTIONS=void 0,ou=we.format=yu,Dl=we.plugins=void 0,wd=We(ar),yt=qe,qd=We(Ie),Td=We(je),Od=We(Be),Md=We(Le),Ad=We(ke),Sd=We(Fe),xd=We(De);function We(e){return e&&e.__esModule?e:{default:e}}const iu=Object.prototype.toString,Nd=Date.prototype.toISOString,Id=Error.prototype.toString,Co=RegExp.prototype.toString,ol=e=>typeof e.constructor=="function"&&e.constructor.name||"Object",jd=e=>typeof window<"u"&&e===window,Bd=/^Symbol\((.*)\)(.*)$/,Ld=/\n/gi;class uu extends Error{constructor(t,r){super(t),this.stack=r,this.name=this.constructor.name}}function kd(e){return e==="[object Array]"||e==="[object ArrayBuffer]"||e==="[object DataView]"||e==="[object Float32Array]"||e==="[object Float64Array]"||e==="[object Int8Array]"||e==="[object Int16Array]"||e==="[object Int32Array]"||e==="[object Uint8Array]"||e==="[object Uint8ClampedArray]"||e==="[object Uint16Array]"||e==="[object Uint32Array]"}function Fd(e){return Object.is(e,-0)?"-0":String(e)}function Dd(e){return`${e}n`}function Po(e,t){return t?"[Function "+(e.name||"anonymous")+"]":"[Function]"}function wo(e){return String(e).replace(Bd,"Symbol($1)")}function qo(e){return"["+Id.call(e)+"]"}function su(e,t,r,a){if(e===!0||e===!1)return""+e;if(e===void 0)return"undefined";if(e===null)return"null";const n=typeof e;if(n==="number")return Fd(e);if(n==="bigint")return Dd(e);if(n==="string")return a?'"'+e.replace(/"|\\/g,"\\$&")+'"':'"'+e+'"';if(n==="function")return Po(e,t);if(n==="symbol")return wo(e);const o=iu.call(e);return o==="[object WeakMap]"?"WeakMap {}":o==="[object WeakSet]"?"WeakSet {}":o==="[object Function]"||o==="[object GeneratorFunction]"?Po(e,t):o==="[object Symbol]"?wo(e):o==="[object Date]"?isNaN(+e)?"Date { NaN }":Nd.call(e):o==="[object Error]"?qo(e):o==="[object RegExp]"?r?Co.call(e).replace(/[\\^$*+?.()|[\]{}]/g,"\\$&"):Co.call(e):e instanceof Error?qo(e):null}function cu(e,t,r,a,n,o){if(n.indexOf(e)!==-1)return"[Circular]";n=n.slice(),n.push(e);const l=++a>t.maxDepth,i=t.min;if(t.callToJSON&&!l&&e.toJSON&&typeof e.toJSON=="function"&&!o)return Ne(e.toJSON(),t,r,a,n,!0);const u=iu.call(e);return u==="[object Arguments]"?l?"[Arguments]":(i?"":"Arguments ")+"["+(0,yt.printListItems)(e,t,r,a,n,Ne)+"]":kd(u)?l?"["+e.constructor.name+"]":(i||!t.printBasicPrototype&&e.constructor.name==="Array"?"":e.constructor.name+" 
")+"["+(0,yt.printListItems)(e,t,r,a,n,Ne)+"]":u==="[object Map]"?l?"[Map]":"Map {"+(0,yt.printIteratorEntries)(e.entries(),t,r,a,n,Ne," => ")+"}":u==="[object Set]"?l?"[Set]":"Set {"+(0,yt.printIteratorValues)(e.values(),t,r,a,n,Ne)+"}":l||jd(e)?"["+ol(e)+"]":(i||!t.printBasicPrototype&&ol(e)==="Object"?"":ol(e)+" ")+"{"+(0,yt.printObjectProperties)(e,t,r,a,n,Ne)+"}"}function $d(e){return e.serialize!=null}function du(e,t,r,a,n,o){let l;try{l=$d(e)?e.serialize(t,r,a,n,o,Ne):e.print(t,i=>Ne(i,r,a,n,o),i=>{const u=a+r.indent;return u+i.replace(Ld,` -`+u)},{edgeSpacing:r.spacingOuter,min:r.min,spacing:r.spacingInner},r.colors)}catch(i){throw new uu(i.message,i.stack)}if(typeof l!="string")throw new Error(`pretty-format: Plugin must return type "string" but instead returned "${typeof l}".`);return l}function fu(e,t){for(let r=0;r{if(!he.hasOwnProperty(t))throw new Error(`pretty-format: Unknown option "${t}".`)}),e.min&&e.indent!==void 0&&e.indent!==0)throw new Error('pretty-format: Options "min" and "indent" cannot be used together.');if(e.theme!==void 0){if(e.theme===null)throw new Error('pretty-format: Option "theme" must not be null.');if(typeof e.theme!="object")throw new Error(`pretty-format: Option "theme" must be of type "object" but instead received "${typeof e.theme}".`)}}const Hd=e=>pu.reduce((t,r)=>{const a=e.theme&&e.theme[r]!==void 0?e.theme[r]:$l[r],n=a&&wd.default[a];if(n&&typeof n.close=="string"&&typeof n.open=="string")t[r]=n;else throw new Error(`pretty-format: Option "theme" has a key "${r}" whose value "${a}" is undefined in ansi-styles.`);return t},Object.create(null)),Vd=()=>pu.reduce((e,t)=>(e[t]={close:"",open:""},e),Object.create(null)),vu=e=>e&&e.printFunctionName!==void 0?e.printFunctionName:he.printFunctionName,bu=e=>e&&e.escapeRegex!==void 0?e.escapeRegex:he.escapeRegex,hu=e=>e&&e.escapeString!==void 0?e.escapeString:he.escapeString,To=e=>{var t;return{callToJSON:e&&e.callToJSON!==void 0?e.callToJSON:he.callToJSON,colors:e&&e.highlight?Hd(e):Vd(),compareKeys:e&&typeof e.compareKeys=="function"?e.compareKeys:he.compareKeys,escapeRegex:bu(e),escapeString:hu(e),indent:e&&e.min?"":zd(e&&e.indent!==void 0?e.indent:he.indent),maxDepth:e&&e.maxDepth!==void 0?e.maxDepth:he.maxDepth,min:e&&e.min!==void 0?e.min:he.min,plugins:e&&e.plugins!==void 0?e.plugins:he.plugins,printBasicPrototype:(t=e?.printBasicPrototype)!==null&&t!==void 0?t:!0,printFunctionName:vu(e),spacingInner:e&&e.min?" 
":` -`,spacingOuter:e&&e.min?"":` -`}};function zd(e){return new Array(e+1).join(" ")}function yu(e,t){if(t&&(Ud(t),t.plugins)){const a=fu(t.plugins,e);if(a!==null)return du(a,e,To(t),"",0,[])}const r=su(e,vu(t),bu(t),hu(t));return r!==null?r:cu(e,To(t),"",0,[])}const Wd={AsymmetricMatcher:qd.default,ConvertAnsi:Td.default,DOMCollection:Od.default,DOMElement:Md.default,Immutable:Ad.default,ReactElement:Sd.default,ReactTestComponent:xd.default};Dl=we.plugins=Wd;var Gd=yu;lu=we.default=Gd;const Qd=sc({__proto__:null,get DEFAULT_OPTIONS(){return mu},get default(){return lu},format:ou,get plugins(){return Dl}},[we]);var Xd=Object.prototype.toString;function Oo(e){return typeof e=="function"||Xd.call(e)==="[object Function]"}function Kd(e){var t=Number(e);return isNaN(t)?0:t===0||!isFinite(t)?t:(t>0?1:-1)*Math.floor(Math.abs(t))}var Jd=Math.pow(2,53)-1;function Yd(e){var t=Kd(e);return Math.min(Math.max(t,0),Jd)}function ye(e,t){var r=Array,a=Object(e);if(e==null)throw new TypeError("Array.from requires an array-like object - not null or undefined");if(typeof t<"u"&&!Oo(t))throw new TypeError("Array.from: when provided, the second argument must be a function");for(var n=Yd(a.length),o=Oo(r)?Object(new r(n)):new Array(n),l=0,i;l0&&arguments[0]!==void 0?arguments[0]:[];Zd(this,e),tf(this,"items",void 0),this.items=t}return ef(e,[{key:"add",value:function(r){return this.has(r)===!1&&this.items.push(r),this}},{key:"clear",value:function(){this.items=[]}},{key:"delete",value:function(r){var a=this.items.length;return this.items=this.items.filter(function(n){return n!==r}),a!==this.items.length}},{key:"forEach",value:function(r){var a=this;this.items.forEach(function(n){r(n,n,a)})}},{key:"has",value:function(r){return this.items.indexOf(r)!==-1}},{key:"size",get:function(){return this.items.length}}]),e}();const af=typeof Set>"u"?Set:rf;function ne(e){var t;return(t=e.localName)!==null&&t!==void 0?t:e.tagName.toLowerCase()}var nf={article:"article",aside:"complementary",button:"button",datalist:"listbox",dd:"definition",details:"group",dialog:"dialog",dt:"term",fieldset:"group",figure:"figure",form:"form",footer:"contentinfo",h1:"heading",h2:"heading",h3:"heading",h4:"heading",h5:"heading",h6:"heading",header:"banner",hr:"separator",html:"document",legend:"legend",li:"listitem",math:"math",main:"main",menu:"list",nav:"navigation",ol:"list",optgroup:"group",option:"option",output:"status",progress:"progressbar",section:"region",summary:"button",table:"table",tbody:"rowgroup",textarea:"textbox",tfoot:"rowgroup",td:"cell",th:"columnheader",thead:"rowgroup",tr:"row",ul:"list"},lf={caption:new Set(["aria-label","aria-labelledby"]),code:new Set(["aria-label","aria-labelledby"]),deletion:new Set(["aria-label","aria-labelledby"]),emphasis:new Set(["aria-label","aria-labelledby"]),generic:new Set(["aria-label","aria-labelledby","aria-roledescription"]),insertion:new Set(["aria-label","aria-labelledby"]),paragraph:new Set(["aria-label","aria-labelledby"]),presentation:new Set(["aria-label","aria-labelledby"]),strong:new Set(["aria-label","aria-labelledby"]),subscript:new Set(["aria-label","aria-labelledby"]),superscript:new Set(["aria-label","aria-labelledby"])};function of(e,t){return["aria-atomic","aria-busy","aria-controls","aria-current","aria-describedby","aria-details","aria-dropeffect","aria-flowto","aria-grabbed","aria-hidden","aria-keyshortcuts","aria-label","aria-labelledby","aria-live","aria-owns","aria-relevant","aria-roledescription"].some(function(r){var a;return 
e.hasAttribute(r)&&!((a=lf[t])!==null&&a!==void 0&&a.has(r))})}function gu(e,t){return of(e,t)}function uf(e){var t=cf(e);if(t===null||t==="presentation"){var r=sf(e);if(t!=="presentation"||gu(e,r||""))return r}return t}function sf(e){var t=nf[ne(e)];if(t!==void 0)return t;switch(ne(e)){case"a":case"area":case"link":if(e.hasAttribute("href"))return"link";break;case"img":return e.getAttribute("alt")===""&&!gu(e,"img")?"presentation":"img";case"input":{var r=e,a=r.type;switch(a){case"button":case"image":case"reset":case"submit":return"button";case"checkbox":case"radio":return a;case"range":return"slider";case"email":case"tel":case"text":case"url":return e.hasAttribute("list")?"combobox":"textbox";case"search":return e.hasAttribute("list")?"combobox":"searchbox";case"number":return"spinbutton";default:return null}}case"select":return e.hasAttribute("multiple")||e.size>1?"listbox":"combobox"}return null}function cf(e){var t=e.getAttribute("role");if(t!==null){var r=t.trim().split(" ")[0];if(r.length>0)return r}return null}function G(e){return e!==null&&e.nodeType===e.ELEMENT_NODE}function _u(e){return G(e)&&ne(e)==="caption"}function Jt(e){return G(e)&&ne(e)==="input"}function df(e){return G(e)&&ne(e)==="optgroup"}function ff(e){return G(e)&&ne(e)==="select"}function pf(e){return G(e)&&ne(e)==="table"}function mf(e){return G(e)&&ne(e)==="textarea"}function vf(e){var t=e.ownerDocument===null?e:e.ownerDocument,r=t.defaultView;if(r===null)throw new TypeError("no window available");return r}function bf(e){return G(e)&&ne(e)==="fieldset"}function hf(e){return G(e)&&ne(e)==="legend"}function yf(e){return G(e)&&ne(e)==="slot"}function gf(e){return G(e)&&e.ownerSVGElement!==void 0}function _f(e){return G(e)&&ne(e)==="svg"}function Ef(e){return gf(e)&&ne(e)==="title"}function yl(e,t){if(G(e)&&e.hasAttribute(t)){var r=e.getAttribute(t).split(" ");return r.map(function(a){return e.ownerDocument.getElementById(a)}).filter(function(a){return a!==null})}return[]}function Pe(e,t){return G(e)?t.indexOf(uf(e))!==-1:!1}function Rf(e){return e.trim().replace(/\s\s+/g," ")}function Cf(e,t){if(!G(e))return!1;if(e.hasAttribute("hidden")||e.getAttribute("aria-hidden")==="true")return!0;var r=t(e);return r.getPropertyValue("display")==="none"||r.getPropertyValue("visibility")==="hidden"}function Pf(e){return Pe(e,["button","combobox","listbox","textbox"])||Eu(e,"range")}function Eu(e,t){if(!G(e))return!1;switch(t){case"range":return Pe(e,["meter","progressbar","scrollbar","slider","spinbutton"]);default:throw new TypeError("No knowledge about abstract role '".concat(t,"'. 
This is likely a bug :("))}}function Ao(e,t){var r=ye(e.querySelectorAll(t));return yl(e,"aria-owns").forEach(function(a){r.push.apply(r,ye(a.querySelectorAll(t)))}),r}function wf(e){return ff(e)?e.selectedOptions||Ao(e,"[selected]"):Ao(e,'[aria-selected="true"]')}function qf(e){return Pe(e,["none","presentation"])}function Tf(e){return _u(e)}function Of(e){return Pe(e,["button","cell","checkbox","columnheader","gridcell","heading","label","legend","link","menuitem","menuitemcheckbox","menuitemradio","option","radio","row","rowheader","switch","tab","tooltip","treeitem"])}function Mf(e){return!1}function Af(e){return Jt(e)||mf(e)?e.value:e.textContent||""}function So(e){var t=e.getPropertyValue("content");return/^["'].*["']$/.test(t)?t.slice(1,-1):""}function Ru(e){var t=ne(e);return t==="button"||t==="input"&&e.getAttribute("type")!=="hidden"||t==="meter"||t==="output"||t==="progress"||t==="select"||t==="textarea"}function Cu(e){if(Ru(e))return e;var t=null;return e.childNodes.forEach(function(r){if(t===null&&G(r)){var a=Cu(r);a!==null&&(t=a)}}),t}function Sf(e){if(e.control!==void 0)return e.control;var t=e.getAttribute("for");return t!==null?e.ownerDocument.getElementById(t):Cu(e)}function xf(e){var t=e.labels;if(t===null)return t;if(t!==void 0)return ye(t);if(!Ru(e))return null;var r=e.ownerDocument;return ye(r.querySelectorAll("label")).filter(function(a){return Sf(a)===e})}function Nf(e){var t=e.assignedNodes();return t.length===0?ye(e.childNodes):t}function If(e){var t=arguments.length>1&&arguments[1]!==void 0?arguments[1]:{},r=new af,a=vf(e),n=t.compute,o=n===void 0?"name":n,l=t.computedStyleSupportsPseudoElements,i=l===void 0?t.getComputedStyle!==void 0:l,u=t.getComputedStyle,s=u===void 0?a.getComputedStyle.bind(a):u,p=t.hidden,d=p===void 0?!1:p;function m(f,R){var E="";if(G(f)&&i){var T=s(f,"::before"),O=So(T);E="".concat(O," ").concat(E)}var A=yf(f)?Nf(f):ye(f.childNodes).concat(yl(f,"aria-owns"));if(A.forEach(function(w){var q=y(w,{isEmbeddedInLabel:R.isEmbeddedInLabel,isReferenced:!1,recursion:!0}),z=G(w)?s(w).getPropertyValue("display"):"inline",c=z!=="inline"?" 
":"";E+="".concat(c).concat(q).concat(c)}),G(f)&&i){var S=s(f,"::after"),b=So(S);E="".concat(E," ").concat(b)}return E.trim()}function v(f){if(!G(f))return null;function R(x,_){var h=x.getAttributeNode(_);return h!==null&&!r.has(h)&&h.value.trim()!==""?(r.add(h),h.value):null}if(bf(f)){r.add(f);for(var E=ye(f.childNodes),T=0;T0}).join(" ");if(Jt(f)&&f.type==="image"){var oe=R(f,"alt");if(oe!==null)return oe;var K=R(f,"title");return K!==null?K:"Submit Query"}if(Pe(f,["button"])){var ue=m(f,{isEmbeddedInLabel:!1,isReferenced:!1});return ue!==""?ue:R(f,"title")}return R(f,"title")}function y(f,R){if(r.has(f))return"";if(!d&&Cf(f,s)&&!R.isReferenced)return r.add(f),"";var E=yl(f,"aria-labelledby");if(o==="name"&&!R.isReferenced&&E.length>0)return E.map(function(b){return y(b,{isEmbeddedInLabel:R.isEmbeddedInLabel,isReferenced:!0,recursion:!1})}).join(" ");var T=R.recursion&&Pf(f)&&o==="name";if(!T){var O=(G(f)&&f.getAttribute("aria-label")||"").trim();if(O!==""&&o==="name")return r.add(f),O;if(!qf(f)){var A=v(f);if(A!==null)return r.add(f),A}}if(Pe(f,["menu"]))return r.add(f),"";if(T||R.isEmbeddedInLabel||R.isReferenced){if(Pe(f,["combobox","listbox"])){r.add(f);var S=wf(f);return S.length===0?Jt(f)?f.value:"":ye(S).map(function(b){return y(b,{isEmbeddedInLabel:R.isEmbeddedInLabel,isReferenced:!1,recursion:!0})}).join(" ")}if(Eu(f,"range"))return r.add(f),f.hasAttribute("aria-valuetext")?f.getAttribute("aria-valuetext"):f.hasAttribute("aria-valuenow")?f.getAttribute("aria-valuenow"):f.getAttribute("value")||"";if(Pe(f,["textbox"]))return r.add(f),Af(f)}return Of(f)||G(f)&&R.isReferenced||Tf(f)||Mf()?(r.add(f),m(f,{isEmbeddedInLabel:R.isEmbeddedInLabel,isReferenced:!1})):f.nodeType===f.TEXT_NODE?(r.add(f),f.textContent||""):R.recursion?(r.add(f),m(f,{isEmbeddedInLabel:R.isEmbeddedInLabel,isReferenced:!1})):(r.add(f),"")}return Rf(y(e,{isEmbeddedInLabel:!1,isReferenced:o==="description",recursion:!1}))}function jf(e){return Pe(e,["caption","code","deletion","emphasis","generic","insertion","paragraph","presentation","strong","subscript","superscript"])}function Ul(e){var t=arguments.length>1&&arguments[1]!==void 0?arguments[1]:{};return jf(e)?"":If(e,t)}var ge={},mr={};Object.defineProperty(mr,"__esModule",{value:!0});mr.default=void 0;function xo(e,t){return Ff(e)||kf(e,t)||Lf(e,t)||Bf()}function Bf(){throw new TypeError(`Invalid attempt to destructure non-iterable instance. 
-In order to be iterable, non-array objects must have a [Symbol.iterator]() method.`)}function Lf(e,t){if(e){if(typeof e=="string")return No(e,t);var r=Object.prototype.toString.call(e).slice(8,-1);if(r==="Object"&&e.constructor&&(r=e.constructor.name),r==="Map"||r==="Set")return Array.from(e);if(r==="Arguments"||/^(?:Ui|I)nt(?:8|16|32)(?:Clamped)?Array$/.test(r))return No(e,t)}}function No(e,t){(t==null||t>e.length)&&(t=e.length);for(var r=0,a=new Array(t);re.length)&&(t=e.length);for(var r=0,a=new Array(t);r1"],name:"size"},{name:"multiple"}],name:"select"},module:"HTML"},{concept:{attributes:[{constraints:[">1"],name:"size"}],name:"select"},module:"HTML"},{concept:{attributes:[{name:"multiple"}],name:"select"},module:"HTML"},{concept:{name:"datalist"},module:"HTML"},{concept:{name:"list"},module:"ARIA"},{concept:{name:"select"},module:"XForms"}],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[["option","group"],["option"]],requiredProps:{},superClass:[["roletype","widget","composite","select"],["roletype","structure","section","group","select"]]},zm=Vm;sa.default=zm;var ca={};Object.defineProperty(ca,"__esModule",{value:!0});ca.default=void 0;var Wm={abstract:!1,accessibleNameRequired:!1,baseConcepts:[],childrenPresentational:!1,nameFrom:["author"],prohibitedProps:[],props:{"aria-level":null,"aria-posinset":null,"aria-setsize":null},relatedConcepts:[{concept:{constraints:["direct descendant of ol, ul or menu"],name:"li"},module:"HTML"},{concept:{name:"item"},module:"XForms"}],requireContextRole:["directory","list"],requiredContextRole:["directory","list"],requiredOwnedElements:[],requiredProps:{},superClass:[["roletype","structure","section"]]},Gm=Wm;ca.default=Gm;var da={};Object.defineProperty(da,"__esModule",{value:!0});da.default=void 0;var Qm={abstract:!1,accessibleNameRequired:!1,baseConcepts:[],childrenPresentational:!1,nameFrom:["author"],prohibitedProps:[],props:{"aria-live":"polite"},relatedConcepts:[],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[],requiredProps:{},superClass:[["roletype","structure","section"]]},Xm=Qm;da.default=Xm;var fa={};Object.defineProperty(fa,"__esModule",{value:!0});fa.default=void 0;var Km={abstract:!1,accessibleNameRequired:!1,baseConcepts:[],childrenPresentational:!1,nameFrom:["author"],prohibitedProps:[],props:{},relatedConcepts:[{concept:{name:"main"},module:"HTML"}],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[],requiredProps:{},superClass:[["roletype","structure","section","landmark"]]},Jm=Km;fa.default=Jm;var pa={};Object.defineProperty(pa,"__esModule",{value:!0});pa.default=void 0;var Ym={abstract:!1,accessibleNameRequired:!0,baseConcepts:[],childrenPresentational:!1,nameFrom:["author"],prohibitedProps:[],props:{},relatedConcepts:[],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[],requiredProps:{},superClass:[["roletype","structure","section"]]},Zm=Ym;pa.default=Zm;var ma={};Object.defineProperty(ma,"__esModule",{value:!0});ma.default=void 0;var ev={abstract:!1,accessibleNameRequired:!1,baseConcepts:[],childrenPresentational:!1,nameFrom:["author"],prohibitedProps:[],props:{},relatedConcepts:[{concept:{name:"math"},module:"HTML"}],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[],requiredProps:{},superClass:[["roletype","structure","section"]]},tv=ev;ma.default=tv;var va={};Object.defineProperty(va,"__esModule",{value:!0});va.default=void 0;var 
rv={abstract:!1,accessibleNameRequired:!1,baseConcepts:[],childrenPresentational:!1,nameFrom:["author"],prohibitedProps:[],props:{"aria-orientation":"vertical"},relatedConcepts:[{concept:{name:"MENU"},module:"JAPI"},{concept:{name:"list"},module:"ARIA"},{concept:{name:"select"},module:"XForms"},{concept:{name:"sidebar"},module:"DTB"}],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[["menuitem","group"],["menuitemradio","group"],["menuitemcheckbox","group"],["menuitem"],["menuitemcheckbox"],["menuitemradio"]],requiredProps:{},superClass:[["roletype","widget","composite","select"],["roletype","structure","section","group","select"]]},av=rv;va.default=av;var ba={};Object.defineProperty(ba,"__esModule",{value:!0});ba.default=void 0;var nv={abstract:!1,accessibleNameRequired:!1,baseConcepts:[],childrenPresentational:!1,nameFrom:["author"],prohibitedProps:[],props:{"aria-orientation":"horizontal"},relatedConcepts:[{concept:{name:"toolbar"},module:"ARIA"}],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[["menuitem","group"],["menuitemradio","group"],["menuitemcheckbox","group"],["menuitem"],["menuitemcheckbox"],["menuitemradio"]],requiredProps:{},superClass:[["roletype","widget","composite","select","menu"],["roletype","structure","section","group","select","menu"]]},lv=nv;ba.default=lv;var ha={};Object.defineProperty(ha,"__esModule",{value:!0});ha.default=void 0;var ov={abstract:!1,accessibleNameRequired:!0,baseConcepts:[],childrenPresentational:!1,nameFrom:["author","contents"],prohibitedProps:[],props:{"aria-disabled":null,"aria-expanded":null,"aria-haspopup":null,"aria-posinset":null,"aria-setsize":null},relatedConcepts:[{concept:{name:"MENU_ITEM"},module:"JAPI"},{concept:{name:"listitem"},module:"ARIA"},{concept:{name:"menuitem"},module:"HTML"},{concept:{name:"option"},module:"ARIA"}],requireContextRole:["group","menu","menubar"],requiredContextRole:["group","menu","menubar"],requiredOwnedElements:[],requiredProps:{},superClass:[["roletype","widget","command"]]},iv=ov;ha.default=iv;var ya={};Object.defineProperty(ya,"__esModule",{value:!0});ya.default=void 0;var uv={abstract:!1,accessibleNameRequired:!0,baseConcepts:[],childrenPresentational:!0,nameFrom:["author","contents"],prohibitedProps:[],props:{},relatedConcepts:[{concept:{name:"menuitem"},module:"ARIA"}],requireContextRole:["group","menu","menubar"],requiredContextRole:["group","menu","menubar"],requiredOwnedElements:[],requiredProps:{"aria-checked":null},superClass:[["roletype","widget","input","checkbox"],["roletype","widget","command","menuitem"]]},sv=uv;ya.default=sv;var ga={};Object.defineProperty(ga,"__esModule",{value:!0});ga.default=void 0;var cv={abstract:!1,accessibleNameRequired:!0,baseConcepts:[],childrenPresentational:!0,nameFrom:["author","contents"],prohibitedProps:[],props:{},relatedConcepts:[{concept:{name:"menuitem"},module:"ARIA"}],requireContextRole:["group","menu","menubar"],requiredContextRole:["group","menu","menubar"],requiredOwnedElements:[],requiredProps:{"aria-checked":null},superClass:[["roletype","widget","input","checkbox","menuitemcheckbox"],["roletype","widget","command","menuitem","menuitemcheckbox"],["roletype","widget","input","radio"]]},dv=cv;ga.default=dv;var _a={};Object.defineProperty(_a,"__esModule",{value:!0});_a.default=void 0;var 
fv={abstract:!1,accessibleNameRequired:!0,baseConcepts:[],childrenPresentational:!0,nameFrom:["author"],prohibitedProps:[],props:{"aria-valuetext":null,"aria-valuemax":"100","aria-valuemin":"0"},relatedConcepts:[],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[],requiredProps:{"aria-valuenow":null},superClass:[["roletype","structure","range"]]},pv=fv;_a.default=pv;var Ea={};Object.defineProperty(Ea,"__esModule",{value:!0});Ea.default=void 0;var mv={abstract:!1,accessibleNameRequired:!1,baseConcepts:[],childrenPresentational:!1,nameFrom:["author"],prohibitedProps:[],props:{},relatedConcepts:[{concept:{name:"nav"},module:"HTML"}],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[],requiredProps:{},superClass:[["roletype","structure","section","landmark"]]},vv=mv;Ea.default=vv;var Ra={};Object.defineProperty(Ra,"__esModule",{value:!0});Ra.default=void 0;var bv={abstract:!1,accessibleNameRequired:!1,baseConcepts:[],childrenPresentational:!1,nameFrom:[],prohibitedProps:[],props:{},relatedConcepts:[],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[],requiredProps:{},superClass:[]},hv=bv;Ra.default=hv;var Ca={};Object.defineProperty(Ca,"__esModule",{value:!0});Ca.default=void 0;var yv={abstract:!1,accessibleNameRequired:!1,baseConcepts:[],childrenPresentational:!1,nameFrom:["author"],prohibitedProps:[],props:{},relatedConcepts:[],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[],requiredProps:{},superClass:[["roletype","structure","section"]]},gv=yv;Ca.default=gv;var Pa={};Object.defineProperty(Pa,"__esModule",{value:!0});Pa.default=void 0;var _v={abstract:!1,accessibleNameRequired:!0,baseConcepts:[],childrenPresentational:!0,nameFrom:["author","contents"],prohibitedProps:[],props:{"aria-checked":null,"aria-posinset":null,"aria-setsize":null,"aria-selected":"false"},relatedConcepts:[{concept:{name:"item"},module:"XForms"},{concept:{name:"listitem"},module:"ARIA"},{concept:{name:"option"},module:"HTML"}],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[],requiredProps:{"aria-selected":"false"},superClass:[["roletype","widget","input"]]},Ev=_v;Pa.default=Ev;var wa={};Object.defineProperty(wa,"__esModule",{value:!0});wa.default=void 0;var Rv={abstract:!1,accessibleNameRequired:!1,baseConcepts:[],childrenPresentational:!1,nameFrom:["prohibited"],prohibitedProps:["aria-label","aria-labelledby"],props:{},relatedConcepts:[],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[],requiredProps:{},superClass:[["roletype","structure","section"]]},Cv=Rv;wa.default=Cv;var qa={};Object.defineProperty(qa,"__esModule",{value:!0});qa.default=void 0;var Pv={abstract:!1,accessibleNameRequired:!1,baseConcepts:[],childrenPresentational:!1,nameFrom:["prohibited"],prohibitedProps:["aria-label","aria-labelledby"],props:{},relatedConcepts:[],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[],requiredProps:{},superClass:[["roletype","structure"]]},wv=Pv;qa.default=wv;var Ta={};Object.defineProperty(Ta,"__esModule",{value:!0});Ta.default=void 0;var qv={abstract:!1,accessibleNameRequired:!0,baseConcepts:[],childrenPresentational:!0,nameFrom:["author"],prohibitedProps:[],props:{"aria-valuetext":null},relatedConcepts:[{concept:{name:"progress"},module:"HTML"},{concept:{name:"status"},module:"ARIA"}],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[],requiredProps:{},superClass:[["roletype","structure","range"],["roletype","widget"]]},Tv=qv;Ta.default=Tv;var 
Oa={};Object.defineProperty(Oa,"__esModule",{value:!0});Oa.default=void 0;var Ov={abstract:!1,accessibleNameRequired:!0,baseConcepts:[],childrenPresentational:!0,nameFrom:["author","contents"],prohibitedProps:[],props:{"aria-checked":null,"aria-posinset":null,"aria-setsize":null},relatedConcepts:[{concept:{attributes:[{name:"type",value:"radio"}],name:"input"},module:"HTML"}],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[],requiredProps:{"aria-checked":null},superClass:[["roletype","widget","input"]]},Mv=Ov;Oa.default=Mv;var Ma={};Object.defineProperty(Ma,"__esModule",{value:!0});Ma.default=void 0;var Av={abstract:!1,accessibleNameRequired:!0,baseConcepts:[],childrenPresentational:!1,nameFrom:["author"],prohibitedProps:[],props:{"aria-errormessage":null,"aria-invalid":null,"aria-readonly":null,"aria-required":null},relatedConcepts:[{concept:{name:"list"},module:"ARIA"}],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[["radio"]],requiredProps:{},superClass:[["roletype","widget","composite","select"],["roletype","structure","section","group","select"]]},Sv=Av;Ma.default=Sv;var Aa={};Object.defineProperty(Aa,"__esModule",{value:!0});Aa.default=void 0;var xv={abstract:!1,accessibleNameRequired:!0,baseConcepts:[],childrenPresentational:!1,nameFrom:["author"],prohibitedProps:[],props:{},relatedConcepts:[{concept:{attributes:[{constraints:["set"],name:"aria-label"}],name:"section"},module:"HTML"},{concept:{attributes:[{constraints:["set"],name:"aria-labelledby"}],name:"section"},module:"HTML"},{concept:{name:"Device Independence Glossart perceivable unit"}},{concept:{name:"frame"},module:"HTML"}],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[],requiredProps:{},superClass:[["roletype","structure","section","landmark"]]},Nv=xv;Aa.default=Nv;var Sa={};Object.defineProperty(Sa,"__esModule",{value:!0});Sa.default=void 0;var Iv={abstract:!1,accessibleNameRequired:!1,baseConcepts:[],childrenPresentational:!1,nameFrom:["author","contents"],prohibitedProps:[],props:{"aria-colindex":null,"aria-expanded":null,"aria-level":null,"aria-posinset":null,"aria-rowindex":null,"aria-selected":null,"aria-setsize":null},relatedConcepts:[{concept:{name:"tr"},module:"HTML"}],requireContextRole:["grid","rowgroup","table","treegrid"],requiredContextRole:["grid","rowgroup","table","treegrid"],requiredOwnedElements:[["cell"],["columnheader"],["gridcell"],["rowheader"]],requiredProps:{},superClass:[["roletype","structure","section","group"],["roletype","widget"]]},jv=Iv;Sa.default=jv;var xa={};Object.defineProperty(xa,"__esModule",{value:!0});xa.default=void 0;var Bv={abstract:!1,accessibleNameRequired:!1,baseConcepts:[],childrenPresentational:!1,nameFrom:["author","contents"],prohibitedProps:[],props:{},relatedConcepts:[{concept:{name:"tbody"},module:"HTML"},{concept:{name:"tfoot"},module:"HTML"},{concept:{name:"thead"},module:"HTML"}],requireContextRole:["grid","table","treegrid"],requiredContextRole:["grid","table","treegrid"],requiredOwnedElements:[["row"]],requiredProps:{},superClass:[["roletype","structure"]]},Lv=Bv;xa.default=Lv;var Na={};Object.defineProperty(Na,"__esModule",{value:!0});Na.default=void 0;var 
kv={abstract:!1,accessibleNameRequired:!0,baseConcepts:[],childrenPresentational:!1,nameFrom:["author","contents"],prohibitedProps:[],props:{"aria-sort":null},relatedConcepts:[{concept:{attributes:[{name:"scope",value:"row"}],name:"th"},module:"HTML"}],requireContextRole:["row"],requiredContextRole:["row"],requiredOwnedElements:[],requiredProps:{},superClass:[["roletype","structure","section","cell"],["roletype","structure","section","cell","gridcell"],["roletype","widget","gridcell"],["roletype","structure","sectionhead"]]},Fv=kv;Na.default=Fv;var Ia={};Object.defineProperty(Ia,"__esModule",{value:!0});Ia.default=void 0;var Dv={abstract:!1,accessibleNameRequired:!1,baseConcepts:[],childrenPresentational:!0,nameFrom:["author"],prohibitedProps:[],props:{"aria-disabled":null,"aria-valuetext":null,"aria-orientation":"vertical","aria-valuemax":"100","aria-valuemin":"0"},relatedConcepts:[],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[],requiredProps:{"aria-controls":null,"aria-valuenow":null},superClass:[["roletype","structure","range"],["roletype","widget"]]},$v=Dv;Ia.default=$v;var ja={};Object.defineProperty(ja,"__esModule",{value:!0});ja.default=void 0;var Uv={abstract:!1,accessibleNameRequired:!1,baseConcepts:[],childrenPresentational:!1,nameFrom:["author"],prohibitedProps:[],props:{},relatedConcepts:[],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[],requiredProps:{},superClass:[["roletype","structure","section","landmark"]]},Hv=Uv;ja.default=Hv;var Ba={};Object.defineProperty(Ba,"__esModule",{value:!0});Ba.default=void 0;var Vv={abstract:!1,accessibleNameRequired:!0,baseConcepts:[],childrenPresentational:!1,nameFrom:["author"],prohibitedProps:[],props:{},relatedConcepts:[{concept:{attributes:[{constraints:["undefined"],name:"list"},{name:"type",value:"search"}],name:"input"},module:"HTML"}],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[],requiredProps:{},superClass:[["roletype","widget","input","textbox"]]},zv=Vv;Ba.default=zv;var La={};Object.defineProperty(La,"__esModule",{value:!0});La.default=void 0;var Wv={abstract:!1,accessibleNameRequired:!1,baseConcepts:[],childrenPresentational:!0,nameFrom:["author"],prohibitedProps:[],props:{"aria-disabled":null,"aria-orientation":"horizontal","aria-valuemax":"100","aria-valuemin":"0","aria-valuenow":null,"aria-valuetext":null},relatedConcepts:[{concept:{name:"hr"},module:"HTML"}],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[],requiredProps:{},superClass:[["roletype","structure"]]},Gv=Wv;La.default=Gv;var ka={};Object.defineProperty(ka,"__esModule",{value:!0});ka.default=void 0;var Qv={abstract:!1,accessibleNameRequired:!0,baseConcepts:[],childrenPresentational:!0,nameFrom:["author"],prohibitedProps:[],props:{"aria-errormessage":null,"aria-haspopup":null,"aria-invalid":null,"aria-readonly":null,"aria-valuetext":null,"aria-orientation":"horizontal","aria-valuemax":"100","aria-valuemin":"0"},relatedConcepts:[{concept:{attributes:[{name:"type",value:"range"}],name:"input"},module:"HTML"}],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[],requiredProps:{"aria-valuenow":null},superClass:[["roletype","widget","input"],["roletype","structure","range"]]},Xv=Qv;ka.default=Xv;var Fa={};Object.defineProperty(Fa,"__esModule",{value:!0});Fa.default=void 0;var 
Kv={abstract:!1,accessibleNameRequired:!0,baseConcepts:[],childrenPresentational:!1,nameFrom:["author"],prohibitedProps:[],props:{"aria-errormessage":null,"aria-invalid":null,"aria-readonly":null,"aria-required":null,"aria-valuetext":null,"aria-valuenow":"0"},relatedConcepts:[{concept:{attributes:[{name:"type",value:"number"}],name:"input"},module:"HTML"}],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[],requiredProps:{},superClass:[["roletype","widget","composite"],["roletype","widget","input"],["roletype","structure","range"]]},Jv=Kv;Fa.default=Jv;var Da={};Object.defineProperty(Da,"__esModule",{value:!0});Da.default=void 0;var Yv={abstract:!1,accessibleNameRequired:!1,baseConcepts:[],childrenPresentational:!1,nameFrom:["author"],prohibitedProps:[],props:{"aria-atomic":"true","aria-live":"polite"},relatedConcepts:[{concept:{name:"output"},module:"HTML"}],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[],requiredProps:{},superClass:[["roletype","structure","section"]]},Zv=Yv;Da.default=Zv;var $a={};Object.defineProperty($a,"__esModule",{value:!0});$a.default=void 0;var eb={abstract:!1,accessibleNameRequired:!1,baseConcepts:[],childrenPresentational:!1,nameFrom:["prohibited"],prohibitedProps:["aria-label","aria-labelledby"],props:{},relatedConcepts:[],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[],requiredProps:{},superClass:[["roletype","structure","section"]]},tb=eb;$a.default=tb;var Ua={};Object.defineProperty(Ua,"__esModule",{value:!0});Ua.default=void 0;var rb={abstract:!1,accessibleNameRequired:!1,baseConcepts:[],childrenPresentational:!1,nameFrom:["prohibited"],prohibitedProps:["aria-label","aria-labelledby"],props:{},relatedConcepts:[],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[],requiredProps:{},superClass:[["roletype","structure","section"]]},ab=rb;Ua.default=ab;var Ha={};Object.defineProperty(Ha,"__esModule",{value:!0});Ha.default=void 0;var nb={abstract:!1,accessibleNameRequired:!1,baseConcepts:[],childrenPresentational:!1,nameFrom:["prohibited"],prohibitedProps:["aria-label","aria-labelledby"],props:{},relatedConcepts:[],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[],requiredProps:{},superClass:[["roletype","structure","section"]]},lb=nb;Ha.default=lb;var Va={};Object.defineProperty(Va,"__esModule",{value:!0});Va.default=void 0;var ob={abstract:!1,accessibleNameRequired:!0,baseConcepts:[],childrenPresentational:!0,nameFrom:["author","contents"],prohibitedProps:[],props:{},relatedConcepts:[{concept:{name:"button"},module:"ARIA"}],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[],requiredProps:{"aria-checked":null},superClass:[["roletype","widget","input","checkbox"]]},ib=ob;Va.default=ib;var za={};Object.defineProperty(za,"__esModule",{value:!0});za.default=void 0;var ub={abstract:!1,accessibleNameRequired:!1,baseConcepts:[],childrenPresentational:!0,nameFrom:["author","contents"],prohibitedProps:[],props:{"aria-disabled":null,"aria-expanded":null,"aria-haspopup":null,"aria-posinset":null,"aria-setsize":null,"aria-selected":"false"},relatedConcepts:[],requireContextRole:["tablist"],requiredContextRole:["tablist"],requiredOwnedElements:[],requiredProps:{},superClass:[["roletype","structure","sectionhead"],["roletype","widget"]]},sb=ub;za.default=sb;var Wa={};Object.defineProperty(Wa,"__esModule",{value:!0});Wa.default=void 0;var 
cb={abstract:!1,accessibleNameRequired:!0,baseConcepts:[],childrenPresentational:!1,nameFrom:["author"],prohibitedProps:[],props:{"aria-colcount":null,"aria-rowcount":null},relatedConcepts:[{concept:{name:"table"},module:"HTML"}],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[["row"],["row","rowgroup"]],requiredProps:{},superClass:[["roletype","structure","section"]]},db=cb;Wa.default=db;var Ga={};Object.defineProperty(Ga,"__esModule",{value:!0});Ga.default=void 0;var fb={abstract:!1,accessibleNameRequired:!1,baseConcepts:[],childrenPresentational:!1,nameFrom:["author"],prohibitedProps:[],props:{"aria-level":null,"aria-multiselectable":null,"aria-orientation":"horizontal"},relatedConcepts:[{module:"DAISY",concept:{name:"guide"}}],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[["tab"]],requiredProps:{},superClass:[["roletype","widget","composite"]]},pb=fb;Ga.default=pb;var Qa={};Object.defineProperty(Qa,"__esModule",{value:!0});Qa.default=void 0;var mb={abstract:!1,accessibleNameRequired:!0,baseConcepts:[],childrenPresentational:!1,nameFrom:["author"],prohibitedProps:[],props:{},relatedConcepts:[],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[],requiredProps:{},superClass:[["roletype","structure","section"]]},vb=mb;Qa.default=vb;var Xa={};Object.defineProperty(Xa,"__esModule",{value:!0});Xa.default=void 0;var bb={abstract:!1,accessibleNameRequired:!1,baseConcepts:[],childrenPresentational:!1,nameFrom:["author"],prohibitedProps:[],props:{},relatedConcepts:[{concept:{name:"dfn"},module:"HTML"},{concept:{name:"dt"},module:"HTML"}],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[],requiredProps:{},superClass:[["roletype","structure","section"]]},hb=bb;Xa.default=hb;var Ka={};Object.defineProperty(Ka,"__esModule",{value:!0});Ka.default=void 0;var yb={abstract:!1,accessibleNameRequired:!0,baseConcepts:[],childrenPresentational:!1,nameFrom:["author"],prohibitedProps:[],props:{"aria-activedescendant":null,"aria-autocomplete":null,"aria-errormessage":null,"aria-haspopup":null,"aria-invalid":null,"aria-multiline":null,"aria-placeholder":null,"aria-readonly":null,"aria-required":null},relatedConcepts:[{concept:{attributes:[{constraints:["undefined"],name:"type"},{constraints:["undefined"],name:"list"}],name:"input"},module:"HTML"},{concept:{attributes:[{constraints:["undefined"],name:"list"},{name:"type",value:"email"}],name:"input"},module:"HTML"},{concept:{attributes:[{constraints:["undefined"],name:"list"},{name:"type",value:"tel"}],name:"input"},module:"HTML"},{concept:{attributes:[{constraints:["undefined"],name:"list"},{name:"type",value:"text"}],name:"input"},module:"HTML"},{concept:{attributes:[{constraints:["undefined"],name:"list"},{name:"type",value:"url"}],name:"input"},module:"HTML"},{concept:{name:"input"},module:"XForms"},{concept:{name:"textarea"},module:"HTML"}],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[],requiredProps:{},superClass:[["roletype","widget","input"]]},gb=yb;Ka.default=gb;var Ja={};Object.defineProperty(Ja,"__esModule",{value:!0});Ja.default=void 0;var _b={abstract:!1,accessibleNameRequired:!1,baseConcepts:[],childrenPresentational:!1,nameFrom:["author"],prohibitedProps:[],props:{},relatedConcepts:[],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[],requiredProps:{},superClass:[["roletype","structure","section"]]},Eb=_b;Ja.default=Eb;var Ya={};Object.defineProperty(Ya,"__esModule",{value:!0});Ya.default=void 0;var 
Rb={abstract:!1,accessibleNameRequired:!1,baseConcepts:[],childrenPresentational:!1,nameFrom:["author"],prohibitedProps:[],props:{},relatedConcepts:[],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[],requiredProps:{},superClass:[["roletype","structure","section","status"]]},Cb=Rb;Ya.default=Cb;var Za={};Object.defineProperty(Za,"__esModule",{value:!0});Za.default=void 0;var Pb={abstract:!1,accessibleNameRequired:!1,baseConcepts:[],childrenPresentational:!1,nameFrom:["author"],prohibitedProps:[],props:{"aria-orientation":"horizontal"},relatedConcepts:[{concept:{name:"menubar"},module:"ARIA"}],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[],requiredProps:{},superClass:[["roletype","structure","section","group"]]},wb=Pb;Za.default=wb;var en={};Object.defineProperty(en,"__esModule",{value:!0});en.default=void 0;var qb={abstract:!1,accessibleNameRequired:!0,baseConcepts:[],childrenPresentational:!1,nameFrom:["author","contents"],prohibitedProps:[],props:{},relatedConcepts:[],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[],requiredProps:{},superClass:[["roletype","structure","section"]]},Tb=qb;en.default=Tb;var tn={};Object.defineProperty(tn,"__esModule",{value:!0});tn.default=void 0;var Ob={abstract:!1,accessibleNameRequired:!0,baseConcepts:[],childrenPresentational:!1,nameFrom:["author"],prohibitedProps:[],props:{"aria-errormessage":null,"aria-invalid":null,"aria-multiselectable":null,"aria-required":null,"aria-orientation":"vertical"},relatedConcepts:[],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[["treeitem","group"],["treeitem"]],requiredProps:{},superClass:[["roletype","widget","composite","select"],["roletype","structure","section","group","select"]]},Mb=Ob;tn.default=Mb;var rn={};Object.defineProperty(rn,"__esModule",{value:!0});rn.default=void 0;var Ab={abstract:!1,accessibleNameRequired:!0,baseConcepts:[],childrenPresentational:!1,nameFrom:["author"],prohibitedProps:[],props:{},relatedConcepts:[],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[["row"],["row","rowgroup"]],requiredProps:{},superClass:[["roletype","widget","composite","grid"],["roletype","structure","section","table","grid"],["roletype","widget","composite","select","tree"],["roletype","structure","section","group","select","tree"]]},Sb=Ab;rn.default=Sb;var an={};Object.defineProperty(an,"__esModule",{value:!0});an.default=void 0;var xb={abstract:!1,accessibleNameRequired:!0,baseConcepts:[],childrenPresentational:!1,nameFrom:["author","contents"],prohibitedProps:[],props:{"aria-expanded":null,"aria-haspopup":null},relatedConcepts:[],requireContextRole:["group","tree"],requiredContextRole:["group","tree"],requiredOwnedElements:[],requiredProps:{"aria-selected":null},superClass:[["roletype","structure","section","listitem"],["roletype","widget","input","option"]]},Nb=xb;an.default=Nb;Object.defineProperty(Mr,"__esModule",{value:!0});Mr.default=void 0;var 
Ib=P(Ar),jb=P(Sr),Bb=P(xr),Lb=P(Nr),kb=P(Ir),Fb=P(jr),Db=P(Br),$b=P(Lr),Ub=P(kr),Hb=P(Fr),Vb=P(Dr),zb=P($r),Wb=P(Ur),Gb=P(Hr),Qb=P(Vr),Xb=P(zr),Kb=P(Wr),Jb=P(Gr),Yb=P(Qr),Zb=P(Xr),eh=P(Kr),th=P(Jr),rh=P(Yr),ah=P(Zr),nh=P(ea),lh=P(ta),oh=P(ra),ih=P(aa),uh=P(na),sh=P(la),ch=P(oa),dh=P(ia),fh=P(ua),ph=P(sa),mh=P(ca),vh=P(da),bh=P(fa),hh=P(pa),yh=P(ma),gh=P(va),_h=P(ba),Eh=P(ha),Rh=P(ya),Ch=P(ga),Ph=P(_a),wh=P(Ea),qh=P(Ra),Th=P(Ca),Oh=P(Pa),Mh=P(wa),Ah=P(qa),Sh=P(Ta),xh=P(Oa),Nh=P(Ma),Ih=P(Aa),jh=P(Sa),Bh=P(xa),Lh=P(Na),kh=P(Ia),Fh=P(ja),Dh=P(Ba),$h=P(La),Uh=P(ka),Hh=P(Fa),Vh=P(Da),zh=P($a),Wh=P(Ua),Gh=P(Ha),Qh=P(Va),Xh=P(za),Kh=P(Wa),Jh=P(Ga),Yh=P(Qa),Zh=P(Xa),ey=P(Ka),ty=P(Ja),ry=P(Ya),ay=P(Za),ny=P(en),ly=P(tn),oy=P(rn),iy=P(an);function P(e){return e&&e.__esModule?e:{default:e}}var uy=[["alert",Ib.default],["alertdialog",jb.default],["application",Bb.default],["article",Lb.default],["banner",kb.default],["blockquote",Fb.default],["button",Db.default],["caption",$b.default],["cell",Ub.default],["checkbox",Hb.default],["code",Vb.default],["columnheader",zb.default],["combobox",Wb.default],["complementary",Gb.default],["contentinfo",Qb.default],["definition",Xb.default],["deletion",Kb.default],["dialog",Jb.default],["directory",Yb.default],["document",Zb.default],["emphasis",eh.default],["feed",th.default],["figure",rh.default],["form",ah.default],["generic",nh.default],["grid",lh.default],["gridcell",oh.default],["group",ih.default],["heading",uh.default],["img",sh.default],["insertion",ch.default],["link",dh.default],["list",fh.default],["listbox",ph.default],["listitem",mh.default],["log",vh.default],["main",bh.default],["marquee",hh.default],["math",yh.default],["menu",gh.default],["menubar",_h.default],["menuitem",Eh.default],["menuitemcheckbox",Rh.default],["menuitemradio",Ch.default],["meter",Ph.default],["navigation",wh.default],["none",qh.default],["note",Th.default],["option",Oh.default],["paragraph",Mh.default],["presentation",Ah.default],["progressbar",Sh.default],["radio",xh.default],["radiogroup",Nh.default],["region",Ih.default],["row",jh.default],["rowgroup",Bh.default],["rowheader",Lh.default],["scrollbar",kh.default],["search",Fh.default],["searchbox",Dh.default],["separator",$h.default],["slider",Uh.default],["spinbutton",Hh.default],["status",Vh.default],["strong",zh.default],["subscript",Wh.default],["superscript",Gh.default],["switch",Qh.default],["tab",Xh.default],["table",Kh.default],["tablist",Jh.default],["tabpanel",Yh.default],["term",Zh.default],["textbox",ey.default],["time",ty.default],["timer",ry.default],["toolbar",ay.default],["tooltip",ny.default],["tree",ly.default],["treegrid",oy.default],["treeitem",iy.default]],sy=uy;Mr.default=sy;var nn={},ln={};Object.defineProperty(ln,"__esModule",{value:!0});ln.default=void 0;var cy={abstract:!1,accessibleNameRequired:!1,baseConcepts:[],childrenPresentational:!1,nameFrom:["author"],prohibitedProps:[],props:{"aria-disabled":null,"aria-errormessage":null,"aria-expanded":null,"aria-haspopup":null,"aria-invalid":null},relatedConcepts:[{concept:{name:"abstract [EPUB-SSV]"},module:"EPUB"}],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[],requiredProps:{},superClass:[["roletype","structure","section"]]},dy=cy;ln.default=dy;var on={};Object.defineProperty(on,"__esModule",{value:!0});on.default=void 0;var 
fy={abstract:!1,accessibleNameRequired:!1,baseConcepts:[],childrenPresentational:!1,nameFrom:["author"],prohibitedProps:[],props:{"aria-disabled":null,"aria-errormessage":null,"aria-expanded":null,"aria-haspopup":null,"aria-invalid":null},relatedConcepts:[{concept:{name:"acknowledgments [EPUB-SSV]"},module:"EPUB"}],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[],requiredProps:{},superClass:[["roletype","structure","section","landmark"]]},py=fy;on.default=py;var un={};Object.defineProperty(un,"__esModule",{value:!0});un.default=void 0;var my={abstract:!1,accessibleNameRequired:!1,baseConcepts:[],childrenPresentational:!1,nameFrom:["author"],prohibitedProps:[],props:{"aria-disabled":null,"aria-errormessage":null,"aria-expanded":null,"aria-haspopup":null,"aria-invalid":null},relatedConcepts:[{concept:{name:"afterword [EPUB-SSV]"},module:"EPUB"}],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[],requiredProps:{},superClass:[["roletype","structure","section","landmark"]]},vy=my;un.default=vy;var sn={};Object.defineProperty(sn,"__esModule",{value:!0});sn.default=void 0;var by={abstract:!1,accessibleNameRequired:!1,baseConcepts:[],childrenPresentational:!1,nameFrom:["author"],prohibitedProps:[],props:{"aria-disabled":null,"aria-errormessage":null,"aria-expanded":null,"aria-haspopup":null,"aria-invalid":null},relatedConcepts:[{concept:{name:"appendix [EPUB-SSV]"},module:"EPUB"}],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[],requiredProps:{},superClass:[["roletype","structure","section","landmark"]]},hy=by;sn.default=hy;var cn={};Object.defineProperty(cn,"__esModule",{value:!0});cn.default=void 0;var yy={abstract:!1,accessibleNameRequired:!0,baseConcepts:[],childrenPresentational:!1,nameFrom:["author","content"],prohibitedProps:[],props:{"aria-errormessage":null,"aria-invalid":null},relatedConcepts:[{concept:{name:"referrer [EPUB-SSV]"},module:"EPUB"}],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[],requiredProps:{},superClass:[["roletype","widget","command","link"]]},gy=yy;cn.default=gy;var dn={};Object.defineProperty(dn,"__esModule",{value:!0});dn.default=void 0;var _y={abstract:!1,accessibleNameRequired:!0,baseConcepts:[],childrenPresentational:!1,nameFrom:["author"],prohibitedProps:[],props:{"aria-disabled":null,"aria-errormessage":null,"aria-expanded":null,"aria-haspopup":null,"aria-invalid":null},relatedConcepts:[{concept:{name:"EPUB biblioentry [EPUB-SSV]"},module:"EPUB"}],requireContextRole:["doc-bibliography"],requiredContextRole:["doc-bibliography"],requiredOwnedElements:[],requiredProps:{},superClass:[["roletype","structure","section","listitem"]]},Ey=_y;dn.default=Ey;var fn={};Object.defineProperty(fn,"__esModule",{value:!0});fn.default=void 0;var Ry={abstract:!1,accessibleNameRequired:!1,baseConcepts:[],childrenPresentational:!1,nameFrom:["author"],prohibitedProps:[],props:{"aria-disabled":null,"aria-errormessage":null,"aria-expanded":null,"aria-haspopup":null,"aria-invalid":null},relatedConcepts:[{concept:{name:"bibliography [EPUB-SSV]"},module:"EPUB"}],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[["doc-biblioentry"]],requiredProps:{},superClass:[["roletype","structure","section","landmark"]]},Cy=Ry;fn.default=Cy;var pn={};Object.defineProperty(pn,"__esModule",{value:!0});pn.default=void 0;var 
Py={abstract:!1,accessibleNameRequired:!0,baseConcepts:[],childrenPresentational:!1,nameFrom:["author","contents"],prohibitedProps:[],props:{"aria-errormessage":null,"aria-invalid":null},relatedConcepts:[{concept:{name:"biblioref [EPUB-SSV]"},module:"EPUB"}],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[],requiredProps:{},superClass:[["roletype","widget","command","link"]]},wy=Py;pn.default=wy;var mn={};Object.defineProperty(mn,"__esModule",{value:!0});mn.default=void 0;var qy={abstract:!1,accessibleNameRequired:!1,baseConcepts:[],childrenPresentational:!1,nameFrom:["author"],prohibitedProps:[],props:{"aria-disabled":null,"aria-errormessage":null,"aria-expanded":null,"aria-haspopup":null,"aria-invalid":null},relatedConcepts:[{concept:{name:"chapter [EPUB-SSV]"},module:"EPUB"}],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[],requiredProps:{},superClass:[["roletype","structure","section","landmark"]]},Ty=qy;mn.default=Ty;var vn={};Object.defineProperty(vn,"__esModule",{value:!0});vn.default=void 0;var Oy={abstract:!1,accessibleNameRequired:!1,baseConcepts:[],childrenPresentational:!1,nameFrom:["author"],prohibitedProps:[],props:{"aria-disabled":null,"aria-errormessage":null,"aria-expanded":null,"aria-haspopup":null,"aria-invalid":null},relatedConcepts:[{concept:{name:"colophon [EPUB-SSV]"},module:"EPUB"}],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[],requiredProps:{},superClass:[["roletype","structure","section"]]},My=Oy;vn.default=My;var bn={};Object.defineProperty(bn,"__esModule",{value:!0});bn.default=void 0;var Ay={abstract:!1,accessibleNameRequired:!1,baseConcepts:[],childrenPresentational:!1,nameFrom:["author"],prohibitedProps:[],props:{"aria-disabled":null,"aria-errormessage":null,"aria-expanded":null,"aria-haspopup":null,"aria-invalid":null},relatedConcepts:[{concept:{name:"conclusion [EPUB-SSV]"},module:"EPUB"}],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[],requiredProps:{},superClass:[["roletype","structure","section","landmark"]]},Sy=Ay;bn.default=Sy;var hn={};Object.defineProperty(hn,"__esModule",{value:!0});hn.default=void 0;var xy={abstract:!1,accessibleNameRequired:!1,baseConcepts:[],childrenPresentational:!1,nameFrom:["author"],prohibitedProps:[],props:{"aria-disabled":null,"aria-errormessage":null,"aria-expanded":null,"aria-haspopup":null,"aria-invalid":null},relatedConcepts:[{concept:{name:"cover [EPUB-SSV]"},module:"EPUB"}],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[],requiredProps:{},superClass:[["roletype","structure","section","img"]]},Ny=xy;hn.default=Ny;var yn={};Object.defineProperty(yn,"__esModule",{value:!0});yn.default=void 0;var Iy={abstract:!1,accessibleNameRequired:!1,baseConcepts:[],childrenPresentational:!1,nameFrom:["author"],prohibitedProps:[],props:{"aria-disabled":null,"aria-errormessage":null,"aria-expanded":null,"aria-haspopup":null,"aria-invalid":null},relatedConcepts:[{concept:{name:"credit [EPUB-SSV]"},module:"EPUB"}],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[],requiredProps:{},superClass:[["roletype","structure","section"]]},jy=Iy;yn.default=jy;var gn={};Object.defineProperty(gn,"__esModule",{value:!0});gn.default=void 0;var By={abstract:!1,accessibleNameRequired:!1,baseConcepts:[],childrenPresentational:!1,nameFrom:["author"],prohibitedProps:[],props:{"aria-disabled":null,"aria-errormessage":null,"aria-expanded":null,"aria-haspopup":null,"aria-invalid":null},relatedConcepts:[{concept:{name:"credits 
[EPUB-SSV]"},module:"EPUB"}],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[],requiredProps:{},superClass:[["roletype","structure","section","landmark"]]},Ly=By;gn.default=Ly;var _n={};Object.defineProperty(_n,"__esModule",{value:!0});_n.default=void 0;var ky={abstract:!1,accessibleNameRequired:!1,baseConcepts:[],childrenPresentational:!1,nameFrom:["author"],prohibitedProps:[],props:{"aria-disabled":null,"aria-errormessage":null,"aria-expanded":null,"aria-haspopup":null,"aria-invalid":null},relatedConcepts:[{concept:{name:"dedication [EPUB-SSV]"},module:"EPUB"}],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[],requiredProps:{},superClass:[["roletype","structure","section"]]},Fy=ky;_n.default=Fy;var En={};Object.defineProperty(En,"__esModule",{value:!0});En.default=void 0;var Dy={abstract:!1,accessibleNameRequired:!1,baseConcepts:[],childrenPresentational:!1,nameFrom:["author"],prohibitedProps:[],props:{"aria-disabled":null,"aria-errormessage":null,"aria-expanded":null,"aria-haspopup":null,"aria-invalid":null},relatedConcepts:[{concept:{name:"rearnote [EPUB-SSV]"},module:"EPUB"}],requireContextRole:["doc-endnotes"],requiredContextRole:["doc-endnotes"],requiredOwnedElements:[],requiredProps:{},superClass:[["roletype","structure","section","listitem"]]},$y=Dy;En.default=$y;var Rn={};Object.defineProperty(Rn,"__esModule",{value:!0});Rn.default=void 0;var Uy={abstract:!1,accessibleNameRequired:!1,baseConcepts:[],childrenPresentational:!1,nameFrom:["author"],prohibitedProps:[],props:{"aria-disabled":null,"aria-errormessage":null,"aria-expanded":null,"aria-haspopup":null,"aria-invalid":null},relatedConcepts:[{concept:{name:"rearnotes [EPUB-SSV]"},module:"EPUB"}],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[["doc-endnote"]],requiredProps:{},superClass:[["roletype","structure","section","landmark"]]},Hy=Uy;Rn.default=Hy;var Cn={};Object.defineProperty(Cn,"__esModule",{value:!0});Cn.default=void 0;var Vy={abstract:!1,accessibleNameRequired:!1,baseConcepts:[],childrenPresentational:!1,nameFrom:["author"],prohibitedProps:[],props:{"aria-disabled":null,"aria-errormessage":null,"aria-expanded":null,"aria-haspopup":null,"aria-invalid":null},relatedConcepts:[{concept:{name:"epigraph [EPUB-SSV]"},module:"EPUB"}],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[],requiredProps:{},superClass:[["roletype","structure","section"]]},zy=Vy;Cn.default=zy;var Pn={};Object.defineProperty(Pn,"__esModule",{value:!0});Pn.default=void 0;var Wy={abstract:!1,accessibleNameRequired:!1,baseConcepts:[],childrenPresentational:!1,nameFrom:["author"],prohibitedProps:[],props:{"aria-disabled":null,"aria-errormessage":null,"aria-expanded":null,"aria-haspopup":null,"aria-invalid":null},relatedConcepts:[{concept:{name:"epilogue [EPUB-SSV]"},module:"EPUB"}],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[],requiredProps:{},superClass:[["roletype","structure","section","landmark"]]},Gy=Wy;Pn.default=Gy;var wn={};Object.defineProperty(wn,"__esModule",{value:!0});wn.default=void 0;var Qy={abstract:!1,accessibleNameRequired:!1,baseConcepts:[],childrenPresentational:!1,nameFrom:["author"],prohibitedProps:[],props:{"aria-disabled":null,"aria-errormessage":null,"aria-expanded":null,"aria-haspopup":null,"aria-invalid":null},relatedConcepts:[{concept:{name:"errata 
[EPUB-SSV]"},module:"EPUB"}],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[],requiredProps:{},superClass:[["roletype","structure","section","landmark"]]},Xy=Qy;wn.default=Xy;var qn={};Object.defineProperty(qn,"__esModule",{value:!0});qn.default=void 0;var Ky={abstract:!1,accessibleNameRequired:!1,baseConcepts:[],childrenPresentational:!1,nameFrom:["author"],prohibitedProps:[],props:{"aria-disabled":null,"aria-errormessage":null,"aria-expanded":null,"aria-haspopup":null,"aria-invalid":null},relatedConcepts:[],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[],requiredProps:{},superClass:[["roletype","structure","section"]]},Jy=Ky;qn.default=Jy;var Tn={};Object.defineProperty(Tn,"__esModule",{value:!0});Tn.default=void 0;var Yy={abstract:!1,accessibleNameRequired:!1,baseConcepts:[],childrenPresentational:!1,nameFrom:["author"],prohibitedProps:[],props:{"aria-disabled":null,"aria-errormessage":null,"aria-expanded":null,"aria-haspopup":null,"aria-invalid":null},relatedConcepts:[{concept:{name:"footnote [EPUB-SSV]"},module:"EPUB"}],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[],requiredProps:{},superClass:[["roletype","structure","section"]]},Zy=Yy;Tn.default=Zy;var On={};Object.defineProperty(On,"__esModule",{value:!0});On.default=void 0;var eg={abstract:!1,accessibleNameRequired:!1,baseConcepts:[],childrenPresentational:!1,nameFrom:["author"],prohibitedProps:[],props:{"aria-disabled":null,"aria-errormessage":null,"aria-expanded":null,"aria-haspopup":null,"aria-invalid":null},relatedConcepts:[{concept:{name:"foreword [EPUB-SSV]"},module:"EPUB"}],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[],requiredProps:{},superClass:[["roletype","structure","section","landmark"]]},tg=eg;On.default=tg;var Mn={};Object.defineProperty(Mn,"__esModule",{value:!0});Mn.default=void 0;var rg={abstract:!1,accessibleNameRequired:!1,baseConcepts:[],childrenPresentational:!1,nameFrom:["author"],prohibitedProps:[],props:{"aria-disabled":null,"aria-errormessage":null,"aria-expanded":null,"aria-haspopup":null,"aria-invalid":null},relatedConcepts:[{concept:{name:"glossary [EPUB-SSV]"},module:"EPUB"}],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[["definition"],["term"]],requiredProps:{},superClass:[["roletype","structure","section","landmark"]]},ag=rg;Mn.default=ag;var An={};Object.defineProperty(An,"__esModule",{value:!0});An.default=void 0;var ng={abstract:!1,accessibleNameRequired:!0,baseConcepts:[],childrenPresentational:!1,nameFrom:["author","contents"],prohibitedProps:[],props:{"aria-errormessage":null,"aria-invalid":null},relatedConcepts:[{concept:{name:"glossref [EPUB-SSV]"},module:"EPUB"}],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[],requiredProps:{},superClass:[["roletype","widget","command","link"]]},lg=ng;An.default=lg;var Sn={};Object.defineProperty(Sn,"__esModule",{value:!0});Sn.default=void 0;var og={abstract:!1,accessibleNameRequired:!1,baseConcepts:[],childrenPresentational:!1,nameFrom:["author"],prohibitedProps:[],props:{"aria-disabled":null,"aria-errormessage":null,"aria-expanded":null,"aria-haspopup":null,"aria-invalid":null},relatedConcepts:[{concept:{name:"index [EPUB-SSV]"},module:"EPUB"}],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[],requiredProps:{},superClass:[["roletype","structure","section","landmark","navigation"]]},ig=og;Sn.default=ig;var xn={};Object.defineProperty(xn,"__esModule",{value:!0});xn.default=void 0;var 
ug={abstract:!1,accessibleNameRequired:!1,baseConcepts:[],childrenPresentational:!1,nameFrom:["author"],prohibitedProps:[],props:{"aria-disabled":null,"aria-errormessage":null,"aria-expanded":null,"aria-haspopup":null,"aria-invalid":null},relatedConcepts:[{concept:{name:"introduction [EPUB-SSV]"},module:"EPUB"}],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[],requiredProps:{},superClass:[["roletype","structure","section","landmark"]]},sg=ug;xn.default=sg;var Nn={};Object.defineProperty(Nn,"__esModule",{value:!0});Nn.default=void 0;var cg={abstract:!1,accessibleNameRequired:!0,baseConcepts:[],childrenPresentational:!1,nameFrom:["author","contents"],prohibitedProps:[],props:{"aria-errormessage":null,"aria-invalid":null},relatedConcepts:[{concept:{name:"noteref [EPUB-SSV]"},module:"EPUB"}],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[],requiredProps:{},superClass:[["roletype","widget","command","link"]]},dg=cg;Nn.default=dg;var In={};Object.defineProperty(In,"__esModule",{value:!0});In.default=void 0;var fg={abstract:!1,accessibleNameRequired:!1,baseConcepts:[],childrenPresentational:!1,nameFrom:["author"],prohibitedProps:[],props:{"aria-disabled":null,"aria-errormessage":null,"aria-expanded":null,"aria-haspopup":null,"aria-invalid":null},relatedConcepts:[{concept:{name:"notice [EPUB-SSV]"},module:"EPUB"}],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[],requiredProps:{},superClass:[["roletype","structure","section","note"]]},pg=fg;In.default=pg;var jn={};Object.defineProperty(jn,"__esModule",{value:!0});jn.default=void 0;var mg={abstract:!1,accessibleNameRequired:!0,baseConcepts:[],childrenPresentational:!0,nameFrom:["author"],prohibitedProps:[],props:{"aria-errormessage":null,"aria-expanded":null,"aria-haspopup":null,"aria-invalid":null},relatedConcepts:[{concept:{name:"pagebreak [EPUB-SSV]"},module:"EPUB"}],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[],requiredProps:{},superClass:[["roletype","structure","separator"]]},vg=mg;jn.default=vg;var Bn={};Object.defineProperty(Bn,"__esModule",{value:!0});Bn.default=void 0;var bg={abstract:!1,accessibleNameRequired:!1,baseConcepts:[],childrenPresentational:!1,nameFrom:["author"],prohibitedProps:[],props:{"aria-disabled":null,"aria-errormessage":null,"aria-expanded":null,"aria-haspopup":null,"aria-invalid":null},relatedConcepts:[{concept:{name:"page-list [EPUB-SSV]"},module:"EPUB"}],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[],requiredProps:{},superClass:[["roletype","structure","section","landmark","navigation"]]},hg=bg;Bn.default=hg;var Ln={};Object.defineProperty(Ln,"__esModule",{value:!0});Ln.default=void 0;var yg={abstract:!1,accessibleNameRequired:!0,baseConcepts:[],childrenPresentational:!1,nameFrom:["author"],prohibitedProps:[],props:{"aria-disabled":null,"aria-errormessage":null,"aria-expanded":null,"aria-haspopup":null,"aria-invalid":null},relatedConcepts:[{concept:{name:"part [EPUB-SSV]"},module:"EPUB"}],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[],requiredProps:{},superClass:[["roletype","structure","section","landmark"]]},gg=yg;Ln.default=gg;var kn={};Object.defineProperty(kn,"__esModule",{value:!0});kn.default=void 0;var _g={abstract:!1,accessibleNameRequired:!1,baseConcepts:[],childrenPresentational:!1,nameFrom:["author"],prohibitedProps:[],props:{"aria-disabled":null,"aria-errormessage":null,"aria-expanded":null,"aria-haspopup":null,"aria-invalid":null},relatedConcepts:[{concept:{name:"preface 
[EPUB-SSV]"},module:"EPUB"}],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[],requiredProps:{},superClass:[["roletype","structure","section","landmark"]]},Eg=_g;kn.default=Eg;var Fn={};Object.defineProperty(Fn,"__esModule",{value:!0});Fn.default=void 0;var Rg={abstract:!1,accessibleNameRequired:!1,baseConcepts:[],childrenPresentational:!1,nameFrom:["author"],prohibitedProps:[],props:{"aria-disabled":null,"aria-errormessage":null,"aria-expanded":null,"aria-haspopup":null,"aria-invalid":null},relatedConcepts:[{concept:{name:"prologue [EPUB-SSV]"},module:"EPUB"}],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[],requiredProps:{},superClass:[["roletype","structure","section","landmark"]]},Cg=Rg;Fn.default=Cg;var Dn={};Object.defineProperty(Dn,"__esModule",{value:!0});Dn.default=void 0;var Pg={abstract:!1,accessibleNameRequired:!1,baseConcepts:[],childrenPresentational:!1,nameFrom:["author"],prohibitedProps:[],props:{},relatedConcepts:[{concept:{name:"pullquote [EPUB-SSV]"},module:"EPUB"}],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[],requiredProps:{},superClass:[["none"]]},wg=Pg;Dn.default=wg;var $n={};Object.defineProperty($n,"__esModule",{value:!0});$n.default=void 0;var qg={abstract:!1,accessibleNameRequired:!1,baseConcepts:[],childrenPresentational:!1,nameFrom:["author"],prohibitedProps:[],props:{"aria-disabled":null,"aria-errormessage":null,"aria-expanded":null,"aria-haspopup":null,"aria-invalid":null},relatedConcepts:[{concept:{name:"qna [EPUB-SSV]"},module:"EPUB"}],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[],requiredProps:{},superClass:[["roletype","structure","section"]]},Tg=qg;$n.default=Tg;var Un={};Object.defineProperty(Un,"__esModule",{value:!0});Un.default=void 0;var Og={abstract:!1,accessibleNameRequired:!1,baseConcepts:[],childrenPresentational:!1,nameFrom:["author"],prohibitedProps:[],props:{"aria-disabled":null,"aria-errormessage":null,"aria-expanded":null,"aria-haspopup":null,"aria-invalid":null},relatedConcepts:[{concept:{name:"subtitle [EPUB-SSV]"},module:"EPUB"}],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[],requiredProps:{},superClass:[["roletype","structure","sectionhead"]]},Mg=Og;Un.default=Mg;var Hn={};Object.defineProperty(Hn,"__esModule",{value:!0});Hn.default=void 0;var Ag={abstract:!1,accessibleNameRequired:!1,baseConcepts:[],childrenPresentational:!1,nameFrom:["author"],prohibitedProps:[],props:{"aria-disabled":null,"aria-errormessage":null,"aria-expanded":null,"aria-haspopup":null,"aria-invalid":null},relatedConcepts:[{concept:{name:"help [EPUB-SSV]"},module:"EPUB"}],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[],requiredProps:{},superClass:[["roletype","structure","section","note"]]},Sg=Ag;Hn.default=Sg;var Vn={};Object.defineProperty(Vn,"__esModule",{value:!0});Vn.default=void 0;var xg={abstract:!1,accessibleNameRequired:!1,baseConcepts:[],childrenPresentational:!1,nameFrom:["author"],prohibitedProps:[],props:{"aria-disabled":null,"aria-errormessage":null,"aria-expanded":null,"aria-haspopup":null,"aria-invalid":null},relatedConcepts:[{concept:{name:"toc [EPUB-SSV]"},module:"EPUB"}],requireContextRole:[],requiredContextRole:[],requiredOwnedElements:[],requiredProps:{},superClass:[["roletype","structure","section","landmark","navigation"]]},Ng=xg;Vn.default=Ng;Object.defineProperty(nn,"__esModule",{value:!0});nn.default=void 0;var 
Ig=L(ln),jg=L(on),Bg=L(un),Lg=L(sn),kg=L(cn),Fg=L(dn),Dg=L(fn),$g=L(pn),Ug=L(mn),Hg=L(vn),Vg=L(bn),zg=L(hn),Wg=L(yn),Gg=L(gn),Qg=L(_n),Xg=L(En),Kg=L(Rn),Jg=L(Cn),Yg=L(Pn),Zg=L(wn),e_=L(qn),t_=L(Tn),r_=L(On),a_=L(Mn),n_=L(An),l_=L(Sn),o_=L(xn),i_=L(Nn),u_=L(In),s_=L(jn),c_=L(Bn),d_=L(Ln),f_=L(kn),p_=L(Fn),m_=L(Dn),v_=L($n),b_=L(Un),h_=L(Hn),y_=L(Vn);function L(e){return e&&e.__esModule?e:{default:e}}var g_=[["doc-abstract",Ig.default],["doc-acknowledgments",jg.default],["doc-afterword",Bg.default],["doc-appendix",Lg.default],["doc-backlink",kg.default],["doc-biblioentry",Fg.default],["doc-bibliography",Dg.default],["doc-biblioref",$g.default],["doc-chapter",Ug.default],["doc-colophon",Hg.default],["doc-conclusion",Vg.default],["doc-cover",zg.default],["doc-credit",Wg.default],["doc-credits",Gg.default],["doc-dedication",Qg.default],["doc-endnote",Xg.default],["doc-endnotes",Kg.default],["doc-epigraph",Jg.default],["doc-epilogue",Yg.default],["doc-errata",Zg.default],["doc-example",e_.default],["doc-footnote",t_.default],["doc-foreword",r_.default],["doc-glossary",a_.default],["doc-glossref",n_.default],["doc-index",l_.default],["doc-introduction",o_.default],["doc-noteref",i_.default],["doc-notice",u_.default],["doc-pagebreak",s_.default],["doc-pagelist",c_.default],["doc-part",d_.default],["doc-preface",f_.default],["doc-prologue",p_.default],["doc-pullquote",m_.default],["doc-qna",v_.default],["doc-subtitle",b_.default],["doc-tip",h_.default],["doc-toc",y_.default]],__=g_;nn.default=__;Object.defineProperty(vt,"__esModule",{value:!0});vt.default=void 0;var E_=Hl(br),R_=Hl(Mr),C_=Hl(nn);function Hl(e){return e&&e.__esModule?e:{default:e}}function P_(e,t,r){return t in e?Object.defineProperty(e,t,{value:r,enumerable:!0,configurable:!0,writable:!0}):e[t]=r,e}function Bo(e,t){var r=typeof Symbol<"u"&&e[Symbol.iterator]||e["@@iterator"];if(!r){if(Array.isArray(e)||(r=Pu(e))||t&&e&&typeof e.length=="number"){r&&(e=r);var a=0,n=function(){};return{s:n,n:function(){return a>=e.length?{done:!0}:{done:!1,value:e[a++]}},e:function(s){throw s},f:n}}throw new TypeError(`Invalid attempt to iterate non-iterable instance. -In order to be iterable, non-array objects must have a [Symbol.iterator]() method.`)}var o=!0,l=!1,i;return{s:function(){r=r.call(e)},n:function(){var s=r.next();return o=s.done,s},e:function(s){l=!0,i=s},f:function(){try{!o&&r.return!=null&&r.return()}finally{if(l)throw i}}}}function or(e,t){return T_(e)||q_(e,t)||Pu(e,t)||w_()}function w_(){throw new TypeError(`Invalid attempt to destructure non-iterable instance. 
-In order to be iterable, non-array objects must have a [Symbol.iterator]() method.`)}function Pu(e,t){if(e){if(typeof e=="string")return Lo(e,t);var r=Object.prototype.toString.call(e).slice(8,-1);if(r==="Object"&&e.constructor&&(r=e.constructor.name),r==="Map"||r==="Set")return Array.from(e);if(r==="Arguments"||/^(?:Ui|I)nt(?:8|16|32)(?:Clamped)?Array$/.test(r))return Lo(e,t)}}function Lo(e,t){(t==null||t>e.length)&&(t=e.length);for(var r=0,a=new Array(t);re.length)&&(t=e.length);for(var r=0,a=new Array(t);re.length)&&(t=e.length);for(var r=0,a=new Array(t);r=0;--N){var M=this.tryEntries[N],D=M.completion;if(M.tryLoc==="root")return C("end");if(M.tryLoc<=this.prev){var te=n.call(M,"catchLoc"),se=n.call(M,"finallyLoc");if(te&&se){if(this.prev=0;--C){var N=this.tryEntries[C];if(N.tryLoc<=this.prev&&n.call(N,"finallyLoc")&&this.prev=0;--h){var C=this.tryEntries[h];if(C.finallyLoc===_)return this.complete(C.completion,C.afterLoc),oe(C),E}},catch:function(_){for(var h=this.tryEntries.length-1;h>=0;--h){var C=this.tryEntries[h];if(C.tryLoc===_){var N=C.completion;if(N.type==="throw"){var M=N.arg;oe(C)}return M}}throw new Error("illegal catch attempt")},delegateYield:function(_,h,C){return this.delegate={iterator:ue(_),resultName:h,nextLoc:C},this.method==="next"&&(this.arg=o),E}},r}(e.exports);try{regeneratorRuntime=t}catch{typeof globalThis=="object"?globalThis.regeneratorRuntime=t:Function("r","regeneratorRuntime = r")(t)}})(rE);(function(e){e.exports=El})(tE);const it=ac(_l);var Rl={},aE={get exports(){return Rl},set exports(e){Rl=e}};(function(e){var t=function(){var r=String.fromCharCode,a="ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=",n="ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+-$",o={};function l(u,s){if(!o[u]){o[u]={};for(var p=0;p>>8,p[d*2+1]=v%256}return p},decompressFromUint8Array:function(u){if(u==null)return i.decompress(u);for(var s=new Array(u.length/2),p=0,d=s.length;p>1}else{for(m=1,d=0;d>1}T--,T==0&&(T=Math.pow(2,A),A++),delete y[E]}else for(m=v[E],d=0;d>1;T--,T==0&&(T=Math.pow(2,A),A++),v[R]=O++,E=String(f)}if(E!==""){if(Object.prototype.hasOwnProperty.call(y,E)){if(E.charCodeAt(0)<256){for(d=0;d>1}else{for(m=1,d=0;d>1}T--,T==0&&(T=Math.pow(2,A),A++),delete y[E]}else for(m=v[E],d=0;d>1;T--,T==0&&(T=Math.pow(2,A),A++)}for(m=2,d=0;d>1;for(;;)if(b=b<<1,w==s-1){S.push(p(b));break}else w++;return S.join("")},decompress:function(u){return u==null?"":u==""?null:i._decompress(u.length,32768,function(s){return u.charCodeAt(s)})},_decompress:function(u,s,p){var d=[],m=4,v=4,y=3,f="",R=[],E,T,O,A,S,b,w,q={val:p(0),position:s,index:1};for(E=0;E<3;E+=1)d[E]=E;for(O=0,S=Math.pow(2,2),b=1;b!=S;)A=q.val&q.position,q.position>>=1,q.position==0&&(q.position=s,q.val=p(q.index++)),O|=(A>0?1:0)*b,b<<=1;switch(O){case 0:for(O=0,S=Math.pow(2,8),b=1;b!=S;)A=q.val&q.position,q.position>>=1,q.position==0&&(q.position=s,q.val=p(q.index++)),O|=(A>0?1:0)*b,b<<=1;w=r(O);break;case 1:for(O=0,S=Math.pow(2,16),b=1;b!=S;)A=q.val&q.position,q.position>>=1,q.position==0&&(q.position=s,q.val=p(q.index++)),O|=(A>0?1:0)*b,b<<=1;w=r(O);break;case 2:return""}for(d[3]=w,T=w,R.push(w);;){if(q.index>u)return"";for(O=0,S=Math.pow(2,y),b=1;b!=S;)A=q.val&q.position,q.position>>=1,q.position==0&&(q.position=s,q.val=p(q.index++)),O|=(A>0?1:0)*b,b<<=1;switch(w=O){case 0:for(O=0,S=Math.pow(2,8),b=1;b!=S;)A=q.val&q.position,q.position>>=1,q.position==0&&(q.position=s,q.val=p(q.index++)),O|=(A>0?1:0)*b,b<<=1;d[v++]=r(O),w=v-1,m--;break;case 
1:for(O=0,S=Math.pow(2,16),b=1;b!=S;)A=q.val&q.position,q.position>>=1,q.position==0&&(q.position=s,q.val=p(q.index++)),O|=(A>0?1:0)*b,b<<=1;d[v++]=r(O),w=v-1,m--;break;case 2:return R.join("")}if(m==0&&(m=Math.pow(2,y),y++),d[w])f=d[w];else if(w===v)f=T+T.charAt(0);else return null;R.push(f),d[v++]=T+f.charAt(0),m--,T=f,m==0&&(m=Math.pow(2,y),y++)}}};return i}();e!=null&&(e.exports=t)})(aE);function Au(e){return e.replace(//g,">")}var nE=function(t,r,a,n,o,l,i){var u=n+a.indent,s=a.colors;return t.map(function(p){var d=r[p],m=i(d,a,u,o,l);return typeof d!="string"&&(m.indexOf(` -`)!==-1&&(m=a.spacingOuter+u+m+a.spacingOuter+n),m="{"+m+"}"),a.spacingInner+n+s.prop.open+p+s.prop.close+"="+s.value.open+m+s.value.close}).join("")},lE=3,oE=function(t,r,a,n,o,l){return t.map(function(i){var u=typeof i=="string"?Su(i,r):l(i,r,a,n,o);return u===""&&typeof i=="object"&&i!==null&&i.nodeType!==lE?"":r.spacingOuter+a+u}).join("")},Su=function(t,r){var a=r.colors.content;return a.open+Au(t)+a.close},iE=function(t,r){var a=r.colors.comment;return a.open+""+a.close},uE=function(t,r,a,n,o){var l=n.colors.tag;return l.open+"<"+t+(r&&l.close+r+n.spacingOuter+o+l.open)+(a?">"+l.close+a+n.spacingOuter+o+l.open+""+l.close},sE=function(t,r){var a=r.colors.tag;return a.open+"<"+t+a.close+" …"+a.open+" />"+a.close},cE=1,xu=3,Nu=8,Iu=11,dE=/^((HTML|SVG)\w*)?Element$/,fE=function(t){var r=t.constructor.name,a=t.nodeType,n=t.tagName,o=typeof n=="string"&&n.includes("-")||typeof t.hasAttribute=="function"&&t.hasAttribute("is");return a===cE&&(dE.test(r)||o)||a===xu&&r==="Text"||a===Nu&&r==="Comment"||a===Iu&&r==="DocumentFragment"};function pE(e){return e.nodeType===xu}function mE(e){return e.nodeType===Nu}function pl(e){return e.nodeType===Iu}function vE(e){return{test:function(r){var a;return(r==null||(a=r.constructor)==null?void 0:a.name)&&fE(r)},serialize:function(r,a,n,o,l,i){if(pE(r))return Su(r.data,a);if(mE(r))return iE(r.data,a);var u=pl(r)?"DocumentFragment":r.tagName.toLowerCase();return++o>a.maxDepth?sE(u,a):uE(u,nE(pl(r)?[]:Array.from(r.attributes).map(function(s){return s.name}).sort(),pl(r)?{}:Array.from(r.attributes).reduce(function(s,p){return s[p.name]=p.value,s},{}),a,n+a.indent,o,l,i),oE(Array.prototype.slice.call(r.childNodes||r.children).filter(e),a,n+a.indent,o,l,i),a,n)}}}var ju=null,Vl=null,zl=null;try{var ml=module&&module.require;Vl=ml.call(module,"fs").readFileSync,zl=ml.call(module,"@babel/code-frame").codeFrameColumns,ju=ml.call(module,"chalk")}catch{}function bE(e){var t=e.indexOf("(")+1,r=e.indexOf(")"),a=e.slice(t,r),n=a.split(":"),o=[n[0],parseInt(n[1],10),parseInt(n[2],10)],l=o[0],i=o[1],u=o[2],s="";try{s=Vl(l,"utf-8")}catch{return""}var p=zl(s,{start:{line:i,column:u}},{highlightCode:!0,linesBelow:0});return ju.dim(a)+` -`+p+` -`}function hE(){if(!Vl||!zl)return"";var e=new Error,t=e.stack.split(` -`).slice(1).find(function(r){return!r.includes("node_modules/")});return bE(t)}var Bu=3;function vl(){return typeof jest<"u"&&jest!==null?setTimeout._isMockFunction===!0||Object.prototype.hasOwnProperty.call(setTimeout,"clock"):!1}function Wl(){if(typeof window>"u")throw new Error("Could not find default container");return window.document}function Lu(e){if(e.defaultView)return e.defaultView;if(e.ownerDocument&&e.ownerDocument.defaultView)return e.ownerDocument.defaultView;if(e.window)return e.window;throw e.ownerDocument&&e.ownerDocument.defaultView===null?new Error("It looks like the window object is not available for the provided node."):e.then instanceof Function?new Error("It looks 
like you passed a Promise object instead of a DOM node. Did you do something like `fireEvent.click(screen.findBy...` when you meant to use a `getBy` query `fireEvent.click(screen.getBy...`, or await the findBy query `fireEvent.click(await screen.findBy...`?"):Array.isArray(e)?new Error("It looks like you passed an Array instead of a DOM node. Did you do something like `fireEvent.click(screen.getAllBy...` when you meant to use a `getBy` query `fireEvent.click(screen.getBy...`?"):typeof e.debug=="function"&&typeof e.logTestingPlaygroundURL=="function"?new Error("It looks like you passed a `screen` object. Did you do something like `fireEvent.click(screen, ...` when you meant to use a query, e.g. `fireEvent.click(screen.getBy..., `?"):new Error("The given node is not an Element, the node type is: "+typeof e+".")}function Te(e){if(!e||typeof e.querySelector!="function"||typeof e.querySelectorAll!="function")throw new TypeError("Expected container to be an Element, a Document or a DocumentFragment but got "+t(e)+".");function t(r){return typeof r=="object"?r===null?"null":r.constructor.name:typeof r}}var Gl="script, style",yE=["filterNode"],gE=function(){return typeof process<"u"&&process.versions!==void 0&&process.versions.node!==void 0},_E=Dl.DOMCollection,EE=1,RE=8;function CE(e){return e.nodeType!==RE&&(e.nodeType!==EE||!e.matches(Gl))}function Et(e,t,r){if(r===void 0&&(r={}),e||(e=Wl().body),typeof t!="number"&&(t=typeof process<"u"&&{}.DEBUG_PRINT_LIMIT||7e3),t===0)return"";e.documentElement&&(e=e.documentElement);var a=typeof e;if(a==="object"?a=e.constructor.name:e={},!("outerHTML"in e))throw new TypeError("Expected an element or document but got "+a);var n=r,o=n.filterNode,l=o===void 0?CE:o,i=bl(n,yE),u=ou(e,Ce({plugins:[vE(l),_E],printFunctionName:!1,highlight:gE()},i));return t!==void 0&&e.outerHTML.length>t?u.slice(0,t)+"...":u}var Cl=function(){var t=hE();console.log(t?Et.apply(void 0,arguments)+` - -`+t:Et.apply(void 0,arguments))},ut={testIdAttribute:"data-testid",asyncUtilTimeout:1e3,asyncWrapper:function(t){return t()},unstable_advanceTimersWrapper:function(t){return t()},eventWrapper:function(t){return t()},defaultHidden:!1,showOriginalStackTrace:!1,throwSuggestions:!1,getElementError:function(t,r){var a=Et(r),n=new Error([t,`Ignored nodes: comments, - - - `; -}; diff --git a/spaces/mayuri120/anime-remove-background/app.py b/spaces/mayuri120/anime-remove-background/app.py deleted file mode 100644 index 230a0d5f8a3da6ab18ecb8db1cd90016a489b96a..0000000000000000000000000000000000000000 --- a/spaces/mayuri120/anime-remove-background/app.py +++ /dev/null @@ -1,52 +0,0 @@ -import gradio as gr -import huggingface_hub -import onnxruntime as rt -import numpy as np -import cv2 - - -def get_mask(img, s=1024): - img = (img / 255).astype(np.float32) - h, w = h0, w0 = img.shape[:-1] - h, w = (s, int(s * w / h)) if h > w else (int(s * h / w), s) - ph, pw = s - h, s - w - img_input = np.zeros([s, s, 3], dtype=np.float32) - img_input[ph // 2:ph // 2 + h, pw // 2:pw // 2 + w] = cv2.resize(img, (w, h)) - img_input = np.transpose(img_input, (2, 0, 1)) - img_input = img_input[np.newaxis, :] - mask = rmbg_model.run(None, {'img': img_input})[0][0] - mask = np.transpose(mask, (1, 2, 0)) - mask = mask[ph // 2:ph // 2 + h, pw // 2:pw // 2 + w] - mask = cv2.resize(mask, (w0, h0))[:, :, np.newaxis] - return mask - - -def rmbg_fn(img): - mask = get_mask(img) - img = (mask * img + 255 * (1 - mask)).astype(np.uint8) - mask = (mask * 255).astype(np.uint8) - img = np.concatenate([img, mask], axis=2, 
dtype=np.uint8) - mask = mask.repeat(3, axis=2) - return mask, img - - -if __name__ == "__main__": - providers = ['CUDAExecutionProvider', 'CPUExecutionProvider'] - model_path = huggingface_hub.hf_hub_download("skytnt/anime-seg", "isnetis.onnx") - rmbg_model = rt.InferenceSession(model_path, providers=providers) - app = gr.Blocks() - with app: - gr.Markdown("# Anime Remove Background\n\n" - "![visitor badge](https://visitor-badge.glitch.me/badge?page_id=skytnt.animeseg)\n\n" - "demo for [https://github.com/SkyTNT/anime-segmentation/](https://github.com/SkyTNT/anime-segmentation/)") - with gr.Row(): - with gr.Column(): - input_img = gr.Image(label="input image") - examples_data = [[f"examples/{x:02d}.jpg"] for x in range(1, 4)] - examples = gr.Dataset(components=[input_img], samples=examples_data) - run_btn = gr.Button(variant="primary") - output_mask = gr.Image(label="mask") - output_img = gr.Image(label="result", image_mode="RGBA") - examples.click(lambda x: x[0], [examples], [input_img]) - run_btn.click(rmbg_fn, [input_img], [output_mask, output_img]) - app.launch() diff --git a/spaces/mdj1412/stock_news_summaries_AI/static/js/chartjs-chart-financial.js b/spaces/mdj1412/stock_news_summaries_AI/static/js/chartjs-chart-financial.js deleted file mode 100644 index c3719fbfc9fed28fe33ddfe420da8a7c51b5a2df..0000000000000000000000000000000000000000 --- a/spaces/mdj1412/stock_news_summaries_AI/static/js/chartjs-chart-financial.js +++ /dev/null @@ -1,525 +0,0 @@ -/*! - * @license - * chartjs-chart-financial - * http://chartjs.org/ - * Version: 0.1.0 - * - * Copyright 2021 Chart.js Contributors - * Released under the MIT license - * https://github.com/chartjs/chartjs-chart-financial/blob/master/LICENSE.md - */ -(function (global, factory) { -typeof exports === 'object' && typeof module !== 'undefined' ? factory(require('chart.js'), require('chart.js/helpers')) : -typeof define === 'function' && define.amd ? define(['chart.js', 'chart.js/helpers'], factory) : -(global = typeof globalThis !== 'undefined' ? globalThis : global || self, factory(global.Chart, global.Chart.helpers)); -}(this, (function (chart_js, helpers) { 'use strict'; - -/** - * Computes the "optimal" sample size to maintain bars equally sized while preventing overlap. - * @private - */ -function computeMinSampleSize(scale, pixels) { - let min = scale._length; - let prev, curr, i, ilen; - - for (i = 1, ilen = pixels.length; i < ilen; ++i) { - min = Math.min(min, Math.abs(pixels[i] - pixels[i - 1])); - } - - for (i = 0, ilen = scale.ticks.length; i < ilen; ++i) { - curr = scale.getPixelForTick(i); - min = i > 0 ? 
Math.min(min, Math.abs(curr - prev)) : min; - prev = curr; - } - - return min; -} - -/** - * This class is based off controller.bar.js from the upstream Chart.js library - */ -class FinancialController extends chart_js.BarController { - - getLabelAndValue(index) { - const me = this; - const parsed = me.getParsed(index); - const axis = me._cachedMeta.iScale.axis; - - const {o, h, l, c} = parsed; - const value = `O: ${o} H: ${h} L: ${l} C: ${c}`; - - return { - label: `${me._cachedMeta.iScale.getLabelForValue(parsed[axis])}`, - value - }; - } - - getAllParsedValues() { - const meta = this._cachedMeta; - const axis = meta.iScale.axis; - const parsed = meta._parsed; - const values = []; - for (let i = 0; i < parsed.length; ++i) { - values.push(parsed[i][axis]); - } - return values; - } - - /** - * Implement this ourselves since it doesn't handle high and low values - * https://github.com/chartjs/Chart.js/issues/7328 - * @protected - */ - getMinMax(scale) { - const meta = this._cachedMeta; - const _parsed = meta._parsed; - const axis = meta.iScale.axis; - - if (_parsed.length < 2) { - return {min: 0, max: 1}; - } - - if (scale === meta.iScale) { - return {min: _parsed[0][axis], max: _parsed[_parsed.length - 1][axis]}; - } - - let min = Number.POSITIVE_INFINITY; - let max = Number.NEGATIVE_INFINITY; - for (let i = 0; i < _parsed.length; i++) { - const data = _parsed[i]; - min = Math.min(min, data.l); - max = Math.max(max, data.h); - } - return {min, max}; - } - - _getRuler() { - const me = this; - const opts = me.options; - const meta = me._cachedMeta; - const iScale = meta.iScale; - const axis = iScale.axis; - const pixels = []; - for (let i = 0; i < meta.data.length; ++i) { - pixels.push(iScale.getPixelForValue(me.getParsed(i)[axis])); - } - const barThickness = opts.barThickness; - const min = computeMinSampleSize(iScale, pixels); - return { - min, - pixels, - start: iScale._startPixel, - end: iScale._endPixel, - stackCount: me._getStackCount(), - scale: iScale, - ratio: barThickness ? 1 : opts.categoryPercentage * opts.barPercentage - }; - } - - /** - * @protected - */ - calculateElementProperties(index, ruler, reset, options) { - const me = this; - const vscale = me._cachedMeta.vScale; - const base = vscale.getBasePixel(); - const ipixels = me._calculateBarIndexPixels(index, ruler, options); - const data = me.chart.data.datasets[me.index].data[index]; - const open = vscale.getPixelForValue(data.o); - const high = vscale.getPixelForValue(data.h); - const low = vscale.getPixelForValue(data.l); - const close = vscale.getPixelForValue(data.c); - - return { - base: reset ? base : low, - x: ipixels.center, - y: (low + high) / 2, - width: ipixels.size, - open, - high, - low, - close - }; - } - - draw() { - const me = this; - const chart = me.chart; - const rects = me._cachedMeta.data; - helpers.clipArea(chart.ctx, chart.chartArea); - for (let i = 0; i < rects.length; ++i) { - rects[i].draw(me._ctx); - } - helpers.unclipArea(chart.ctx); - } - -} - -FinancialController.overrides = { - label: '', - - parsing: false, - - hover: { - mode: 'label' - }, - - datasets: { - categoryPercentage: 0.8, - barPercentage: 0.9, - animation: { - numbers: { - type: 'number', - properties: ['x', 'y', 'base', 'width', 'open', 'high', 'low', 'close'] - } - } - }, - - scales: { - x: { - type: 'timeseries', - offset: true, - ticks: { - major: { - enabled: true, - }, - fontStyle: context => context.tick.major ? 
'bold' : undefined, - source: 'data', - maxRotation: 0, - autoSkip: true, - autoSkipPadding: 75, - sampleSize: 100 - }, - afterBuildTicks: scale => { - const DateTime = window && window.luxon && window.luxon.DateTime; - if (!DateTime) { - return; - } - const majorUnit = scale._majorUnit; - const ticks = scale.ticks; - const firstTick = ticks[0]; - if (!firstTick) { - return; - } - - let val = DateTime.fromMillis(firstTick.value); - if ((majorUnit === 'minute' && val.second === 0) - || (majorUnit === 'hour' && val.minute === 0) - || (majorUnit === 'day' && val.hour === 9) - || (majorUnit === 'month' && val.day <= 3 && val.weekday === 1) - || (majorUnit === 'year' && val.month === 1)) { - firstTick.major = true; - } else { - firstTick.major = false; - } - let lastMajor = val.get(majorUnit); - - for (let i = 1; i < ticks.length; i++) { - const tick = ticks[i]; - val = DateTime.fromMillis(tick.value); - const currMajor = val.get(majorUnit); - tick.major = currMajor !== lastMajor; - lastMajor = currMajor; - } - scale.ticks = ticks; - } - }, - y: { - type: 'linear' - } - }, - - plugins: { - tooltip: { - intersect: false, - mode: 'index', - callbacks: { - label(ctx) { - const point = ctx.parsed; - - if (!helpers.isNullOrUndef(point.y)) { - return chart_js.defaults.plugins.tooltip.callbacks.label(ctx); - } - - const {o, h, l, c} = point; - - return `O: ${o} H: ${h} L: ${l} C: ${c}`; - } - } - } - } -}; - -const globalOpts$2 = chart_js.Chart.defaults; - -globalOpts$2.elements.financial = { - color: { - up: 'rgba(255, 0, 0, 1)', - down: 'rgba(0, 0, 255, 1)', - unchanged: 'rgba(90, 90, 90, 1)', - } -}; - -/** - * Helper function to get the bounds of the bar regardless of the orientation - * @param {Rectangle} bar the bar - * @param {boolean} [useFinalPosition] - * @return {object} bounds of the bar - * @private - */ -function getBarBounds(bar, useFinalPosition) { - const {x, y, base, width, height} = bar.getProps(['x', 'low', 'high', 'width', 'height'], useFinalPosition); - - let left, right, top, bottom, half; - - if (bar.horizontal) { - half = height / 2; - left = Math.min(x, base); - right = Math.max(x, base); - top = y - half; - bottom = y + half; - } else { - half = width / 2; - left = x - half; - right = x + half; - top = Math.min(y, base); // use min because 0 pixel at top of screen - bottom = Math.max(y, base); - } - - return {left, top, right, bottom}; -} - -function inRange(bar, x, y, useFinalPosition) { - const skipX = x === null; - const skipY = y === null; - const bounds = !bar || (skipX && skipY) ? false : getBarBounds(bar, useFinalPosition); - - return bounds - && (skipX || x >= bounds.left && x <= bounds.right) - && (skipY || y >= bounds.top && y <= bounds.bottom); -} - -class FinancialElement extends chart_js.Element { - - height() { - return this.base - this.y; - } - - inRange(mouseX, mouseY, useFinalPosition) { - return inRange(this, mouseX, mouseY, useFinalPosition); - } - - inXRange(mouseX, useFinalPosition) { - return inRange(this, mouseX, null, useFinalPosition); - } - - inYRange(mouseY, useFinalPosition) { - return inRange(this, null, mouseY, useFinalPosition); - } - - getRange(axis) { - return axis === 'x' ? 
this.width / 2 : this.height / 2; - } - - getCenterPoint(useFinalPosition) { - const {x, low, high} = this.getProps(['x', 'low', 'high'], useFinalPosition); - return { - x, - y: (high + low) / 2 - }; - } - - tooltipPosition(useFinalPosition) { - const {x, open, close} = this.getProps(['x', 'open', 'close'], useFinalPosition); - return { - x, - y: (open + close) / 2 - }; - } -} - -const globalOpts$1 = chart_js.Chart.defaults; - -class CandlestickElement extends FinancialElement { - draw(ctx) { - const me = this; - - const {x, open, high, low, close} = me; - - let borderColors = me.borderColor; - if (typeof borderColors === 'string') { - borderColors = { - up: borderColors, - down: borderColors, - unchanged: borderColors - }; - } - - let borderColor; - if (close < open) { - borderColor = helpers.valueOrDefault(borderColors ? borderColors.up : undefined, globalOpts$1.elements.candlestick.borderColor); - ctx.fillStyle = helpers.valueOrDefault(me.color ? me.color.up : undefined, globalOpts$1.elements.candlestick.color.up); - } else if (close > open) { - borderColor = helpers.valueOrDefault(borderColors ? borderColors.down : undefined, globalOpts$1.elements.candlestick.borderColor); - ctx.fillStyle = helpers.valueOrDefault(me.color ? me.color.down : undefined, globalOpts$1.elements.candlestick.color.down); - } else { - borderColor = helpers.valueOrDefault(borderColors ? borderColors.unchanged : undefined, globalOpts$1.elements.candlestick.borderColor); - ctx.fillStyle = helpers.valueOrDefault(me.color ? me.color.unchanged : undefined, globalOpts$1.elements.candlestick.color.unchanged); - } - - ctx.lineWidth = helpers.valueOrDefault(me.borderWidth, globalOpts$1.elements.candlestick.borderWidth); - ctx.strokeStyle = helpers.valueOrDefault(borderColor, globalOpts$1.elements.candlestick.borderColor); - - ctx.beginPath(); - ctx.moveTo(x, high); - ctx.lineTo(x, Math.min(open, close)); - ctx.moveTo(x, low); - ctx.lineTo(x, Math.max(open, close)); - ctx.stroke(); - ctx.fillRect(x - me.width / 2, close, me.width, open - close); - ctx.strokeRect(x - me.width / 2, close, me.width, open - close); - ctx.closePath(); - } -} - -CandlestickElement.id = 'candlestick'; -CandlestickElement.defaults = helpers.merge({}, [globalOpts$1.elements.financial, { - borderColor: globalOpts$1.elements.financial.color.unchanged, - borderWidth: 1, -}]); - -class CandlestickController extends FinancialController { - - updateElements(elements, start, count, mode) { - const me = this; - const dataset = me.getDataset(); - const ruler = me._ruler || me._getRuler(); - const firstOpts = me.resolveDataElementOptions(start, mode); - const sharedOptions = me.getSharedOptions(firstOpts); - const includeOptions = me.includeOptions(mode, sharedOptions); - - me.updateSharedOptions(sharedOptions, mode, firstOpts); - - for (let i = start; i < count; i++) { - const options = sharedOptions || me.resolveDataElementOptions(i, mode); - - const lineColor = (elements[i]['close'] - elements[i]['open'] < 0)? 
"rgb(255, 0, 0, 1)" : "rgb(0, 0, 255, 1)"; - - const baseProperties = me.calculateElementProperties(i, ruler, mode === 'reset', options); - const properties = { - ...baseProperties, - datasetLabel: dataset.label || '', - // label: '', // to get label value please use dataset.data[index].label - - // Appearance - color: dataset.color, - borderColor: lineColor, - // borderColor: dataset.borderColor, - borderWidth: dataset.borderWidth, - }; - - if (includeOptions) { - properties.options = options; - } - me.updateElement(elements[i], i, properties, mode); - } - } - -} - -CandlestickController.id = 'candlestick'; -CandlestickController.defaults = helpers.merge({ - dataElementType: CandlestickElement.id -}, chart_js.Chart.defaults.financial); - -const globalOpts = chart_js.Chart.defaults; - -class OhlcElement extends FinancialElement { - draw(ctx) { - const me = this; - - const {x, open, high, low, close} = me; - - const armLengthRatio = helpers.valueOrDefault(me.armLengthRatio, globalOpts.elements.ohlc.armLengthRatio); - let armLength = helpers.valueOrDefault(me.armLength, globalOpts.elements.ohlc.armLength); - if (armLength === null) { - // The width of an ohlc is affected by barPercentage and categoryPercentage - // This behavior is caused by extending controller.financial, which extends controller.bar - // barPercentage and categoryPercentage are now set to 1.0 (see controller.ohlc) - // and armLengthRatio is multipled by 0.5, - // so that when armLengthRatio=1.0, the arms from neighbour ohcl touch, - // and when armLengthRatio=0.0, ohcl are just vertical lines. - armLength = me.width * armLengthRatio * 0.5; - } - - if (close < open) { - ctx.strokeStyle = helpers.valueOrDefault(me.color ? me.color.up : undefined, globalOpts.elements.ohlc.color.up); - } else if (close > open) { - ctx.strokeStyle = helpers.valueOrDefault(me.color ? me.color.down : undefined, globalOpts.elements.ohlc.color.down); - } else { - ctx.strokeStyle = helpers.valueOrDefault(me.color ? 
me.color.unchanged : undefined, globalOpts.elements.ohlc.color.unchanged); - } - ctx.lineWidth = helpers.valueOrDefault(me.lineWidth, globalOpts.elements.ohlc.lineWidth); - - ctx.beginPath(); - ctx.moveTo(x, high); - ctx.lineTo(x, low); - ctx.moveTo(x - armLength, open); - ctx.lineTo(x, open); - ctx.moveTo(x + armLength, close); - ctx.lineTo(x, close); - ctx.stroke(); - } -} - -OhlcElement.id = 'ohlc'; -OhlcElement.defaults = helpers.merge({}, [globalOpts.elements.financial, { - lineWidth: 2, - armLength: null, - armLengthRatio: 0.8, -}]); - -class OhlcController extends FinancialController { - - updateElements(elements, start, count, mode) { - const me = this; - const dataset = me.getDataset(); - const ruler = me._ruler || me._getRuler(); - const firstOpts = me.resolveDataElementOptions(start, mode); - const sharedOptions = me.getSharedOptions(firstOpts); - const includeOptions = me.includeOptions(mode, sharedOptions); - - for (let i = 0; i < count; i++) { - const options = sharedOptions || me.resolveDataElementOptions(i, mode); - - const baseProperties = me.calculateElementProperties(i, ruler, mode === 'reset', options); - const properties = { - ...baseProperties, - datasetLabel: dataset.label || '', - lineWidth: dataset.lineWidth, - armLength: dataset.armLength, - armLengthRatio: dataset.armLengthRatio, - color: dataset.color, - }; - - if (includeOptions) { - properties.options = options; - } - me.updateElement(elements[i], i, properties, mode); - } - } - -} - -OhlcController.id = 'ohlc'; -OhlcController.defaults = helpers.merge({ - dataElementType: OhlcElement.id, - datasets: { - barPercentage: 1.0, - categoryPercentage: 1.0 - } -}, chart_js.Chart.defaults.financial); - -chart_js.Chart.register(CandlestickController, OhlcController, CandlestickElement, OhlcElement); - -}))); diff --git a/spaces/mehdidc/text_to_image_ddgan/score_sde/models/projected_discriminator.py b/spaces/mehdidc/text_to_image_ddgan/score_sde/models/projected_discriminator.py deleted file mode 100644 index 2e1548f2da252a7a950550ecbd2a81a27b7db639..0000000000000000000000000000000000000000 --- a/spaces/mehdidc/text_to_image_ddgan/score_sde/models/projected_discriminator.py +++ /dev/null @@ -1,783 +0,0 @@ -from functools import partial -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F - - -#from pg_modules.blocks import DownBlock, DownBlockPatch, conv2d -import functools -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch.nn.utils import spectral_norm -from . import layers -from .layers import CondAttnBlock -from .discriminator import * - - -def conv2d(*args, **kwargs): - return spectral_norm(nn.Conv2d(*args, **kwargs)) - - -def convTranspose2d(*args, **kwargs): - return spectral_norm(nn.ConvTranspose2d(*args, **kwargs)) - - -def embedding(*args, **kwargs): - return spectral_norm(nn.Embedding(*args, **kwargs)) - - -def linear(*args, **kwargs): - return spectral_norm(nn.Linear(*args, **kwargs)) - - -def NormLayer(c, mode='batch'): - if mode == 'group': - return nn.GroupNorm(c//2, c) - elif mode == 'batch': - return nn.BatchNorm2d(c) - - -### Activations - - -class GLU(nn.Module): - def forward(self, x): - nc = x.size(1) - assert nc % 2 == 0, 'channels dont divide 2!' 
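-        # GLU: the first half of the channels is gated by a sigmoid of the second half (see the return below)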
- nc = int(nc/2) - return x[:, :nc] * torch.sigmoid(x[:, nc:]) - - -class Swish(nn.Module): - def forward(self, feat): - return feat * torch.sigmoid(feat) - - -### Upblocks - - -class InitLayer(nn.Module): - def __init__(self, nz, channel, sz=4): - super().__init__() - - self.init = nn.Sequential( - convTranspose2d(nz, channel*2, sz, 1, 0, bias=False), - NormLayer(channel*2), - GLU(), - ) - - def forward(self, noise): - noise = noise.view(noise.shape[0], -1, 1, 1) - return self.init(noise) - - -def UpBlockSmall(in_planes, out_planes): - block = nn.Sequential( - nn.Upsample(scale_factor=2, mode='nearest'), - conv2d(in_planes, out_planes*2, 3, 1, 1, bias=False), - NormLayer(out_planes*2), GLU()) - return block - - -class UpBlockSmallCond(nn.Module): - def __init__(self, in_planes, out_planes, z_dim): - super().__init__() - self.in_planes = in_planes - self.out_planes = out_planes - self.up = nn.Upsample(scale_factor=2, mode='nearest') - self.conv = conv2d(in_planes, out_planes*2, 3, 1, 1, bias=False) - - which_bn = functools.partial(CCBN, which_linear=linear, input_size=z_dim) - self.bn = which_bn(2*out_planes) - self.act = GLU() - - def forward(self, x, c): - x = self.up(x) - x = self.conv(x) - x = self.bn(x, c) - x = self.act(x) - return x - - -def UpBlockBig(in_planes, out_planes): - block = nn.Sequential( - nn.Upsample(scale_factor=2, mode='nearest'), - conv2d(in_planes, out_planes*2, 3, 1, 1, bias=False), - NoiseInjection(), - NormLayer(out_planes*2), GLU(), - conv2d(out_planes, out_planes*2, 3, 1, 1, bias=False), - NoiseInjection(), - NormLayer(out_planes*2), GLU() - ) - return block - - -class UpBlockBigCond(nn.Module): - def __init__(self, in_planes, out_planes, z_dim): - super().__init__() - self.in_planes = in_planes - self.out_planes = out_planes - self.up = nn.Upsample(scale_factor=2, mode='nearest') - self.conv1 = conv2d(in_planes, out_planes*2, 3, 1, 1, bias=False) - self.conv2 = conv2d(out_planes, out_planes*2, 3, 1, 1, bias=False) - - which_bn = functools.partial(CCBN, which_linear=linear, input_size=z_dim) - self.bn1 = which_bn(2*out_planes) - self.bn2 = which_bn(2*out_planes) - self.act = GLU() - self.noise = NoiseInjection() - - def forward(self, x, c): - # block 1 - x = self.up(x) - x = self.conv1(x) - x = self.noise(x) - x = self.bn1(x, c) - x = self.act(x) - - # block 2 - x = self.conv2(x) - x = self.noise(x) - x = self.bn2(x, c) - x = self.act(x) - - return x - - -class SEBlock(nn.Module): - def __init__(self, ch_in, ch_out): - super().__init__() - self.main = nn.Sequential( - nn.AdaptiveAvgPool2d(4), - conv2d(ch_in, ch_out, 4, 1, 0, bias=False), - Swish(), - conv2d(ch_out, ch_out, 1, 1, 0, bias=False), - nn.Sigmoid(), - ) - - def forward(self, feat_small, feat_big): - return feat_big * self.main(feat_small) - - -### Downblocks - - -class SeparableConv2d(nn.Module): - def __init__(self, in_channels, out_channels, kernel_size, bias=False): - super(SeparableConv2d, self).__init__() - self.depthwise = conv2d(in_channels, in_channels, kernel_size=kernel_size, - groups=in_channels, bias=bias, padding=1) - self.pointwise = conv2d(in_channels, out_channels, - kernel_size=1, bias=bias) - - def forward(self, x): - out = self.depthwise(x) - out = self.pointwise(out) - return out - - -class DownBlock(nn.Module): - def __init__(self, in_planes, out_planes, separable=False): - super().__init__() - if not separable: - self.main = nn.Sequential( - conv2d(in_planes, out_planes, 4, 2, 1), - NormLayer(out_planes), - nn.LeakyReLU(0.2, inplace=True), - ) - else: - self.main = 
nn.Sequential( - SeparableConv2d(in_planes, out_planes, 3), - NormLayer(out_planes), - nn.LeakyReLU(0.2, inplace=True), - nn.AvgPool2d(2, 2), - ) - - def forward(self, feat): - return self.main(feat) - - -class DownBlockPatch(nn.Module): - def __init__(self, in_planes, out_planes, separable=False): - super().__init__() - self.main = nn.Sequential( - DownBlock(in_planes, out_planes, separable), - conv2d(out_planes, out_planes, 1, 1, 0, bias=False), - NormLayer(out_planes), - nn.LeakyReLU(0.2, inplace=True), - ) - - def forward(self, feat): - return self.main(feat) - - -### CSM - - -class ResidualConvUnit(nn.Module): - def __init__(self, cin, activation, bn): - super().__init__() - self.conv = nn.Conv2d(cin, cin, kernel_size=3, stride=1, padding=1, bias=True) - self.skip_add = nn.quantized.FloatFunctional() - - def forward(self, x): - return self.skip_add.add(self.conv(x), x) - - -class FeatureFusionBlock(nn.Module): - def __init__(self, features, activation, deconv=False, bn=False, expand=False, align_corners=True, lowest=False): - super().__init__() - - self.deconv = deconv - self.align_corners = align_corners - - self.expand = expand - out_features = features - if self.expand==True: - out_features = features//2 - - self.out_conv = nn.Conv2d(features, out_features, kernel_size=1, stride=1, padding=0, bias=True, groups=1) - self.skip_add = nn.quantized.FloatFunctional() - - def forward(self, *xs): - output = xs[0] - - if len(xs) == 2: - output = self.skip_add.add(output, xs[1]) - - output = nn.functional.interpolate( - output, scale_factor=2, mode="bilinear", align_corners=self.align_corners - ) - - output = self.out_conv(output) - - return output - - -### Misc - - -class NoiseInjection(nn.Module): - def __init__(self): - super().__init__() - self.weight = nn.Parameter(torch.zeros(1), requires_grad=True) - - def forward(self, feat, noise=None): - if noise is None: - batch, _, height, width = feat.shape - noise = torch.randn(batch, 1, height, width).to(feat.device) - - return feat + self.weight * noise - - -class CCBN(nn.Module): - ''' conditional batchnorm ''' - def __init__(self, output_size, input_size, which_linear, eps=1e-5, momentum=0.1): - super().__init__() - self.output_size, self.input_size = output_size, input_size - - # Prepare gain and bias layers - self.gain = which_linear(input_size, output_size) - self.bias = which_linear(input_size, output_size) - - # epsilon to avoid dividing by 0 - self.eps = eps - # Momentum - self.momentum = momentum - - self.register_buffer('stored_mean', torch.zeros(output_size)) - self.register_buffer('stored_var', torch.ones(output_size)) - - def forward(self, x, y): - # Calculate class-conditional gains and biases - gain = (1 + self.gain(y)).view(y.size(0), -1, 1, 1) - bias = self.bias(y).view(y.size(0), -1, 1, 1) - out = F.batch_norm(x, self.stored_mean, self.stored_var, None, None, - self.training, 0.1, self.eps) - return out * gain + bias - - -class Interpolate(nn.Module): - """Interpolation module.""" - - def __init__(self, size, mode='bilinear', align_corners=False): - """Init. - Args: - scale_factor (float): scaling - mode (str): interpolation mode - """ - super(Interpolate, self).__init__() - - self.interp = nn.functional.interpolate - self.size = size - self.mode = mode - self.align_corners = align_corners - - def forward(self, x): - """Forward pass. 
- Args: - x (tensor): input - Returns: - tensor: interpolated data - """ - - x = self.interp( - x, - size=self.size, - mode=self.mode, - align_corners=self.align_corners, - ) - - return x - - - -#from pg_modules.projector import F_RandomProj - -import torch -import torch.nn as nn -import timm -#from pg_modules.blocks import FeatureFusionBlock - - -def _make_scratch_ccm(scratch, in_channels, cout, expand=False): - # shapes - out_channels = [cout, cout*2, cout*4, cout*8] if expand else [cout]*4 - - scratch.layer0_ccm = nn.Conv2d(in_channels[0], out_channels[0], kernel_size=1, stride=1, padding=0, bias=True) - scratch.layer1_ccm = nn.Conv2d(in_channels[1], out_channels[1], kernel_size=1, stride=1, padding=0, bias=True) - scratch.layer2_ccm = nn.Conv2d(in_channels[2], out_channels[2], kernel_size=1, stride=1, padding=0, bias=True) - scratch.layer3_ccm = nn.Conv2d(in_channels[3], out_channels[3], kernel_size=1, stride=1, padding=0, bias=True) - - scratch.CHANNELS = out_channels - - return scratch - - -def _make_scratch_csm(scratch, in_channels, cout, expand): - scratch.layer3_csm = FeatureFusionBlock(in_channels[3], nn.ReLU(False), expand=expand, lowest=True) - scratch.layer2_csm = FeatureFusionBlock(in_channels[2], nn.ReLU(False), expand=expand) - scratch.layer1_csm = FeatureFusionBlock(in_channels[1], nn.ReLU(False), expand=expand) - scratch.layer0_csm = FeatureFusionBlock(in_channels[0], nn.ReLU(False)) - - # last refinenet does not expand to save channels in higher dimensions - scratch.CHANNELS = [cout, cout, cout*2, cout*4] if expand else [cout]*4 - - return scratch - - -def _make_efficientnet(model): - pretrained = nn.Module() - pretrained.layer0 = nn.Sequential(model.conv_stem, model.bn1, model.act1, *model.blocks[0:2]) - pretrained.layer1 = nn.Sequential(*model.blocks[2:3]) - pretrained.layer2 = nn.Sequential(*model.blocks[3:5]) - pretrained.layer3 = nn.Sequential(*model.blocks[5:9]) - return pretrained - - -def calc_channels(pretrained, inp_res=224): - channels = [] - tmp = torch.zeros(1, 3, inp_res, inp_res) - - # forward pass - tmp = pretrained.layer0(tmp) - channels.append(tmp.shape[1]) - tmp = pretrained.layer1(tmp) - channels.append(tmp.shape[1]) - tmp = pretrained.layer2(tmp) - channels.append(tmp.shape[1]) - tmp = pretrained.layer3(tmp) - channels.append(tmp.shape[1]) - - return channels - - -def _make_projector(im_res, cout, proj_type, expand=False): - assert proj_type in [0, 1, 2], "Invalid projection type" - - ### Build pretrained feature network - model = timm.create_model('tf_efficientnet_lite0', pretrained=True) - pretrained = _make_efficientnet(model) - - # determine resolution of feature maps, this is later used to calculate the number - # of down blocks in the discriminators. 
Interestingly, the best results are achieved - # by fixing this to 256, ie., we use the same number of down blocks per discriminator - # independent of the dataset resolution - im_res = 256 - pretrained.RESOLUTIONS = [im_res//4, im_res//8, im_res//16, im_res//32] - pretrained.CHANNELS = calc_channels(pretrained) - - if proj_type == 0: return pretrained, None - - ### Build CCM - scratch = nn.Module() - scratch = _make_scratch_ccm(scratch, in_channels=pretrained.CHANNELS, cout=cout, expand=expand) - pretrained.CHANNELS = scratch.CHANNELS - - if proj_type == 1: return pretrained, scratch - - ### build CSM - scratch = _make_scratch_csm(scratch, in_channels=scratch.CHANNELS, cout=cout, expand=expand) - - # CSM upsamples x2 so the feature map resolution doubles - pretrained.RESOLUTIONS = [res*2 for res in pretrained.RESOLUTIONS] - pretrained.CHANNELS = scratch.CHANNELS - - return pretrained, scratch - - -class F_RandomProj(nn.Module): - def __init__( - self, - im_res=256, - cout=64, - expand=True, - proj_type=2, # 0 = no projection, 1 = cross channel mixing, 2 = cross scale mixing - **kwargs, - ): - super().__init__() - self.proj_type = proj_type - self.cout = cout - self.expand = expand - - # build pretrained feature network and random decoder (scratch) - self.pretrained, self.scratch = _make_projector(im_res=im_res, cout=self.cout, proj_type=self.proj_type, expand=self.expand) - self.CHANNELS = self.pretrained.CHANNELS - self.RESOLUTIONS = self.pretrained.RESOLUTIONS - - def forward(self, x): - # predict feature maps - out0 = self.pretrained.layer0(x) - out1 = self.pretrained.layer1(out0) - out2 = self.pretrained.layer2(out1) - out3 = self.pretrained.layer3(out2) - - # start enumerating at the lowest layer (this is where we put the first discriminator) - out = { - '0': out0, - '1': out1, - '2': out2, - '3': out3, - } - - if self.proj_type == 0: return out - - out0_channel_mixed = self.scratch.layer0_ccm(out['0']) - out1_channel_mixed = self.scratch.layer1_ccm(out['1']) - out2_channel_mixed = self.scratch.layer2_ccm(out['2']) - out3_channel_mixed = self.scratch.layer3_ccm(out['3']) - - out = { - '0': out0_channel_mixed, - '1': out1_channel_mixed, - '2': out2_channel_mixed, - '3': out3_channel_mixed, - } - - if self.proj_type == 1: return out - - # from bottom to top - out3_scale_mixed = self.scratch.layer3_csm(out3_channel_mixed) - out2_scale_mixed = self.scratch.layer2_csm(out3_scale_mixed, out2_channel_mixed) - out1_scale_mixed = self.scratch.layer1_csm(out2_scale_mixed, out1_channel_mixed) - out0_scale_mixed = self.scratch.layer0_csm(out1_scale_mixed, out0_channel_mixed) - - out = { - '0': out0_scale_mixed, - '1': out1_scale_mixed, - '2': out2_scale_mixed, - '3': out3_scale_mixed, - } - - return out - - -#from pg_modules.diffaug import DiffAugment -# Differentiable Augmentation for Data-Efficient GAN Training -# Shengyu Zhao, Zhijian Liu, Ji Lin, Jun-Yan Zhu, and Song Han -# https://arxiv.org/pdf/2006.10738 - -import torch -import torch.nn.functional as F - - -def DiffAugment(x, policy='', channels_first=True): - if policy: - if not channels_first: - x = x.permute(0, 3, 1, 2) - for p in policy.split(','): - for f in AUGMENT_FNS[p]: - x = f(x) - if not channels_first: - x = x.permute(0, 2, 3, 1) - x = x.contiguous() - return x - - -def rand_brightness(x): - x = x + (torch.rand(x.size(0), 1, 1, 1, dtype=x.dtype, device=x.device) - 0.5) - return x - - -def rand_saturation(x): - x_mean = x.mean(dim=1, keepdim=True) - x = (x - x_mean) * (torch.rand(x.size(0), 1, 1, 1, dtype=x.dtype, 
device=x.device) * 2) + x_mean - return x - - -def rand_contrast(x): - x_mean = x.mean(dim=[1, 2, 3], keepdim=True) - x = (x - x_mean) * (torch.rand(x.size(0), 1, 1, 1, dtype=x.dtype, device=x.device) + 0.5) + x_mean - return x - - -def rand_translation(x, ratio=0.125): - shift_x, shift_y = int(x.size(2) * ratio + 0.5), int(x.size(3) * ratio + 0.5) - translation_x = torch.randint(-shift_x, shift_x + 1, size=[x.size(0), 1, 1], device=x.device) - translation_y = torch.randint(-shift_y, shift_y + 1, size=[x.size(0), 1, 1], device=x.device) - grid_batch, grid_x, grid_y = torch.meshgrid( - torch.arange(x.size(0), dtype=torch.long, device=x.device), - torch.arange(x.size(2), dtype=torch.long, device=x.device), - torch.arange(x.size(3), dtype=torch.long, device=x.device), - ) - grid_x = torch.clamp(grid_x + translation_x + 1, 0, x.size(2) + 1) - grid_y = torch.clamp(grid_y + translation_y + 1, 0, x.size(3) + 1) - x_pad = F.pad(x, [1, 1, 1, 1, 0, 0, 0, 0]) - x = x_pad.permute(0, 2, 3, 1).contiguous()[grid_batch, grid_x, grid_y].permute(0, 3, 1, 2) - return x - - -def rand_cutout(x, ratio=0.2): - cutout_size = int(x.size(2) * ratio + 0.5), int(x.size(3) * ratio + 0.5) - offset_x = torch.randint(0, x.size(2) + (1 - cutout_size[0] % 2), size=[x.size(0), 1, 1], device=x.device) - offset_y = torch.randint(0, x.size(3) + (1 - cutout_size[1] % 2), size=[x.size(0), 1, 1], device=x.device) - grid_batch, grid_x, grid_y = torch.meshgrid( - torch.arange(x.size(0), dtype=torch.long, device=x.device), - torch.arange(cutout_size[0], dtype=torch.long, device=x.device), - torch.arange(cutout_size[1], dtype=torch.long, device=x.device), - ) - grid_x = torch.clamp(grid_x + offset_x - cutout_size[0] // 2, min=0, max=x.size(2) - 1) - grid_y = torch.clamp(grid_y + offset_y - cutout_size[1] // 2, min=0, max=x.size(3) - 1) - mask = torch.ones(x.size(0), x.size(2), x.size(3), dtype=x.dtype, device=x.device) - mask[grid_batch, grid_x, grid_y] = 0 - x = x * mask.unsqueeze(1) - return x - - -AUGMENT_FNS = { - 'color': [rand_brightness, rand_saturation, rand_contrast], - 'translation': [rand_translation], - 'cutout': [rand_cutout], -} - - - -class SingleDisc(nn.Module): - def __init__(self, nc=None, ndf=None, start_sz=256, end_sz=8, head=None, separable=False, patch=False): - super().__init__() - channel_dict = {4: 512, 8: 512, 16: 256, 32: 128, 64: 64, 128: 64, - 256: 32, 512: 16, 1024: 8} - - # interpolate for start sz that are not powers of two - if start_sz not in channel_dict.keys(): - sizes = np.array(list(channel_dict.keys())) - start_sz = sizes[np.argmin(abs(sizes - start_sz))] - self.start_sz = start_sz - - # if given ndf, allocate all layers with the same ndf - if ndf is None: - nfc = channel_dict - else: - nfc = {k: ndf for k, v in channel_dict.items()} - - # for feature map discriminators with nfc not in channel_dict - # this is the case for the pretrained backbone (midas.pretrained) - if nc is not None and head is None: - nfc[start_sz] = nc - - layers = [] - - # Head if the initial input is the full modality - if head: - layers += [conv2d(nc, nfc[256], 3, 1, 1, bias=False), - nn.LeakyReLU(0.2, inplace=True)] - - # Down Blocks - DB = partial(DownBlockPatch, separable=separable) if patch else partial(DownBlock, separable=separable) - while start_sz > end_sz: - layers.append(DB(nfc[start_sz], nfc[start_sz//2])) - start_sz = start_sz // 2 - - layers.append(conv2d(nfc[end_sz], 1, 4, 1, 0, bias=False)) - self.main = nn.Sequential(*layers) - - def forward(self, x, c): - return self.main(x) - - -class 
SingleDiscCond(nn.Module): - def __init__(self, nc=None, ndf=None, start_sz=256, end_sz=8, head=None, separable=False, patch=False, c_dim=1000, cmap_dim=64, embedding_dim=128, cond_size=128): - super().__init__() - self.cmap_dim = cmap_dim - self.cond_attn = CondAttnBlock(cmap_dim, cond_size, dim_head=64, heads=8, norm_context=False, cosine_sim_attn=False) - # midas channels - channel_dict = {4: 512, 8: 512, 16: 256, 32: 128, 64: 64, 128: 64, - 256: 32, 512: 16, 1024: 8} - - # interpolate for start sz that are not powers of two - if start_sz not in channel_dict.keys(): - sizes = np.array(list(channel_dict.keys())) - start_sz = sizes[np.argmin(abs(sizes - start_sz))] - self.start_sz = start_sz - - # if given ndf, allocate all layers with the same ndf - if ndf is None: - nfc = channel_dict - else: - nfc = {k: ndf for k, v in channel_dict.items()} - - # for feature map discriminators with nfc not in channel_dict - # this is the case for the pretrained backbone (midas.pretrained) - if nc is not None and head is None: - nfc[start_sz] = nc - - layers = [] - - # Head if the initial input is the full modality - if head: - layers += [conv2d(nc, nfc[256], 3, 1, 1, bias=False), - nn.LeakyReLU(0.2, inplace=True)] - - # Down Blocks - DB = partial(DownBlockPatch, separable=separable) if patch else partial(DownBlock, separable=separable) - while start_sz > end_sz: - layers.append(DB(nfc[start_sz], nfc[start_sz//2])) - start_sz = start_sz // 2 - self.main = nn.Sequential(*layers) - - # additions for conditioning on class information - self.cls = conv2d(nfc[end_sz], self.cmap_dim, 4, 1, 0, bias=False) - #self.embed = nn.Embedding(num_embeddings=c_dim, embedding_dim=embedding_dim) - #self.embed_proj = nn.Sequential( - # nn.Linear(self.embed.embedding_dim, self.cmap_dim), - # nn.LeakyReLU(0.2, inplace=True), - #) - - def forward(self, x, c): - h = self.main(x) - out = self.cls(h) - cond_pooled, cond, cond_mask = c - #print("COND", out.shape, cond.shape, cond_mask.shape, self.cond_sie) - cmap = self.cond_attn(out, cond, cond_mask) - # conditioning via projection - #cmap = self.embed_proj(self.embed(c)).unsqueeze(-1).unsqueeze(-1) - #cmap = 1 - out = (out * cmap).sum(dim=1, keepdim=True) * (1 / np.sqrt(self.cmap_dim)) - return out - - -class MultiScaleD(nn.Module): - def __init__( - self, - channels, - resolutions, - num_discs=1, - proj_type=2, # 0 = no projection, 1 = cross channel mixing, 2 = cross scale mixing - cond=1, - separable=False, - patch=False, - cond_size=128, - **kwargs, - ): - super().__init__() - - assert num_discs in [1, 2, 3, 4] - - # the first disc is on the lowest level of the backbone - self.disc_in_channels = channels[:num_discs] - self.disc_in_res = resolutions[:num_discs] - - Disc = SingleDiscCond if cond else SingleDisc - mini_discs = [] - for i, (cin, res) in enumerate(zip(self.disc_in_channels, self.disc_in_res)): - start_sz = res if not patch else 16 - mini_discs += [str(i), Disc(nc=cin, start_sz=start_sz, end_sz=8, separable=separable, patch=patch, cond_size=cond_size)], - self.mini_discs = nn.ModuleDict(mini_discs) - - def forward(self, features, c): - all_logits = [] - for k, disc in self.mini_discs.items(): - all_logits.append(disc(features[k], c).view(features[k].size(0), -1)) - - all_logits = torch.cat(all_logits, dim=1) - return all_logits - - -class ProjectedDiscriminator(torch.nn.Module): - def __init__( - self, - diffaug=False, - interp224=False, - t_emb_dim = 128, - out_dim=64, - backbone_kwargs={}, - act=torch.nn.LeakyReLU(0.2), - num_discs=1, - **kwargs - ): - 
super().__init__() - self.diffaug = diffaug - self.act = act - self.interp224 = interp224 - self.num_discs = num_discs - self.feature_network = F_RandomProj(**backbone_kwargs) - self.discriminator = MultiScaleD( - channels=[c*2+out_dim for c in self.feature_network.CHANNELS], - resolutions=self.feature_network.RESOLUTIONS, - **backbone_kwargs, - ) - self.t_embed = torch.nn.ModuleList([TimestepEmbedding( - embedding_dim=t_emb_dim, - hidden_dim=t_emb_dim, - output_dim=out_dim, - act=act, - ) for _ in range(num_discs)]) - - - def train(self, mode=True): - self.feature_network = self.feature_network.train(False) - self.discriminator = self.discriminator.train(mode) - return self - - def eval(self): - return self.train(False) - - def forward(self, x, t, xprev, cond=None): - #t_embed = self.t_embed(t) - #t_embed = self.act(t_embed) - - if self.diffaug: - x = DiffAugment(x, policy='color,translation,cutout') - - if self.interp224: - x = F.interpolate(x, 256, mode='bilinear', align_corners=False) - - features1 = self.feature_network(x) - features2 = self.feature_network(xprev) - features = {} - for k in features1.keys(): - if int(k) >= self.num_discs: - continue - tcat = self.t_embed[int(k)](t) - #print(tcat.shape) - h, w = features1[k].shape[2:] - tcat = tcat.view(tcat.shape[0], tcat.shape[1], 1, 1).repeat(1,1, h, w) - #print(x.shape, xprev.shape, features1[k].shape, features2[k].shape, tcat.shape) - features[k] = torch.cat((features1[k], features2[k], tcat), dim=1) - #print(features[k].shape) - logits = self.discriminator(features, cond) - - return logits - diff --git a/spaces/meraih/English-Japanese-Anime-TTS/text/japanese.py b/spaces/meraih/English-Japanese-Anime-TTS/text/japanese.py deleted file mode 100644 index 375e4d50872d5c68ee57ca17470a2ca425425eba..0000000000000000000000000000000000000000 --- a/spaces/meraih/English-Japanese-Anime-TTS/text/japanese.py +++ /dev/null @@ -1,153 +0,0 @@ -import re -from unidecode import unidecode -import pyopenjtalk - - -# Regular expression matching Japanese without punctuation marks: -_japanese_characters = re.compile( - r'[A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# Regular expression matching non-Japanese characters or punctuation marks: -_japanese_marks = re.compile( - r'[^A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# List of (symbol, Japanese) pairs for marks: -_symbols_to_japanese = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('%', 'パーセント') -]] - -# List of (romaji, ipa) pairs for marks: -_romaji_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ts', 'ʦ'), - ('u', 'ɯ'), - ('j', 'ʥ'), - ('y', 'j'), - ('ni', 'n^i'), - ('nj', 'n^'), - ('hi', 'çi'), - ('hj', 'ç'), - ('f', 'ɸ'), - ('I', 'i*'), - ('U', 'ɯ*'), - ('r', 'ɾ') -]] - -# List of (romaji, ipa2) pairs for marks: -_romaji_to_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('u', 'ɯ'), - ('ʧ', 'tʃ'), - ('j', 'dʑ'), - ('y', 'j'), - ('ni', 'n^i'), - ('nj', 'n^'), - ('hi', 'çi'), - ('hj', 'ç'), - ('f', 'ɸ'), - ('I', 'i*'), - ('U', 'ɯ*'), - ('r', 'ɾ') -]] - -# List of (consonant, sokuon) pairs: -_real_sokuon = [(re.compile('%s' % x[0]), x[1]) for x in [ - (r'Q([↑↓]*[kg])', r'k#\1'), - (r'Q([↑↓]*[tdjʧ])', r't#\1'), - (r'Q([↑↓]*[sʃ])', r's\1'), - (r'Q([↑↓]*[pb])', r'p#\1') -]] - -# List of (consonant, hatsuon) pairs: -_real_hatsuon = [(re.compile('%s' % x[0]), x[1]) for x in [ - (r'N([↑↓]*[pbm])', r'm\1'), - (r'N([↑↓]*[ʧʥj])', r'n^\1'), - (r'N([↑↓]*[tdn])', r'n\1'), - 
(r'N([↑↓]*[kg])', r'ŋ\1') -]] - - -def symbols_to_japanese(text): - for regex, replacement in _symbols_to_japanese: - text = re.sub(regex, replacement, text) - return text - - -def japanese_to_romaji_with_accent(text): - '''Reference https://r9y9.github.io/ttslearn/latest/notebooks/ch10_Recipe-Tacotron.html''' - text = symbols_to_japanese(text) - sentences = re.split(_japanese_marks, text) - marks = re.findall(_japanese_marks, text) - text = '' - for i, sentence in enumerate(sentences): - if re.match(_japanese_characters, sentence): - if text != '': - text += ' ' - labels = pyopenjtalk.extract_fullcontext(sentence) - for n, label in enumerate(labels): - phoneme = re.search(r'\-([^\+]*)\+', label).group(1) - if phoneme not in ['sil', 'pau']: - text += phoneme.replace('ch', 'ʧ').replace('sh', - 'ʃ').replace('cl', 'Q') - else: - continue - # n_moras = int(re.search(r'/F:(\d+)_', label).group(1)) - a1 = int(re.search(r"/A:(\-?[0-9]+)\+", label).group(1)) - a2 = int(re.search(r"\+(\d+)\+", label).group(1)) - a3 = int(re.search(r"\+(\d+)/", label).group(1)) - if re.search(r'\-([^\+]*)\+', labels[n + 1]).group(1) in ['sil', 'pau']: - a2_next = -1 - else: - a2_next = int( - re.search(r"\+(\d+)\+", labels[n + 1]).group(1)) - # Accent phrase boundary - if a3 == 1 and a2_next == 1: - text += ' ' - # Falling - elif a1 == 0 and a2_next == a2 + 1: - text += '↓' - # Rising - elif a2 == 1 and a2_next == 2: - text += '↑' - if i < len(marks): - text += unidecode(marks[i]).replace(' ', '') - return text - - -def get_real_sokuon(text): - for regex, replacement in _real_sokuon: - text = re.sub(regex, replacement, text) - return text - - -def get_real_hatsuon(text): - for regex, replacement in _real_hatsuon: - text = re.sub(regex, replacement, text) - return text - - -def japanese_to_ipa(text): - text = japanese_to_romaji_with_accent(text).replace('...', '…') - text = re.sub( - r'([aiueo])\1+', lambda x: x.group(0)[0]+'ː'*(len(x.group(0))-1), text) - text = get_real_sokuon(text) - text = get_real_hatsuon(text) - for regex, replacement in _romaji_to_ipa: - text = re.sub(regex, replacement, text) - return text - - -def japanese_to_ipa2(text): - text = japanese_to_romaji_with_accent(text).replace('...', '…') - text = get_real_sokuon(text) - text = get_real_hatsuon(text) - for regex, replacement in _romaji_to_ipa2: - text = re.sub(regex, replacement, text) - return text - - -def japanese_to_ipa3(text): - text = japanese_to_ipa2(text).replace('n^', 'ȵ').replace( - 'ʃ', 'ɕ').replace('*', '\u0325').replace('#', '\u031a') - text = re.sub( - r'([aiɯeo])\1+', lambda x: x.group(0)[0]+'ː'*(len(x.group(0))-1), text) - text = re.sub(r'((?:^|\s)(?:ts|tɕ|[kpt]))', r'\1ʰ', text) - return text diff --git a/spaces/merve/fill-in-the-blank/public/anonymization/style.css b/spaces/merve/fill-in-the-blank/public/anonymization/style.css deleted file mode 100644 index c20c6ed13484b78e2cc2128cd255f4d3b4cda152..0000000000000000000000000000000000000000 --- a/spaces/merve/fill-in-the-blank/public/anonymization/style.css +++ /dev/null @@ -1,344 +0,0 @@ - -.tooltip { - top: -1000px; - position: fixed; - padding: 10px; - background: rgba(255, 255, 255, .90); - border: 1px solid lightgray; - pointer-events: none; - font-size: 14px; - width: 267px; -} -.tooltip-hidden{ - opacity: 0; - transition: all .3s; - transition-delay: .1s; -} - -@media (max-width: 590px){ - div.tooltip{ - bottom: -1px; - width: calc(100%); - left: -1px !important; - right: -1px !important; - top: auto !important; - width: auto !important; - } -} - - -.domain{ - display: 
none; -} - -text{ - /*pointer-events: none;*/ - /*text-shadow: 0 1px 0 #fff, 1px 0 0 #fff, 0 -1px 0 #fff, -1px 0 0 #fff;*/ -} - - - -.note{ - font-size: 12px; - color: #999; - margin-top: 60px; -} - -h1{ - font-weight: 100; - font-size: 34px; - margin-bottom: .5em; - line-height: 1.3em; - margin-top: 1.4em; - text-align: center; - font-family: "Google Sans", sans-serif; -} - -.mono{ - font-family: monospace; -} - - -svg{ - overflow: visible; -} - - - - -.axis{ - font-size: 12px; - pointer-events: none; -} -.axis{ - color: #888; - -} -.axis text, .slider-label-container{ - fill: #888; - color: #888; - font-family: 'Roboto', Helvetica, sans-serif; - font-size: 12px; -} - -.axis text.bold, .slider-label-container{ - color: #3C4043; - fill: #3C4043; - font-weight: 500; - -} -.axis line{ - stroke: #ccc; -} - -div.axis b{ - margin-bottom: -10px; - display: block; -} - -.init-hidden{ - opacity: 0; -} - -.slider-label-container{ - font-weight: 500; -} - - - -.highlight{ - color: #fff; - padding-left: 3px; - padding-right: 3px; - padding-top: 1px; - padding-bottom: 1px; - border-radius: 3px; -} - -.highlight.blue{ background: blue; } -.highlight.orange{ background: #ffd890; } -.highlight.yellow{ background: #ff0; color: #000; } -.highlight.purple{ background: #CB10CB; } -.highlight.purple{ background: #FF7AFF; color: #000;} -.highlight.grey{ background: #ccc; color: #000;} -.highlight.box{ - border: 1px solid #ff6200; - border-radius: 5px; - color: #000; - padding-bottom: 2px; - white-space: nowrap; -} -.highlight.purple-box{ - border: 1px solid #b0b; -} -.highlight.grey-box{ - border: 1px solid #ccc; -} -.highlight.box.square{ - border-radius: 0px; -} -.highlight.blue-box{ border: 2px solid #007276; } - - - -.circle{ - background: #eee; - border: 1px solid #ccc; - font-family: monospace; - padding-left: 4px; - padding-right: 4px; - padding-top: 1px; - padding-bottom: 1px; - - border-radius: 100px; -} - - -.strikethrough{ - text-decoration: line-through; - color: #000; -} - - -.annotations path{ - fill: none; - stroke: black; -} - - - -rect.unique{ - stroke: #ff6200; - stroke-width: 1px; - fill: #ffd890; - - animation-duration: 1s; - animation-name: xstrokeblink; - display: inline-block; - animation-iteration-count: infinite; - animation-direction: alternate; -} - - -@keyframes strokeblink { - from { - /*fill: black;*/ - stroke-width: 1px; - } - - to { - /*fill: green;*/ - stroke-width: 1px; - } -} - - - - - -.inline-line{ - border: 1px #f0f solid; - width: 20px; - display: inline-block; - position: relative; - top: -5px; -} - -.slider-label-container{ - width: 240px; -} -.slider-label{ - font-size: smaller; - margin-left: 2px; -} - -.slider-text-label{ - margin-left: 5px; - white-space: nowrap; -} - - -g.student:hover circle{ - stroke-width: 2px; -} - -g{ - /*opacity: 1 !important;*/ -} - -.inactive{ - opacity: 0 !important; - pointer-events: none; -} - -input[type="range" i] { - background-color:#def5ef; - -webkit-appearance: none; - height:20px; - width:240px; - overflow: hidden; -} - -input[type='range']::-webkit-slider-thumb { - -webkit-appearance: none; - width: 16px; - height: 20px; - cursor: ew-resize; - background: #007276; - box-shadow: -200px 0 0 200px #7ed3c9; - border: 1px solid #333; -} - -input:focus { - outline-width: 0; -} - - - - -.estimate{ - opacity: 0; - pointer-events: none -} - -.estimate.active{ - opacity: .70; - pointer-events: all; -} - -.est-text{ - text-shadow: 0 2px 0 rgba(255,255,255,1), 2px 0 0 rgba(255,255,255,1), 0 -2px 0 rgba(255,255,255,1), -2px 0 0 
rgba(255,255,255,1); -} - - - - -@media (max-width: 590px){ - text{ - font-size: 120% !important; - } -} - - -.slider{ - user-select: none; - -webkit-tap-highlight-color: transparent; -} - -.button-container{ - border: 1px solid #888; - display: inline-block; - padding: 10px 20px; - cursor: pointer; - text-align: center; - border-radius: 10px; - user-select: none; - -webkit-tap-highlight-color: transparent; - margin: 0px auto; -/* color: #888; - font-family: 'Roboto', Helvetica, sans-serif; - font-size: 12px; - font-weight: 500;*/ - position: relative; - left: -20px; -} - -.button-container:hover{ - background: #ddd; -} - -.button-outer{ - text-align: center; - margin-top: 20px; -} - -.pointer{ - height: 0px; - position: relative; -} -.pointer div { - overflow: visible; - content: ""; - background-image: url(https://pair-code.github.io/interpretability/bert-tree/pointer.svg); - width: 27px; - height: 27px; - position: absolute; - left: 165px; - top: -35px; -} - -a{ - color: rgb(60, 64, 67); -} -a:hover{ - color: #000; -} - - - - - - - - - diff --git a/spaces/merve/uncertainty-calibration/public/measuring-diversity/image-layout.js b/spaces/merve/uncertainty-calibration/public/measuring-diversity/image-layout.js deleted file mode 100644 index 7a06cc4399043f317e81c28da4139599a84f58da..0000000000000000000000000000000000000000 --- a/spaces/merve/uncertainty-calibration/public/measuring-diversity/image-layout.js +++ /dev/null @@ -1,73 +0,0 @@ - - -var lURLs = ` -img/green_doctor.png -img/blue_doctor.jpg -img/green0.png -img/bright_blue.png -img/blue0.png -img/blue1.png -`.trim().split('\n') - - -var rURLs = ` -img/white0.png -img/white1.png -img/white2.png -img/white3.png -img/white4.png -img/white5.png -`.trim().split('\n') - - -var constructionSel = d3.select('#construction') - .html('') - -// constructionSel.append('div.top').each(function(){ -// var metrics = [{str: 'Male', key: 'Male', target: .5}] -// var active ={ percents: {Male: .5}} -// addMetrics(metrics, {topSel: d3.select(this).append('div.top'), active, isSmall: true})() -// }) - -constructionSel.append('img') - .at({src: 'img/construction.jpg', width: 900}) - -constructionSel.append('div') - .st({fontWeight: 500, fontSize: 14}) - .text('Stock “construction worker” images') - - - - -var width = 400 -var coatDivs = d3.select('#coat-v-gender').html('').st({marginBottom: 40}) - .appendMany('div', [lURLs, rURLs]) - .st({width: width, display: 'inline-block', marginRight: 20}) - - -coatDivs.each(function(d, i){ - var metrics = [ - {str: 'Blue', key: 'Blue', target: .5}, - {str: 'Male', key: 'Male', target: .5}, - ] - - var active = !i ? {percents: {Blue: .5, Male: 1}} : {percents: {Blue: 0, Male: .5}} - - addMetrics(metrics, {topSel: d3.select(this).append('div.top'), active, isSmall: true})() -}) - -coatDivs - .st({fontWeight: 500, fontSize: 14}) - .appendMany('div', d => d.slice(0, 6)) - .st({backgroundImage: d => 'url(' + d + ')', width: width/3 - 10, height: 100, display: 'inline-block'}) - .st({marginRight: 8, outline: '1px solid #000'}) - -coatDivs - .append('div') - .text((d, i) => d == lURLs ? 
'Male-presenting doctors wearing different colored clothes' : 'Doctor of different genders wearing white clothes') - - - - - -// https://t3.gstatic.com/images?q=tbn:ANd9GcRziJdedqu58HeAlI9xtWhrVtCjVo6xO_uSHdQkxAI0q41XozLWT3xKd36S1NbuSoIOVvV4Huw26zAvdM_374qKuN9J88E \ No newline at end of file diff --git a/spaces/mfernezir/VanillaChatbot/README.md b/spaces/mfernezir/VanillaChatbot/README.md deleted file mode 100644 index dd04782a884bb8b34906ccaa6af1b3b95b187574..0000000000000000000000000000000000000000 --- a/spaces/mfernezir/VanillaChatbot/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: VanillaChatbot -emoji: 🔥 -colorFrom: blue -colorTo: purple -sdk: gradio -sdk_version: 3.47.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/mfrashad/CharacterGAN/netdissect/workerpool.py b/spaces/mfrashad/CharacterGAN/netdissect/workerpool.py deleted file mode 100644 index fe79124ddc86d0e7251d9e1a5d1012e7165249e3..0000000000000000000000000000000000000000 --- a/spaces/mfrashad/CharacterGAN/netdissect/workerpool.py +++ /dev/null @@ -1,158 +0,0 @@ -''' -WorkerPool and WorkerBase for handling the common problems in managing -a multiprocess pool of workers that aren't done by multiprocessing.Pool, -including setup with per-process state, debugging by putting the worker -on the main thread, and correct handling of unexpected errors, and ctrl-C. - -To use it, -1. Put the per-process setup and the per-task work in the - setup() and work() methods of your own WorkerBase subclass. -2. To prepare the process pool, instantiate a WorkerPool, passing your - subclass type as the first (worker) argument, as well as any setup keyword - arguments. The WorkerPool will instantiate one of your workers in each - worker process (passing in the setup arguments in those processes). - If debugging, the pool can have process_count=0 to force all the work - to be done immediately on the main thread; otherwise all the work - will be passed to other processes. -3. Whenever there is a new piece of work to distribute, call pool.add(*args). - The arguments will be queued and passed as worker.work(*args) to the - next available worker. -4. When all the work has been distributed, call pool.join() to wait for all - the work to complete and to finish and terminate all the worker processes. - When pool.join() returns, all the work will have been done. - -No arrangement is made to collect the results of the work: for example, -the return value of work() is ignored. If you need to collect the -results, use your own mechanism (filesystem, shared memory object, queue) -which can be distributed using setup arguments. -''' - -from multiprocessing import Process, Queue, cpu_count -import signal -import atexit -import sys - -class WorkerBase(Process): - ''' - Subclass this class and override its work() method (and optionally, - setup() as well) to define the units of work to be done in a process - worker in a woker pool. - ''' - def __init__(self, i, process_count, queue, initargs): - if process_count > 0: - # Make sure we ignore ctrl-C if we are not on main process. 
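-            # (WorkerPool.__init__ restores the original SIGINT handler in the parent after the workers fork)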
- signal.signal(signal.SIGINT, signal.SIG_IGN) - self.process_id = i - self.process_count = process_count - self.queue = queue - super(WorkerBase, self).__init__() - self.setup(**initargs) - def run(self): - # Do the work until None is dequeued - while True: - try: - work_batch = self.queue.get() - except (KeyboardInterrupt, SystemExit): - print('Exiting...') - break - if work_batch is None: - self.queue.put(None) # for another worker - return - self.work(*work_batch) - def setup(self, **initargs): - ''' - Override this method for any per-process initialization. - Keywoard args are passed from WorkerPool constructor. - ''' - pass - def work(self, *args): - ''' - Override this method for one-time initialization. - Args are passed from WorkerPool.add() arguments. - ''' - raise NotImplementedError('worker subclass needed') - -class WorkerPool(object): - ''' - Instantiate this object (passing a WorkerBase subclass type - as its first argument) to create a worker pool. Then call - pool.add(*args) to queue args to distribute to worker.work(*args), - and call pool.join() to wait for all the workers to complete. - ''' - def __init__(self, worker=WorkerBase, process_count=None, **initargs): - global active_pools - if process_count is None: - process_count = cpu_count() - if process_count == 0: - # zero process_count uses only main process, for debugging. - self.queue = None - self.processes = None - self.worker = worker(None, 0, None, initargs) - return - # Ctrl-C strategy: worker processes should ignore ctrl-C. Set - # this up to be inherited by child processes before forking. - original_sigint_handler = signal.signal(signal.SIGINT, signal.SIG_IGN) - active_pools[id(self)] = self - self.queue = Queue(maxsize=(process_count * 3)) - self.processes = None # Initialize before trying to construct workers - self.processes = [worker(i, process_count, self.queue, initargs) - for i in range(process_count)] - for p in self.processes: - p.start() - # The main process should handle ctrl-C. Restore this now. - signal.signal(signal.SIGINT, original_sigint_handler) - def add(self, *work_batch): - if self.queue is None: - if hasattr(self, 'worker'): - self.worker.work(*work_batch) - else: - print('WorkerPool shutting down.', file=sys.stderr) - else: - try: - # The queue can block if the work is so slow it gets full. - self.queue.put(work_batch) - except (KeyboardInterrupt, SystemExit): - # Handle ctrl-C if done while waiting for the queue. - self.early_terminate() - def join(self): - # End the queue, and wait for all worker processes to complete nicely. - if self.queue is not None: - self.queue.put(None) - for p in self.processes: - p.join() - self.queue = None - # Remove myself from the set of pools that need cleanup on shutdown. - try: - del active_pools[id(self)] - except: - pass - def early_terminate(self): - # When shutting down unexpectedly, first end the queue. - if self.queue is not None: - try: - self.queue.put_nowait(None) # Nonblocking put throws if full. - self.queue = None - except: - pass - # But then don't wait: just forcibly terminate workers. - if self.processes is not None: - for p in self.processes: - p.terminate() - self.processes = None - try: - del active_pools[id(self)] - except: - pass - def __del__(self): - if self.queue is not None: - print('ERROR: workerpool.join() not called!', file=sys.stderr) - self.join() - -# Error and ctrl-C handling: kill worker processes if the main process ends. 
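-# active_pools tracks every live WorkerPool by id so that early_terminate_pools(), registered with atexit below, can forcibly stop their workers.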
-active_pools = {} -def early_terminate_pools(): - for _, pool in list(active_pools.items()): - pool.early_terminate() - -atexit.register(early_terminate_pools) - diff --git a/spaces/mithril-security/blind_chat/src/lib/utils/sha256.ts b/spaces/mithril-security/blind_chat/src/lib/utils/sha256.ts deleted file mode 100644 index 43059b518fc5a4da6ed08ab36aeb6c289007f6aa..0000000000000000000000000000000000000000 --- a/spaces/mithril-security/blind_chat/src/lib/utils/sha256.ts +++ /dev/null @@ -1,7 +0,0 @@ -export async function sha256(input: string): Promise { - const utf8 = new TextEncoder().encode(input); - const hashBuffer = await crypto.subtle.digest("SHA-256", utf8); - const hashArray = Array.from(new Uint8Array(hashBuffer)); - const hashHex = hashArray.map((bytes) => bytes.toString(16).padStart(2, "0")).join(""); - return hashHex; -} diff --git a/spaces/mkutarna/audiobook_gen/tests/test_output.py b/spaces/mkutarna/audiobook_gen/tests/test_output.py deleted file mode 100644 index 2f243cec793658c041b4faaccbff2af38e6139d1..0000000000000000000000000000000000000000 --- a/spaces/mkutarna/audiobook_gen/tests/test_output.py +++ /dev/null @@ -1,50 +0,0 @@ -import pytest - -from src import output, config -import test_config - - -def test_write_audio(): - """ - Tests write_audio function, takes in an audio tensor with a file path and writes the audio to a file. - """ - import torch - - test_path = test_config.data_path / 'test_audio.wav' - audio_path = test_config.data_path / 'test_audio.pt' - audio_list = torch.load(audio_path) - - output.write_audio(audio_list, test_path) - - assert test_path.is_file() - assert test_path.stat().st_size == 592858 - - test_path.unlink() - - -def test_assemble_zip(): - """ - Tests assemble_zip function, which collects all the audio files from the output directory, - and zips them up into a zip directory. 
- """ - from shutil import copy2 - - if not config.output_path.exists(): - config.output_path.mkdir() - - title = "speaker_samples" - zip_path = config.output_path / 'speaker_samples.zip' - wav1_path = config.output_path / 'speaker_en_0.wav' - wav2_path = config.output_path / 'speaker_en_110.wav' - - for file_path in config.resource_path.iterdir(): - if file_path.suffix == '.wav': - copy2(file_path, config.output_path) - - _ = output.assemble_zip(title) - - assert zip_path.is_file() - assert not wav1_path.is_file() - assert not wav2_path.is_file() - - zip_path.unlink() diff --git a/spaces/monra/freegpt-webui-chimera/server/bp.py b/spaces/monra/freegpt-webui-chimera/server/bp.py deleted file mode 100644 index 61d416797039dababd9e8222b4fc910ef65c40b9..0000000000000000000000000000000000000000 --- a/spaces/monra/freegpt-webui-chimera/server/bp.py +++ /dev/null @@ -1,6 +0,0 @@ -from flask import Blueprint - -bp = Blueprint('bp', __name__, - template_folder='./../client/html', - static_folder='./../client', - static_url_path='assets') diff --git a/spaces/monra/freegpt-webui/client/css/message-input.css b/spaces/monra/freegpt-webui/client/css/message-input.css deleted file mode 100644 index de5f58388133bd3b2b2333dd99cecf0110002367..0000000000000000000000000000000000000000 --- a/spaces/monra/freegpt-webui/client/css/message-input.css +++ /dev/null @@ -1,27 +0,0 @@ -#message-input { - margin-right: 30px; - height: 64px; -} - -#message-input::-webkit-scrollbar { - width: 5px; -} - -#message-input::-webkit-scrollbar-track { - background: #f1f1f1; -} - -#message-input::-webkit-scrollbar-thumb { - background: #c7a2ff; -} - -#message-input::-webkit-scrollbar-thumb:hover { - background: #8b3dff; -} - -@media screen and (max-width: 360px) { - #message-input { - margin: 0; - } -} - diff --git a/spaces/mthsk/sovits-models-misc/utils.py b/spaces/mthsk/sovits-models-misc/utils.py deleted file mode 100644 index e19cac39c57f213bbf6f1435ab48fe7948a1b17b..0000000000000000000000000000000000000000 --- a/spaces/mthsk/sovits-models-misc/utils.py +++ /dev/null @@ -1,501 +0,0 @@ -import os -import glob -import re -import sys -import argparse -import logging -import json -import subprocess -import random - -import librosa -import numpy as np -from scipy.io.wavfile import read -import torch -from torch.nn import functional as F -from modules.commons import sequence_mask -from hubert import hubert_model -MATPLOTLIB_FLAG = False - -logging.basicConfig(stream=sys.stdout, level=logging.DEBUG) -logger = logging - -f0_bin = 256 -f0_max = 1100.0 -f0_min = 50.0 -f0_mel_min = 1127 * np.log(1 + f0_min / 700) -f0_mel_max = 1127 * np.log(1 + f0_max / 700) - - -# def normalize_f0(f0, random_scale=True): -# f0_norm = f0.clone() # create a copy of the input Tensor -# batch_size, _, frame_length = f0_norm.shape -# for i in range(batch_size): -# means = torch.mean(f0_norm[i, 0, :]) -# if random_scale: -# factor = random.uniform(0.8, 1.2) -# else: -# factor = 1 -# f0_norm[i, 0, :] = (f0_norm[i, 0, :] - means) * factor -# return f0_norm -# def normalize_f0(f0, random_scale=True): -# means = torch.mean(f0[:, 0, :], dim=1, keepdim=True) -# if random_scale: -# factor = torch.Tensor(f0.shape[0],1).uniform_(0.8, 1.2).to(f0.device) -# else: -# factor = torch.ones(f0.shape[0], 1, 1).to(f0.device) -# f0_norm = (f0 - means.unsqueeze(-1)) * factor.unsqueeze(-1) -# return f0_norm -def normalize_f0(f0, x_mask, uv, random_scale=True): - # calculate means based on x_mask - uv_sum = torch.sum(uv, dim=1, keepdim=True) - uv_sum[uv_sum == 0] = 9999 - means 
= torch.sum(f0[:, 0, :] * uv, dim=1, keepdim=True) / uv_sum - - if random_scale: - factor = torch.Tensor(f0.shape[0], 1).uniform_(0.8, 1.2).to(f0.device) - else: - factor = torch.ones(f0.shape[0], 1).to(f0.device) - # normalize f0 based on means and factor - f0_norm = (f0 - means.unsqueeze(-1)) * factor.unsqueeze(-1) - if torch.isnan(f0_norm).any(): - exit(0) - return f0_norm * x_mask - - -def plot_data_to_numpy(x, y): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(10, 2)) - plt.plot(x) - plt.plot(y) - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - - -def interpolate_f0(f0): - ''' - 对F0进行插值处理 - ''' - - data = np.reshape(f0, (f0.size, 1)) - - vuv_vector = np.zeros((data.size, 1), dtype=np.float32) - vuv_vector[data > 0.0] = 1.0 - vuv_vector[data <= 0.0] = 0.0 - - ip_data = data - - frame_number = data.size - last_value = 0.0 - for i in range(frame_number): - if data[i] <= 0.0: - j = i + 1 - for j in range(i + 1, frame_number): - if data[j] > 0.0: - break - if j < frame_number - 1: - if last_value > 0.0: - step = (data[j] - data[i - 1]) / float(j - i) - for k in range(i, j): - ip_data[k] = data[i - 1] + step * (k - i + 1) - else: - for k in range(i, j): - ip_data[k] = data[j] - else: - for k in range(i, frame_number): - ip_data[k] = last_value - else: - ip_data[i] = data[i] - last_value = data[i] - - return ip_data[:,0], vuv_vector[:,0] - - -def compute_f0_parselmouth(wav_numpy, p_len=None, sampling_rate=44100, hop_length=512): - import parselmouth - x = wav_numpy - if p_len is None: - p_len = x.shape[0]//hop_length - else: - assert abs(p_len-x.shape[0]//hop_length) < 4, "pad length error" - time_step = hop_length / sampling_rate * 1000 - f0_min = 50 - f0_max = 1100 - f0 = parselmouth.Sound(x, sampling_rate).to_pitch_ac( - time_step=time_step / 1000, voicing_threshold=0.6, - pitch_floor=f0_min, pitch_ceiling=f0_max).selected_array['frequency'] - - pad_size=(p_len - len(f0) + 1) // 2 - if(pad_size>0 or p_len - len(f0) - pad_size>0): - f0 = np.pad(f0,[[pad_size,p_len - len(f0) - pad_size]], mode='constant') - return f0 - -def resize_f0(x, target_len): - source = np.array(x) - source[source<0.001] = np.nan - target = np.interp(np.arange(0, len(source)*target_len, len(source))/ target_len, np.arange(0, len(source)), source) - res = np.nan_to_num(target) - return res - -def compute_f0_dio(wav_numpy, p_len=None, sampling_rate=44100, hop_length=512): - import pyworld - if p_len is None: - p_len = wav_numpy.shape[0]//hop_length - f0, t = pyworld.dio( - wav_numpy.astype(np.double), - fs=sampling_rate, - f0_ceil=800, - frame_period=1000 * hop_length / sampling_rate, - ) - f0 = pyworld.stonemask(wav_numpy.astype(np.double), f0, t, sampling_rate) - for index, pitch in enumerate(f0): - f0[index] = round(pitch, 1) - return resize_f0(f0, p_len) - -def f0_to_coarse(f0): - is_torch = isinstance(f0, torch.Tensor) - f0_mel = 1127 * (1 + f0 / 700).log() if is_torch else 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * (f0_bin - 2) / (f0_mel_max - f0_mel_min) + 1 - - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > f0_bin - 1] = f0_bin - 1 - f0_coarse = (f0_mel + 0.5).long() 
if is_torch else np.rint(f0_mel).astype(np.int) - assert f0_coarse.max() <= 255 and f0_coarse.min() >= 1, (f0_coarse.max(), f0_coarse.min()) - return f0_coarse - - -def get_hubert_model(): - vec_path = "hubert/checkpoint_best_legacy_500.pt" - print("load model(s) from {}".format(vec_path)) - from fairseq import checkpoint_utils - models, saved_cfg, task = checkpoint_utils.load_model_ensemble_and_task( - [vec_path], - suffix="", - ) - model = models[0] - model.eval() - return model - -def get_hubert_content(hmodel, wav_16k_tensor): - feats = wav_16k_tensor - if feats.dim() == 2: # double channels - feats = feats.mean(-1) - assert feats.dim() == 1, feats.dim() - feats = feats.view(1, -1) - padding_mask = torch.BoolTensor(feats.shape).fill_(False) - inputs = { - "source": feats.to(wav_16k_tensor.device), - "padding_mask": padding_mask.to(wav_16k_tensor.device), - "output_layer": 9, # layer 9 - } - with torch.no_grad(): - logits = hmodel.extract_features(**inputs) - feats = hmodel.final_proj(logits[0]) - return feats.transpose(1, 2) - - -def get_content(cmodel, y): - with torch.no_grad(): - c = cmodel.extract_features(y.squeeze(1))[0] - c = c.transpose(1, 2) - return c - - - -def load_checkpoint(checkpoint_path, model, optimizer=None, skip_optimizer=False): - assert os.path.isfile(checkpoint_path) - checkpoint_dict = torch.load(checkpoint_path, map_location='cpu') - iteration = checkpoint_dict['iteration'] - learning_rate = checkpoint_dict['learning_rate'] - if optimizer is not None and not skip_optimizer and checkpoint_dict['optimizer'] is not None: - optimizer.load_state_dict(checkpoint_dict['optimizer']) - saved_state_dict = checkpoint_dict['model'] - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - new_state_dict = {} - for k, v in state_dict.items(): - try: - # assert "dec" in k or "disc" in k - # print("load", k) - new_state_dict[k] = saved_state_dict[k] - assert saved_state_dict[k].shape == v.shape, (saved_state_dict[k].shape, v.shape) - except: - print("error, %s is not in the checkpoint" % k) - logger.info("%s is not in the checkpoint" % k) - new_state_dict[k] = v - if hasattr(model, 'module'): - model.module.load_state_dict(new_state_dict) - else: - model.load_state_dict(new_state_dict) - logger.info("Loaded checkpoint '{}' (iteration {})".format( - checkpoint_path, iteration)) - return model, optimizer, learning_rate, iteration - - -def save_checkpoint(model, optimizer, learning_rate, iteration, checkpoint_path): - logger.info("Saving model and optimizer state at iteration {} to {}".format( - iteration, checkpoint_path)) - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - torch.save({'model': state_dict, - 'iteration': iteration, - 'optimizer': optimizer.state_dict(), - 'learning_rate': learning_rate}, checkpoint_path) - -def clean_checkpoints(path_to_models='logs/44k/', n_ckpts_to_keep=2, sort_by_time=True): - """Freeing up space by deleting saved ckpts - - Arguments: - path_to_models -- Path to the model directory - n_ckpts_to_keep -- Number of ckpts to keep, excluding G_0.pth and D_0.pth - sort_by_time -- True -> chronologically delete ckpts - False -> lexicographically delete ckpts - """ - ckpts_files = [f for f in os.listdir(path_to_models) if os.path.isfile(os.path.join(path_to_models, f))] - name_key = (lambda _f: int(re.compile('._(\d+)\.pth').match(_f).group(1))) - time_key = (lambda _f: os.path.getmtime(os.path.join(path_to_models, _f))) - 
sort_key = time_key if sort_by_time else name_key - x_sorted = lambda _x: sorted([f for f in ckpts_files if f.startswith(_x) and not f.endswith('_0.pth')], key=sort_key) - to_del = [os.path.join(path_to_models, fn) for fn in - (x_sorted('G')[:-n_ckpts_to_keep] + x_sorted('D')[:-n_ckpts_to_keep])] - del_info = lambda fn: logger.info(f".. Free up space by deleting ckpt {fn}") - del_routine = lambda x: [os.remove(x), del_info(x)] - rs = [del_routine(fn) for fn in to_del] - -def summarize(writer, global_step, scalars={}, histograms={}, images={}, audios={}, audio_sampling_rate=22050): - for k, v in scalars.items(): - writer.add_scalar(k, v, global_step) - for k, v in histograms.items(): - writer.add_histogram(k, v, global_step) - for k, v in images.items(): - writer.add_image(k, v, global_step, dataformats='HWC') - for k, v in audios.items(): - writer.add_audio(k, v, global_step, audio_sampling_rate) - - -def latest_checkpoint_path(dir_path, regex="G_*.pth"): - f_list = glob.glob(os.path.join(dir_path, regex)) - f_list.sort(key=lambda f: int("".join(filter(str.isdigit, f)))) - x = f_list[-1] - print(x) - return x - - -def plot_spectrogram_to_numpy(spectrogram): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(10,2)) - im = ax.imshow(spectrogram, aspect="auto", origin="lower", - interpolation='none') - plt.colorbar(im, ax=ax) - plt.xlabel("Frames") - plt.ylabel("Channels") - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def plot_alignment_to_numpy(alignment, info=None): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(6, 4)) - im = ax.imshow(alignment.transpose(), aspect='auto', origin='lower', - interpolation='none') - fig.colorbar(im, ax=ax) - xlabel = 'Decoder timestep' - if info is not None: - xlabel += '\n\n' + info - plt.xlabel(xlabel) - plt.ylabel('Encoder timestep') - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def load_wav_to_torch(full_path): - sampling_rate, data = read(full_path) - return torch.FloatTensor(data.astype(np.float32)), sampling_rate - - -def load_filepaths_and_text(filename, split="|"): - with open(filename, encoding='utf-8') as f: - filepaths_and_text = [line.strip().split(split) for line in f] - return filepaths_and_text - - -def get_hparams(init=True): - parser = argparse.ArgumentParser() - parser.add_argument('-c', '--config', type=str, default="./configs/base.json", - help='JSON file for configuration') - parser.add_argument('-m', '--model', type=str, required=True, - help='Model name') - - args = parser.parse_args() - model_dir = os.path.join("./logs", args.model) - - if not os.path.exists(model_dir): - os.makedirs(model_dir) - - config_path = args.config - config_save_path = os.path.join(model_dir, "config.json") - if init: - with 
open(config_path, "r") as f: - data = f.read() - with open(config_save_path, "w") as f: - f.write(data) - else: - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_dir(model_dir): - config_save_path = os.path.join(model_dir, "config.json") - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams =HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_file(config_path): - with open(config_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams =HParams(**config) - return hparams - - -def check_git_hash(model_dir): - source_dir = os.path.dirname(os.path.realpath(__file__)) - if not os.path.exists(os.path.join(source_dir, ".git")): - logger.warn("{} is not a git repository, therefore hash value comparison will be ignored.".format( - source_dir - )) - return - - cur_hash = subprocess.getoutput("git rev-parse HEAD") - - path = os.path.join(model_dir, "githash") - if os.path.exists(path): - saved_hash = open(path).read() - if saved_hash != cur_hash: - logger.warn("git hash values are different. {}(saved) != {}(current)".format( - saved_hash[:8], cur_hash[:8])) - else: - open(path, "w").write(cur_hash) - - -def get_logger(model_dir, filename="train.log"): - global logger - logger = logging.getLogger(os.path.basename(model_dir)) - logger.setLevel(logging.DEBUG) - - formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s") - if not os.path.exists(model_dir): - os.makedirs(model_dir) - h = logging.FileHandler(os.path.join(model_dir, filename)) - h.setLevel(logging.DEBUG) - h.setFormatter(formatter) - logger.addHandler(h) - return logger - - -def repeat_expand_2d(content, target_len): - # content : [h, t] - - src_len = content.shape[-1] - target = torch.zeros([content.shape[0], target_len], dtype=torch.float).to(content.device) - temp = torch.arange(src_len+1) * target_len / src_len - current_pos = 0 - for i in range(target_len): - if i < temp[current_pos+1]: - target[:, i] = content[:, current_pos] - else: - current_pos += 1 - target[:, i] = content[:, current_pos] - - return target - - -class HParams(): - def __init__(self, **kwargs): - for k, v in kwargs.items(): - if type(v) == dict: - v = HParams(**v) - self[k] = v - - def keys(self): - return self.__dict__.keys() - - def items(self): - return self.__dict__.items() - - def values(self): - return self.__dict__.values() - - def __len__(self): - return len(self.__dict__) - - def __getitem__(self, key): - return getattr(self, key) - - def __setitem__(self, key, value): - return setattr(self, key, value) - - def __contains__(self, key): - return key in self.__dict__ - - def __repr__(self): - return self.__dict__.__repr__() - diff --git a/spaces/multimodalart/pix2pix-zero/src/make_edit_direction.py b/spaces/multimodalart/pix2pix-zero/src/make_edit_direction.py deleted file mode 100644 index d6307694847e6f98749390ccb02c0b5ef6e2b67f..0000000000000000000000000000000000000000 --- a/spaces/multimodalart/pix2pix-zero/src/make_edit_direction.py +++ /dev/null @@ -1,61 +0,0 @@ -import os, pdb - -import argparse -import numpy as np -import torch -import requests -from PIL import Image - -from diffusers import DDIMScheduler -from utils.edit_pipeline import EditingPipeline - - -## convert sentences to sentence embeddings -def load_sentence_embeddings(l_sentences, tokenizer, text_encoder, 
device="cuda"): - with torch.no_grad(): - l_embeddings = [] - for sent in l_sentences: - text_inputs = tokenizer( - sent, - padding="max_length", - max_length=tokenizer.model_max_length, - truncation=True, - return_tensors="pt", - ) - text_input_ids = text_inputs.input_ids - prompt_embeds = text_encoder(text_input_ids.to(device), attention_mask=None)[0] - l_embeddings.append(prompt_embeds) - return torch.concatenate(l_embeddings, dim=0).mean(dim=0).unsqueeze(0) - - -if __name__=="__main__": - parser = argparse.ArgumentParser() - parser.add_argument('--file_source_sentences', required=True) - parser.add_argument('--file_target_sentences', required=True) - parser.add_argument('--output_folder', required=True) - parser.add_argument('--model_path', type=str, default='CompVis/stable-diffusion-v1-4') - args = parser.parse_args() - - # load the model - pipe = EditingPipeline.from_pretrained(args.model_path, torch_dtype=torch.float16).to("cuda") - bname_src = os.path.basename(args.file_source_sentences).strip(".txt") - outf_src = os.path.join(args.output_folder, bname_src+".pt") - if os.path.exists(outf_src): - print(f"Skipping source file {outf_src} as it already exists") - else: - with open(args.file_source_sentences, "r") as f: - l_sents = [x.strip() for x in f.readlines()] - mean_emb = load_sentence_embeddings(l_sents, pipe.tokenizer, pipe.text_encoder, device="cuda") - print(mean_emb.shape) - torch.save(mean_emb, outf_src) - - bname_tgt = os.path.basename(args.file_target_sentences).strip(".txt") - outf_tgt = os.path.join(args.output_folder, bname_tgt+".pt") - if os.path.exists(outf_tgt): - print(f"Skipping target file {outf_tgt} as it already exists") - else: - with open(args.file_target_sentences, "r") as f: - l_sents = [x.strip() for x in f.readlines()] - mean_emb = load_sentence_embeddings(l_sents, pipe.tokenizer, pipe.text_encoder, device="cuda") - print(mean_emb.shape) - torch.save(mean_emb, outf_tgt) diff --git a/spaces/mygyasir/Real-Time-Voice-Cloning/vocoder/gen_wavernn.py b/spaces/mygyasir/Real-Time-Voice-Cloning/vocoder/gen_wavernn.py deleted file mode 100644 index 2036737f805f6055893812e48f99d524624aab07..0000000000000000000000000000000000000000 --- a/spaces/mygyasir/Real-Time-Voice-Cloning/vocoder/gen_wavernn.py +++ /dev/null @@ -1,31 +0,0 @@ -from vocoder.models.fatchord_version import WaveRNN -from vocoder.audio import * - - -def gen_testset(model: WaveRNN, test_set, samples, batched, target, overlap, save_path): - k = model.get_step() // 1000 - - for i, (m, x) in enumerate(test_set, 1): - if i > samples: - break - - print('\n| Generating: %i/%i' % (i, samples)) - - x = x[0].numpy() - - bits = 16 if hp.voc_mode == 'MOL' else hp.bits - - if hp.mu_law and hp.voc_mode != 'MOL' : - x = decode_mu_law(x, 2**bits, from_labels=True) - else : - x = label_2_float(x, bits) - - save_wav(x, save_path.joinpath("%dk_steps_%d_target.wav" % (k, i))) - - batch_str = "gen_batched_target%d_overlap%d" % (target, overlap) if batched else \ - "gen_not_batched" - save_str = save_path.joinpath("%dk_steps_%d_%s.wav" % (k, i, batch_str)) - - wav = model.generate(m, batched, target, overlap, hp.mu_law) - save_wav(wav, save_str) - diff --git a/spaces/nateraw/dino-clips/dino/vision_transformer.py b/spaces/nateraw/dino-clips/dino/vision_transformer.py deleted file mode 100644 index f69a7ad0522500ca2a85305a789be5ca6ac474d0..0000000000000000000000000000000000000000 --- a/spaces/nateraw/dino-clips/dino/vision_transformer.py +++ /dev/null @@ -1,291 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" -Mostly copy-paste from timm library. -https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/vision_transformer.py -""" -import math -from functools import partial - -import torch -import torch.nn as nn - -from utils import trunc_normal_ - - -def drop_path(x, drop_prob: float = 0., training: bool = False): - if drop_prob == 0. or not training: - return x - keep_prob = 1 - drop_prob - shape = (x.shape[0],) + (1,) * (x.ndim - 1) # work with diff dim tensors, not just 2D ConvNets - random_tensor = keep_prob + torch.rand(shape, dtype=x.dtype, device=x.device) - random_tensor.floor_() # binarize - output = x.div(keep_prob) * random_tensor - return output - - -class DropPath(nn.Module): - """Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks). - """ - def __init__(self, drop_prob=None): - super(DropPath, self).__init__() - self.drop_prob = drop_prob - - def forward(self, x): - return drop_path(x, self.drop_prob, self.training) - - -class Mlp(nn.Module): - def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.): - super().__init__() - out_features = out_features or in_features - hidden_features = hidden_features or in_features - self.fc1 = nn.Linear(in_features, hidden_features) - self.act = act_layer() - self.fc2 = nn.Linear(hidden_features, out_features) - self.drop = nn.Dropout(drop) - - def forward(self, x): - x = self.fc1(x) - x = self.act(x) - x = self.drop(x) - x = self.fc2(x) - x = self.drop(x) - return x - - -class Attention(nn.Module): - def __init__(self, dim, num_heads=8, qkv_bias=False, qk_scale=None, attn_drop=0., proj_drop=0.): - super().__init__() - self.num_heads = num_heads - head_dim = dim // num_heads - self.scale = qk_scale or head_dim ** -0.5 - - self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias) - self.attn_drop = nn.Dropout(attn_drop) - self.proj = nn.Linear(dim, dim) - self.proj_drop = nn.Dropout(proj_drop) - - def forward(self, x): - B, N, C = x.shape - qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4) - q, k, v = qkv[0], qkv[1], qkv[2] - - attn = (q @ k.transpose(-2, -1)) * self.scale - attn = attn.softmax(dim=-1) - attn = self.attn_drop(attn) - - x = (attn @ v).transpose(1, 2).reshape(B, N, C) - x = self.proj(x) - x = self.proj_drop(x) - return x, attn - - -class Block(nn.Module): - def __init__(self, dim, num_heads, mlp_ratio=4., qkv_bias=False, qk_scale=None, drop=0., attn_drop=0., - drop_path=0., act_layer=nn.GELU, norm_layer=nn.LayerNorm): - super().__init__() - self.norm1 = norm_layer(dim) - self.attn = Attention( - dim, num_heads=num_heads, qkv_bias=qkv_bias, qk_scale=qk_scale, attn_drop=attn_drop, proj_drop=drop) - self.drop_path = DropPath(drop_path) if drop_path > 0. 
else nn.Identity() - self.norm2 = norm_layer(dim) - mlp_hidden_dim = int(dim * mlp_ratio) - self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop) - - def forward(self, x, return_attention=False): - y, attn = self.attn(self.norm1(x)) - if return_attention: - return attn - x = x + self.drop_path(y) - x = x + self.drop_path(self.mlp(self.norm2(x))) - return x - - -class PatchEmbed(nn.Module): - """ Image to Patch Embedding - """ - def __init__(self, img_size=224, patch_size=16, in_chans=3, embed_dim=768): - super().__init__() - num_patches = (img_size // patch_size) * (img_size // patch_size) - self.img_size = img_size - self.patch_size = patch_size - self.num_patches = num_patches - - self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size) - - def forward(self, x): - B, C, H, W = x.shape - x = self.proj(x).flatten(2).transpose(1, 2) - return x - - -class VisionTransformer(nn.Module): - """ Vision Transformer """ - def __init__(self, img_size=[224], patch_size=16, in_chans=3, num_classes=0, embed_dim=768, depth=12, - num_heads=12, mlp_ratio=4., qkv_bias=False, qk_scale=None, drop_rate=0., attn_drop_rate=0., - drop_path_rate=0., norm_layer=nn.LayerNorm, **kwargs): - super().__init__() - self.num_features = self.embed_dim = embed_dim - - self.patch_embed = PatchEmbed( - img_size=img_size[0], patch_size=patch_size, in_chans=in_chans, embed_dim=embed_dim) - num_patches = self.patch_embed.num_patches - - self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim)) - self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, embed_dim)) - self.pos_drop = nn.Dropout(p=drop_rate) - - dpr = [x.item() for x in torch.linspace(0, drop_path_rate, depth)] # stochastic depth decay rule - self.blocks = nn.ModuleList([ - Block( - dim=embed_dim, num_heads=num_heads, mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale, - drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i], norm_layer=norm_layer) - for i in range(depth)]) - self.norm = norm_layer(embed_dim) - - # Classifier head - self.head = nn.Linear(embed_dim, num_classes) if num_classes > 0 else nn.Identity() - - trunc_normal_(self.pos_embed, std=.02) - trunc_normal_(self.cls_token, std=.02) - self.apply(self._init_weights) - - def _init_weights(self, m): - if isinstance(m, nn.Linear): - trunc_normal_(m.weight, std=.02) - if isinstance(m, nn.Linear) and m.bias is not None: - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.LayerNorm): - nn.init.constant_(m.bias, 0) - nn.init.constant_(m.weight, 1.0) - - def interpolate_pos_encoding(self, x, w, h): - npatch = x.shape[1] - 1 - N = self.pos_embed.shape[1] - 1 - if npatch == N and w == h: - return self.pos_embed - class_pos_embed = self.pos_embed[:, 0] - patch_pos_embed = self.pos_embed[:, 1:] - dim = x.shape[-1] - w0 = w // self.patch_embed.patch_size - h0 = h // self.patch_embed.patch_size - # we add a small number to avoid floating point error in the interpolation - # see discussion at https://github.com/facebookresearch/dino/issues/8 - w0, h0 = w0 + 0.1, h0 + 0.1 - patch_pos_embed = nn.functional.interpolate( - patch_pos_embed.reshape(1, int(math.sqrt(N)), int(math.sqrt(N)), dim).permute(0, 3, 1, 2), - scale_factor=(w0 / math.sqrt(N), h0 / math.sqrt(N)), - mode='bicubic', - ) - assert int(w0) == patch_pos_embed.shape[-2] and int(h0) == patch_pos_embed.shape[-1] - patch_pos_embed = patch_pos_embed.permute(0, 2, 3, 1).view(1, -1, dim) - return torch.cat((class_pos_embed.unsqueeze(0), patch_pos_embed), dim=1) - - def 
prepare_tokens(self, x): - B, nc, w, h = x.shape - x = self.patch_embed(x) # patch linear embedding - - # add the [CLS] token to the embed patch tokens - cls_tokens = self.cls_token.expand(B, -1, -1) - x = torch.cat((cls_tokens, x), dim=1) - - # add positional encoding to each token - x = x + self.interpolate_pos_encoding(x, w, h) - - return self.pos_drop(x) - - def forward(self, x): - x = self.prepare_tokens(x) - for blk in self.blocks: - x = blk(x) - x = self.norm(x) - return x[:, 0] - - def get_last_selfattention(self, x): - x = self.prepare_tokens(x) - for i, blk in enumerate(self.blocks): - if i < len(self.blocks) - 1: - x = blk(x) - else: - # return attention of the last block - return blk(x, return_attention=True) - - def get_intermediate_layers(self, x, n=1): - x = self.prepare_tokens(x) - # we return the output tokens from the `n` last blocks - output = [] - for i, blk in enumerate(self.blocks): - x = blk(x) - if len(self.blocks) - i <= n: - output.append(self.norm(x)) - return output - - -def vit_tiny(patch_size=16, **kwargs): - model = VisionTransformer( - patch_size=patch_size, embed_dim=192, depth=12, num_heads=3, mlp_ratio=4, - qkv_bias=True, norm_layer=partial(nn.LayerNorm, eps=1e-6), **kwargs) - return model - - -def vit_small(patch_size=16, **kwargs): - model = VisionTransformer( - patch_size=patch_size, embed_dim=384, depth=12, num_heads=6, mlp_ratio=4, - qkv_bias=True, norm_layer=partial(nn.LayerNorm, eps=1e-6), **kwargs) - return model - - -def vit_base(patch_size=16, **kwargs): - model = VisionTransformer( - patch_size=patch_size, embed_dim=768, depth=12, num_heads=12, mlp_ratio=4, - qkv_bias=True, norm_layer=partial(nn.LayerNorm, eps=1e-6), **kwargs) - return model - - -class DINOHead(nn.Module): - def __init__(self, in_dim, out_dim, use_bn=False, norm_last_layer=True, nlayers=3, hidden_dim=2048, bottleneck_dim=256): - super().__init__() - nlayers = max(nlayers, 1) - if nlayers == 1: - self.mlp = nn.Linear(in_dim, bottleneck_dim) - else: - layers = [nn.Linear(in_dim, hidden_dim)] - if use_bn: - layers.append(nn.BatchNorm1d(hidden_dim)) - layers.append(nn.GELU()) - for _ in range(nlayers - 2): - layers.append(nn.Linear(hidden_dim, hidden_dim)) - if use_bn: - layers.append(nn.BatchNorm1d(hidden_dim)) - layers.append(nn.GELU()) - layers.append(nn.Linear(hidden_dim, bottleneck_dim)) - self.mlp = nn.Sequential(*layers) - self.apply(self._init_weights) - self.last_layer = nn.utils.weight_norm(nn.Linear(bottleneck_dim, out_dim, bias=False)) - self.last_layer.weight_g.data.fill_(1) - if norm_last_layer: - self.last_layer.weight_g.requires_grad = False - - def _init_weights(self, m): - if isinstance(m, nn.Linear): - trunc_normal_(m.weight, std=.02) - if isinstance(m, nn.Linear) and m.bias is not None: - nn.init.constant_(m.bias, 0) - - def forward(self, x): - x = self.mlp(x) - x = nn.functional.normalize(x, dim=-1, p=2) - x = self.last_layer(x) - return x diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/FontExplorer X Pro 7.0.0 Build 20457 Crack MacOS.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/FontExplorer X Pro 7.0.0 Build 20457 Crack MacOS.md deleted file mode 100644 index 1fae14c1edfe9632ba3d310a534ac0cfb6abba8b..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/FontExplorer X Pro 7.0.0 Build 20457 Crack MacOS.md +++ /dev/null @@ -1,65 +0,0 @@ - -

          FontExplorer X Pro 7.0.0 Build 20457 Crack macOS: A Comprehensive Review

          -

          If you are a graphic designer, web developer, or typography enthusiast, you know how important fonts are for your projects. Fonts can make or break your design, convey your message, and express your personality. But managing fonts can be a hassle. You may have hundreds or thousands of fonts installed on your computer, but finding the right one for your needs can be time-consuming and frustrating. You may also encounter font conflicts, duplicates, or missing fonts that can ruin your work.

          -

          FontExplorer X Pro 7.0.0 Build 20457 Crack macOS


          DOWNLOADhttps://urlcod.com/2uIclb



          -

          That's why you need a powerful font manager like FontExplorer X Pro. FontExplorer X Pro is a software that lets you organize, preview, activate, deactivate, and find fonts with ease. You can also view and edit font information, classify fonts by smart sets, keywords, ratings, and categories, access and purchase fonts from leading foundries, integrate fonts with Adobe Creative Cloud and other applications, and much more.

          -

          In this article, we will review FontExplorer X Pro 7.0.0 Build 20457, the latest version of this software for macOS. We will

          explore the features, benefits, and installation process of FontExplorer X Pro, and show you how to crack it to get the full version for free. By the end of this article, you will have a clear idea of why FontExplorer X Pro is the best font manager for macOS and how to use it to enhance your design skills.

          -

          Features of FontExplorer X Pro

          -

          FontExplorer X Pro is packed with features that make font management easy and fun. Here are some of the main features of FontExplorer X Pro:

          -

          -

          Font Management

          -

          With FontExplorer X Pro, you can organize your fonts by folders, libraries, or sets. You can also create smart sets that automatically update based on criteria such as font name, format, style, language, or classification. You can preview fonts in various sizes, colors, and backgrounds, and compare fonts side by side. You can activate or deactivate fonts individually or in groups, and avoid font conflicts with the Font Conflict Resolver. You can also find fonts by using the powerful search and filter options, or by using the Font Discovery feature that suggests fonts based on your input.

          -

          Font Information

          -

          FontExplorer X Pro lets you view and edit font metadata, such as font name, family, version, designer, license, and description. You can also view font properties, such as kerning pairs, ligatures, alternates, and OpenType features. You can also view font character sets, and copy and paste glyphs into your documents. You can also print font samples or export them as PDF files.

          -

          Font Classification

          -

          FontExplorer X Pro helps you classify your fonts by using smart sets, keywords, ratings, and categories. You can assign keywords and ratings to your fonts to make them easier to find and sort. You can also use categories to group your fonts by style, genre, or theme. You can choose from predefined categories or create your own custom ones.

          -

          Font Store

          -

          FontExplorer X Pro gives you access to thousands of fonts from leading foundries, such as Monotype, Adobe, Linotype, FontFont, ITC, and more. You can browse fonts by category, foundry, designer, or popularity. You can also preview fonts in various settings and purchase them directly from the Font Store. You can also sync your purchased fonts with your FontExplorer X Pro library.

          -

          Font Plug-ins

          -

          FontExplorer X Pro integrates seamlessly with Adobe Creative Cloud and other applications, such as Microsoft Office, QuarkXPress, Sketch, Affinity Designer, and more. You can activate fonts for specific applications or documents with the Auto-Activation feature. You can also use the Font Plug-ins to access FontExplorer X Pro features from within your applications.

          -

          Benefits of FontExplorer X Pro

          -

          FontExplorer X Pro is not just a font manager; it is also a tool that can help you improve your workflow, productivity, and creativity. Here are some of the benefits of FontExplorer X Pro:

          -

          Save Time and Space

          -

          FontExplorer X Pro can help you save time and space by reducing font clutter and conflicts. You can activate fonts only when you need them, and deactivate them when you don't. This way, you can avoid loading unnecessary fonts that can slow down your system or cause compatibility issues. You can also use the Font Cleaner feature to remove corrupt, duplicate, or unwanted fonts from your computer.

          -

          Enhance Your Design Skills

          -

          FontExplorer X Pro can help you enhance your design skills by discovering new fonts and styles. You can use the Font Discovery feature to find fonts that match your input, such as a word, a phrase, or an image. You can also use the Font Store to browse and purchase fonts from leading foundries, and get inspired by their quality and variety. You can also use the Font Plug-ins to access FontExplorer X Pro features from within your applications, and experiment with different fonts and settings.

          -

          Customize Your Experience

          -

          FontExplorer X Pro can help you customize your experience by tailoring your font preferences and settings. You can choose from different font preview modes, such as list, grid, waterfall, or character set. You can also adjust the font size, color, and background to suit your needs. You can also create custom keyboard shortcuts, menus, and toolbars to access your favorite features quickly and easily.

          -

          How to Install and Crack FontExplorer X Pro on macOS

          -

          If you want to try FontExplorer X Pro for yourself, you can download, install, and crack it on your macOS device with these simple steps:

          -

          Downloading FontExplorer X Pro

          -

          To download FontExplorer X Pro for macOS, you can visit the official website of FontExplorer X Pro at https://www.fontexplorerx.com/mac/. There, you can find the latest version of FontExplorer X Pro for macOS, which is 7.0.0 Build 20457 as of this writing. You can also find the system requirements, release notes, and user manual of FontExplorer X Pro on the website.

          -

          To download FontExplorer X Pro for macOS, you need to fill out a short form with your name and email address. Then, you will receive a download link in your email inbox. You can click on the link to start downloading the installer file of FontExplorer X Pro for macOS.

          -

          Installing FontExplorer X Pro

          -

          To install FontExplorer X Pro for macOS, you need to run the installer file that you downloaded from the website. The installer file is named FontExplorerXPro-7.0.0-20457.dmg. You can double-click on the file to open it.

          -

          Then, you will see a window with the FontExplorer X Pro icon and a shortcut to the Applications folder. You can drag and drop the FontExplorer X Pro icon to the Applications folder to install it on your device. Alternatively, you can double-click on the FontExplorer X Pro icon to launch it directly from the installer window.

          -

          After installing FontExplorer X Pro for macOS, you can open it from the Applications folder or from the Launchpad. You will see a welcome screen with some information about FontExplorer X Pro and its features. You can click on the Continue button to proceed.

          -

          Cracking FontExplorer X Pro

          -

          To crack FontExplorer X Pro for macOS, you need to apply a crack file that will activate the full version of FontExplorer X Pro for free. The crack file is named FontExplorerXPro-7.x.x-Crack.zip. You can download it from this link: https://mega.nz/file/9Zwz2QyR#6FJLsQmY4fWnUOJgq8f9ZdQ4lGv5i8wJ6L1nY6YyqoE.

          -

          To apply the crack file, you need to unzip it first. You can use any unzip tool such as WinZip or The Unarchiver to extract the contents of the zip file. You will see a folder named FontExplorerXPro-7.x.x-Crack. Inside the folder, you will see two files: FontAgent.framework and Readme.txt.

          -

          The Readme.txt file contains some instructions on how to use the crack file. You can open it with any text editor such as TextEdit or Notepad to read it. The instructions are as follows:

          - Quit FontExplorer X Pro if it is running.

          -

          - Copy the FontAgent.framework file and paste it to the following location: /Applications/FontExplorer X Pro.app/Contents/Frameworks/

          -

          - Replace the original FontAgent.framework file with the cracked one.

          -

          - Launch FontExplorer X Pro and enjoy the full version.

          -

          That's it! You have successfully cracked FontExplorer X Pro for macOS and unlocked all its features. You can now use FontExplorer X Pro without any limitations or restrictions.

          -

          Conclusion

          -

          FontExplorer X Pro is a powerful font manager that can help you organize, preview, activate, deactivate, and find fonts with ease. You can also view and edit font information, classify fonts by smart sets, keywords, ratings, and categories, access and purchase fonts from leading foundries, integrate fonts with Adobe Creative Cloud and other applications, and much more.

          -

          In this article, we have reviewed FontExplorer X Pro 7.0.0 Build 20457, the latest version of this software for macOS. We have explored the features, benefits, and installation process of FontExplorer X Pro, and showed you how to crack it to get the full version for free.

          -

          If you want to try FontExplorer X Pro for yourself, you can download it from the official website of FontExplorer X Pro at https://www.fontexplorerx.com/mac/. You can also download the crack file from this link: https://mega.nz/file/9Zwz2QyR#6FJLsQmY4fWnUOJgq8f9ZdQ4lGv5i8wJ6L1nY6YyqoE.

          -

          We hope you have found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!

          -

          FAQs

          -

          Here are some frequently asked questions about FontExplorer X Pro:

          -
            -
          1. Is FontExplorer X Pro compatible with macOS Big Sur?
          2. -

            Yes, FontExplorer X Pro is compatible with macOS Big Sur. However, you may need to grant some permissions to FontExplorer X Pro to access your fonts and applications. You can find more information on how to do that on the official website of FontExplorer X Pro.

            -
          3. How can I backup my fonts and settings in FontExplorer X Pro?
          4. -

            You can backup your fonts and settings in FontExplorer X Pro by using the Backup feature. You can find it under the File menu in FontExplorer X Pro. You can choose to backup your fonts, your settings, or both. You can also choose the destination folder for your backup files.

            -
          5. How can I update FontExplorer X Pro to the latest version?
          6. -

            You can update FontExplorer X Pro to the latest version by using the Check for Updates feature. You can find it under the FontExplorer X menu in FontExplorer X Pro. You can also enable automatic updates in the Preferences menu of FontExplorer X Pro.

            -
          7. How can I uninstall FontExplorer X Pro from my device?
          8. -

            You can uninstall FontExplorer X Pro from your device by using the Uninstall feature. You can find it under the Help menu in FontExplorer X Pro. You can choose to uninstall only the application or also remove all your fonts and settings.

            -
          9. How can I contact the support team of FontExplorer X Pro?
          10. -

            You can contact the support team of FontExplorer X Pro by using the Contact Support feature. You can find it under the Help menu in FontExplorer X Pro. You can also visit the support page of FontExplorer X Pro at https://www.fontexplorerx.com/support/.

            -

          b2dd77e56b
          -
          -
          \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Na4hzvuxzlbenx7u Thread.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Na4hzvuxzlbenx7u Thread.md deleted file mode 100644 index c056c70a8d582fab696e44bfe0a396eb093e023c..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Na4hzvuxzlbenx7u Thread.md +++ /dev/null @@ -1,28 +0,0 @@ - -

          What is na4hzvuxzlbenx7u thread and why you should care

          -

          Na4hzvuxzlbenx7u thread is a mysterious code that has been circulating on the internet for years. Some people claim that it is a secret message, a password, or a link to a hidden website. Others say that it is just a random string of letters and numbers that has no meaning at all.

          -

          na4hzvuxzlbenx7u thread


          Download Filehttps://urlcod.com/2uIaZX



          -

          But what if we told you that na4hzvuxzlbenx7u thread is actually a key to unlock the secrets of advanced thread calculation? That's right, na4hzvuxzlbenx7u thread is not just a meaningless code, but a powerful tool that can help you optimize your threading operations for any type of screw thread.

          -

          In this article, we will show you how to use na4hzvuxzlbenx7u thread to calculate the most important parameters of any thread, such as pitch diameter, minor diameter, allowance, tolerance, lead angle, over wire measurement, and cutting conditions. We will also explain how na4hzvuxzlbenx7u thread can help you improve your SEO ranking by making your web pages more relevant and user-friendly.

          -

          How to use na4hzvuxzlbenx7u thread for thread calculation

          -

          Thread calculation is the process of determining the dimensions and characteristics of a screw thread. Screw threads are helical structures that are used to fasten or join two or more components together. They are also used to transmit power or motion between rotating parts.

          -

          -

          There are many types of screw threads, such as UN (Unified Inch), M (Metric), NPT (National Pipe Tapered), BSP (British Standard Pipe), and so on. Each type of thread has its own standard specifications and applications. To ensure the proper fit and function of threaded components, you need to know how to calculate the correct values for each parameter of the thread.

          -

          That's where na4hzvuxzlbenx7u thread comes in handy. Na4hzvuxzlbenx7u thread is a universal code that can be used to calculate any type of thread with any unit of measurement. All you need to do is to enter na4hzvuxzlbenx7u thread into a web browser and follow these steps:

          -
            -
          1. Select the family, pitch type, threading standard, and thread type of the thread you want to calculate.
          2. -
          3. Select the definition method (standard or special) and the thread series (default or custom).
          4. -
          5. Select the display unit (inch or mm) and click on "Calculate".
          6. -
          7. You will see a table with all the basic thread dimensions, such as major diameter, pitch diameter, minor diameter, thread depth, addendum height, and crest width.
          8. -
          9. You will also see tabs for allowance and tolerances, lead angles, over wire measurements, and cutting conditions. Click on each tab to see more detailed information about each parameter.
          10. -
          11. You can also download or print the results for future reference.
          12. -
          -

          How to use na4hzvuxzlbenx7u thread for SEO optimization

          -

          SEO optimization is the process of improving the visibility and relevance of your web pages in search engines. Search engines use algorithms to crawl, index, and rank web pages based on various factors, such as keywords, content quality, user experience, site speed, and so on.

          -

          To rank higher in search results, you need to optimize your web pages for both search engines and users. You need to use HTML tags that tell search engines important information about your web page, such as how they should display it in search results. You also need to use HTML tags that tell web browsers how to display it to visitors.

          -

          Some of the most important HTML tags for SEO optimization are:

          -
            -
          • Title tag: This is the page title that search engines show in search results. It should be unique, descriptive, concise, click-worthy, and match the search intent of your target audience.
          • -
          • Meta description tag: This is the snippet of text that search engines show below the title tag in search results

            81aa517590
            -
            -
            \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Rihanna Loud Deluxe Album Free Download Zip.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Rihanna Loud Deluxe Album Free Download Zip.md deleted file mode 100644 index e93e1ed43125cecf2705ac5b82702f878bcf2411..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Rihanna Loud Deluxe Album Free Download Zip.md +++ /dev/null @@ -1,18 +0,0 @@ - -Here is a possible title and article with html formatting for the keyword "Rihanna Loud Deluxe Album Free Download Zip": - -

            Rihanna's Loud Deluxe Album: A Pop Masterpiece

            -

            Rihanna is one of the most successful and influential pop stars of the 21st century. She has sold over 250 million records worldwide, won nine Grammy Awards, and launched her own fashion and beauty empire. But among her nine studio albums, one stands out as a pop masterpiece: Loud.

            -

            Loud is Rihanna's fifth album, released in 2010 by Def Jam Recordings and SRP Records. It was recorded in six months, during her Last Girl on Earth tour and the filming of her first feature film Battleship. The album marked a departure from her previous dark and edgy sound, and embraced a more upbeat and colorful style. Rihanna wanted to make an album that reflected her personality and mood at the time: fun, happy, and confident.

            -

            Rihanna Loud Deluxe Album Free Download Zip


            Download Zip https://urlcod.com/2uIaRe



            -

            Loud features 11 tracks (15 on the deluxe edition), spanning various genres such as dance-pop, R&B, reggae, rock, and hip hop. The album showcases Rihanna's versatility as a vocalist and a performer, as well as her collaborations with some of the biggest names in music, such as Drake, Nicki Minaj, Eminem, and David Guetta. The album spawned seven singles, three of which topped the Billboard Hot 100 chart: Only Girl (In the World), What's My Name?, and S&M. The album also received critical acclaim and commercial success, selling over eight million copies worldwide and earning three Grammy nominations.

            -

            Loud is a testament to Rihanna's artistic vision and evolution. It is an album that celebrates life, love, and sexuality with boldness and flair. It is an album that makes you want to dance, sing, and feel good. It is an album that deserves to be heard loud.

            -

            If you are a fan of Rihanna or pop music in general, you should not miss this opportunity to download Loud deluxe edition for free. This zip file contains all 15 tracks of the album in high-quality mp3 format, plus the cover art and lyrics. Just click on the link below and enjoy this pop masterpiece.

            -Download Rihanna Loud Deluxe Album Free ZipHere are a few more paragraphs with html formatting for the article: - -

            Loud is not only a pop masterpiece, but also a personal statement from Rihanna. The album reflects her growth as an artist and a woman, after overcoming a highly publicized abusive relationship with Chris Brown in 2009. Rihanna does not shy away from addressing the trauma and the healing process in some of the songs, such as Love the Way You Lie (Part II), a sequel to her hit collaboration with Eminem, and Complicated, a midtempo ballad about the difficulties of moving on. But she also shows her resilience and empowerment, as she reclaims her sexuality and identity in songs like S&M, a provocative anthem about bondage and fetishes, and Skin, a sensual slow jam that invites her lover to touch her.

            -

            Loud is also a showcase of Rihanna's musical diversity and experimentation. The album explores different sounds and influences, from the dancehall-infused Man Down, a reggae song that tells a story of revenge and murder, to the rock-inspired California King Bed, a power ballad that expresses the distance between two lovers. Rihanna also collaborates with some of the most innovative producers and songwriters in the industry, such as Stargate, The-Dream, Tricky Stewart, Ester Dean, and Alex da Kid. The result is a cohesive and dynamic album that never gets boring or predictable.

            -

            -

            Loud is one of Rihanna's best albums to date, and one of the most influential pop albums of the 2010s. It cemented her status as a global superstar and a pop icon. It inspired countless artists and fans to embrace their individuality and express themselves freely. It proved that Rihanna is not just a singer, but an artist with a vision and a voice. Loud is an album that deserves to be celebrated and remembered.

            7196e7f11a
            -
            -
            \ No newline at end of file diff --git a/spaces/neuraldeepnet/NeuraldeepAI/README.md b/spaces/neuraldeepnet/NeuraldeepAI/README.md deleted file mode 100644 index 14ce396740a1a916b4d10dd2a004ad576551d161..0000000000000000000000000000000000000000 --- a/spaces/neuraldeepnet/NeuraldeepAI/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: NeuraldeepAI -emoji: 👀 -colorFrom: indigo -colorTo: red -sdk: gradio -sdk_version: 3.40.1 -app_file: app.py -pinned: false -license: creativeml-openrail-m ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/configs/common/models/mask_rcnn_c4.py b/spaces/nikitaPDL2023/assignment4/detectron2/configs/common/models/mask_rcnn_c4.py deleted file mode 100644 index 902d5b195f66881c67a37ec0fe606101a6812260..0000000000000000000000000000000000000000 --- a/spaces/nikitaPDL2023/assignment4/detectron2/configs/common/models/mask_rcnn_c4.py +++ /dev/null @@ -1,90 +0,0 @@ -from detectron2.config import LazyCall as L -from detectron2.layers import ShapeSpec -from detectron2.modeling.meta_arch import GeneralizedRCNN -from detectron2.modeling.anchor_generator import DefaultAnchorGenerator -from detectron2.modeling.backbone import BasicStem, BottleneckBlock, ResNet -from detectron2.modeling.box_regression import Box2BoxTransform -from detectron2.modeling.matcher import Matcher -from detectron2.modeling.poolers import ROIPooler -from detectron2.modeling.proposal_generator import RPN, StandardRPNHead -from detectron2.modeling.roi_heads import ( - FastRCNNOutputLayers, - MaskRCNNConvUpsampleHead, - Res5ROIHeads, -) - -from ..data.constants import constants - -model = L(GeneralizedRCNN)( - backbone=L(ResNet)( - stem=L(BasicStem)(in_channels=3, out_channels=64, norm="FrozenBN"), - stages=L(ResNet.make_default_stages)( - depth=50, - stride_in_1x1=True, - norm="FrozenBN", - ), - out_features=["res4"], - ), - proposal_generator=L(RPN)( - in_features=["res4"], - head=L(StandardRPNHead)(in_channels=1024, num_anchors=15), - anchor_generator=L(DefaultAnchorGenerator)( - sizes=[[32, 64, 128, 256, 512]], - aspect_ratios=[0.5, 1.0, 2.0], - strides=[16], - offset=0.0, - ), - anchor_matcher=L(Matcher)( - thresholds=[0.3, 0.7], labels=[0, -1, 1], allow_low_quality_matches=True - ), - box2box_transform=L(Box2BoxTransform)(weights=[1.0, 1.0, 1.0, 1.0]), - batch_size_per_image=256, - positive_fraction=0.5, - pre_nms_topk=(12000, 6000), - post_nms_topk=(2000, 1000), - nms_thresh=0.7, - ), - roi_heads=L(Res5ROIHeads)( - num_classes=80, - batch_size_per_image=512, - positive_fraction=0.25, - proposal_matcher=L(Matcher)( - thresholds=[0.5], labels=[0, 1], allow_low_quality_matches=False - ), - in_features=["res4"], - pooler=L(ROIPooler)( - output_size=14, - scales=(1.0 / 16,), - sampling_ratio=0, - pooler_type="ROIAlignV2", - ), - res5=L(ResNet.make_stage)( - block_class=BottleneckBlock, - num_blocks=3, - stride_per_block=[2, 1, 1], - in_channels=1024, - bottleneck_channels=512, - out_channels=2048, - norm="FrozenBN", - stride_in_1x1=True, - ), - box_predictor=L(FastRCNNOutputLayers)( - input_shape=L(ShapeSpec)(channels="${...res5.out_channels}", height=1, width=1), - test_score_thresh=0.05, - box2box_transform=L(Box2BoxTransform)(weights=(10, 10, 5, 5)), - num_classes="${..num_classes}", - ), - mask_head=L(MaskRCNNConvUpsampleHead)( - input_shape=L(ShapeSpec)( - channels="${...res5.out_channels}", - width="${...pooler.output_size}", - height="${...pooler.output_size}", - ), - 
num_classes="${..num_classes}", - conv_dims=[256], - ), - ), - pixel_mean=constants.imagenet_bgr256_mean, - pixel_std=constants.imagenet_bgr256_std, - input_format="BGR", -) diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/projects/Rethinking-BatchNorm/configs/mask_rcnn_SyncBNhead.py b/spaces/nikitaPDL2023/assignment4/detectron2/projects/Rethinking-BatchNorm/configs/mask_rcnn_SyncBNhead.py deleted file mode 100644 index 5f05da03514a4ee6aa37d6bc3e678873ead73c61..0000000000000000000000000000000000000000 --- a/spaces/nikitaPDL2023/assignment4/detectron2/projects/Rethinking-BatchNorm/configs/mask_rcnn_SyncBNhead.py +++ /dev/null @@ -1,3 +0,0 @@ -from .mask_rcnn_BNhead import model, dataloader, lr_multiplier, optimizer, train - -model.roi_heads.box_head.conv_norm = model.roi_heads.mask_head.conv_norm = "SyncBN" diff --git a/spaces/nkasmanoff/SearchingFace/app.py b/spaces/nkasmanoff/SearchingFace/app.py deleted file mode 100644 index e18cf4380af427eb3b0e7423197f395c7a0c64da..0000000000000000000000000000000000000000 --- a/spaces/nkasmanoff/SearchingFace/app.py +++ /dev/null @@ -1,31 +0,0 @@ -import gradio as gr -from dataset_recommender import DatasetRecommender - -db_lookup = DatasetRecommender() -def predict(input_text, option): - - if option == "Semantic search": - response = db_lookup.recommend_based_on_text(input_text) - output = f"Message: {response['message']} \n \n Datasets: {' , '.join([x for x in response['datasets']])}" - elif option == 'Dataset similarity': - response = db_lookup.get_similar_datasets(input_text) - if 'error' in response: - output = response['error'] - else: - output = f"Similar Datasets: {' , '.join([x for x in response['datasets']])}" - - else: - output = "Please select an option" - return output - -input_type = gr.inputs.Textbox(label="Input Text") -checkbox = gr.inputs.Radio(["Semantic search", "Dataset similarity"], label="Please select search type:") - -example1 = ["Natural disasters", "Semantic search"] -example2 = ["https://huggingface.co/datasets/turkic_xwmt", "Dataset similarity"] -examples = [example1, example2] -title = "SearchingFace: Search for datasets!" 
- -iface = gr.Interface(fn=predict, inputs=[input_type, checkbox], examples=examples, title=title, outputs="text") - -iface.launch() diff --git a/spaces/nmitchko/AI-in-Healthcare/Developer Meetup in Boston Generative AI Use Cases in Healthcare _files/plugin_009.js b/spaces/nmitchko/AI-in-Healthcare/Developer Meetup in Boston Generative AI Use Cases in Healthcare _files/plugin_009.js deleted file mode 100644 index 50a34deaeaeb949b4db41de47e6ea2e9bedee1a5..0000000000000000000000000000000000000000 --- a/spaces/nmitchko/AI-in-Healthcare/Developer Meetup in Boston Generative AI Use Cases in Healthcare _files/plugin_009.js +++ /dev/null @@ -1,452 +0,0 @@ -/* -* Youtube Embed Plugin -* -* @author Jonnas Fonini -* @version 2.1.18 -*/ -(function () { - CKEDITOR.plugins.add('youtube', { - lang: [ 'en', 'bg', 'pt', 'pt-br', 'ja', 'hu', 'it', 'fr', 'tr', 'ru', 'de', 'ar', 'nl', 'pl', 'vi', 'zh', 'el', 'he', 'es', 'nb', 'nn', 'fi', 'et', 'sk', 'cs', 'ko', 'eu', 'uk'], - init: function (editor) { - editor.addCommand('youtube', new CKEDITOR.dialogCommand('youtube', { - allowedContent: 'div{*}(*); iframe{*}[!width,!height,!src,!frameborder,!allowfullscreen,!allow]; object param[*]; a[*]; img[*]' - })); - - editor.ui.addButton('youtube', { - label : 'Youtube', - toolbar : 'insert', - command : 'youtube', - group: '44', - icon : this.path + 'images/icon-hdpi.png' - }); - - CKEDITOR.dialog.add('youtube', function (instance) { - var video, - disabled = editor.config.youtube_disabled_fields || []; - - function handleLinkChange(el, api) { - var video = ytVidId(el.getValue()); - var time = ytVidTime(el.getValue()); - - if (el.getValue().length > 0) { - el.getDialog().getContentElement('youtubePlugin', 'txtEmbed').disable(); - } - else if (!disabled.length || !disabled.includes('txtEmbed')) { - el.getDialog().getContentElement('youtubePlugin', 'txtEmbed').enable(); - } - - if (video && time) { - var seconds = timeParamToSeconds(time); - var hms = secondsToHms(seconds); - el.getDialog().getContentElement('youtubePlugin', 'txtStartAt').setValue(hms); - } - } - - function handleEmbedChange(el, api) { - if (el.getValue().length > 0) { - el.getDialog().getContentElement('youtubePlugin', 'txtUrl').disable(); - } - else { - el.getDialog().getContentElement('youtubePlugin', 'txtUrl').enable(); - } - } - - return { - title : editor.lang.youtube.title, - minWidth : 510, - minHeight : 200, - onShow: function () { - for (var i = 0; i < disabled.length; i++) { - this.getContentElement('youtubePlugin', disabled[i]).disable(); - } - }, - contents : - [{ - id : 'youtubePlugin', - expand : true, - elements : - [ { - type : 'hbox', - widths : [ '70%', '15%', '15%' ], - children : - [ - { - id : 'txtUrl', - type : 'text', - label : editor.lang.youtube.txtUrl, - onChange : function (api) { - handleLinkChange(this, api); - }, - onKeyUp : function (api) { - handleLinkChange(this, api); - }, - validate : function () { - if (this.isEnabled()) { - if (!this.getValue()) { - alert(editor.lang.youtube.noCode); - return false; - } - else{ - video = ytVidId(this.getValue()); - - if (this.getValue().length === 0 || video === false) - { - alert(editor.lang.youtube.invalidUrl); - return false; - } - } - } - } - }, - { - type : 'text', - id : 'txtWidth', - width : '60px', - label : editor.lang.youtube.txtWidth, - 'default' : editor.config.youtube_width != null ? 
editor.config.youtube_width : '640', - validate : function () { - if (this.getValue()) { - var width = Number(this.getValue()); - - if (isNaN(width)) { - alert(editor.lang.youtube.invalidWidth); - return false; - } - } - else { - alert(editor.lang.youtube.noWidth); - return false; - } - } - }, - { - type : 'text', - id : 'txtHeight', - width : '60px', - label : editor.lang.youtube.txtHeight, - 'default' : editor.config.youtube_height != null ? editor.config.youtube_height : '360', - validate : function () { - if (this.getValue()) { - var height = Number(this.getValue()); - - if (isNaN(height)) { - alert(editor.lang.youtube.invalidHeight); - return false; - } - } - else { - alert(editor.lang.youtube.noHeight); - return false; - } - } - } - ] - }, - { - type : 'html', - html : editor.lang.youtube.or + '
            ' - }, - - { - id : 'txtEmbed', - type : 'textarea', - label : editor.lang.youtube.txtEmbed, - onChange : function (api) { - handleEmbedChange(this, api); - }, - onKeyUp : function (api) { - handleEmbedChange(this, api); - }, - validate : function () { - if (this.isEnabled()) { - if (!this.getValue()) { - alert(editor.lang.youtube.noCode); - return false; - } - else - if (this.getValue().length === 0 || this.getValue().indexOf('//') === -1) { - alert(editor.lang.youtube.invalidEmbed); - return false; - } - } - } - }, - - -/* { - type : 'hbox', - widths : [ '55%', '45%' ], - children : - [ - { - id : 'chkResponsive', - type : 'checkbox', - label : editor.lang.youtube.txtResponsive, - 'default' : editor.config.youtube_responsive != null ? editor.config.youtube_responsive : false - }, - { - id : 'chkNoEmbed', - type : 'checkbox', - label : editor.lang.youtube.txtNoEmbed, - 'default' : editor.config.youtube_noembed != null ? editor.config.youtube_noembed : false - } - ] - }, - { - type : 'hbox', - widths : [ '55%', '45%' ], - children : - [ - { - id : 'chkRelated', - type : 'checkbox', - 'default' : editor.config.youtube_related != null ? editor.config.youtube_related : false, - label : editor.lang.youtube.chkRelated - }, - { - id : 'chkOlderCode', - type : 'checkbox', - 'default' : editor.config.youtube_older != null ? editor.config.youtube_older : false, - label : editor.lang.youtube.chkOlderCode - } - ] - }, - { - type : 'hbox', - widths : [ '55%', '45%' ], - children : - [ - { - id : 'chkPrivacy', - type : 'checkbox', - label : editor.lang.youtube.chkPrivacy, - 'default' : editor.config.youtube_privacy != null ? editor.config.youtube_privacy : false - }, - { - id : 'chkAutoplay', - type : 'checkbox', - 'default' : editor.config.youtube_autoplay != null ? editor.config.youtube_autoplay : false, - label : editor.lang.youtube.chkAutoplay - } - ] - }, - { - type : 'hbox', - widths : [ '55%', '45%'], - children : - [ - { - id : 'txtStartAt', - type : 'text', - label : editor.lang.youtube.txtStartAt, - validate : function () { - if (this.getValue()) { - var str = this.getValue(); - - if (!/^(?:(?:([01]?\d|2[0-3]):)?([0-5]?\d):)?([0-5]?\d)$/i.test(str)) { - alert(editor.lang.youtube.invalidTime); - return false; - } - } - } - }, - { - id : 'chkControls', - type : 'checkbox', - 'default' : editor.config.youtube_controls != null ? 
editor.config.youtube_controls : true, - label : editor.lang.youtube.chkControls - } - ] - }*/ - ] - } - ], - onOk: function() - { - var content = ''; - var responsiveStyle = ''; - - if (this.getContentElement('youtubePlugin', 'txtEmbed').isEnabled()) { - content = this.getValueOf('youtubePlugin', 'txtEmbed'); - } - else { - var url = 'https://', params = [], startSecs, paramAutoplay=''; - var width = this.getValueOf('youtubePlugin', 'txtWidth'); - var height = this.getValueOf('youtubePlugin', 'txtHeight'); - - //if (this.getContentElement('youtubePlugin', 'chkPrivacy').getValue() === true) { - // url += 'www.youtube-nocookie.com/'; - //} - //else { - url += 'www.youtube.com/'; - //} - - url += 'embed/' + video; - - /*if (this.getContentElement('youtubePlugin', 'chkRelated').getValue() === false) { - params.push('rel=0'); - } - - if (this.getContentElement('youtubePlugin', 'chkAutoplay').getValue() === true) { - params.push('autoplay=1'); - paramAutoplay='autoplay'; - } - - if (this.getContentElement('youtubePlugin', 'chkControls').getValue() === false) { - params.push('controls=0'); - } -*/ - //startSecs = this.getValueOf('youtubePlugin', 'txtStartAt'); - - if (startSecs) { - var seconds = hmsToSeconds(startSecs); - - params.push('start=' + seconds); - } - - if (params.length > 0) { - url = url + '?' + params.join('&'); - } - -/* if (this.getContentElement('youtubePlugin', 'chkResponsive').getValue() === true) { - content += '
            '; - responsiveStyle = 'style="position:absolute;top:0;left:0;width:100%;height:100%"'; - }*/ - - if (1 === true) { //this.getContentElement('youtubePlugin', 'chkOlderCode').getValue() - url = url.replace('embed/', 'v/'); - url = url.replace(/&/g, '&'); - - if (url.indexOf('?') === -1) { - url += '?'; - } - else { - url += '&'; - } - url += 'hl=' + (this.getParentEditor().config.language ? this.getParentEditor().config.language : 'en') + '&version=3'; - - content += ''; - content += ''; - content += ''; - content += ''; - content += ''; - }*/ - else { - content += '""" - - -def set_examples(example): - ( - label, - inp, - designed_chain, - fixed_chain, - homomer, - num_seqs, - sampling_temp, - atomsel, - ) = example - return [ - label, - inp, - designed_chain, - fixed_chain, - homomer, - gr.Slider.update(value=num_seqs), - gr.Radio.update(value=sampling_temp), - atomsel, - ] - - -proteinMPNN = gr.Blocks() - -with proteinMPNN: - - # gr.Markdown("# MAINTENANC, CURRENTLY NOT WORKING") - # gr.HTML("") - gr.Markdown("# ProteinMPNN") - gr.Markdown( - """This model takes as input a protein structure and based on its backbone predicts new sequences that will fold into that backbone. - Optionally, we can run AlphaFold2 on the predicted sequence to check whether the predicted sequences adopt the same backbone. - - If you use this space please cite the ProteinMPNN paper - > J. Dauparas, I. Anishchenko, N. Bennett, H. Bai, R. J. Ragotte, L. F. Milles, B. I. M. Wicky, A. Courbet, R. J. de Haas, N. Bethel, P. J. Y. Leung, T. F. Huddy, S. Pellock, D. Tischer, F. Chan, B. Koepnick, H. Nguyen, A. Kang, B. Sankaran, A. K. Bera, N. P. King, D. Baker, Robust deep learning–based protein sequence design using ProteinMPNN. Science 378, 49–56 (2022). - - and this webapp: - - > Simon L. Dürr. (2023). ProteinMPNN Gradio Webapp (v0.3). Zenodo. https://doi.org/10.5281/zenodo.7630417 - - """ - ) - gr.Markdown("![](https://simonduerr.eu/ProteinMPNN.png)") - - with gr.Tabs(): - with gr.TabItem("Input"): - inp = gr.Textbox( - placeholder="PDB Code or upload file below", label="Input structure" - ) - file = gr.File(file_count="single") - - with gr.TabItem("Settings"): - with gr.Row(): - designed_chain = gr.Textbox(value="A", label="Designed chain") - fixed_chain = gr.Textbox( - placeholder="Use commas to fix multiple chains", label="Fixed chain" - ) - with gr.Row(): - num_seqs = gr.Slider( - minimum=1, maximum=15, value=1, step=1, label="Number of sequences" - ) - sampling_temp = gr.Radio( - choices=["0.1", "0.15", "0.2", "0.25", "0.3"], - value="0.1", - label="Sampling temperature", - ) - gr.Markdown( - """ Sampling temperature for amino acids, `T=0.0` means taking argmax, `T>>1.0` means sample randomly. Suggested values `0.1, 0.15, 0.2, 0.25, 0.3`. Higher values will lead to more diversity. 
- """ - ) - with gr.Row(): - model_name = gr.Dropdown( - choices=[ - "vanilla—v_48_002", - "vanilla—v_48_010", - "vanilla—v_48_020", - "vanilla—v_48_030", - "soluble—v_48_010", - "soluble—v_48_020", - ], - label="Model", - value="vanilla—v_48_020", - ) - backbone_noise = gr.Dropdown( - choices=["0", "0.02", "0.10", "0.20", "0.30"], label="Backbone noise", value="0" - ) - with gr.Row(): - homomer = gr.Checkbox(value=False, label="Homomer?") - gr.Markdown( - "for correct symmetric tying lenghts of homomer chains should be the same" - ) - with gr.Row(): - omit_AAs = gr.Textbox( - placeholder="Specify omitted amino acids ", label="Omitted amino acids" - ) - gr.Markdown("## Fixed positions") - gr.Markdown( - """You can fix important positions in the protein. Resid should be specified with the same numbering as in the input pdb file. The fixed residues will be highlighted in the output. - The [VMD selection](http://www.ks.uiuc.edu/Research/vmd/vmd-1.9.2/ug/node89.html) synthax is used. You can also select based on ligands or chains in the input structure to specify interfaces to be fixed. - - - within 5 of resid 94 All residues that have >1 atom closer than 5 Å to any atom of residue 94 - - name CA and within 5 of resid 94 All residues that have CA atom closer than 5 Å to any atom of residue 94 - - resid 94 96 119 Residues 94, 94 and 119 - - within 5 of resname ZN All residues with any atom <5 Å of zinc ion - - chain A and within 5 of chain B All residues of chain A that are part of the interface with chain B - - protein and within 5 of nucleic All residues that bind to DNA (if present in structure) - - not (chain A and within 5 of chain B) only modify residues that are in the interface with the fixed chain, not further away - - chain A or (chain B and sasa < 20) Keep chain A and all core residues fixeds - - pLDDT >70 Redesign all residues with low pLDDT - - Note that sasa and pLDDT selectors modify default VMD behavior. SASA is calculated using moleculekit and written to the mass attribute. Selections based on mass do not work. - pLDDT is an alias for beta, it only works correctly with structures that contain the appropriate values in the beta column of the PDB file. 
""" - ) - atomsel = gr.Textbox( - placeholder="Specify atom selection ", label="Fixed positions" - ) - - btn = gr.Button("Run") - label = gr.Textbox(label="Label", visible=False) - examples = gr.Dataset( - components=[ - label, - inp, - designed_chain, - fixed_chain, - homomer, - num_seqs, - sampling_temp, - atomsel, - ], - samples=[ - ["Homomer design", "1O91", "A,B,C", "", True, "2", "0.1", ""], - ["Monomer design", "6MRR", "A", "", False, "2", "0.1", ""], - ["Redesign of Homomer to Heteromer", "3HTN", "A,B", "C", False, "2", "0.1", ""], - [ - "Redesign of MID1 scaffold keeping binding site fixed", - "3V1C", - "A,B", - "", - False, - "2", - "0.1", - "within 5 of resname ZN", - ], - [ - "Redesign of DNA binding protein", - "3JRD", - "A,B", - "", - False, - "2", - "0.1", - "within 8 of nucleic", - ], - [ - "Surface Redesign of miniprotein", - "7JZM", - "A,B", - "", - False, - "2", - "0.1", - "chain B or (chain A and sasa < 20)", - ], - ], - ) - - gr.Markdown("# Output") - - with gr.Tabs(): - with gr.TabItem("Designed sequences"): - out = gr.Textbox(label="Status") - - with gr.TabItem("Amino acid probabilities"): - plot = gr.Plot() - all_log_probs = gr.File(visible=False) - with gr.TabItem("T adjusted probabilities"): - gr.Markdown("Sampling temperature adjusted amino acid probabilties") - plot_tadjusted = gr.Plot() - all_probs = gr.File(visible=False) - with gr.TabItem("Structure validation w/ AF2"): - gr.HTML( - """ -
            -
            -

            - Results might differ from DeepMind's published results. - Predictions are made using model_5_ptm and without MSA based on the selected single sequence (designed_chain + fixed_chain). -

            -
            -
            - """ - ) - with gr.Row(): - with gr.Row(): - chosen_seq = gr.Dropdown( - choices=[], - label="Select a sequence for validation", - visible=False, - ) - num_recycles = gr.Dropdown( - choices=["0", "1", "3", "5"], value="3", label="num Recycles" - ) - btnAF = gr.Button("Run AlphaFold on all sequences") - with gr.Row(): - mol = gr.HTML() - with gr.Column(): - gr.Markdown("## Metrics") - p = { - 0: { - "Seq": "NA", - "RMSD": "NA", - "Score": "NA", - "Recovery": "NA", - "Mean pLDDT": "NA", - } - } - placeholder = pd.DataFrame.from_dict(p, orient="index") - results = gr.Dataframe( - placeholder, - interactive=False, - row_count=(1, "dynamic"), - headers=["Seq", "RMSD", "Score", "Recovery", "Mean pLDDT"], - ) - plotAF_plddt = gr.Plot(label="pLDDT") - # remove maxh80 class from css - plotAF_pae = gr.Gallery(label="PAE plots") # gr.Plot(label="PAE") - tempFile = gr.Variable() - selectedResidues = gr.Variable() - seq_dict = gr.Variable() - btn.click( - fn=update, - inputs=[ - inp, - file, - designed_chain, - fixed_chain, - homomer, - num_seqs, - sampling_temp, - model_name, - backbone_noise, - omit_AAs, - atomsel, - ], - outputs=[ - out, - plot, - plot_tadjusted, - all_log_probs, - all_probs, - tempFile, - chosen_seq, - selectedResidues, - seq_dict, - ], - ) - btnAF.click( - fn=update_AF, - inputs=[seq_dict, tempFile, num_recycles, selectedResidues], - outputs=[mol, plotAF_plddt, plotAF_pae, results], - ) - examples.click(fn=set_examples, inputs=examples, outputs=examples._components) - gr.Markdown( - """Citation: **Robust deep learning based protein sequence design using ProteinMPNN**
            -Justas Dauparas, Ivan Anishchenko, Nathaniel Bennett, Hua Bai, Robert J. Ragotte, Lukas F. Milles, Basile I. M. Wicky, Alexis Courbet, Robbert J. de Haas, Neville Bethel, Philip J. Y. Leung, Timothy F. Huddy, Sam Pellock, Doug Tischer, Frederick Chan, Brian Koepnick, Hannah Nguyen, Alex Kang, Banumathi Sankaran, Asim Bera, Neil P. King, David Baker
            -bioRxiv 2022.06.03.494563; doi: [10.1101/2022.06.03.494563](https://doi.org/10.1101/2022.06.03.494563)

            Server built by [@simonduerr](https://twitter.com/simonduerr) and hosted by Huggingface""" - ) - - -ray.init(runtime_env={"working_dir": "./af_backprop"}) - -proteinMPNN.launch() diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Build a City Life with City Build APK - The Ultimate Simulation Game.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Build a City Life with City Build APK - The Ultimate Simulation Game.md deleted file mode 100644 index fa1e5ca4a7a77c5a9287995800261fb8d9d892b6..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Build a City Life with City Build APK - The Ultimate Simulation Game.md +++ /dev/null @@ -1,89 +0,0 @@ -
            -

            City Build APK: How to Download and Play the Best City Building Game on Android

            -

            Do you love city building games? Do you want to create your own metropolis and manage it like a real mayor? If yes, then you should try City Build APK, the most popular and realistic city building game for Android devices. In this article, we will tell you what City Build APK is, how to download and install it, and some tips and tricks for playing it.

            -

            city build apk


            Download Zip ––– https://ssurll.com/2uNTqt



            -

            What is City Build APK?

            -

            City Build APK is an Android application package (APK) file that allows you to download and play SimCity BuildIt, a city building simulation game developed by Electronic Arts. SimCity BuildIt is one of the best city building games ever made, with stunning graphics, realistic gameplay, and endless possibilities. You can design and create your own city from scratch, manage your resources and services, trade, chat, and compete with other players, and much more.

            -

            Features of City Build APK

            -

            Create your own city from scratch

            -

            City Build APK lets you be the hero of your own city as you design and create a beautiful, bustling metropolis. You can choose from hundreds of buildings, landmarks, parks, bridges, and roads to customize your city. You can also adjust the terrain, add rivers, lakes, mountains, and forests to make your city more natural. You can even unleash disasters like earthquakes, tornadoes, or meteor strikes to test your city's resilience.

            -

            Manage your resources and services

            -

            City Build APK also challenges you to make smart choices to keep your citizens happy and your skyline growing. You have to manage your resources like money, materials, energy, water, waste, and pollution. You also have to provide essential services like education, health, fire, police, transportation, entertainment, and culture. You have to balance the needs and demands of your citizens while also dealing with traffic jams, crime waves, environmental issues, and other problems.

            -

            Trade, chat, and compete with other players

            -

            City Build APK also allows you to connect with other players from around the world. You can trade goods and resources with other mayors in the global market. You can chat with other players in real-time and share tips and strategies. You can also compete with other players in various challenges and events. You can join clubs with fellow mayors and cooperate or compete with them in club wars. You can also compare your city's progress and achievements with other players on the leaderboards.

            -


            -

            How to Download and Install City Build APK?

            -

            If you want to download and play City Build APK on your Android device, you have to follow these steps:

            -

            Step 1: Find a reliable source for the APK file

            -

The first step is to find a reliable source for the City Build APK file. You cannot download SimCity BuildIt from the Google Play Store because it is not available in some regions or countries. Therefore, you have to find an alternative source that offers the latest version of the game for free. One of the best sources for City Build APK is [APKPure], a website that provides safe and fast downloads for various Android apps and games.
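If the download page lists a checksum for the file, you can also compare it against the file you actually received before installing it. The short Python sketch below is purely illustrative: the file name and the expected hash are hypothetical placeholders, and the check is only meaningful if the source actually publishes an official checksum.

```python
import hashlib

# Hypothetical file name and published checksum -- replace with the real values
# shown on the download page you used (if it provides them).
APK_PATH = "city-build.apk"
EXPECTED_SHA256 = "0123456789abcdef..."  # placeholder, not a real hash

def sha256_of(path, chunk_size=1 << 20):
    """Compute the SHA-256 digest of a file without loading it all into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

actual = sha256_of(APK_PATH)
print("downloaded file:", actual)
print("matches published checksum:", actual == EXPECTED_SHA256)
```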

            -

            Step 2: Enable unknown sources on your device

            -

            The second step is to enable unknown sources on your device. This is necessary because Android devices normally do not allow the installation of apps from unknown sources. To enable unknown sources, you have to go to your device's settings, then security, then toggle on the option that says "allow installation of apps from unknown sources". This will allow you to install City Build APK without any problems.

            -

            Step 3: Download and install the APK file

            -

            The third step is to download and install the City Build APK file. To do this, you have to go to the website where you found the APK file, such as [APKPure], and click on the download button. This will start the download process and save the file to your device's storage. Once the download is complete, you have to locate the file and tap on it to start the installation process. This will take a few minutes and require some permissions from your device. After the installation is done, you will see the game icon on your device's home screen or app drawer.

            -

            Step 4: Launch the game and enjoy

            -

            The final step is to launch the game and enjoy. To do this, you have to tap on the game icon and wait for it to load. You will see a welcome screen that will ask you to choose your language and agree to the terms of service. You will also see a tutorial that will guide you through the basics of the game. You can skip the tutorial if you want, but we recommend that you follow it to learn how to play the game. After the tutorial, you can start building your own city and have fun.

            -

            Tips and Tricks for Playing City Build APK

            -

            Now that you know how to download and install City Build APK, here are some tips and tricks for playing it:

            -

            Plan your city layout carefully

            -

            One of the most important aspects of city building games is planning your city layout carefully. You have to consider factors like space, efficiency, aesthetics, and functionality when placing your buildings and roads. You have to avoid creating dead ends, bottlenecks, or gaps that will waste space or cause traffic problems. You also have to make sure that your buildings are connected to power, water, sewage, and other services. You can use the bulldozer tool to demolish or move any buildings or roads that you want to change.

            -

            Balance your population and happiness

            -

            Another important aspect of city building games is balancing your population and happiness. You have to attract more citizens to your city by providing them with housing, jobs, entertainment, and other amenities. You also have to keep them happy by fulfilling their needs and demands. You can check your population and happiness levels by tapping on the population icon at the top left corner of the screen. You can also see what your citizens are saying by tapping on their speech bubbles or reading their comments on social media. You can increase your population by building more residential zones or upgrading them. You can increase your happiness by building more parks, landmarks, specializations, or services.

            -

            Upgrade your buildings and infrastructure

            -

            A third important aspect of city building games is upgrading your buildings and infrastructure. You have to improve your city's performance and appearance by upgrading your buildings and infrastructure. You can upgrade your residential zones by collecting materials from factories or shops, or by using golden keys or platinum keys that you can earn from completing tasks or participating in events. You can upgrade your factories, shops, services, or specializations by spending coins or simcash that you can earn from taxes, trade, or other sources. Upgrading your buildings and infrastructure will increase their capacity, efficiency, quality, or attractiveness.

            -

            Join a club and cooperate with other mayors

            -

            A fourth important aspect of city building games is joining a club and cooperating with other mayors. You can join a club with fellow mayors who share your interests or goals in the game. You can chat with them in real-time and share tips and strategies. You can also cooperate with them in club wars or club challenges where you can compete against other clubs for rewards and glory. You can also trade goods and resources with them in the club market where you can find better deals than in the global market.

            -

            Conclusion

            -

            City Build APK is a great way to download and play SimCity BuildIt, one of the best city building games ever made. You can create your own city from scratch, manage your resources and services, trade, chat, and compete with other players, and much more. You can download City Build APK from [APKPure] or other reliable sources for free. You just have to enable unknown sources on your device, download and install the APK file, launch the game and enjoy. You can also follow our tips and tricks for playing City Build APK to improve your city and have fun. We hope you found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading and happy city building!

            -

            FAQs

            -

            Here are some frequently asked questions about City Build APK:

            -

            Is City Build APK safe to download and install?

            -

            Yes, City Build APK is safe to download and install as long as you get it from a reliable source like [APKPure]. However, you should always be careful when downloading and installing any APK file from unknown sources, as they may contain viruses or malware that can harm your device. You should also scan the APK file with an antivirus app before installing it.

            -

            Is City Build APK legal to use?

            -

            Yes, City Build APK is legal to use as long as you do not violate the terms of service of SimCity BuildIt or Electronic Arts. You should not use City Build APK to hack, cheat, or modify the game in any way that gives you an unfair advantage over other players or violates the game's rules. You should also not use City Build APK to distribute or sell the game or its content without the permission of the developers.

            -

            Does City Build APK require an internet connection?

            -

            Yes, City Build APK requires an internet connection to play. You need an internet connection to download and install the game, as well as to access its online features like trade, chat, competition, and events. You also need an internet connection to save your game progress and sync it with other devices. However, you can play the game offline for a limited time if you have already downloaded and installed it.

            -

            How can I update City Build APK?

            -

            You can update City Build APK by downloading and installing the latest version of the APK file from the same source where you got it. You should always update City Build APK whenever there is a new version available, as it may contain new features, improvements, bug fixes, or security patches. You should also backup your game data before updating City Build APK, in case something goes wrong during the process.

            -

            How can I uninstall City Build APK?

            -

            You can uninstall City Build APK by following the same steps as uninstalling any other app on your Android device. You have to go to your device's settings, then apps, then find and tap on City Build APK. You will see an option to uninstall the app. Tap on it and confirm your action. This will remove City Build APK from your device along with its data and cache.

            -
            -
            \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Video TikTok Reels Without Watermark - No Registration No Ads No Problem.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Video TikTok Reels Without Watermark - No Registration No Ads No Problem.md deleted file mode 100644 index d5d994a00d71f602762a118ebdd2f88430125de3..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Video TikTok Reels Without Watermark - No Registration No Ads No Problem.md +++ /dev/null @@ -1,144 +0,0 @@ -
            -

            How to Create Engaging Content on TikTok and Reels

            -

            If you're looking for a way to spice up your social media marketing strategy, you might want to consider using TikTok and Reels, the two hottest short-form video platforms right now. These platforms allow you to create fun, creative, and engaging videos that can reach millions of potential customers in minutes. But how do you get started with these platforms? And how do you create content that stands out from the crowd? In this article, we'll show you how to download Reels and TikTok videos without watermark, how to create engaging content on Reels and TikTok, and some examples of successful Reels and TikTok videos from different niches and industries. Let's dive in!

            -

            How to Download Reels and TikTok Videos Without Watermark

            -

            One of the first things you might want to do when using Reels and TikTok is to download some videos from these platforms. This can help you learn from other creators, get inspired by their ideas, or repurpose their content for your own use. However, downloading videos from these platforms can be tricky, especially if you want to remove the watermark that shows the platform's logo. Fortunately, there are some tools and methods that can help you download Reels and TikTok videos without watermark. Here are some of them:

            -

            download reels tiktok


DOWNLOAD: https://ssurll.com/2uNTdu



            -
              -
• Snaptik.app: This is a free online tool that allows you to download TikTok videos without watermark. All you need to do is copy the link of the video you want to download from TikTok or Reels, paste it into the tool's website, and click on the download button. The tool will process your request and provide you with a link to download the video in MP4 format with HD quality. You can also use Snaptik's Android app for easier downloading.
            • -
            • Ssstik.io: This is another free online tool that helps you download TikTok videos without watermark. It works similarly to Snaptik.app: you just need to copy the link of the video you want to download from TikTok or Reels, paste it into the tool's website, and click on the save button. The tool will generate a link for you to download the video in MP4 format with HD resolution. You can also use Ssstik.io's browser extension for faster downloading.
            • -
• Screen recording: This is a simple method that doesn't require any third-party tools or apps. You just need to use your device's built-in screen recording feature to capture the video you want to download from TikTok or Reels. However, this method has some drawbacks: you might not get the best quality, you might capture some unwanted sounds or notifications, and you might still see the watermark on the video. To avoid these issues, you should turn on airplane mode, mute your device, and crop the video after recording (one way to do the trimming and cropping is sketched just after this list).
            • -
            -
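Following up on the screen-recording option above: if you are comfortable with the command line, the trimming and cropping can also be done with the free ffmpeg tool instead of a mobile editor. This is only a rough sketch, it assumes ffmpeg is installed, and the file names, start time, duration, and crop height are made-up examples that you would adjust to your own recording.

```python
import subprocess

# Hypothetical input/output names -- point these at your own screen recording.
SRC = "screen_recording.mp4"
OUT = "clip_trimmed.mp4"

# Keep 15 seconds starting at 0:03, and crop 120 pixels off the bottom of the
# frame (a common spot for overlays); the numbers here are only an example.
subprocess.run(
    [
        "ffmpeg",
        "-ss", "3",          # start 3 seconds in
        "-t", "15",          # keep 15 seconds
        "-i", SRC,
        "-vf", "crop=in_w:in_h-120:0:0",  # drop the bottom 120 px
        "-c:a", "copy",      # leave the audio untouched
        OUT,
    ],
    check=True,
)
```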

            Downloading Reels and TikTok videos without watermark can have some advantages and disadvantages. On one hand, it can help you save time and bandwidth, access videos offline, and reuse them for your own purposes. On the other hand, it can also raise some ethical and legal issues, such as violating the platforms' terms of service, infringing the creators' intellectual property rights, and losing the original source and credit of the videos. Therefore, you should always be careful and respectful when downloading videos from these platforms, and only use them for personal or educational purposes. You should also always give proper attribution and credit to the original creators, and avoid using their content for commercial or malicious purposes.

            -

            Once you have downloaded some videos from Reels and TikTok, you can use them for your own content creation or repurposing. For example, you can:

            -
              -
            • Edit them: You can use some video editing tools or apps to trim, crop, rotate, merge, split, add filters, effects, stickers, text, music, voiceover, or subtitles to the downloaded videos. You can also adjust the speed, brightness, contrast, saturation, or volume of the videos. This can help you enhance the quality, style, and message of the videos.
            • -
            • Share them: You can share the downloaded videos on other social media platforms or channels, such as Instagram, Facebook, YouTube, Twitter, Snapchat, WhatsApp, or your website or blog. This can help you increase your reach, exposure, and engagement with different audiences. However, you should always respect the platforms' guidelines and policies regarding video sharing and uploading.
            • -
• Repurpose them: You can repurpose the downloaded videos for different formats or purposes. For example, you can turn them into GIFs, memes, slideshows, podcasts, blogs, ebooks, webinars, courses, or newsletters. This can help you diversify your content, add value to your audience, and generate more leads and conversions. One way to do the GIF conversion locally is sketched just after this list.
            • -
            -
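Picking up the GIF idea from the repurposing list above: a short clip can be converted to a GIF locally with ffmpeg. Again, this is just a sketch with hypothetical file names and assumes ffmpeg is installed; the two-pass palette approach shown here is a common way to keep GIF colors from looking washed out.

```python
import subprocess

SRC = "clip.mp4"         # hypothetical input clip
PALETTE = "palette.png"  # temporary color palette
GIF = "clip.gif"

# Pass 1: build an optimized 256-color palette from the clip.
subprocess.run(
    ["ffmpeg", "-i", SRC, "-vf", "fps=12,scale=480:-1:flags=lanczos,palettegen", PALETTE],
    check=True,
)

# Pass 2: render the GIF using that palette.
subprocess.run(
    [
        "ffmpeg", "-i", SRC, "-i", PALETTE,
        "-filter_complex", "fps=12,scale=480:-1:flags=lanczos[x];[x][1:v]paletteuse",
        GIF,
    ],
    check=True,
)
```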

            Downloading Reels and TikTok videos without watermark can be a useful way to learn from other creators, get inspired by their ideas, or repurpose their content for your own use. However, you should always be mindful of the ethical and legal implications of doing so, and always respect the platforms' terms of service and the creators' intellectual property rights.

            -

            How to Create Engaging Content on Reels and TikTok

            -

            Now that you know how to download Reels and TikTok videos without watermark, you might be wondering how to create your own engaging content on these platforms. Creating short-form video content can be challenging, but also rewarding, as it can help you showcase your personality, creativity, and expertise, connect with your audience, and grow your brand awareness and loyalty. However, not all short-form video content is created equal. There are some differences between Reels and TikTok that you should be aware of, as well as some tips and tricks that can help you create captivating videos that will attract and retain your viewers. Here are some of them:

            -

            The Differences Between Reels and TikTok

            -

            Reels and TikTok are both short-form video platforms that allow you to create and share 15-60 second videos with music, filters, effects, stickers, text, and other features. However, they are not exactly the same. Here are some of the main differences between Reels and TikTok in terms of video length, music options, editing tools, algorithm, advertising, demographics, and analytics:

| Feature | Reels | TikTok |
| --- | --- | --- |
| Video length | 15-30 seconds | 15-60 seconds |
| Music options | Limited to Instagram's music library or original audio | Access to a wide range of songs and sounds from TikTok's music library or original audio |
| Editing tools | Basic tools such as speed, timer, align, effects, filters, stickers, text, draw, voiceover | Advanced tools such as speed, timer, beauty mode, filters, effects, stickers, text, templates, transitions, voice effects, voiceover, duet, stitch, green screen |
| Algorithm | Based on the user's interests, preferences, and behavior on Instagram | Based on the user's interests, preferences, and behavior on TikTok |
| Advertising | Limited to Instagram's ads platform or sponsored posts | Access to TikTok's ads platform or sponsored posts |
| Demographics | Mostly millennials and Gen Z users who are already on Instagram | Mostly Gen Z and younger users who are new to social media |
| Analytics | Basic metrics such as views, likes, comments, shares, saves, reach, impressions | Detailed metrics such as views, likes, comments, shares, saves, followers, watch time, traffic source, audience territories, audience interests |
            -

            As you can see, Reels and TikTok have some similarities and differences that can affect your content creation and marketing strategy. Depending on your goals, target audience, budget, and resources, you might want to choose one platform over the other, or use both platforms to maximize your exposure and engagement. However, regardless of which platform you use, there are some general principles that can help you create engaging content on Reels and TikTok.

            -

            The Tips and Tricks for Creating Engaging Content on Reels and TikTok

            -

            Creating engaging content on Reels and TikTok can be challenging, but also fun and rewarding. Here are some tips and tricks that can help you create captivating videos that will attract and retain your viewers:

            -
              -
            • Create a captivating hook: The first few seconds of your video are crucial to capture your viewer's attention and interest. You need to create a captivating hook that will make them want to watch more. You can do this by asking a question, making a statement, telling a story, showing a teaser, or using a catchy phrase. You can also use some effects, filters, stickers, text, or music to enhance your hook and make it more appealing.
            • -
            • Deliver value: The main purpose of your video is to deliver value to your viewer. You need to provide them with something that will benefit them, such as information, education, entertainment, inspiration, motivation, or emotion. You need to make sure that your video is relevant, useful, and valuable to your viewer's needs, wants, goals, or problems. You can do this by sharing your knowledge, skills, experience, opinions, stories, tips, tricks, hacks, or recommendations.
            • -
            • Entertain and educate: The best way to deliver value to your viewer is to entertain and educate them at the same time. You need to balance the fun and the facts in your video. You need to make your video enjoyable and memorable for your viewer. You can do this by using humor, emotion, storytelling, personality, creativity, or surprises. You can also use some music, effects, filters, stickers, text, or transitions to make your video more dynamic and engaging.
            • -
            • Optimize your video: The final step of creating engaging content on Reels and TikTok is to optimize your video for maximum reach and engagement. You need to make sure that your video is visible and discoverable by your target audience. You can do this by using relevant hashtags, keywords, captions, titles, descriptions, tags, or categories. You can also use some calls to action, such as asking for likes, comments, shares, follows, feedback, or suggestions. You can also use some incentives, such as giveaways, contests, challenges, or collaborations.
            • -
            -

            Creating engaging content on Reels and TikTok can be a great way to showcase your personality, creativity, and expertise, connect with your audience, and grow your brand awareness and loyalty. However, you need to follow some tips and tricks to create captivating videos that will attract and retain your viewers. You need to create a captivating hook, deliver value, entertain and educate, and optimize your video.

            -


            -

            The Examples of Successful Reels and TikTok Videos

            -

            To give you some inspiration and ideas for creating engaging content on Reels and TikTok, here are some examples of successful Reels and TikTok videos from different niches and industries. You can learn from their strategies, techniques, and styles, or get inspired by their topics, themes, and messages. Here are some of them:

            -
              -
            • Beauty: @jamescharles is a popular beauty influencer who creates stunning makeup tutorials, transformations, and challenges on Reels and TikTok. He uses catchy music, effects, filters, stickers, text, transitions, and voiceovers to make his videos more fun and engaging. He also delivers value by sharing his tips, tricks, hacks, and recommendations for different products, looks, and occasions.
            • -
            • Business: @garyvee is a well-known entrepreneur, speaker, and author who creates motivational and educational videos on Reels and TikTok. He uses storytelling, emotion, humor, and personality to make his videos more inspiring and memorable. He also delivers value by sharing his insights, advice, opinions, and stories on various topics related to business, marketing, social media, and personal development.
            • -
            • Fitness: @blogilates is a famous fitness instructor and blogger who creates fun and effective workout videos on Reels and TikTok. She uses upbeat music, effects, filters, stickers, text, and voiceovers to make her videos more lively and engaging. She also delivers value by sharing her routines, exercises, tips, tricks, and challenges for different fitness goals, levels, and body parts.
            • -
            • Food: @buzzfeedtasty is a popular food media brand that creates delicious and easy recipes on Reels and TikTok. They use high-quality video, music, effects, filters, text, and voiceovers to make their videos more appetizing and engaging. They also deliver value by sharing their ingredients, instructions, tips, tricks, and variations for different dishes, cuisines, and occasions.
            • -
            • Travel: @damonandjo are a dynamic duo of travel influencers who create adventurous and cultural videos on Reels and TikTok. They use stunning video, music, effects, filters, stickers, text, and voiceovers to make their videos more captivating and engaging. They also deliver value by sharing their experiences, tips, tricks, and recommendations for different destinations, activities, and cultures.
            • -
            -

            These are just some of the examples of successful Reels and TikTok videos from different niches and industries. You can find more examples by browsing the platforms' explore pages, following popular hashtags, or searching for keywords related to your niche or industry. You can also follow some of the influencers or brands that you admire or resonate with, and see how they create engaging content on Reels and TikTok.

            -

            Conclusion

            -

            Reels and TikTok are two of the most popular and powerful short-form video platforms that you can use to create engaging content for your social media marketing strategy. They can help you showcase your personality, creativity, and expertise, connect with your audience, and grow your brand awareness and loyalty. However, creating engaging content on Reels and TikTok can be challenging, but also rewarding. You need to know how to download Reels and TikTok videos without watermark, how to create captivating videos that will attract and retain your viewers, and some examples of successful Reels and TikTok videos from different niches and industries. By following the tips and tricks we shared in this article, you can create amazing videos that will make your audience go wow!

            -

            So what are you waiting for? Try out Reels and TikTok for yourself and see how they can transform your social media marketing strategy. And if you need more tips and examples on how to create engaging content on Reels and TikTok, don't forget to follow our account for more updates. We'd love to see what you create!

            -

            Thank you for reading this article. We hope you found it helpful and informative. If you have any questions or feedback, please feel free to leave a comment below. We'd love to hear from you!

            -

            FAQs

            -

            Here are some of the frequently asked questions that readers might have about Reels and TikTok:

            -
              -
            • How do I switch between Reels and TikTok accounts?
            • -

              If you have multiple accounts on Reels or TikTok, you can easily switch between them without logging out. On Reels, you can tap on your profile icon in the bottom right corner of the screen, then tap on the arrow next to your username, then select the account you want to switch to. On TikTok, you can tap on "Me" in the bottom right corner of the screen, then tap on the three dots in the top right corner of the screen, then tap on "Manage account", then tap on "Switch account", then select the account you want to switch to.

              -
            • How do I collaborate with other creators on Reels and TikTok?
            • -

              If you want to collaborate with other creators on Reels or TikTok, you can use some of the features that these platforms offer, such as duet, stitch, remix, or live. These features allow you to create videos with other creators, either by adding your own video or audio to their existing video, or by creating a new video together in real time. You can also collaborate with other creators by tagging them in your videos, captions, or comments, or by joining their challenges, contests, or campaigns.

              -
            • How do I measure the performance of my Reels and TikTok videos?
            • -

              If you want to measure the performance of your Reels or TikTok videos, you can use some of the analytics tools that these platforms provide. On Reels, you can tap on the three lines in the top right corner of the screen, then tap on "Insights", then tap on "Content", then tap on "Reels". Here you can see some basic metrics such as views, likes, comments, shares, saves, reach, and impressions for your Reels videos. On TikTok, you can tap on "Me" in the bottom right corner of the screen, then tap on the three dots in the top right corner of the screen, then tap on "Analytics". Here you can see some detailed metrics such as views, likes, comments, shares, saves, followers, watch time, traffic source, audience territories, and audience interests for your TikTok videos.

              -
            • How do I monetize my Reels and TikTok content?
            • -

              If you want to monetize your Reels or TikTok content, you can use some of the monetization options that these platforms offer. On Reels, you can monetize your content by using Instagram's ads platform or sponsored posts. You can also monetize your content by promoting your products or services, selling your merchandise, offering your courses or coaching, or using affiliate links or codes. On TikTok, you can monetize your content by using TikTok's ads platform or sponsored posts. You can also monetize your content by joining TikTok's Creator Fund or Creator Marketplace programs, which pay you based on your views and engagement. You can also monetize your content by promoting your products or services, selling your merchandise, offering your courses or coaching, or using affiliate links or codes.

              -
            • How do I stay updated on the latest trends and features on Reels and TikTok?
            • -

              If you want to stay updated on the latest trends and features on Reels or TikTok, you can use some of the resources that these platforms provide. On Reels, you can tap on the magnifying glass icon in the bottom of the screen, then tap on "Reels". Here you can see some of the trending videos, hashtags, music, effects, and creators on Reels. You can also follow some of the official accounts of Instagram, such as @instagram, @creators, or @shop, to get the latest news and updates on Reels. On TikTok, you can tap on the discover icon in the bottom of the screen. Here you can see some of the trending videos, hashtags, music, effects, and creators on TikTok. You can also follow some of the official accounts of TikTok, such as @tiktok, @tiktoktips, or @tiktokcreators, to get the latest news and updates on TikTok.

              -

            -
            -
            \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Euro Truck Simulator 2 - The Most Realistic and Immersive Truck Driving Game for PC.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Euro Truck Simulator 2 - The Most Realistic and Immersive Truck Driving Game for PC.md deleted file mode 100644 index 3825573b4f84f68a686b9b2f0f6e026184f9b2eb..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Euro Truck Simulator 2 - The Most Realistic and Immersive Truck Driving Game for PC.md +++ /dev/null @@ -1,129 +0,0 @@ - -

            Euro Truck Simulator APK Download for PC

            -

            If you love driving simulation games, you might have heard of Euro Truck Simulator, one of the most popular trucking games in the world. In this game, you can drive a cargo truck across Europe, delivering goods from one destination to another. You can also build your own trucking company, buy new trucks, hire drivers, and earn money.

            -

            euro truck simulator apk download for pc


            DOWNLOAD === https://ssurll.com/2uNSvo



            -

            But did you know that you can also play this game on your PC? Yes, you can download Euro Truck Simulator APK for PC and enjoy the game on a bigger screen, with better graphics, sound, and performance. In this article, we will show you how to do that, as well as some tips and tricks to make your gameplay more fun and realistic.

            -

            What is Euro Truck Simulator?

            -

            Euro Truck Simulator is a simulation game developed by SCS Software, a Czech company that specializes in vehicle-related games. The first version of the game was released in 2008, and since then, several sequels and expansions have been launched. The latest version is Euro Truck Simulator 2, which was released in 2012.

            -

            In Euro Truck Simulator 2, you can drive a variety of trucks from different European manufacturers, such as Mercedes-Benz, Volvo, Scania, MAN, Renault, DAF, Iveco, and more. You can also customize your trucks with different paint jobs, accessories, engines, transmissions, chassis, wheels, lights, horns, etc.

            -

            The game features a realistic map of Europe, with over 70 cities in 13 countries. You can drive on different types of roads, such as highways, country roads, city streets, dirt roads, etc. You can also encounter different weather conditions, traffic situations, tolls, speed limits, police checks, etc.

            -

            The game also has a career mode, where you can start as a low-skilled driver who works for various companies. As you complete deliveries and earn money, you can buy your own truck depot or build a fleet of trucks yourself. You can also upgrade your skills, such as fuel efficiency, long distance driving, fragile cargo handling, etc.

            -

            Why download Euro Truck Simulator APK for PC?

            -

            Euro Truck Simulator 2 is available for Windows PCs as a paid game. However, if you want to play it for free or if you don't have a compatible PC system, you can also download Euro Truck Simulator APK for PC. This is an Android version of the game that you can run on your PC using an emulator.

            -

            euro truck simulator 2 free download full version pc
            -euro truck simulator 2 apk for pc windows 10
            -euro truck simulator 2 download for pc highly compressed
            -euro truck simulator 2 mods download for pc
            -euro truck simulator 2 crack download for pc
            -euro truck simulator 2 multiplayer download for pc
            -euro truck simulator 2 download for pc ocean of games
            -euro truck simulator 2 download for pc softonic
            -euro truck simulator 2 download for pc windows 7
            -euro truck simulator 2 download for pc with product key
            -euro truck simulator 2 scandinavia download for pc
            -euro truck simulator 2 going east download for pc
            -euro truck simulator 2 beyond the baltic sea download for pc
            -euro truck simulator 2 italia download for pc
            -euro truck simulator 2 vive la france download for pc
            -euro truck simulator 2 road to the black sea download for pc
            -euro truck simulator 2 heavy cargo pack download for pc
            -euro truck simulator 2 special transport download for pc
            -euro truck simulator 2 bus mod download for pc
            -euro truck simulator 2 car mod download for pc
            -euro truck simulator 2 map mod download for pc
            -euro truck simulator 2 save game download for pc
            -euro truck simulator 2 profile download for pc
            -euro truck simulator 2 activation key download for pc
            -euro truck simulator 2 patch download for pc
            -euro truck simulator 2 update download for pc
            -euro truck simulator 2 trainer download for pc
            -euro truck simulator 2 cheat engine download for pc
            -euro truck simulator 2 setup download for pc
            -euro truck simulator 2 iso file download for pc
            -euro truck simulator 3 apk download for pc
            -how to install euro truck simulator apk on pc
            -how to play euro truck simulator apk on pc with bluestacks[^1^]
            -how to run euro truck simulator apk on windows[^1^]
            -how to get euro truck driver apk on your computer[^1^]
            -best site to download euro truck driver apk for laptop[^1^]
            -where can i find free and safe apk files of euro truck driver[^1^]
            -how to update the latest version of euro truck driver apk on my desktop[^1^]
            -how to fix the error of loading the game data of euro truck driver apk on my notebook[^1^]
            -how to transfer my progress from android to pc in euro truck driver apk[^1^]
            -how to connect a controller to play euro truck driver apk on my macbook[^1^]
            -how to customize the graphics settings of euro truck driver apk on my chromebook[^1^]
            -how to enable the online mode of euro truck driver apk on my surface pro[^1^]
            -how to join a multiplayer server in euro truck driver apk on my lenovo yoga[^1^]
            -how to chat with other players in euro truck driver apk on my dell inspiron[^1^]
            -how to earn more money and xp in euro truck driver apk on my hp pavilion[^1^]
            -how to unlock new trucks and trailers in euro truck driver apk on my acer aspire[^1^]
            -how to upgrade and repair my trucks in euro truck driver apk on my asus zenbook[^1^]
            -how to change the camera view and control options in euro truck driver apk on my msi gaming laptop[^1^]
            -how to complete different missions and challenges in euro truck driver apk on my alienware area51m[^1^]

            -

            There are many benefits of playing Euro Truck Simulator APK for PC. Here are some of them:

            -
              -
            • You can enjoy the game on a larger screen with higher resolution and better graphics quality.
            • -
            • You can use your keyboard and mouse to control your truck more easily and precisely.
            • -
            • You can save your battery life and storage space on your mobile device.
            • -
            • You can access more features and options that are not available on the mobile version.
            • -
            • You can play the game offline or online with other players from around the world.
            • -
            -

            So, if you are interested in playing Euro Truck Simulator APK for PC, read on to find out how to do it.

            -

            How to download Euro Truck Simulator APK for PC?

            -

             To download Euro Truck Simulator APK for PC, you will need two things: an emulator and an APK file. An emulator is a program that lets you run Android apps on your PC. An APK file is the Android package file that contains the app and its data. Here are the steps to follow:

            -

            Download an emulator

            -

            There are many emulators available for PC, but we recommend using BlueStacks, which is one of the most popular and reliable ones. You can download it from its official website: [BlueStacks]. Follow the instructions on the website to install it on your PC.

            -

            Download the APK file

            -

            Next, you will need to download the APK file of Euro Truck Simulator 2. You can find it on various websites, but we suggest using [APKPure], which is a trusted and safe source. Go to the website and search for Euro Truck Simulator 2. Click on the download button and save the file on your PC.
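
              Before installing anything, it is worth a quick check that the file you saved is the one the site meant to publish. If the download page lists a SHA-256 checksum, you can compare it against your copy with Windows' built-in tools. The snippet below is only a sketch: the file name and path are placeholders for wherever you actually saved the APK.

```powershell
# compute the SHA-256 hash of the downloaded APK and compare it with the value shown on the download page
Get-FileHash "$env:USERPROFILE\Downloads\euro-truck-simulator-2.apk" -Algorithm SHA256
```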

            -

            Install and launch the game

            -

             Now that you have both the emulator and the APK file, you can install and launch the game. Here's how (a command-line alternative is sketched after the list):

            -
              -
            1. Open BlueStacks and sign in with your Google account.
            2. -
            3. Go to the folder where you saved the APK file and right-click on it.
            4. -
            5. Select "Open with" and choose "BlueStacks".
            6. -
            7. The game will start installing on the emulator.
            8. -
            9. Once the installation is complete, you will see the game icon on the BlueStacks home screen.
            10. -
            11. Click on it and enjoy playing Euro Truck Simulator 2 on your PC.
            12. -
            -
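
              If the right-click route doesn't work on your machine (for example, if .apk files aren't associated with BlueStacks), you can sideload the file from a terminal instead. This is only a sketch under a few assumptions: you have the Android platform-tools (adb) installed, ADB access is enabled in BlueStacks' settings (under the advanced options on recent versions), and you replace the port and file path with the ones on your own system.

```powershell
# connect adb to the running BlueStacks instance; the port is shown in its settings, 5555 is a common default
adb connect 127.0.0.1:5555

# install the APK you downloaded earlier (adjust the path to wherever you saved it)
adb install "$env:USERPROFILE\Downloads\euro-truck-simulator-2.apk"
```

              Once adb reports "Success", the game icon shows up on the BlueStacks home screen just as it does after a regular install.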

            Tips and tricks for playing Euro Truck Simulator on PC

            -

            Now that you know how to download Euro Truck Simulator APK for PC, here are some tips and tricks to make your gameplay more fun and realistic:

            -

            Customize your controls

            -

              One of the advantages of playing Euro Truck Simulator on PC is that you can use your keyboard and mouse to control your truck. However, you might want to adjust the settings to suit your preferences. To do that, go to "Options" > "Controls" and change the keys or buttons for steering, accelerating, braking, shifting gears, turn signals, and so on. You can also enable or disable features such as automatic transmission, cruise control, and the speed limiter.

            -

            Use the radio feature

            -

            Driving a truck across Europe can be boring without some music or podcasts to keep you company. Luckily, Euro Truck Simulator has a radio feature that allows you to listen to your favorite stations or files while driving. To use it, go to "Options" > "Radio" and add or remove stations or files from your list. You can also adjust the volume and switch between stations or files using the keyboard shortcuts.

            -

            Explore different maps and trucks

            -

            Euro Truck Simulator has a lot of variety when it comes to maps and trucks. You can unlock new maps by expanding your company or buying new garages in different countries. You can also unlock new trucks by earning money or visiting dealerships in different cities. You can choose from different models, brands, colors, accessories, etc. You can also customize your license plate, dashboard, interior, etc.

            -

            Manage your own company

            -

            Euro Truck Simulator also has a business simulation aspect where you can manage your own company. You can hire drivers, buy garages, assign trucks, monitor deliveries, etc. You can also view your statistics, such as income, expenses, reputation, etc. You can also access online features such as leaderboards, achievements, etc.

            -

            Conclusion

            -

            Euro Truck Simulator is a great game for anyone who loves driving simulation games. It offers a realistic and immersive experience of driving a truck across Europe. You can also download Euro Truck Simulator APK for PC and play it on a bigger screen with better graphics and performance. All you need is an emulator and an APK file. Follow our guide above to learn how to do it easily and quickly. Then enjoy playing Euro Truck Simulator 2 on your PC with our tips and tricks.

            -

            FAQs

            -
              -
            • Q: How much space does Euro Truck Simulator 2 take on PC?
            • -
            • A: The game needs about 4 GB of free disk space on your PC to install and run.
            • -
            • Q: How can I update Euro Truck Simulator 2 on PC?
            • -
            • A: You can update the game automatically or manually. To update it automatically, make sure you have an internet connection and launch the game. The game will check for updates and download them if available. To update it manually, go to the official website of the game and download the latest patch. Then run the patch file and follow the instructions.
            • -
            • Q: How can I play Euro Truck Simulator 2 online with other players?
            • -
            • A: You can play Euro Truck Simulator 2 online with other players using a mod called TruckersMP. This mod allows you to join servers and drive with or against other players in real time. You can also chat, join convoys, participate in events, etc. To use this mod, you need to have a Steam account and a registered copy of the game. You also need to download and install the mod from its official website: [TruckersMP].
            • -
            • Q: How can I get more money and experience in Euro Truck Simulator 2?
            • -
            • A: There are several ways to get more money and experience in Euro Truck Simulator 2. Some of them are:
            • -
                -
              • Complete more deliveries and contracts with higher rewards and bonuses.
              • -
              • Upgrade your skills and unlock more profitable types of cargo and routes.
              • -
              • Buy more trucks and hire more drivers to work for you.
              • -
              • Use cheats or mods to increase your money and experience. However, this might affect your gameplay balance and online features.
              • -
              -
            • Q: How can I fix common errors or problems in Euro Truck Simulator 2?
            • -
            • A: If you encounter any errors or problems in Euro Truck Simulator 2, such as crashes, freezes, lags, bugs, etc., you can try some of these solutions:
            • -
                -
              • Make sure your PC meets the minimum system requirements for the game.
              • -
              • Update your graphics card drivers and DirectX software.
              • -
              • Run the game as an administrator and in compatibility mode.
              • -
              • Disable any antivirus or firewall software that might interfere with the game.
              • -
              • Delete any mods or files that might cause conflicts with the game.
              • -
              • Verify the integrity of the game files using Steam or BlueStacks.
              • -
              • Reinstall the game or the emulator if nothing else works.
              • -
              -

            401be4b1e0
            -
            -
            \ No newline at end of file diff --git a/spaces/smallyu/img-to-music/utils.py b/spaces/smallyu/img-to-music/utils.py deleted file mode 100644 index 58f6e0c1f9c6af926a3cacf090517d6a62d618be..0000000000000000000000000000000000000000 --- a/spaces/smallyu/img-to-music/utils.py +++ /dev/null @@ -1,50 +0,0 @@ -import json -import numpy as np -import httpx -import os - -from constants import MUBERT_TAGS, MUBERT_MODE, MUBERT_LICENSE, MUBERT_TOKEN - -def get_mubert_tags_embeddings(w2v_model): - return w2v_model.encode(MUBERT_TAGS) - - -def get_pat(email: str): - r = httpx.post('https://api-b2b.mubert.com/v2/GetServiceAccess', - json={ - "method": "GetServiceAccess", - "params": { - "email": email, - "license": MUBERT_LICENSE, - "token": MUBERT_TOKEN, - "mode": MUBERT_MODE, - } - }) - - rdata = json.loads(r.text) - assert rdata['status'] == 1, "probably incorrect e-mail" - pat = rdata['data']['pat'] - return pat - - -def find_similar(em, embeddings, method='cosine'): - scores = [] - for ref in embeddings: - if method == 'cosine': - scores.append(1 - np.dot(ref, em) / (np.linalg.norm(ref) * np.linalg.norm(em))) - if method == 'norm': - scores.append(np.linalg.norm(ref - em)) - return np.array(scores), np.argsort(scores) - - -def get_tags_for_prompts(w2v_model, mubert_tags_embeddings, prompts, top_n=3, debug=False): - prompts_embeddings = w2v_model.encode(prompts) - ret = [] - for i, pe in enumerate(prompts_embeddings): - scores, idxs = find_similar(pe, mubert_tags_embeddings) - top_tags = MUBERT_TAGS[idxs[:top_n]] - top_prob = 1 - scores[idxs[:top_n]] - if debug: - print(f"Prompt: {prompts[i]}\nTags: {', '.join(top_tags)}\nScores: {top_prob}\n\n\n") - ret.append((prompts[i], list(top_tags))) - return ret \ No newline at end of file diff --git a/spaces/sneedium/captcha_pixelplanet/modules/resnet.py b/spaces/sneedium/captcha_pixelplanet/modules/resnet.py deleted file mode 100644 index 6bcb4698fe8e8e5079cc891df49b73967fcfc9e3..0000000000000000000000000000000000000000 --- a/spaces/sneedium/captcha_pixelplanet/modules/resnet.py +++ /dev/null @@ -1,133 +0,0 @@ -import math - -import torch.nn as nn -import torch.nn.functional as F -import torch.utils.model_zoo as model_zoo -import torch.utils.checkpoint as cp - -def conv1x1(in_planes, out_planes, stride=1): - return nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, bias=False) - - -def conv3x3(in_planes, out_planes, stride=1): - "3x3 convolution with padding" - return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, - padding=1, bias=False) - - -class BasicBlock(nn.Module): - expansion = 1 - - def __init__(self, inplanes, planes, stride=1, downsample=None, with_cp=False): - super(BasicBlock, self).__init__() - self.conv1 = conv1x1(inplanes, planes) - self.bn1 = nn.BatchNorm2d(planes) - self.relu = nn.ReLU(inplace=True) - self.conv2 = conv3x3(planes, planes, stride) - self.bn2 = nn.BatchNorm2d(planes) - self.downsample = downsample - self.stride = stride - self.with_cp = with_cp - - def forward(self, x): - def _inner_forward(x): - residual = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - - if self.downsample is not None: - residual = self.downsample(x) - - out += residual - - return out - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(_inner_forward, x) - else: - out = _inner_forward(x) - - out = self.relu(out) - - return out - - -class ResNet(nn.Module): - - def __init__(self, block, layers, output_channels=512): - super(ResNet, 
self).__init__() - channels = [output_channels//(2**i) for i in reversed(range(5))] - self.inplanes = channels[0] - self.conv1 = nn.Conv2d(3, channels[0], kernel_size=3, stride=1, padding=1, - bias=False) - self.bn1 = nn.BatchNorm2d(channels[0]) - self.relu = nn.ReLU(inplace=True) - - self.layer1 = self._make_layer(block, channels[0], layers[0], stride=2) - self.layer2 = self._make_layer(block, channels[1], layers[1], stride=1) - self.layer3 = self._make_layer(block, channels[2], layers[2], stride=2) - self.layer4 = self._make_layer(block, channels[3], layers[3], stride=1) - self.layer5 = self._make_layer(block, channels[4], layers[4], stride=1) - - for m in self.modules(): - if isinstance(m, nn.Conv2d): - n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels - m.weight.data.normal_(0, math.sqrt(2. / n)) - elif isinstance(m, nn.BatchNorm2d): - m.weight.data.fill_(1) - m.bias.data.zero_() - - def _make_layer(self, block, planes, blocks, stride=1): - downsample = None - if stride != 1 or self.inplanes != planes * block.expansion: - downsample = nn.Sequential( - nn.Conv2d(self.inplanes, planes * block.expansion, - kernel_size=1, stride=stride, bias=False), - nn.BatchNorm2d(planes * block.expansion), - ) - - layers = [] - layers.append(block(self.inplanes, planes, stride, downsample)) - self.inplanes = planes * block.expansion - for i in range(1, blocks): - layers.append(block(self.inplanes, planes)) - - return nn.Sequential(*layers) - - def forward(self, x, extra_feats=None): - x = self.conv1(x) - x = self.bn1(x) - x = self.relu(x) - if extra_feats is not None: - if extra_feats[0].shape[1]>0: - x = x+F.interpolate(extra_feats[0], x.shape[2:], mode='nearest') - x = self.layer1(x) - if extra_feats is not None: - if extra_feats[1].shape[1]>0: - x = x+F.interpolate(extra_feats[1], x.shape[2:], mode='nearest') - x = self.layer2(x) - if extra_feats is not None: - if extra_feats[2].shape[1]>0: - x = x+F.interpolate(extra_feats[2], x.shape[2:], mode='nearest') - x = self.layer3(x) - if extra_feats is not None: - if extra_feats[3].shape[1]>0: - x = x+F.interpolate(extra_feats[3], x.shape[2:], mode='nearest') - x = self.layer4(x) - if extra_feats is not None: - if extra_feats[4].shape[1]>0: - x = x+F.interpolate(extra_feats[4], x.shape[2:], mode='nearest') - x = self.layer5(x) - if extra_feats is not None: - if extra_feats[5].shape[1]>0: - x = x+F.interpolate(extra_feats[5], x.shape[2:], mode='nearest') - return x - -def resnet45(alpha_d, output_channels=512): - layers = [int(round(x*alpha_d)) for x in [3, 4, 6, 6, 3]] - return ResNet(BasicBlock, layers, output_channels=output_channels) diff --git a/spaces/sparanoid/milky-green-sovits-4/README.md b/spaces/sparanoid/milky-green-sovits-4/README.md deleted file mode 100644 index e2b958430aaac82ae617429b2c95b2778d41b3b6..0000000000000000000000000000000000000000 --- a/spaces/sparanoid/milky-green-sovits-4/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Milky Green SoVITS 4 -emoji: 🍵 -colorFrom: cyan -colorTo: green -sdk: gradio -sdk_version: 3.19.1 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/mbart/README.md b/spaces/sriramelango/Social_Classification_Public/fairseq/examples/mbart/README.md deleted file mode 100644 index a45e37243c2c5d4027f79cf71498ca58bbac7d98..0000000000000000000000000000000000000000 --- 
a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/mbart/README.md +++ /dev/null @@ -1,123 +0,0 @@ -# MBART: Multilingual Denoising Pre-training for Neural Machine Translation -[https://arxiv.org/abs/2001.08210] - -## Introduction - -MBART is a sequence-to-sequence denoising auto-encoder pre-trained on large-scale monolingual corpora in many languages using the BART objective. mBART is one of the first methods for pre-training a complete sequence-to-sequence model by denoising full texts in multiple languages, while previous approaches have focused only on the encoder, decoder, or reconstructing parts of the text. - -## Pre-trained models - -Model | Description | # params | Download ----|---|---|--- -`mbart.CC25` | mBART model with 12 encoder and decoder layers trained on 25 languages' monolingual corpus | 610M | [mbart.CC25.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/mbart/mbart.cc25.v2.tar.gz) -`mbart.ft.ro_en` | finetune mBART cc25 model on ro-en language pairs | 610M | [mbart.cc25.ft.enro.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/mbart/mbart.cc25.ft.enro.tar.gz) - -## Results - -**[WMT16 EN-RO](https://www.statmt.org/wmt16/translation-task.html)** - -_(test set, no additional data used)_ - -Model | en-ro | ro-en ----|---|--- -`Random` | 34.3 | 34.0 -`mbart.cc25` | 37.7 | 37.8 -`mbart.enro.bilingual` | 38.5 | 38.5 - -## BPE data -# download model -wget https://dl.fbaipublicfiles.com/fairseq/models/mbart/mbart.cc25.v2.tar.gz -tar -xzvf mbart.CC25.tar.gz -# bpe data -install SPM [here](https://github.com/google/sentencepiece) -```bash -SPM=/path/to/sentencepiece/build/src/spm_encode -MODEL=sentence.bpe.model -${SPM} --model=${MODEL} < ${DATA}/${TRAIN}.${SRC} > ${DATA}/${TRAIN}.spm.${SRC} & -${SPM} --model=${MODEL} < ${DATA}/${TRAIN}.${TGT} > ${DATA}/${TRAIN}.spm.${TGT} & -${SPM} --model=${MODEL} < ${DATA}/${VALID}.${SRC} > ${DATA}/${VALID}.spm.${SRC} & -${SPM} --model=${MODEL} < ${DATA}/${VALID}.${TGT} > ${DATA}/${VALID}.spm.${TGT} & -${SPM} --model=${MODEL} < ${DATA}/${TEST}.${SRC} > ${DATA}/${TEST}.spm.${SRC} & -${SPM} --model=${MODEL} < ${DATA}/${TEST}.${TGT} > ${DATA}/${TEST}.spm.${TGT} & -``` - -## Preprocess data - -```bash -DICT=dict.txt -fairseq-preprocess \ - --source-lang ${SRC} \ - --target-lang ${TGT} \ - --trainpref ${DATA}/${TRAIN}.spm \ - --validpref ${DATA}/${VALID}.spm \ - --testpref ${DATA}/${TEST}.spm \ - --destdir ${DEST}/${NAME} \ - --thresholdtgt 0 \ - --thresholdsrc 0 \ - --srcdict ${DICT} \ - --tgtdict ${DICT} \ - --workers 70 -``` - -## Finetune on EN-RO -Finetune on mbart CC25 - -```bash -PRETRAIN=mbart.cc25 # fix if you moved the downloaded checkpoint -langs=ar_AR,cs_CZ,de_DE,en_XX,es_XX,et_EE,fi_FI,fr_XX,gu_IN,hi_IN,it_IT,ja_XX,kk_KZ,ko_KR,lt_LT,lv_LV,my_MM,ne_NP,nl_XX,ro_RO,ru_RU,si_LK,tr_TR,vi_VN,zh_CN - -fairseq-train path_2_data \ - --encoder-normalize-before --decoder-normalize-before \ - --arch mbart_large --layernorm-embedding \ - --task translation_from_pretrained_bart \ - --source-lang en_XX --target-lang ro_RO \ - --criterion label_smoothed_cross_entropy --label-smoothing 0.2 \ - --optimizer adam --adam-eps 1e-06 --adam-betas '(0.9, 0.98)' \ - --lr-scheduler polynomial_decay --lr 3e-05 --warmup-updates 2500 --total-num-update 40000 \ - --dropout 0.3 --attention-dropout 0.1 --weight-decay 0.0 \ - --max-tokens 1024 --update-freq 2 \ - --save-interval 1 --save-interval-updates 5000 --keep-interval-updates 10 --no-epoch-checkpoints \ - --seed 222 --log-format simple --log-interval 2 \ - --restore-file $PRETRAIN \ - 
--reset-optimizer --reset-meters --reset-dataloader --reset-lr-scheduler \ - --langs $langs \ - --ddp-backend legacy_ddp -``` -## Generate on EN-RO -Get sacrebleu on finetuned en-ro model - -get tokenizer [here](https://github.com/rsennrich/wmt16-scripts) -```bash -wget https://dl.fbaipublicfiles.com/fairseq/models/mbart/mbart.cc25.ft.enro.tar.gz -tar -xzvf mbart.cc25.ft.enro.tar.gz -``` - -```bash -model_dir=MBART_finetuned_enro # fix if you moved the checkpoint - -fairseq-generate path_2_data \ - --path $model_dir/model.pt \ - --task translation_from_pretrained_bart \ - --gen-subset test \ - -t ro_RO -s en_XX \ - --bpe 'sentencepiece' --sentencepiece-model $model_dir/sentence.bpe.model \ - --sacrebleu --remove-bpe 'sentencepiece' \ - --batch-size 32 --langs $langs > en_ro - -cat en_ro | grep -P "^H" |sort -V |cut -f 3- | sed 's/\[ro_RO\]//g' |$TOKENIZER ro > en_ro.hyp -cat en_ro | grep -P "^T" |sort -V |cut -f 2- | sed 's/\[ro_RO\]//g' |$TOKENIZER ro > en_ro.ref -sacrebleu -tok 'none' -s 'none' en_ro.ref < en_ro.hyp -``` - -## Citation - -```bibtex -@article{liu2020multilingual, - title={Multilingual Denoising Pre-training for Neural Machine Translation}, - author={Yinhan Liu and Jiatao Gu and Naman Goyal and Xian Li and Sergey Edunov and Marjan Ghazvininejad and Mike Lewis and Luke Zettlemoyer}, - year={2020}, - eprint={2001.08210}, - archivePrefix={arXiv}, - primaryClass={cs.CL} -} -``` diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/models/lstm.py b/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/models/lstm.py deleted file mode 100644 index e1e66a7d50fa1b1b313e9d1a6e7862ac9bfaa074..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/models/lstm.py +++ /dev/null @@ -1,753 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -from typing import Dict, List, Optional, Tuple - -import torch -import torch.nn as nn -import torch.nn.functional as F -from fairseq import utils -from fairseq.models import ( - FairseqEncoder, - FairseqEncoderDecoderModel, - FairseqIncrementalDecoder, - register_model, - register_model_architecture, -) -from fairseq.modules import AdaptiveSoftmax, FairseqDropout -from torch import Tensor - - -DEFAULT_MAX_SOURCE_POSITIONS = 1e5 -DEFAULT_MAX_TARGET_POSITIONS = 1e5 - - -@register_model("lstm") -class LSTMModel(FairseqEncoderDecoderModel): - def __init__(self, encoder, decoder): - super().__init__(encoder, decoder) - - @staticmethod - def add_args(parser): - """Add model-specific arguments to the parser.""" - # fmt: off - parser.add_argument('--dropout', type=float, metavar='D', - help='dropout probability') - parser.add_argument('--encoder-embed-dim', type=int, metavar='N', - help='encoder embedding dimension') - parser.add_argument('--encoder-embed-path', type=str, metavar='STR', - help='path to pre-trained encoder embedding') - parser.add_argument('--encoder-freeze-embed', action='store_true', - help='freeze encoder embeddings') - parser.add_argument('--encoder-hidden-size', type=int, metavar='N', - help='encoder hidden size') - parser.add_argument('--encoder-layers', type=int, metavar='N', - help='number of encoder layers') - parser.add_argument('--encoder-bidirectional', action='store_true', - help='make all layers of encoder bidirectional') - parser.add_argument('--decoder-embed-dim', type=int, metavar='N', - help='decoder embedding dimension') - parser.add_argument('--decoder-embed-path', type=str, metavar='STR', - help='path to pre-trained decoder embedding') - parser.add_argument('--decoder-freeze-embed', action='store_true', - help='freeze decoder embeddings') - parser.add_argument('--decoder-hidden-size', type=int, metavar='N', - help='decoder hidden size') - parser.add_argument('--decoder-layers', type=int, metavar='N', - help='number of decoder layers') - parser.add_argument('--decoder-out-embed-dim', type=int, metavar='N', - help='decoder output embedding dimension') - parser.add_argument('--decoder-attention', type=str, metavar='BOOL', - help='decoder attention') - parser.add_argument('--adaptive-softmax-cutoff', metavar='EXPR', - help='comma separated list of adaptive softmax cutoff points. 
' - 'Must be used with adaptive_loss criterion') - parser.add_argument('--share-decoder-input-output-embed', default=False, - action='store_true', - help='share decoder input and output embeddings') - parser.add_argument('--share-all-embeddings', default=False, action='store_true', - help='share encoder, decoder and output embeddings' - ' (requires shared dictionary and embed dim)') - - # Granular dropout settings (if not specified these default to --dropout) - parser.add_argument('--encoder-dropout-in', type=float, metavar='D', - help='dropout probability for encoder input embedding') - parser.add_argument('--encoder-dropout-out', type=float, metavar='D', - help='dropout probability for encoder output') - parser.add_argument('--decoder-dropout-in', type=float, metavar='D', - help='dropout probability for decoder input embedding') - parser.add_argument('--decoder-dropout-out', type=float, metavar='D', - help='dropout probability for decoder output') - # fmt: on - - @classmethod - def build_model(cls, args, task): - """Build a new model instance.""" - # make sure that all args are properly defaulted (in case there are any new ones) - base_architecture(args) - - if args.encoder_layers != args.decoder_layers: - raise ValueError("--encoder-layers must match --decoder-layers") - - max_source_positions = getattr( - args, "max_source_positions", DEFAULT_MAX_SOURCE_POSITIONS - ) - max_target_positions = getattr( - args, "max_target_positions", DEFAULT_MAX_TARGET_POSITIONS - ) - - def load_pretrained_embedding_from_file(embed_path, dictionary, embed_dim): - num_embeddings = len(dictionary) - padding_idx = dictionary.pad() - embed_tokens = Embedding(num_embeddings, embed_dim, padding_idx) - embed_dict = utils.parse_embedding(embed_path) - utils.print_embed_overlap(embed_dict, dictionary) - return utils.load_embedding(embed_dict, dictionary, embed_tokens) - - if args.encoder_embed_path: - pretrained_encoder_embed = load_pretrained_embedding_from_file( - args.encoder_embed_path, task.source_dictionary, args.encoder_embed_dim - ) - else: - num_embeddings = len(task.source_dictionary) - pretrained_encoder_embed = Embedding( - num_embeddings, args.encoder_embed_dim, task.source_dictionary.pad() - ) - - if args.share_all_embeddings: - # double check all parameters combinations are valid - if task.source_dictionary != task.target_dictionary: - raise ValueError("--share-all-embeddings requires a joint dictionary") - if args.decoder_embed_path and ( - args.decoder_embed_path != args.encoder_embed_path - ): - raise ValueError( - "--share-all-embed not compatible with --decoder-embed-path" - ) - if args.encoder_embed_dim != args.decoder_embed_dim: - raise ValueError( - "--share-all-embeddings requires --encoder-embed-dim to " - "match --decoder-embed-dim" - ) - pretrained_decoder_embed = pretrained_encoder_embed - args.share_decoder_input_output_embed = True - else: - # separate decoder input embeddings - pretrained_decoder_embed = None - if args.decoder_embed_path: - pretrained_decoder_embed = load_pretrained_embedding_from_file( - args.decoder_embed_path, - task.target_dictionary, - args.decoder_embed_dim, - ) - # one last double check of parameter combinations - if args.share_decoder_input_output_embed and ( - args.decoder_embed_dim != args.decoder_out_embed_dim - ): - raise ValueError( - "--share-decoder-input-output-embeddings requires " - "--decoder-embed-dim to match --decoder-out-embed-dim" - ) - - if args.encoder_freeze_embed: - pretrained_encoder_embed.weight.requires_grad = False - if 
args.decoder_freeze_embed: - pretrained_decoder_embed.weight.requires_grad = False - - encoder = LSTMEncoder( - dictionary=task.source_dictionary, - embed_dim=args.encoder_embed_dim, - hidden_size=args.encoder_hidden_size, - num_layers=args.encoder_layers, - dropout_in=args.encoder_dropout_in, - dropout_out=args.encoder_dropout_out, - bidirectional=args.encoder_bidirectional, - pretrained_embed=pretrained_encoder_embed, - max_source_positions=max_source_positions, - ) - decoder = LSTMDecoder( - dictionary=task.target_dictionary, - embed_dim=args.decoder_embed_dim, - hidden_size=args.decoder_hidden_size, - out_embed_dim=args.decoder_out_embed_dim, - num_layers=args.decoder_layers, - dropout_in=args.decoder_dropout_in, - dropout_out=args.decoder_dropout_out, - attention=utils.eval_bool(args.decoder_attention), - encoder_output_units=encoder.output_units, - pretrained_embed=pretrained_decoder_embed, - share_input_output_embed=args.share_decoder_input_output_embed, - adaptive_softmax_cutoff=( - utils.eval_str_list(args.adaptive_softmax_cutoff, type=int) - if args.criterion == "adaptive_loss" - else None - ), - max_target_positions=max_target_positions, - residuals=False, - ) - return cls(encoder, decoder) - - def forward( - self, - src_tokens, - src_lengths, - prev_output_tokens, - incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None, - ): - encoder_out = self.encoder(src_tokens, src_lengths=src_lengths) - decoder_out = self.decoder( - prev_output_tokens, - encoder_out=encoder_out, - incremental_state=incremental_state, - ) - return decoder_out - - -class LSTMEncoder(FairseqEncoder): - """LSTM encoder.""" - - def __init__( - self, - dictionary, - embed_dim=512, - hidden_size=512, - num_layers=1, - dropout_in=0.1, - dropout_out=0.1, - bidirectional=False, - left_pad=True, - pretrained_embed=None, - padding_idx=None, - max_source_positions=DEFAULT_MAX_SOURCE_POSITIONS, - ): - super().__init__(dictionary) - self.num_layers = num_layers - self.dropout_in_module = FairseqDropout( - dropout_in*1.0, module_name=self.__class__.__name__ - ) - self.dropout_out_module = FairseqDropout( - dropout_out*1.0, module_name=self.__class__.__name__ - ) - self.bidirectional = bidirectional - self.hidden_size = hidden_size - self.max_source_positions = max_source_positions - - num_embeddings = len(dictionary) - self.padding_idx = padding_idx if padding_idx is not None else dictionary.pad() - if pretrained_embed is None: - self.embed_tokens = Embedding(num_embeddings, embed_dim, self.padding_idx) - else: - self.embed_tokens = pretrained_embed - - self.lstm = LSTM( - input_size=embed_dim, - hidden_size=hidden_size, - num_layers=num_layers, - dropout=self.dropout_out_module.p if num_layers > 1 else 0.0, - bidirectional=bidirectional, - ) - self.left_pad = left_pad - - self.output_units = hidden_size - if bidirectional: - self.output_units *= 2 - - def forward( - self, - src_tokens: Tensor, - src_lengths: Tensor, - enforce_sorted: bool = True, - ): - """ - Args: - src_tokens (LongTensor): tokens in the source language of - shape `(batch, src_len)` - src_lengths (LongTensor): lengths of each source sentence of - shape `(batch)` - enforce_sorted (bool, optional): if True, `src_tokens` is - expected to contain sequences sorted by length in a - decreasing order. If False, this condition is not - required. Default: True. 
- """ - if self.left_pad: - # nn.utils.rnn.pack_padded_sequence requires right-padding; - # convert left-padding to right-padding - src_tokens = utils.convert_padding_direction( - src_tokens, - torch.zeros_like(src_tokens).fill_(self.padding_idx), - left_to_right=True, - ) - - bsz, seqlen = src_tokens.size() - - # embed tokens - x = self.embed_tokens(src_tokens) - x = self.dropout_in_module(x) - - # B x T x C -> T x B x C - x = x.transpose(0, 1) - - # pack embedded source tokens into a PackedSequence - packed_x = nn.utils.rnn.pack_padded_sequence( - x, src_lengths.cpu(), enforce_sorted=enforce_sorted - ) - - # apply LSTM - if self.bidirectional: - state_size = 2 * self.num_layers, bsz, self.hidden_size - else: - state_size = self.num_layers, bsz, self.hidden_size - h0 = x.new_zeros(*state_size) - c0 = x.new_zeros(*state_size) - packed_outs, (final_hiddens, final_cells) = self.lstm(packed_x, (h0, c0)) - - # unpack outputs and apply dropout - x, _ = nn.utils.rnn.pad_packed_sequence( - packed_outs, padding_value=self.padding_idx * 1.0 - ) - x = self.dropout_out_module(x) - assert list(x.size()) == [seqlen, bsz, self.output_units] - - if self.bidirectional: - final_hiddens = self.combine_bidir(final_hiddens, bsz) - final_cells = self.combine_bidir(final_cells, bsz) - - encoder_padding_mask = src_tokens.eq(self.padding_idx).t() - - return tuple( - ( - x, # seq_len x batch x hidden - final_hiddens, # num_layers x batch x num_directions*hidden - final_cells, # num_layers x batch x num_directions*hidden - encoder_padding_mask, # seq_len x batch - ) - ) - - def combine_bidir(self, outs, bsz: int): - out = outs.view(self.num_layers, 2, bsz, -1).transpose(1, 2).contiguous() - return out.view(self.num_layers, bsz, -1) - - def reorder_encoder_out(self, encoder_out: Tuple[Tensor, Tensor, Tensor, Tensor], new_order): - return tuple( - ( - encoder_out[0].index_select(1, new_order), - encoder_out[1].index_select(1, new_order), - encoder_out[2].index_select(1, new_order), - encoder_out[3].index_select(1, new_order), - ) - ) - - def max_positions(self): - """Maximum input length supported by the encoder.""" - return self.max_source_positions - - -class AttentionLayer(nn.Module): - def __init__(self, input_embed_dim, source_embed_dim, output_embed_dim, bias=False): - super().__init__() - - self.input_proj = Linear(input_embed_dim, source_embed_dim, bias=bias) - self.output_proj = Linear( - input_embed_dim + source_embed_dim, output_embed_dim, bias=bias - ) - - def forward(self, input, source_hids, encoder_padding_mask): - # input: bsz x input_embed_dim - # source_hids: srclen x bsz x source_embed_dim - - # x: bsz x source_embed_dim - x = self.input_proj(input) - - # compute attention - attn_scores = (source_hids * x.unsqueeze(0)).sum(dim=2) - - # don't attend over padding - if encoder_padding_mask is not None: - attn_scores = ( - attn_scores.float() - .masked_fill_(encoder_padding_mask, float("-inf")) - .type_as(attn_scores) - ) # FP16 support: cast to float and back - - attn_scores = F.softmax(attn_scores, dim=0) # srclen x bsz - - # sum weighted sources - x = (attn_scores.unsqueeze(2) * source_hids).sum(dim=0) - - x = torch.tanh(self.output_proj(torch.cat((x, input), dim=1))) - return x, attn_scores - - -class LSTMDecoder(FairseqIncrementalDecoder): - """LSTM decoder.""" - - def __init__( - self, - dictionary, - embed_dim=512, - hidden_size=512, - out_embed_dim=512, - num_layers=1, - dropout_in=0.1, - dropout_out=0.1, - attention=True, - encoder_output_units=512, - pretrained_embed=None, - 
share_input_output_embed=False, - adaptive_softmax_cutoff=None, - max_target_positions=DEFAULT_MAX_TARGET_POSITIONS, - residuals=False, - ): - super().__init__(dictionary) - self.dropout_in_module = FairseqDropout( - dropout_in*1.0, module_name=self.__class__.__name__ - ) - self.dropout_out_module = FairseqDropout( - dropout_out*1.0, module_name=self.__class__.__name__ - ) - self.hidden_size = hidden_size - self.share_input_output_embed = share_input_output_embed - self.need_attn = True - self.max_target_positions = max_target_positions - self.residuals = residuals - self.num_layers = num_layers - - self.adaptive_softmax = None - num_embeddings = len(dictionary) - padding_idx = dictionary.pad() - if pretrained_embed is None: - self.embed_tokens = Embedding(num_embeddings, embed_dim, padding_idx) - else: - self.embed_tokens = pretrained_embed - - self.encoder_output_units = encoder_output_units - if encoder_output_units != hidden_size and encoder_output_units != 0: - self.encoder_hidden_proj = Linear(encoder_output_units, hidden_size) - self.encoder_cell_proj = Linear(encoder_output_units, hidden_size) - else: - self.encoder_hidden_proj = self.encoder_cell_proj = None - - # disable input feeding if there is no encoder - # input feeding is described in arxiv.org/abs/1508.04025 - input_feed_size = 0 if encoder_output_units == 0 else hidden_size - self.layers = nn.ModuleList( - [ - LSTMCell( - input_size=input_feed_size + embed_dim - if layer == 0 - else hidden_size, - hidden_size=hidden_size, - ) - for layer in range(num_layers) - ] - ) - - if attention: - # TODO make bias configurable - self.attention = AttentionLayer( - hidden_size, encoder_output_units, hidden_size, bias=False - ) - else: - self.attention = None - - if hidden_size != out_embed_dim: - self.additional_fc = Linear(hidden_size, out_embed_dim) - - if adaptive_softmax_cutoff is not None: - # setting adaptive_softmax dropout to dropout_out for now but can be redefined - self.adaptive_softmax = AdaptiveSoftmax( - num_embeddings, - hidden_size, - adaptive_softmax_cutoff, - dropout=dropout_out, - ) - elif not self.share_input_output_embed: - self.fc_out = Linear(out_embed_dim, num_embeddings, dropout=dropout_out) - - def forward( - self, - prev_output_tokens, - encoder_out: Optional[Tuple[Tensor, Tensor, Tensor, Tensor]] = None, - incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None, - src_lengths: Optional[Tensor] = None, - ): - x, attn_scores = self.extract_features( - prev_output_tokens, encoder_out, incremental_state - ) - return self.output_layer(x), attn_scores - - def extract_features( - self, - prev_output_tokens, - encoder_out: Optional[Tuple[Tensor, Tensor, Tensor, Tensor]] = None, - incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None, - ): - """ - Similar to *forward* but only return features. 
- """ - # get outputs from encoder - if encoder_out is not None: - encoder_outs = encoder_out[0] - encoder_hiddens = encoder_out[1] - encoder_cells = encoder_out[2] - encoder_padding_mask = encoder_out[3] - else: - encoder_outs = torch.empty(0) - encoder_hiddens = torch.empty(0) - encoder_cells = torch.empty(0) - encoder_padding_mask = torch.empty(0) - srclen = encoder_outs.size(0) - - if incremental_state is not None and len(incremental_state) > 0: - prev_output_tokens = prev_output_tokens[:, -1:] - - bsz, seqlen = prev_output_tokens.size() - - # embed tokens - x = self.embed_tokens(prev_output_tokens) - x = self.dropout_in_module(x) - - # B x T x C -> T x B x C - x = x.transpose(0, 1) - - # initialize previous states (or get from cache during incremental generation) - if incremental_state is not None and len(incremental_state) > 0: - prev_hiddens, prev_cells, input_feed = self.get_cached_state( - incremental_state - ) - elif encoder_out is not None: - # setup recurrent cells - prev_hiddens = [encoder_hiddens[i] for i in range(self.num_layers)] - prev_cells = [encoder_cells[i] for i in range(self.num_layers)] - if self.encoder_hidden_proj is not None: - prev_hiddens = [self.encoder_hidden_proj(y) for y in prev_hiddens] - prev_cells = [self.encoder_cell_proj(y) for y in prev_cells] - input_feed = x.new_zeros(bsz, self.hidden_size) - else: - # setup zero cells, since there is no encoder - zero_state = x.new_zeros(bsz, self.hidden_size) - prev_hiddens = [zero_state for i in range(self.num_layers)] - prev_cells = [zero_state for i in range(self.num_layers)] - input_feed = None - - assert ( - srclen > 0 or self.attention is None - ), "attention is not supported if there are no encoder outputs" - attn_scores: Optional[Tensor] = ( - x.new_zeros(srclen, seqlen, bsz) if self.attention is not None else None - ) - outs = [] - for j in range(seqlen): - # input feeding: concatenate context vector from previous time step - if input_feed is not None: - input = torch.cat((x[j, :, :], input_feed), dim=1) - else: - input = x[j] - - for i, rnn in enumerate(self.layers): - # recurrent cell - hidden, cell = rnn(input, (prev_hiddens[i], prev_cells[i])) - - # hidden state becomes the input to the next layer - input = self.dropout_out_module(hidden) - if self.residuals: - input = input + prev_hiddens[i] - - # save state for next time step - prev_hiddens[i] = hidden - prev_cells[i] = cell - - # apply attention using the last layer's hidden state - if self.attention is not None: - assert attn_scores is not None - out, attn_scores[:, j, :] = self.attention( - hidden, encoder_outs, encoder_padding_mask - ) - else: - out = hidden - out = self.dropout_out_module(out) - - # input feeding - if input_feed is not None: - input_feed = out - - # save final output - outs.append(out) - - # Stack all the necessary tensors together and store - prev_hiddens_tensor = torch.stack(prev_hiddens) - prev_cells_tensor = torch.stack(prev_cells) - cache_state = torch.jit.annotate( - Dict[str, Optional[Tensor]], - { - "prev_hiddens": prev_hiddens_tensor, - "prev_cells": prev_cells_tensor, - "input_feed": input_feed, - }, - ) - self.set_incremental_state(incremental_state, "cached_state", cache_state) - - # collect outputs across time steps - x = torch.cat(outs, dim=0).view(seqlen, bsz, self.hidden_size) - - # T x B x C -> B x T x C - x = x.transpose(1, 0) - - if hasattr(self, "additional_fc") and self.adaptive_softmax is None: - x = self.additional_fc(x) - x = self.dropout_out_module(x) - # srclen x tgtlen x bsz -> bsz x tgtlen x srclen - 
if not self.training and self.need_attn and self.attention is not None: - assert attn_scores is not None - attn_scores = attn_scores.transpose(0, 2) - else: - attn_scores = None - return x, attn_scores - - def output_layer(self, x): - """Project features to the vocabulary size.""" - if self.adaptive_softmax is None: - if self.share_input_output_embed: - x = F.linear(x, self.embed_tokens.weight) - else: - x = self.fc_out(x) - return x - - def get_cached_state( - self, - incremental_state: Dict[str, Dict[str, Optional[Tensor]]], - ) -> Tuple[List[Tensor], List[Tensor], Optional[Tensor]]: - cached_state = self.get_incremental_state(incremental_state, "cached_state") - assert cached_state is not None - prev_hiddens_ = cached_state["prev_hiddens"] - assert prev_hiddens_ is not None - prev_cells_ = cached_state["prev_cells"] - assert prev_cells_ is not None - prev_hiddens = [prev_hiddens_[i] for i in range(self.num_layers)] - prev_cells = [prev_cells_[j] for j in range(self.num_layers)] - input_feed = cached_state[ - "input_feed" - ] # can be None for decoder-only language models - return prev_hiddens, prev_cells, input_feed - - def reorder_incremental_state( - self, - incremental_state: Dict[str, Dict[str, Optional[Tensor]]], - new_order: Tensor, - ): - if incremental_state is None or len(incremental_state) == 0: - return - prev_hiddens, prev_cells, input_feed = self.get_cached_state(incremental_state) - prev_hiddens = [p.index_select(0, new_order) for p in prev_hiddens] - prev_cells = [p.index_select(0, new_order) for p in prev_cells] - if input_feed is not None: - input_feed = input_feed.index_select(0, new_order) - cached_state_new = torch.jit.annotate( - Dict[str, Optional[Tensor]], - { - "prev_hiddens": torch.stack(prev_hiddens), - "prev_cells": torch.stack(prev_cells), - "input_feed": input_feed, - }, - ) - self.set_incremental_state(incremental_state, "cached_state", cached_state_new), - return - - def max_positions(self): - """Maximum output length supported by the decoder.""" - return self.max_target_positions - - def make_generation_fast_(self, need_attn=False, **kwargs): - self.need_attn = need_attn - - -def Embedding(num_embeddings, embedding_dim, padding_idx): - m = nn.Embedding(num_embeddings, embedding_dim, padding_idx=padding_idx) - nn.init.uniform_(m.weight, -0.1, 0.1) - nn.init.constant_(m.weight[padding_idx], 0) - return m - - -def LSTM(input_size, hidden_size, **kwargs): - m = nn.LSTM(input_size, hidden_size, **kwargs) - for name, param in m.named_parameters(): - if "weight" in name or "bias" in name: - param.data.uniform_(-0.1, 0.1) - return m - - -def LSTMCell(input_size, hidden_size, **kwargs): - m = nn.LSTMCell(input_size, hidden_size, **kwargs) - for name, param in m.named_parameters(): - if "weight" in name or "bias" in name: - param.data.uniform_(-0.1, 0.1) - return m - - -def Linear(in_features, out_features, bias=True, dropout=0.0): - """Linear layer (input: N x T x C)""" - m = nn.Linear(in_features, out_features, bias=bias) - m.weight.data.uniform_(-0.1, 0.1) - if bias: - m.bias.data.uniform_(-0.1, 0.1) - return m - - -@register_model_architecture("lstm", "lstm") -def base_architecture(args): - args.dropout = getattr(args, "dropout", 0.1) - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512) - args.encoder_embed_path = getattr(args, "encoder_embed_path", None) - args.encoder_freeze_embed = getattr(args, "encoder_freeze_embed", False) - args.encoder_hidden_size = getattr( - args, "encoder_hidden_size", args.encoder_embed_dim - ) - args.encoder_layers = 
getattr(args, "encoder_layers", 1) - args.encoder_bidirectional = getattr(args, "encoder_bidirectional", False) - args.encoder_dropout_in = getattr(args, "encoder_dropout_in", args.dropout) - args.encoder_dropout_out = getattr(args, "encoder_dropout_out", args.dropout) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 512) - args.decoder_embed_path = getattr(args, "decoder_embed_path", None) - args.decoder_freeze_embed = getattr(args, "decoder_freeze_embed", False) - args.decoder_hidden_size = getattr( - args, "decoder_hidden_size", args.decoder_embed_dim - ) - args.decoder_layers = getattr(args, "decoder_layers", 1) - args.decoder_out_embed_dim = getattr(args, "decoder_out_embed_dim", 512) - args.decoder_attention = getattr(args, "decoder_attention", "1") - args.decoder_dropout_in = getattr(args, "decoder_dropout_in", args.dropout) - args.decoder_dropout_out = getattr(args, "decoder_dropout_out", args.dropout) - args.share_decoder_input_output_embed = getattr( - args, "share_decoder_input_output_embed", False - ) - args.share_all_embeddings = getattr(args, "share_all_embeddings", False) - args.adaptive_softmax_cutoff = getattr( - args, "adaptive_softmax_cutoff", "10000,50000,200000" - ) - - -@register_model_architecture("lstm", "lstm_wiseman_iwslt_de_en") -def lstm_wiseman_iwslt_de_en(args): - args.dropout = getattr(args, "dropout", 0.1) - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 256) - args.encoder_dropout_in = getattr(args, "encoder_dropout_in", 0) - args.encoder_dropout_out = getattr(args, "encoder_dropout_out", 0) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 256) - args.decoder_out_embed_dim = getattr(args, "decoder_out_embed_dim", 256) - args.decoder_dropout_in = getattr(args, "decoder_dropout_in", 0) - args.decoder_dropout_out = getattr(args, "decoder_dropout_out", args.dropout) - base_architecture(args) - - -@register_model_architecture("lstm", "lstm_luong_wmt_en_de") -def lstm_luong_wmt_en_de(args): - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 1000) - args.encoder_layers = getattr(args, "encoder_layers", 4) - args.encoder_dropout_out = getattr(args, "encoder_dropout_out", 0) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 1000) - args.decoder_layers = getattr(args, "decoder_layers", 4) - args.decoder_out_embed_dim = getattr(args, "decoder_out_embed_dim", 1000) - args.decoder_dropout_out = getattr(args, "decoder_dropout_out", 0) - base_architecture(args) diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/modules/lightconv_layer/lightconv_layer.py b/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/modules/lightconv_layer/lightconv_layer.py deleted file mode 100644 index e7e597f4749c591b057d776aacec39b44d99c037..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/modules/lightconv_layer/lightconv_layer.py +++ /dev/null @@ -1,137 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import lightconv_cuda -import torch -import torch.nn.functional as F -from fairseq import utils -from fairseq.incremental_decoding_utils import with_incremental_state -from fairseq.modules.fairseq_dropout import FairseqDropout -from torch import nn -from torch.autograd import Function - - -class lightconvFunction(Function): - @staticmethod - def forward(ctx, x, weights, padding_l): - ctx.padding_l = padding_l - outputs = lightconv_cuda.forward(x, weights, padding_l) - variables = [x, weights] - ctx.save_for_backward(*variables) - return outputs[0] - - @staticmethod - def backward(ctx, grad_output): - outputs = lightconv_cuda.backward( - grad_output.contiguous(), ctx.padding_l, *ctx.saved_tensors - ) - grad_input, grad_weights = outputs - return grad_input, grad_weights, None - - -@with_incremental_state -class LightconvLayer(nn.Module): - def __init__( - self, - input_size, - kernel_size=1, - padding_l=None, - weight_softmax=False, - num_heads=1, - weight_dropout=0.0, - bias=False, - ): - super(LightconvLayer, self).__init__() - self.input_size = input_size - self.kernel_size = kernel_size - self.padding_l = padding_l - self.num_heads = num_heads - self.weight_softmax = weight_softmax - self.weight_dropout_module = FairseqDropout( - weight_dropout, module_name=self.__class__.__name__ - ) - - self.weight = nn.Parameter(torch.Tensor(num_heads, kernel_size)) - if bias: - self.bias = nn.Parameter(torch.Tensor(input_size)) - else: - self.bias = None - self.reset_parameters() - - def upgrade_state_dict_named(self, state_dict, name): - prefix = name + "." if name != "" else "" - for k, v in state_dict.items(): - if k.endswith(prefix + "weight"): - if v.dim() == 3 and v.size(1) == 1: - state_dict[k] = v.squeeze(1) - - def reset_parameters(self): - nn.init.xavier_uniform_(self.weight) - if self.bias is not None: - nn.init.constant_(self.bias, 0.0) - - def forward(self, x, incremental_state=None): - - # during inference time, incremental BMM is faster - if incremental_state is not None: - T, B, C = x.size() - K, H = self.kernel_size, self.num_heads - R = C // H - input_buffer = self._get_input_buffer(incremental_state) - if input_buffer is None: - input_buffer = x.new() - x_unfold = torch.cat([input_buffer, x.unsqueeze(3)], dim=3) - if self.kernel_size > 1: - self._set_input_buffer( - incremental_state, x_unfold[:, :, :, -self.kernel_size + 1 :] - ) - x_unfold = x_unfold.view(T * B * H, R, -1) - - weight = self.weight - if self.weight_softmax: - weight = F.softmax(weight.float(), dim=1).type_as(weight) - - weight = weight[:, -x_unfold.size(2) :] - - K = weight.size(1) - - weight = ( - weight.view(1, H, K) - .expand(T * B, H, K) - .contiguous() - .view(T * B * H, K, 1) - ) - - weight = self.weight_dropout_module(weight) - output = torch.bmm(x_unfold, weight) # T*B*H x R x 1 - output = output.view(T, B, C) - return output - - # during training time, use CUDA kernel - else: - x = x.permute(1, 2, 0).contiguous() - weight = self.weight - if self.weight_softmax: - weight = F.softmax(self.weight, -1) - if self.weight_dropout_module.p: - weight = self.weight_dropout_module(weight) - return lightconvFunction.apply(x, weight, self.padding_l).permute(2, 0, 1) - - def reorder_incremental_state(self, incremental_state, new_order): - input_buffer = self._get_input_buffer(incremental_state) - if input_buffer is not None: - input_buffer = input_buffer.index_select(1, new_order) - self._set_input_buffer(incremental_state, input_buffer) - - def _get_input_buffer(self, incremental_state): - return 
utils.get_incremental_state(self, incremental_state, "input_buffer") - - def _set_input_buffer(self, incremental_state, new_buffer): - return utils.set_incremental_state( - self, incremental_state, "input_buffer", new_buffer - ) - - def half(self): - return self._apply(lambda t: t.half() if t.is_floating_point() else t) diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/optim/adamax.py b/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/optim/adamax.py deleted file mode 100644 index 98ff8ad7ad6c12ab5efc53ca76db2f1663be7906..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/optim/adamax.py +++ /dev/null @@ -1,172 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch -import torch.optim - -from . import LegacyFairseqOptimizer, register_optimizer - - -@register_optimizer("adamax") -class FairseqAdamax(LegacyFairseqOptimizer): - def __init__(self, args, params): - super().__init__(args) - self._optimizer = Adamax(params, **self.optimizer_config) - - @staticmethod - def add_args(parser): - """Add optimizer-specific arguments to the parser.""" - # fmt: off - parser.add_argument('--adamax-betas', default='(0.9, 0.999)', metavar='B', - help='betas for Adam optimizer') - parser.add_argument('--adamax-eps', type=float, default=1e-8, metavar='D', - help='epsilon for Adam optimizer') - parser.add_argument('--weight-decay', '--wd', default=0.0, type=float, metavar='WD', - help='weight decay') - parser.add_argument('--no-bias-correction', default=False, action='store_true', - help='disable bias correction') - # fmt: on - - @property - def optimizer_config(self): - """ - Return a kwarg dictionary that will be used to override optimizer - args stored in checkpoints. This allows us to load a checkpoint and - resume training using a different set of optimizer args, e.g., with a - different learning rate. - """ - return { - "lr": self.args.lr[0], - "betas": eval(self.args.adamax_betas), - "eps": self.args.adamax_eps, - "weight_decay": self.args.weight_decay, - "bias_correction": not self.args.no_bias_correction, - } - - -class Adamax(torch.optim.Optimizer): - """Implements Adamax algorithm (a variant of Adam based on infinity norm). - - It has been proposed in `Adam: A Method for Stochastic Optimization`__. - - Compared to the version in PyTorch, this version implements a fix for weight decay. 
- - Args: - params (iterable): iterable of parameters to optimize or dicts defining - parameter groups - lr (float, optional): learning rate (default: 2e-3) - betas (Tuple[float, float], optional): coefficients used for computing - running averages of gradient and its square - eps (float, optional): term added to the denominator to improve - numerical stability (default: 1e-8) - weight_decay (float, optional): weight decay (L2 penalty) (default: 0) - bias_correction (bool, optional): enable bias correction (default: True) - - __ https://arxiv.org/abs/1412.6980 - """ - - def __init__( - self, - params, - lr=2e-3, - betas=(0.9, 0.999), - eps=1e-8, - weight_decay=0, - bias_correction=True, - ): - if not 0.0 <= lr: - raise ValueError("Invalid learning rate: {}".format(lr)) - if not 0.0 <= eps: - raise ValueError("Invalid epsilon value: {}".format(eps)) - if not 0.0 <= betas[0] < 1.0: - raise ValueError("Invalid beta parameter at index 0: {}".format(betas[0])) - if not 0.0 <= betas[1] < 1.0: - raise ValueError("Invalid beta parameter at index 1: {}".format(betas[1])) - if not 0.0 <= weight_decay: - raise ValueError("Invalid weight_decay value: {}".format(weight_decay)) - - defaults = dict( - lr=lr, - betas=betas, - eps=eps, - weight_decay=weight_decay, - bias_correction=bias_correction, - ) - super(Adamax, self).__init__(params, defaults) - - @property - def supports_memory_efficient_fp16(self): - return True - - @property - def supports_flat_params(self): - return True - - def step(self, closure=None): - """Performs a single optimization step. - - Args: - closure (callable, optional): A closure that reevaluates the model - and returns the loss. - """ - loss = None - if closure is not None: - loss = closure() - - for group in self.param_groups: - for p in group["params"]: - if p.grad is None: - continue - grad = p.grad.data.float() - if grad.is_sparse: - raise RuntimeError("Adamax does not support sparse gradients") - - p_data_fp32 = p.data - if p.data.dtype in {torch.float16, torch.bfloat16}: - p_data_fp32 = p_data_fp32.float() - - state = self.state[p] - - # State initialization - if len(state) == 0: - state["step"] = 0 - state["exp_avg"] = torch.zeros_like(p_data_fp32) - state["exp_inf"] = torch.zeros_like(p_data_fp32) - else: - state["exp_avg"] = state["exp_avg"].to(p_data_fp32) - state["exp_inf"] = state["exp_inf"].to(p_data_fp32) - - exp_avg, exp_inf = state["exp_avg"], state["exp_inf"] - beta1, beta2 = group["betas"] - eps = group["eps"] - - state["step"] += 1 - - # Update biased first moment estimate. - exp_avg.mul_(beta1).add_(grad, alpha=1 - beta1) - - # Update the exponentially weighted infinity norm. 
- torch.max( - exp_inf.mul_(beta2), - grad.abs_(), - out=exp_inf, - ) - - step_size = group["lr"] - if group["bias_correction"]: - bias_correction = 1 - beta1 ** state["step"] - step_size /= bias_correction - - if group["weight_decay"] != 0: - p_data_fp32.add_( - p_data_fp32, alpha=-group["weight_decay"] * group["lr"] - ) - - p_data_fp32.addcdiv_(exp_avg, exp_inf.add(eps), value=-step_size) - - if p.data.dtype in {torch.float16, torch.bfloat16}: - p.data.copy_(p_data_fp32) - - return loss diff --git a/spaces/stomexserde/gpt4-ui/Examples/Dennis Deyoung One Hundred Years From Now Rar.md b/spaces/stomexserde/gpt4-ui/Examples/Dennis Deyoung One Hundred Years From Now Rar.md deleted file mode 100644 index 98c24820d275a79e60295ed6fec104c5fe29cce8..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Dennis Deyoung One Hundred Years From Now Rar.md +++ /dev/null @@ -1,13 +0,0 @@ -
            -

            Review: Dennis DeYoung's One Hundred Years From Now

            -

            One Hundred Years From Now is the sixth solo album by former Styx vocalist and keyboardist Dennis DeYoung. It was originally released in Canada in 2007, and then in the United States in 2009. The album features 12 tracks of melodic rock that showcase DeYoung's distinctive voice and songwriting skills. The title track, a duet with Canadian singer Eric Lapointe, was a hit single in Quebec and received airplay on classic rock stations in the US. Other highlights include "This Time Next Year", a tribute to the Beatles, "Rain", a ballad with a gospel choir, and "Private Jones", a rocker about a soldier in Iraq.

            -

            One Hundred Years From Now is a rare example of an album that appeals to both old and new fans of DeYoung. It has enough elements of his Styx legacy, such as the harmonies, the keyboards, and the theatrical flair, to satisfy the nostalgic listeners. At the same time, it also has a modern sound and production, with guitars by Jim Peterik of Survivor and Tom Dziallo of Cheap Trick, that make it relevant to today's rock scene. DeYoung proves that he still has the talent and passion to create quality music that transcends time and trends.

            -

            dennis deyoung one hundred years from now rar


            Download Zip ··· https://urlgoal.com/2uI7DS



            -

            One Hundred Years From Now is available as a digital download or as a CD with bonus tracks. It is also available as a rar file, which is a compressed archive format that can be extracted with software such as WinRAR or 7-Zip. The rar file contains the audio files in mp3 format, as well as the cover art and lyrics. The rar file can be downloaded from various websites that offer free music downloads, such as MediaFire or RapidShare. However, downloading music from these sources may be illegal or unsafe, so it is recommended to purchase the album from official channels instead.
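For readers who would rather script the extraction than use WinRAR or 7-Zip by hand, the short sketch below shows one possible way to unpack such an archive in Python. It is only an illustration: the archive file name is hypothetical, and it relies on the third-party rarfile package, which in turn needs an unrar backend installed on the system.

```python
# Minimal sketch: extract a downloaded RAR archive with the third-party "rarfile" package.
# The archive name below is hypothetical; rarfile also needs an unrar backend on the system.
import rarfile

archive_path = "one_hundred_years_from_now.rar"  # hypothetical file name

rf = rarfile.RarFile(archive_path)
print(rf.namelist())                   # list the mp3 files, cover art and lyrics inside
rf.extractall(path="extracted_album")  # unpack everything into a local folder
```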

            - -

            One Hundred Years From Now is not only a musical album, but also a personal statement by DeYoung. The album reflects his views on life, love, and legacy, as well as his gratitude to his fans and his faith in God. The album's title comes from a quote by Mahatma Gandhi: "Live as if you were to die tomorrow. Learn as if you were to live forever." DeYoung said that he wanted to make an album that would stand the test of time and inspire people to live in the present and appreciate what they have.

            -

            The album also features some guest appearances by other artists, such as John Waite of The Babys and Bad English, who sings on "There Was a Time", and Richard Marx, who co-wrote and sings on "To the Good Old Days". DeYoung also pays homage to his former bandmates in Styx, by including a cover of "I Don't Want to Lose You", a song written by Tommy Shaw for their 1990 album Edge of the Century. DeYoung said that he wanted to show his respect and appreciation for their contributions to his career and music.

            -

            -

            One Hundred Years From Now is a testament to DeYoung's enduring talent and passion for music. It is an album that showcases his versatility as a singer, songwriter, and musician, as well as his ability to connect with his audience. It is an album that celebrates the past, embraces the present, and looks forward to the future. It is an album that will make you think, feel, and sing along.

            cec2833e83
            -
            -
            \ No newline at end of file diff --git a/spaces/sub314xxl/zeroscope/README.md b/spaces/sub314xxl/zeroscope/README.md deleted file mode 100644 index 6ec41b808bcaabf22db02b817ffe87812ae09a1f..0000000000000000000000000000000000000000 --- a/spaces/sub314xxl/zeroscope/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Zeroscope Text-To-Video -emoji: 🐠 -colorFrom: red -colorTo: gray -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false -duplicated_from: fffiloni/zeroscope ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Longman Lexicon Of Contemporary English Pdf.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Longman Lexicon Of Contemporary English Pdf.md deleted file mode 100644 index a3e40dca6bb5047cf2031714fc92d6665d228174..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Longman Lexicon Of Contemporary English Pdf.md +++ /dev/null @@ -1,6 +0,0 @@ -

            longman lexicon of contemporary english pdf


            Download Zip 🌟 https://cinurl.com/2uEYmQ



            - -Longman lexicon of contemporary English. Longman Multi-Activity English-Chinese Classified Dictionary (English-English and English-Chinese). 1312 pages - 2016 - 87.06 MB - 14,527 downloads - Chinese. Chinese. Longman Dictionary of Contemporary English. Longman Word Skills. Longman Word Skills. Chinese Language. Chinese Language. Longman Word Skills. Longman Pronunciation in Use. Longman Pronunciation in Use. Chinese Language. Chinese Language. Longman Real Grammar. Longman Real Grammar. Chinese. Chinese Language. Longman Vocabulary Practice. Longman Vocabulary Practice. Chinese. Chinese. Longman Words for Life. Longman Words for Life. Chinese. Chinese Language. 8a78ff9644
            -
            -
            -

            diff --git a/spaces/tanishqvashisht/sharingan/README.md b/spaces/tanishqvashisht/sharingan/README.md deleted file mode 100644 index 7925fd757f5b5eb6bf3f7415637e14076e60b67e..0000000000000000000000000000000000000000 --- a/spaces/tanishqvashisht/sharingan/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Sharingan -emoji: 🐢 -colorFrom: blue -colorTo: red -sdk: streamlit -sdk_version: 1.25.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/taquynhnga/CNNs-interpretation-visualization/backend/adversarial_attack.py b/spaces/taquynhnga/CNNs-interpretation-visualization/backend/adversarial_attack.py deleted file mode 100644 index fcaf8bbeebc298443098dcc2dd2abda26335548f..0000000000000000000000000000000000000000 --- a/spaces/taquynhnga/CNNs-interpretation-visualization/backend/adversarial_attack.py +++ /dev/null @@ -1,100 +0,0 @@ -import PIL -from PIL import Image -import numpy as np -from matplotlib import pylab as P -import cv2 - -import torch -from torch.utils.data import TensorDataset -from torchvision import transforms -import torch.nn.functional as F - -from transformers.image_utils import IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD - -from torchvex.base import ExplanationMethod -from torchvex.utils.normalization import clamp_quantile - -from backend.utils import load_image, load_model -from backend.smooth_grad import generate_smoothgrad_mask - -import streamlit as st - -IMAGENET_DEFAULT_MEAN = np.asarray(IMAGENET_DEFAULT_MEAN).reshape([1,3,1,1]) -IMAGENET_DEFAULT_STD = np.asarray(IMAGENET_DEFAULT_STD).reshape([1,3,1,1]) - -def deprocess_image(image_inputs): - return (image_inputs * IMAGENET_DEFAULT_STD + IMAGENET_DEFAULT_MEAN) * 255 - - -def feed_forward(input_image): - model, feature_extractor = load_model('ConvNeXt') - inputs = feature_extractor(input_image, do_resize=False, return_tensors="pt")['pixel_values'] - logits = model(inputs).logits - prediction_prob = F.softmax(logits, dim=-1).max() # prediction probability - # prediction class id, start from 1 to 1000 so it needs to +1 in the end - prediction_class = logits.argmax(-1).item() - prediction_label = model.config.id2label[prediction_class] # prediction class label - return prediction_prob, prediction_class, prediction_label - -# FGSM attack code -def fgsm_attack(image, epsilon, data_grad): - # Collect the element-wise sign of the data gradient and normalize it - sign_data_grad = torch.gt(data_grad, 0).type(torch.FloatTensor) * 2.0 - 1.0 - perturbed_image = image + epsilon*sign_data_grad - return perturbed_image - -# perform attack on the model -def perform_attack(input_image, target, epsilon): - model, feature_extractor = load_model("ConvNeXt") - # preprocess input image - inputs = feature_extractor(input_image, do_resize=False, return_tensors="pt")['pixel_values'] - inputs.requires_grad = True - - # predict - logits = model(inputs).logits - prediction_prob = F.softmax(logits, dim=-1).max() - prediction_class = logits.argmax(-1).item() - prediction_label = model.config.id2label[prediction_class] - - # Calculate the loss - loss = F.nll_loss(logits, torch.tensor([target])) - - # Zero all existing gradients - model.zero_grad() - - # Calculate gradients of model in backward pass - loss.backward() - - # Collect datagrad - data_grad = inputs.grad.data - - # Call FGSM Attack - perturbed_data = fgsm_attack(inputs, epsilon, data_grad) - - # Re-classify the perturbed image - new_prediction = 
model(perturbed_data).logits - new_pred_prob = F.softmax(new_prediction, dim=-1).max() - new_pred_class = new_prediction.argmax(-1).item() - new_pred_label = model.config.id2label[new_pred_class] - - return perturbed_data, new_pred_prob.item(), new_pred_class, new_pred_label - - -def find_smallest_epsilon(input_image, target): - epsilons = [i*0.001 for i in range(1000)] - - for epsilon in epsilons: - perturbed_data, new_prob, new_id, new_label = perform_attack(input_image, target, epsilon) - if new_id != target: - return perturbed_data, new_prob, new_id, new_label, epsilon - return None - -# @st.cache_data -@st.cache(allow_output_mutation=True) -def generate_images(image_id, epsilon=0): - model, feature_extractor = load_model("ConvNeXt") - original_image_dict = load_image(image_id) - image = original_image_dict['image'] - return generate_smoothgrad_mask( - image, 'ConvNeXt', - model, feature_extractor, num_samples=10, return_mask=True) diff --git a/spaces/terapyon/pyhackcon-qa2/README.md b/spaces/terapyon/pyhackcon-qa2/README.md deleted file mode 100644 index fd10f9c4c4260e30b4672794fe74ad43a6d46546..0000000000000000000000000000000000000000 --- a/spaces/terapyon/pyhackcon-qa2/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Pyhackcon Qa2 -emoji: 👀 -colorFrom: red -colorTo: gray -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false -license: other ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/terfces0erbo/CollegeProjectV2/Download Novel Sepatu Dahlan Iskan Pdf Gratis.md b/spaces/terfces0erbo/CollegeProjectV2/Download Novel Sepatu Dahlan Iskan Pdf Gratis.md deleted file mode 100644 index acc358c2a8ff5daf1695b3e08a19833b5d12ec0c..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Download Novel Sepatu Dahlan Iskan Pdf Gratis.md +++ /dev/null @@ -1,49 +0,0 @@ - -

            Novel Sepatu Dahlan Iskan Pdf Gratis: Kisah Inspiratif dari Anak Dusun yang Jadi Menteri

            -

            Novel Sepatu Dahlan Iskan Pdf Gratis adalah salah satu novel yang banyak dicari oleh para pembaca yang tertarik dengan kisah nyata Dahlan Iskan, seorang anak dusun yang berhasil menjadi pengusaha sukses dan Menteri BUMN. Novel ini ditulis oleh Khrisna Pabichara, seorang penulis muda yang mengambil inspirasi dari kehidupan Dahlan Iskan sejak kecil hingga dewasa. Novel ini merupakan bagian pertama dari Trilogi Novel Inspirasi Dahlan Iskan yang terdiri dari Sepatu Dahlan, Surat Dahlan, dan Mimpi Dahlan. Novel ini diterbitkan oleh NouraBooks pada tahun 2012 dan telah mendapatkan banyak penghargaan dan apresiasi dari para pembaca dan kritikus sastra.

            -

            Apa Saja Isi Novel Sepatu Dahlan Iskan Pdf Gratis?

            -

            Novel Sepatu Dahlan Iskan Pdf Gratis mengisahkan tentang kehidupan Dahlan Iskan saat masih berusia 12 tahun hingga 17 tahun. Novel ini menggambarkan bagaimana Dahlan Iskan menghadapi berbagai tantangan dan rintangan dalam hidupnya, mulai dari kemiskinan, kelaparan, pendidikan, pekerjaan, hingga cinta. Novel ini juga menunjukkan bagaimana Dahlan Iskan memiliki cita-cita besar yang membuatnya bekerja keras dan pantang menyerah, yaitu memiliki sepatu dan sepeda. Novel ini juga menyajikan kisah-kisah menarik dan mengharukan tentang keluarga, persahabatan, dan guru-guru yang mendukung dan membimbing Dahlan Iskan dalam menjalani hidupnya.

            -

            download novel sepatu dahlan iskan pdf gratis


            Download Ziphttps://bytlly.com/2uGlbk



            -

            Bagaimana Gaya Penulisan Novel Sepatu Dahlan Iskan Pdf Gratis?

            -

            Novel Sepatu Dahlan Iskan Pdf Gratis ditulis dengan gaya penulisan yang sederhana, lugas, dan mengalir. Penulis menggunakan bahasa Indonesia yang mudah dipahami oleh pembaca dari berbagai latar belakang dan usia. Penulis juga menggunakan sudut pandang orang pertama dari tokoh utama yaitu Dahlan Iskan. Penulis juga menyisipkan beberapa catatan kaki yang menjelaskan istilah-istilah atau peristiwa-peristiwa yang mungkin kurang familiar bagi pembaca. Penulis juga menggunakan beberapa teknik sastra seperti dialog, deskripsi, flashback, dan monolog untuk membangun karakter tokoh-tokoh dan suasana cerita.

            -

            Apa Saja Manfaat Membaca Novel Sepatu Dahlan Iskan Pdf Gratis?

            -

            Novel Sepatu Dahlan Iskan Pdf Gratis memiliki banyak manfaat bagi pembaca yang membacanya. Novel ini dapat memberikan inspirasi dan motivasi bagi pembaca untuk berjuang dalam menggapai cita-cita dan impian mereka. Novel ini juga dapat memberikan pelajaran tentang nilai-nilai penting dalam hidup, seperti kerja keras, pantang menyerah, berani bermimpi, bersyukur, dan berbagi. Novel ini juga dapat memberikan wawasan tentang kondisi sosial dan ekonomi di Indonesia pada era 1960-an hingga 1970-an, yang penuh dengan perubahan dan dinamika politik. Novel ini juga dapat memberikan hiburan dan kesenangan bagi pembaca yang menyukai genre biografi atau non-fiksi.

            -

            -
            Bagaimana Cara Mendapatkan Novel Sepatu Dahlan Iskan Pdf Gratis?
            -

            Novel Sepatu Dahlan Iskan Pdf Gratis dapat didapatkan dengan mudah melalui internet. Anda dapat mencari situs-situs yang menyediakan link download novel sepatu dahlan iskan pdf gratis dengan cepat dan mudah. Anda juga dapat membaca novel sepatu dahlan iskan pdf gratis secara online di situs-situs bacaan dan penerbitan sosial seperti Scribd. Namun, jika Anda ingin mendukung karya penulis dan penerbit, Anda dapat membeli novel sepatu dahlan iskan pdf gratis dalam bentuk buku fisik atau ebook di toko-toko buku online atau offline.

            -

            Kesimpulan

            -

            Novel Sepatu Dahlan Iskan Pdf Gratis adalah novel biografi yang mengisahkan kehidupan Dahlan Iskan saat masih muda. Novel ini merupakan bagian pertama dari Trilogi Novel Inspirasi Dahlan Iskan yang ditulis oleh Khrisna Pabichara. Novel ini menawarkan kisah inspiratif dan motivatif dari seorang anak dusun yang menjadi pengusaha sukses dan Menteri BUMN. Novel ini juga mengajarkan kita tentang nilai-nilai penting dalam hidup, seperti kerja keras, pantang menyerah, berani bermimpi, bersyukur, dan berbagi. Novel ini dapat didapatkan dengan mudah melalui internet atau toko-toko buku. Novel ini memiliki gaya penulisan yang sederhana, lugas, dan mengalir serta menggunakan beberapa teknik sastra untuk membangun karakter tokoh-tokoh dan suasana cerita.

            -Apa Saja Tips Membaca Novel Sepatu Dahlan Iskan Pdf Gratis? -

            Novel Sepatu Dahlan Iskan Pdf Gratis adalah novel yang bisa dibaca oleh siapa saja yang ingin menikmati kisah nyata yang inspiratif dan motivatif. Namun, ada beberapa tips yang bisa membantu Anda untuk membaca novel ini dengan lebih baik dan maksimal. Berikut adalah beberapa di antaranya:

            -
              -
            • Tentukan tujuan Anda membaca novel ini. Apakah Anda ingin mendapatkan inspirasi, motivasi, pelajaran, wawasan, atau hiburan dari novel ini? Tujuan Anda akan menentukan cara Anda membaca dan menanggapi novel ini.
            • -
            • Siapkan waktu dan tempat yang nyaman untuk membaca novel ini. Novel ini memiliki 369 halaman yang cukup tebal dan padat dengan informasi dan cerita. Anda perlu menyediakan waktu dan tempat yang cukup untuk membaca novel ini tanpa terganggu atau terburu-buru.
            • -
            • Bacalah novel ini dengan perhatian dan konsentrasi. Novel ini mengandung banyak fakta-fakta sejarah, konteks sosial budaya, dan pesan-pesan moral yang perlu Anda pahami dan resapi. Anda juga perlu memperhatikan gaya penulisan dan teknik sastra yang digunakan oleh penulis untuk membangun karakter tokoh-tokoh dan suasana cerita.
            • -
            • Bacalah novel ini dengan sikap kritis dan apresiatif. Novel ini merupakan karya sastra yang bersifat subjektif dan imajinatif. Anda tidak perlu menerima segala hal yang ditulis oleh penulis secara mentah-mentah. Anda bisa menilai kelebihan dan kekurangan novel ini dari berbagai aspek, seperti keakuratan fakta, kedalaman analisis, keindahan bahasa, dan lain-lain. Anda juga bisa mengapresiasi novel ini sebagai karya seni yang memiliki nilai estetis dan budaya.
            • -
            • Bacalah novel ini dengan sikap terbuka dan empatik. Novel ini merupakan kisah nyata yang dialami oleh Dahlan Iskan dan orang-orang di sekitarnya. Anda bisa belajar banyak dari pengalaman dan perjuangan mereka dalam hidup. Anda juga bisa merasakan emosi dan perasaan mereka melalui cerita-cerita yang ditampilkan dalam novel ini. Anda bisa bersikap terbuka dan empatik terhadap tokoh-tokoh dalam novel ini tanpa harus menghakimi atau menyalahkan mereka.
            • -
            -Apa Saja Rekomendasi Novel Lain yang Mirip dengan Novel Sepatu Dahlan Iskan Pdf Gratis? -

            Novel Sepatu Dahlan Iskan Pdf Gratis adalah novel biografi yang mengisahkan kehidupan Dahlan Iskan saat masih muda. Jika Anda menyukai novel ini, Anda mungkin juga tertarik untuk membaca novel-novel lain yang mirip dengan novel ini. Berikut adalah beberapa rekomendasi novel lain yang mirip dengan novel Sepatu Dahlan Iskan Pdf Gratis:

            -
              -
            • Surat Dahlan by Khrisna Pabichara. Novel ini merupakan sekuel kedua dari Trilogi Novel Inspirasi Dahlan Iskan yang mengisahkan kehidupan Dahlan Iskan saat berusia 18 tahun hingga 25 tahun. Novel ini menggambarkan bagaimana Dahlan Iskan melanjutkan pendidikan dan karirnya di Surabaya, Jakarta, Jepang, hingga Amerika Serikat.
            • -
            • Mimpi Dahlan by Khrisna Pabichara. Novel ini merupakan sekuel ketiga dari Trilogi Novel Inspirasi Dahlan Iskan yang mengisahkan kehidupan Dahlan Iskan saat berusia 26 tahun hingga 60 tahun. Novel ini menggambarkan bagaimana Dahlan Iskan mewujudkan mimpi-mimpinya dalam dunia bisnis, media, hingga politik.
            • -
            • Laskar Pelangi by Andrea Hirata. Novel ini merupakan novel pertama dari Tetralogi Laskar Pelangi yang mengisahkan kehidupan sepuluh anak miskin dari Pulau Belitong yang bersekolah di sebuah sekolah dasar paling miskin di Indonesia. Novel ini menyajikan kisah-kisah inspiratif dan motivatif tentang pendidikan, persahabatan, cinta, dan cita-cita.
            • -
            • Negeri 5 Menara by Ahmad Fuadi. Novel ini merupakan novel pertama dari Trilogi Negeri 5 Menara yang mengisahkan kehidupan enam anak laki-laki dari berbagai daerah di Indonesia yang bersekolah di sebuah pondok pesantren di Jawa Barat. Novel ini menyajikan kisah-kisah inspiratif dan motivatif tentang agama, budaya, ilmu pengetahuan, dan impian.
            • -
            • Ayat-Ayat Cinta by Habiburrahman El Shirazy. Novel ini merupakan novel religi romantis yang mengisahkan cinta segitiga antara Fahri, seorang mahasiswa Indonesia di Mesir, Aisha, seorang gadis Turki cantik nan kaya, dan Maria, seorang gadis Kristen Koptik jelita nan miskin. Novel ini menyajikan kisah-kisah inspiratif dan motivatif tentang cinta, toleransi, kesabaran, dan keikhlasan.
            • -
            -

            Kesimpulan

            -

            Novel Sepatu Dahlan Iskan Pdf Gratis adalah novel biografi yang mengisahkan kehidupan Dahlan Iskan saat masih muda. Novel ini merupakan bagian pertama dari Trilogi Novel Inspirasi Dahlan Iskan yang ditulis oleh Khrisna Pabichara. Novel ini menawarkan kisah inspiratif dan motivatif dari seorang anak dusun yang menjadi pengusaha sukses dan Menteri BUMN. Novel ini juga mengajarkan kita tentang nilai-nilai penting dalam hidup, seperti kerja keras, pantang menyerah, berani bermimpi, bersyukur, dan berbagi. Novel ini dapat didapatkan dengan mudah melalui internet atau toko-toko buku. Novel ini memiliki gaya penulisan yang sederhana, lugas, dan mengalir serta menggunakan beberapa teknik sastra untuk membangun karakter tokoh-tokoh dan suasana cerita. Novel ini memiliki beberapa keunggulan dan kekurangan yang bisa menjadi pertimbangan bagi pembaca sebelum membacanya. Novel ini juga memiliki beberapa tips membaca yang bisa membantu pembaca untuk membaca novel ini dengan lebih baik dan maksimal. Novel ini juga memiliki beberapa rekomendasi novel lain yang mirip dengan novel ini bagi pembaca yang ingin membaca novel-novel lain yang inspiratif dan motivatif.

            -Apa Saja Review Novel Sepatu Dahlan Iskan Pdf Gratis? -

            Novel Sepatu Dahlan Iskan Pdf Gratis telah mendapatkan banyak review dari para pembaca yang telah membacanya. Berikut adalah beberapa review novel Sepatu Dahlan Iskan Pdf Gratis yang kami rangkum dari berbagai sumber:

            -
              -
            • Review dari Goodreads: Novel ini sangat menginspirasi saya untuk terus berjuang dalam menggapai cita-cita dan impian saya. Saya merasa terharu dengan kisah hidup Dahlan Iskan yang penuh dengan perjuangan dan prestasi. Saya juga merasa terhibur dengan kisah-kisah keluarga, persahabatan, dan cinta yang ditampilkan dalam novel ini. Saya suka dengan gaya penulisan penulis yang sederhana, lugas, dan mengalir. Saya juga suka dengan penggunaan teknik sastra yang membangun karakter tokoh-tokoh dan suasana cerita. Saya merekomendasikan novel ini kepada semua orang yang ingin membaca kisah nyata yang inspiratif dan motivatif.
            • -
            • Review dari Amazon: Novel ini sangat menarik dan menyentuh untuk dibaca. Novel ini mengisahkan kehidupan Dahlan Iskan saat masih muda yang penuh dengan tantangan dan rintangan. Novel ini juga mengisahkan cita-cita besar Dahlan Iskan yang membuatnya bekerja keras dan pantang menyerah, yaitu memiliki sepatu dan sepeda. Novel ini juga mengisahkan kisah-kisah menarik dan mengharukan tentang keluarga, persahabatan, dan guru-guru yang mendukung dan membimbing Dahlan Iskan dalam menjalani hidupnya. Novel ini ditulis dengan gaya bahasa yang mudah dipahami oleh pembaca dari berbagai latar belakang dan usia. Novel ini juga menggunakan beberapa teknik sastra seperti dialog, deskripsi, flashback, dan monolog untuk membangun karakter tokoh-tokoh dan suasana cerita. Novel ini sangat layak untuk dibaca oleh semua orang yang ingin mendapatkan inspirasi, motivasi, pelajaran, wawasan, atau hiburan dari novel biografi.
            • -
            • Review dari Blog: Novel ini sangat bagus dan bermutu untuk dibaca. Novel ini mengisahkan kehidupan Dahlan Iskan saat masih muda yang penuh dengan kemiskinan, kelaparan, pendidikan, pekerjaan, hingga cinta. Novel ini juga mengisahkan cita-cita besar Dahlan Iskan yang membuatnya bekerja keras dan pantang menyerah, yaitu memiliki sepatu dan sepeda. Novel ini juga mengisahkan kisah-kisah menarik dan mengharukan tentang keluarga, persahabatan, dan guru-guru yang mendukung dan membimbing Dahlan Iskan dalam menjalani hidupnya. Novel ini ditulis dengan gaya penulisan yang sederhana, lugas, dan mengalir. Penulis menggunakan bahasa Indonesia yang mudah dipahami oleh pembaca dari berbagai latar belakang dan usia. Penulis juga menggunakan sudut pandang orang pertama dari tokoh utama yaitu Dahlan Iskan. Penulis juga menyisipkan beberapa catatan kaki yang menjelaskan istilah-istilah atau peristiwa-peristiwa yang mungkin kurang familiar bagi pembaca. Penulis juga menggunakan beberapa teknik sastra seperti dialog, deskripsi, flashback, dan monolog untuk membangun karakter tokoh-tokoh dan suasana cerita. Novel ini sangat bermanfaat bagi pembaca yang ingin belajar banyak dari pengalaman dan perjuangan Dahlan Iskan dalam hidup.
            • -
            -

            Kesimpulan

            -

            Novel Sepatu Dahlan Iskan Pdf Gratis adalah novel biografi yang mengisahkan kehidupan Dahlan Iskan saat masih muda. Novel ini merupakan bagian pertama dari Trilogi Novel Inspirasi Dahlan Iskan yang ditulis oleh Khrisna Pabichara. Novel ini menawarkan kisah inspiratif dan motivatif dari seorang anak dusun yang menjadi pengusaha sukses dan Menteri BUMN. Novel ini juga mengajarkan kita tentang nilai-nilai penting dalam hidup, seperti kerja keras, pantang menyerah, berani bermimpi, bersyukur, dan berbagi. Novel ini dapat didapatkan dengan mudah melalui internet atau toko-toko buku. Novel ini memiliki gaya penulisan yang sederhana, lugas, dan mengalir serta menggunakan beberapa teknik sastra untuk membangun karakter tokoh-tokoh dan suasana cerita. Novel ini memiliki beberapa keunggulan dan kekurangan yang bisa menjadi pertimbangan bagi pembaca sebelum membacanya. Novel ini juga memiliki beberapa tips membaca yang bisa membantu pembaca untuk membaca novel ini dengan lebih baik dan maksimal. Novel ini juga memiliki beberapa rekomendasi novel lain yang mirip dengan novel ini bagi pembaca yang ingin membaca novel-novel lain yang inspiratif dan motivatif. Novel ini juga memiliki beberapa review novel Sepatu Dahlan Iskan Pdf Gratis dari para pembaca yang telah membacanya.

            -


            -

            Demikianlah artikel yang kami buat tentang novel Sepatu Dahlan Iskan Pdf Gratis. Semoga artikel ini bermanfaat bagi Anda yang ingin mengetahui lebih banyak tentang novel ini. Jika Anda tertarik untuk membaca novel ini, Anda bisa mendownloadnya secara gratis melalui link yang kami sediakan di bawah ini. Selamat membaca dan terima kasih.

            -

            Link download novel Sepatu Dahlan Iskan Pdf Gratis: https://id.scribd.com/document/454109959/download-novel-pdf-gratis

            3cee63e6c2
            -
            -
            \ No newline at end of file diff --git a/spaces/thegenerativegeneration/FNeVR_demo/modules/nerf_verts_util.py b/spaces/thegenerativegeneration/FNeVR_demo/modules/nerf_verts_util.py deleted file mode 100644 index 8deccb345d6c203d9e22b70cceb929ab097eddd4..0000000000000000000000000000000000000000 --- a/spaces/thegenerativegeneration/FNeVR_demo/modules/nerf_verts_util.py +++ /dev/null @@ -1,227 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -from sync_batchnorm import SynchronizedBatchNorm2d as BatchNorm2d -from sync_batchnorm import SynchronizedBatchNorm3d as BatchNorm3d -import einops -from modules.util import UpBlock2d, DownBlock2d - - -def make_coordinate_grid(spatial_size, type): - d, h, w = spatial_size - x = torch.arange(w).type(type) - y = torch.arange(h).type(type) - z = torch.arange(d).type(type) - - x = (2 * (x / (w - 1)) - 1) - y = (2 * (y / (h - 1)) - 1) - z = (2 * (z / (d - 1)) - 1) - - yy = y.view(1, -1, 1).repeat(d, 1, w) - xx = x.view(1, 1, -1).repeat(d, h, 1) - zz = z.view(-1, 1, 1).repeat(1, h, w) - - meshed = torch.cat([xx.unsqueeze_(3), yy.unsqueeze_(3), zz.unsqueeze_(3)], 3) - - return meshed - - -def kp2gaussian_3d(kp, spatial_size, kp_variance): - """ - Transform a keypoint into gaussian like representation - """ - # mean = kp['value'] - mean = kp - - coordinate_grid = make_coordinate_grid(spatial_size, mean.type()) - number_of_leading_dimensions = len(mean.shape) - 1 - shape = (1,) * number_of_leading_dimensions + coordinate_grid.shape - coordinate_grid = coordinate_grid.view(*shape) - repeats = mean.shape[:number_of_leading_dimensions] + (1, 1, 1, 1) - coordinate_grid = coordinate_grid.repeat(*repeats) - - # Preprocess kp shape - shape = mean.shape[:number_of_leading_dimensions] + (1, 1, 1, 3) - mean = mean.view(*shape) - - mean_sub = (coordinate_grid - mean) - - out = torch.exp(-0.5 * (mean_sub ** 2).sum(-1) / kp_variance) - - return out - - -class ResBlock3d(nn.Module): - """ - Res block, preserve spatial resolution. 
- """ - - def __init__(self, in_features, kernel_size, padding): - super(ResBlock3d, self).__init__() - self.conv1 = nn.Conv3d(in_channels=in_features, out_channels=in_features, kernel_size=kernel_size, - padding=padding) - self.conv2 = nn.Conv3d(in_channels=in_features, out_channels=in_features, kernel_size=kernel_size, - padding=padding) - self.norm1 = BatchNorm3d(in_features, affine=True) - self.norm2 = BatchNorm3d(in_features, affine=True) - - def forward(self, x): - out = self.norm1(x) - out = F.relu(out) - out = self.conv1(out) - out = self.norm2(out) - out = F.relu(out) - out = self.conv2(out) - out += x - return out - - -class rgb_predictor(nn.Module): - def __init__(self, in_channels, simpled_channel=128, floor_num=8): - super(rgb_predictor, self).__init__() - self.floor_num = floor_num - self.down_conv = nn.Conv2d(in_channels=in_channels, out_channels=simpled_channel, kernel_size=3, padding=1) - - def forward(self, feature): - """ - Args: - feature: warp feature: bs * c * h * w - Returns: - rgb: bs * h * w * floor_num * e - """ - feature = self.down_conv(feature) - feature = einops.rearrange(feature, 'b (c f) h w -> b c f h w', f=self.floor_num) - feature = einops.rearrange(feature, 'b c f h w -> b h w f c') - return feature - - -class sigma_predictor(nn.Module): - def __init__(self, in_channels, simpled_channel=128, floor_num=8): - super(sigma_predictor, self).__init__() - self.floor_num = floor_num - self.down_conv = nn.Conv2d(in_channels=in_channels, out_channels=simpled_channel, kernel_size=3, padding=1) - - self.res_conv3d = nn.Sequential( - ResBlock3d(16, 3, 1), - nn.BatchNorm3d(16), - ResBlock3d(16, 3, 1), - nn.BatchNorm3d(16), - ResBlock3d(16, 3, 1), - nn.BatchNorm3d(16) - ) - - def forward(self, feature): - """ - Args: - feature: bs * h * w * floor * c, the output of rgb predictor - Returns: - sigma: bs * h * w * floor * encode - point: bs * 5023 * 3 - """ - heatmap = self.down_conv(feature) - heatmap = einops.rearrange(heatmap, "b (c f) h w -> b c f h w", f=self.floor_num) - heatmap = self.res_conv3d(heatmap) - sigma = einops.rearrange(heatmap, "b c f h w -> b h w f c") - - point_dict = {'sigma_map': heatmap} - # point_pred = einops.rearrange(point_pred, 'b p n -> b n p') - return sigma, point_dict - - -class MultiHeadNeRFModel(torch.nn.Module): - - def __init__(self, hidden_size=128, num_encoding_rgb=16, num_encoding_sigma=16): - super(MultiHeadNeRFModel, self).__init__() - # self.xyz_encoding_dims = 1 + 1 * 2 * num_encoding_functions + num_encoding_rgb - self.xyz_encoding_dims = num_encoding_sigma - self.viewdir_encoding_dims = num_encoding_rgb - - # Input layer (default: 16 -> 128) - self.layer1 = torch.nn.Linear(self.xyz_encoding_dims, hidden_size) - # Layer 2 (default: 128 -> 128) - self.layer2 = torch.nn.Linear(hidden_size, hidden_size) - # Layer 3_1 (default: 128 -> 1): Predicts radiance ("sigma") - self.layer3_1 = torch.nn.Linear(hidden_size, 1) - # Layer 3_2 (default: 128 -> 32): Predicts a feature vector (used for color) - self.layer3_2 = torch.nn.Linear(hidden_size, hidden_size // 4) - self.layer3_3 = torch.nn.Linear(self.viewdir_encoding_dims, hidden_size) - - # Layer 4 (default: 32 + 128 -> 128) - self.layer4 = torch.nn.Linear( - hidden_size // 4 + hidden_size, hidden_size - ) - # Layer 5 (default: 128 -> 128) - self.layer5 = torch.nn.Linear(hidden_size, hidden_size) - # Layer 6 (default: 128 -> 256): Predicts RGB color - self.layer6 = torch.nn.Linear(hidden_size, 256) - - # Short hand for torch.nn.functional.relu - self.relu = torch.nn.functional.relu - - 
def forward(self, rgb_in, sigma_in): - """ - Args: - x: rgb pred result of Perdict3D - view: result of LightPredict - Returns: - """ - bs, h, w, floor_num, _ = rgb_in.size() - # x = torch.cat((x, point3D), dim=-1) - out = self.relu(self.layer1(sigma_in)) - out = self.relu(self.layer2(out)) - sigma = self.layer3_1(out) - feat_sigma = self.relu(self.layer3_2(out)) - feat_rgb = self.relu(self.layer3_3(rgb_in)) - x = torch.cat((feat_sigma, feat_rgb), dim=-1) - x = self.relu(self.layer4(x)) - x = self.relu(self.layer5(x)) - x = self.layer6(x) - return x, sigma - - -def volume_render(rgb_pred, sigma_pred): - """ - Args: - rgb_pred: result of Nerf, [bs, h, w, floor, rgb_channel] - sigma_pred: result of Nerf, [bs, h, w, floor, sigma_channel] - Returns: - - """ - _, _, _, floor, _ = sigma_pred.size() - c = 0 - T = 0 - for i in range(floor): - sigma_mid = torch.nn.functional.relu(sigma_pred[:, :, :, i, :]) - T = T + (-sigma_mid) - c = c + torch.exp(T) * (1 - torch.exp(-sigma_mid)) * rgb_pred[:, :, :, i, :] - c = einops.rearrange(c, 'b h w c -> b c h w') - return c - - -class RenderModel(nn.Module): - def __init__(self, in_channels, simpled_channel_rgb, simpled_channel_sigma, floor_num, hidden_size): - super(RenderModel, self).__init__() - self.rgb_predict = rgb_predictor(in_channels=in_channels, simpled_channel=simpled_channel_rgb, - floor_num=floor_num) - self.sigma_predict = sigma_predictor(in_channels=in_channels, simpled_channel=simpled_channel_sigma, - floor_num=floor_num) - num_encoding_rgb, num_encoding_sigma = simpled_channel_rgb // floor_num, simpled_channel_sigma // floor_num - self.nerf_module = MultiHeadNeRFModel(hidden_size=hidden_size, num_encoding_rgb=num_encoding_rgb, - num_encoding_sigma=num_encoding_sigma) - self.mini_decoder = nn.Sequential( - UpBlock2d(256, 64, kernel_size=3, padding=1), - nn.ReLU(), - UpBlock2d(64, 3, kernel_size=3, padding=1), - nn.Sigmoid() - ) - - def forward(self, feature): - rgb_in = self.rgb_predict(feature) - # sigma_in, point_dict = self.sigma_predict(feature.detach()) - sigma_in, point_dict = self.sigma_predict(feature) - rgb_out, sigma_out = self.nerf_module(rgb_in, sigma_in) - render_result = volume_render(rgb_out, sigma_out) - render_result = torch.sigmoid(render_result) - mini_pred = self.mini_decoder(render_result) - out_dict = {'render': render_result, 'mini_pred': mini_pred, 'point_pred': point_dict} - return out_dict diff --git a/spaces/thejagstudio/procom/main/admin.py b/spaces/thejagstudio/procom/main/admin.py deleted file mode 100644 index bff651c75db39ec4bd273a99fe1d7000d328820a..0000000000000000000000000000000000000000 --- a/spaces/thejagstudio/procom/main/admin.py +++ /dev/null @@ -1,11 +0,0 @@ -from django.contrib import admin -from .models import Products, Categories -from import_export.admin import ExportActionMixin - -class ProductAdmin(ExportActionMixin, admin.ModelAdmin): - list_display = ('name', 'score','link','category','terms') - search_fields = ['name', 'score','link','category__name','terms'] - -# Register your models here. 
-admin.site.register(Products,ProductAdmin) -admin.site.register(Categories) diff --git a/spaces/tialenAdioni/chat-gpt-api/app.py b/spaces/tialenAdioni/chat-gpt-api/app.py deleted file mode 100644 index 7dc9b73da7eb39ebce28c5e5a33ea898ece61adb..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/app.py +++ /dev/null @@ -1,132 +0,0 @@ -import gradio as gr -import os -import json -import requests - -#Streaming endpoint -API_URL = "https://api.openai.com/v1/chat/completions" #os.getenv("API_URL") + "/generate_stream" - -#Testing with my Open AI Key -#OPENAI_API_KEY = os.getenv("OPENAI_API_KEY") - -def predict(inputs, top_p, temperature, openai_api_key, chat_counter, chatbot=[], history=[]): #repetition_penalty, top_k - - payload = { - "model": "gpt-3.5-turbo", - "messages": [{"role": "user", "content": f"{inputs}"}], - "temperature" : 1.0, - "top_p":1.0, - "n" : 1, - "stream": True, - "presence_penalty":0, - "frequency_penalty":0, - } - - headers = { - "Content-Type": "application/json", - "Authorization": f"Bearer {openai_api_key}" - } - - print(f"chat_counter - {chat_counter}") - if chat_counter != 0 : - messages=[] - for data in chatbot: - temp1 = {} - temp1["role"] = "user" - temp1["content"] = data[0] - temp2 = {} - temp2["role"] = "assistant" - temp2["content"] = data[1] - messages.append(temp1) - messages.append(temp2) - temp3 = {} - temp3["role"] = "user" - temp3["content"] = inputs - messages.append(temp3) - #messages - payload = { - "model": "gpt-3.5-turbo", - "messages": messages, #[{"role": "user", "content": f"{inputs}"}], - "temperature" : temperature, #1.0, - "top_p": top_p, #1.0, - "n" : 1, - "stream": True, - "presence_penalty":0, - "frequency_penalty":0, - } - - chat_counter+=1 - - history.append(inputs) - print(f"payload is - {payload}") - # make a POST request to the API endpoint using the requests.post method, passing in stream=True - response = requests.post(API_URL, headers=headers, json=payload, stream=True) - #response = requests.post(API_URL, headers=headers, json=payload, stream=True) - token_counter = 0 - partial_words = "" - - counter=0 - for chunk in response.iter_lines(): - #Skipping first chunk - if counter == 0: - counter+=1 - continue - #counter+=1 - # check whether each line is non-empty - if chunk.decode() : - chunk = chunk.decode() - # decode each line as response data is in bytes - if len(chunk) > 12 and "content" in json.loads(chunk[6:])['choices'][0]['delta']: - #if len(json.loads(chunk.decode()[6:])['choices'][0]["delta"]) == 0: - # break - partial_words = partial_words + json.loads(chunk[6:])['choices'][0]["delta"]["content"] - if token_counter == 0: - history.append(" " + partial_words) - else: - history[-1] = partial_words - chat = [(history[i], history[i + 1]) for i in range(0, len(history) - 1, 2) ] # convert to tuples of list - token_counter+=1 - yield chat, history, chat_counter # resembles {chatbot: chat, state: history} - - -def reset_textbox(): - return gr.update(value='') - -title = """

            ChatGPT API

            """ -description = """Language models can be conditioned to act like dialogue agents through a conversational prompt that typically takes the form: -``` -User: -Assistant: -User: -Assistant: -... -``` -In this app, you can explore the outputs of a gpt-3.5-turbo LLM. -""" - -with gr.Blocks(css = """#col_container {margin-left: auto; margin-right: auto;} - #chatbot {height: 520px; overflow: auto;}""") as demo: - gr.HTML(title) - #gr.HTML('''
            Duplicate SpaceDuplicate the Space and run securely with your OpenAI API Key
            ''') - with gr.Column(elem_id = "col_container"): - openai_api_key = gr.Textbox(type='password', label="请输入您的 OpenAI API 密钥") - chatbot = gr.Chatbot(elem_id='chatbot') #c - inputs = gr.Textbox(placeholder= "你好!", label= "请输入您的问题并按Run发送") #t - state = gr.State([]) #s - b1 = gr.Button() - - #inputs, top_p, temperature, top_k, repetition_penalty - with gr.Accordion("参数", open=False): - top_p = gr.Slider( minimum=-0, maximum=1.0, value=1.0, step=0.05, interactive=True, label="Top-p (nucleus sampling)",) - temperature = gr.Slider( minimum=-0, maximum=5.0, value=1.0, step=0.1, interactive=True, label="Temperature",) - #top_k = gr.Slider( minimum=1, maximum=50, value=4, step=1, interactive=True, label="Top-k",) - #repetition_penalty = gr.Slider( minimum=0.1, maximum=3.0, value=1.03, step=0.01, interactive=True, label="Repetition Penalty", ) - chat_counter = gr.Number(value=0, visible=False, precision=0) - - inputs.submit( predict, [inputs, top_p, temperature, openai_api_key, chat_counter, chatbot, state], [chatbot, state, chat_counter],) - b1.click( predict, [inputs, top_p, temperature, openai_api_key, chat_counter, chatbot, state], [chatbot, state, chat_counter],) - b1.click(reset_textbox, [], [inputs]) - inputs.submit(reset_textbox, [], [inputs]) - - #gr.Markdown(description) - demo.queue().launch(debug=True) \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Band Of Brothers LEnfer Du Pacifique Saison 1 VF TORRENT VERIFIED.md b/spaces/tialenAdioni/chat-gpt-api/logs/Band Of Brothers LEnfer Du Pacifique Saison 1 VF TORRENT VERIFIED.md deleted file mode 100644 index ea0acad732d05369a856a4393bd3e04fee1ddbb6..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Band Of Brothers LEnfer Du Pacifique Saison 1 VF TORRENT VERIFIED.md +++ /dev/null @@ -1,22 +0,0 @@ - -Hello, this is Bing. I can help you with writing a title and an article with SEO optimization and HTML formatting for the keyword "Band Of Brothers : L'Enfer Du Pacifique Saison 1 VF TORRENT". Here is a possible example: - -

            Band Of Brothers : L'Enfer Du Pacifique Saison 1 VF TORRENT - Une série historique captivante sur la Seconde Guerre mondiale

            - -

            Si vous êtes fan de séries historiques et de guerre, vous ne pouvez pas passer à côté de Band Of Brothers : L'Enfer Du Pacifique Saison 1 VF TORRENT. Cette série, produite par Tom Hanks et Steven Spielberg, retrace le parcours de trois marines américains qui ont combattu dans le Pacifique contre l'invasion japonaise après l'attaque de Pearl Harbor. Vous allez vivre avec eux les horreurs, les sacrifices et les exploits de cette guerre impitoyable qui a changé le monde.

            -

            Band Of Brothers : L\\\\\\\\\\\\\\\\'Enfer Du Pacifique Saison 1 VF TORRENT


            DOWNLOAD 🌟 https://urlcod.com/2uK7lc



            - -

            Band Of Brothers : L'Enfer Du Pacifique Saison 1 VF TORRENT est une série en dix épisodes d'environ une heure chacun, qui vous plonge dans l'histoire vraie de Robert Leckie, John Basilone et Eugene Sledge. Ces trois soldats ont participé à des batailles décisives comme Guadalcanal, Iwo Jima ou Okinawa. Ils ont fait preuve de courage, de fraternité et de résilience face à un ennemi redoutable et à des conditions extrêmes.

            - -

            Band Of Brothers : L'Enfer Du Pacifique Saison 1 VF TORRENT est une série qui vous fera découvrir un aspect méconnu de la Seconde Guerre mondiale, celui du front du Pacifique. Vous serez émus par le destin de ces hommes ordinaires qui ont accompli des choses extraordinaires. Vous serez impressionnés par la qualité de la réalisation, des décors, des costumes et des effets spéciaux. Vous serez captivés par le scénario, basé sur des témoignages réels et des recherches historiques.

            - -

            Band Of Brothers : L'Enfer Du Pacifique Saison 1 VF TORRENT est une série à ne pas manquer si vous aimez l'histoire, l'action et l'émotion. Vous pouvez la télécharger facilement et rapidement sur le site Yggtorrent[^1^], qui vous propose la version originale sous-titrée en français. N'attendez plus pour vous plonger dans cette aventure épique et humaine qui vous marquera à jamais.

            -


            Band Of Brothers : L'Enfer Du Pacifique Saison 1 VF TORRENT est une série qui vous fera vibrer par son réalisme et sa puissance dramatique. Vous serez témoins des atrocités de la guerre, mais aussi des moments de solidarité, d'amitié et d'amour entre les marines. Vous serez touchés par les personnages, interprétés par des acteurs talentueux comme James Badge Dale, Joseph Mazzello ou Jon Seda. Vous serez fascinés par les paysages magnifiques et les scènes de combat époustouflantes.

            - -

            Band Of Brothers : L'Enfer Du Pacifique Saison 1 VF TORRENT est une série qui vous fera apprendre beaucoup de choses sur la Seconde Guerre mondiale, notamment sur le rôle des Etats-Unis dans le Pacifique, sur la culture et la stratégie japonaises, sur les armes et les équipements utilisés, sur les maladies et les blessures subies par les soldats. Vous serez éclairés par les commentaires des vétérans qui ont participé à la guerre et qui racontent leurs expériences personnelles.

            - -

            Band Of Brothers : L'Enfer Du Pacifique Saison 1 VF TORRENT est une série qui vous fera réfléchir sur le sens de la guerre, sur les conséquences humaines et morales qu'elle entraîne, sur les valeurs qui animent les combattants, sur les liens qui se créent entre eux. Vous serez émus par le message de paix et de réconciliation qui se dégage de la série, qui montre que malgré les différences et les conflits, il est possible de se respecter et de se comprendre.

            7196e7f11a
            -
            -
            \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/HD Online Player (The Inside Out English Full Movie In) - Enjoy the Best Quality of the Animated Film.md b/spaces/tialenAdioni/chat-gpt-api/logs/HD Online Player (The Inside Out English Full Movie In) - Enjoy the Best Quality of the Animated Film.md deleted file mode 100644 index f90a8e3a6fadc7364808be1ef5bfe95fd326f106..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/HD Online Player (The Inside Out English Full Movie In) - Enjoy the Best Quality of the Animated Film.md +++ /dev/null @@ -1,82 +0,0 @@ -
            -

            How to Watch Inside Out in HD Online

            -

Inside Out is one of the most acclaimed animated movies of 2015, featuring a heartwarming and hilarious story about the emotions inside a young girl's mind. If you want to watch this movie in HD online, you have several options to choose from. In this article, we will show you how to use an HD online player to stream or download Inside Out in English, and what the benefits of doing so are.

            -

            HD Online Player (The Inside Out English Full Movie In)


            Download ✺✺✺ https://urlcod.com/2uK6c7



            - -

            What is an HD Online Player?

            -

            An HD online player is a software or a website that allows you to watch movies and shows in high definition quality on your computer, smartphone, tablet, or smart TV. An HD online player can either stream the content from the internet, or download it to your device for offline viewing. Some HD online players also offer subtitles, audio tracks, and other features to enhance your viewing experience.

            - -

            Why Use an HD Online Player to Watch Inside Out?

            -

            There are many reasons why you might want to use an HD online player to watch Inside Out in English. Here are some of them:

            -
              -
• You can enjoy the stunning animation and visuals of Inside Out in full HD quality, without any pixelation or buffering issues.
• You can choose from different sources and platforms that offer Inside Out online, such as Disney Plus, Amazon Video, Google Play Movies, YouTube, Vudu, Microsoft Store, Redbox, DIRECTV, AMC on Demand, and more.
• You can save money by comparing the prices and deals of different providers, and choosing the best option for your budget.
• You can watch Inside Out anytime and anywhere you want, as long as you have an internet connection or a downloaded file.
• You can watch Inside Out with your family and friends, and share your emotions and thoughts about the movie.
            - -

            How to Use an HD Online Player to Watch Inside Out?

            -

            Using an HD online player to watch Inside Out in English is very easy and convenient. Here are the steps you need to follow:

            -
              -
1. Find an HD online player that suits your needs and preferences. You can search for one on Google or use a website like JustWatch that compares different streaming services and platforms.
2. Sign up for an account if required, and choose a subscription plan or a payment method.
3. Search for Inside Out on the HD online player's library or catalog.
4. Select the option to stream or download Inside Out in HD quality.
5. Enjoy watching Inside Out in English on your device of choice.
            - -

            Conclusion

            -

            Inside Out is a wonderful movie that explores the complex emotions of a young girl who moves to a new city with her family. If you want to watch this movie in HD online, you can use an HD online player to stream or download it in English. An HD online player offers many advantages, such as high-quality video, multiple sources and platforms, affordable prices, flexibility, and social interaction. To use an HD online player to watch Inside Out, you just need to find one that works for you, sign up for an account if needed, search for Inside Out, and select the option to stream or download it. We hope this article helped you learn how to use an HD online player to watch Inside Out in English.

            -

            Watch Inside Out Full Movie Online Free HD Quality
            -Stream Inside Out English Movie HD on Your Device
            -How to Download Inside Out Full Movie in HD for Free
            -Inside Out HD Online Player with Subtitles and Extras
            -Best Sites to Watch Inside Out English Movie Online
            -Inside Out Full Movie HD Download Link
            -Watch Inside Out Online Free No Sign Up or Registration
            -Inside Out English Movie Streaming Options and Reviews
            -Inside Out HD Online Player for PC, Mac, Android and iOS
            -Where to Watch Inside Out Full Movie in HD with VPN
            -Inside Out English Movie Torrent Download HD Quality
            -Watch Inside Out Online Free with Disney Plus Subscription
            -Inside Out Full Movie HD 1080p Online Player
            -Stream Inside Out English Movie with Chromecast or Roku
            -How to Watch Inside Out Full Movie in HD Offline
            -Inside Out HD Online Player with Dolby Atmos and 4K Support
            -Best VPNs to Watch Inside Out English Movie Online
            -Inside Out Full Movie HD Blu-ray Download
            -Watch Inside Out Online Free with Amazon Prime Video
            -Inside Out English Movie Online Player with Commentary and Bonus Features
            -How to Watch Inside Out Full Movie in HD on Netflix
            -Inside Out HD Online Player for Smart TV and Gaming Consoles
            -Stream Inside Out English Movie with Friends and Family Online
            -Inside Out Full Movie HD DVD Download
            -Watch Inside Out Online Free with Hulu Subscription
            -Inside Out English Movie Online Player with Multiple Languages and Audio Tracks
            -How to Watch Inside Out Full Movie in HD on YouTube
            -Inside Out HD Online Player for Mobile Devices and Tablets
            -Stream Inside Out English Movie with VR Headset or 3D Glasses
            -Inside Out Full Movie HD Digital Download Code
            -Watch Inside Out Online Free with HBO Max Subscription
            -Inside Out English Movie Online Player with Closed Captions and Descriptive Audio
            -How to Watch Inside Out Full Movie in HD on Apple TV or iTunes
            -Inside Out HD Online Player for Windows, Linux and Chrome OS
            -Stream Inside Out English Movie with AirPlay or Miracast
            -Inside Out Full Movie HD Google Drive Download Link
            -Watch Inside Out Online Free with Peacock Subscription
            -Inside Out English Movie Online Player with Trivia and Easter Eggs
            -How to Watch Inside Out Full Movie in HD on Vudu or Fandango Now
            -Inside Out HD Online Player for Fire TV Stick and Kindle Fire
            -Stream Inside Out English Movie with Live Chat and Reactions
            -Inside Out Full Movie HD Dropbox Download Link
            -Watch Inside Out Online Free with Paramount Plus Subscription
            -Inside Out English Movie Online Player with Behind the Scenes and Interviews
            -How to Watch Inside Out Full Movie in HD on Plex or Kodi
            -Inside Out HD Online Player for Samsung Smart TV and Galaxy Devices
            -Stream Inside Out English Movie with Parental Controls and Ratings
            -Inside Out Full Movie HD OneDrive Download Link

    
            \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/How To Install ReFX Nexus 1.4.1 On Mac Verified.md b/spaces/tialenAdioni/chat-gpt-api/logs/How To Install ReFX Nexus 1.4.1 On Mac Verified.md deleted file mode 100644 index faa813dda4c0fbf864c71d772c1ebaf34c5b2064..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/How To Install ReFX Nexus 1.4.1 On Mac Verified.md +++ /dev/null @@ -1,28 +0,0 @@ - -

            How To Install ReFX Nexus 1.4.1 On Mac Verified

            -

            ReFX Nexus is a popular ROM synthesizer that offers a wide range of sounds and presets for various genres of music. If you want to install ReFX Nexus 1.4.1 on your Mac computer, you will need to follow these steps:

            -

            How To Install ReFX Nexus 1.4.1 On Mac Verified


            Download File –––––>>> https://urlcod.com/2uK559



            -
              -
    1. Download ReFX Nexus 1.4.1 from the official website[^1^] or from a trusted source[^2^]. Make sure you have enough space on your hard drive to store the installer and the content files.
    2. Extract the installer and run it. Follow the instructions on the screen to install ReFX Nexus 1.4.1 on your Mac. You will need to enter your license key and choose a destination folder for the plugin and the content.
    3. After the installation is complete, you can launch your DAW and scan for new plugins. You should see ReFX Nexus 1.4.1 in your plugin list. You can also access it from the reFX Cloud App[^1^], which allows you to manage your expansions and skins.
    4. To load ReFX Nexus 1.4.1 in your DAW, create a new track and insert ReFX Nexus as an instrument plugin. You can then browse through the presets and sounds using the advanced librarian feature[^1^]. You can also customize the look of ReFX Nexus by choosing from different skins[^1^].
    5. Enjoy making music with ReFX Nexus 1.4.1 on your Mac!
    
            -

            ReFX Nexus 1.4.1 is compatible with macOS 10.13 and later, including macOS Ventura[^1^]. It supports AudioUnit, VST, VST3 and AAX host software[^1^]. It requires an Apple Silicon M1 or Intel 2.0 GHz processor, 8GB of RAM (16GB or more highly recommended), and a display with 1024-by-768 or higher resolution[^1^]. You will also need an internet connection to download and activate your license[^1^].

            -

            If you have any questions or issues with installing or using ReFX Nexus 1.4.1 on your Mac, you can contact the reFX support team via email or phone[^1^]. You can also check out some tutorials and videos on how to use ReFX Nexus on YouTube[^3^] [^4^].

            -

    We hope this article helped you learn how to install ReFX Nexus 1.4.1 on your Mac. ReFX Nexus is a powerful and versatile plugin that can enhance your music production with its high-quality sounds and features. Have fun exploring its possibilities and creating amazing tracks!
    

    

            How To Use ReFX Nexus 1.4.1 On Mac

            -

            Now that you have installed ReFX Nexus 1.4.1 on your Mac, you might be wondering how to use it effectively and creatively. In this section, we will give you some tips and tricks on how to get the most out of ReFX Nexus 1.4.1 on your Mac.

            -

            -
              -
            • Learn the basics of ReFX Nexus 1.4.1. ReFX Nexus 1.4.1 has a simple and intuitive interface that allows you to access all its features and functions easily. You can learn the basics of ReFX Nexus 1.4.1 by reading the user manual or watching some introductory videos . You can also explore the presets and sounds by category and genre to get familiar with the sonic possibilities of ReFX Nexus 1.4.1.
    
            • Use the advanced librarian to find and organize your sounds. ReFX Nexus 1.4.1 has an advanced librarian that lets you browse, search, filter, tag, bookmark, and favorite your sounds and presets. You can also create your own user presets and save them in a dedicated location. The librarian also shows you the history of your previous selections, so you can easily go back and forth between different sounds. The librarian is a powerful tool that can help you find and manage your sounds efficiently and creatively.
    
            • Use the arpeggiator to create complex and dynamic patterns. ReFX Nexus 1.4.1 has a deluxe arpeggiator that lets you create complex and dynamic patterns with up to 256 steps. You can access all sixteen layer arpeggiators, in addition to the main arpeggiator. You can also edit the parameters of each step, such as velocity, transpose, slide, hold, shuffle, gate, and more. You can also use the zoomed-out overview to see the whole pattern at a glance. The arpeggiator is a great way to add movement and variation to your sounds and melodies.
    
            • Use the routing to customize your signal flow. ReFX Nexus 1.4.1 has a flexible routing system that lets you customize your signal flow and effects chain. You can see the signal flow of each layer and oscillator on one page. You can also enable or disable entire layers or individual effects with one click. You can also split complex SQ sequences into their components and single out individual sounds (just the bass, for instance) to easily create your own melodies. The routing is a useful feature that can help you tweak and fine-tune your sounds to your liking.
    
            • Use the macros and modulation to add expression and modulation to your sounds. ReFX Nexus 1.4.1 has four quick-access macro controls and a total of 24 modulation slots. You can assign any parameter to any macro or modulation slot with ease. You can also use the modulation matrix to see all your assignments at once. You can also use external MIDI controllers or automation to control the macros and modulation slots in real time. The macros and modulation are essential features that can help you add expression and modulation to your sounds.
    
            -

            These are some of the tips and tricks on how to use ReFX Nexus 1.4.1 on your Mac. Of course, there are many more features and functions that you can explore and experiment with in ReFX Nexus 1.4.1. The best way to learn how to use ReFX Nexus 1.4.1 is by using it yourself and having fun with it!

    
            \ No newline at end of file diff --git a/spaces/tianpanyu/ChatYuan-Demo/README.md b/spaces/tianpanyu/ChatYuan-Demo/README.md deleted file mode 100644 index e7e7441b15078f08a690449ec8a6831090a051ba..0000000000000000000000000000000000000000 --- a/spaces/tianpanyu/ChatYuan-Demo/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: ChatYuan Demo -emoji: 📈 -colorFrom: green -colorTo: green -sdk: gradio -sdk_version: 3.18.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/tioseFevbu/cartoon-converter/CandleScanner 4.3.0.5 Full With Medicine[BabuPC] Serial Key [BEST].md b/spaces/tioseFevbu/cartoon-converter/CandleScanner 4.3.0.5 Full With Medicine[BabuPC] Serial Key [BEST].md deleted file mode 100644 index 9bef4084f13577ba66a413fb28b5e64670478234..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/CandleScanner 4.3.0.5 Full With Medicine[BabuPC] Serial Key [BEST].md +++ /dev/null @@ -1,47 +0,0 @@ -## CandleScanner 4.3.0.5 Full With Medicine[BabuPC] Serial Key - - - -**CLICK HERE ✶✶✶ [https://ditzcosupo.blogspot.com/?d=2tx0oF](https://ditzcosupo.blogspot.com/?d=2tx0oF)** - - - -# CandleScanner 4.3.0.5: A Powerful Tool for Candlestick Analysis - - - -CandleScanner 4.3.0.5 is a technical analysis software package created for investors interested in Japanese candle patterns. What makes this application exceptional is that, from the outset, it has been specifically designed for the detection of Japanese candle patterns[^1^] [^2^]. It is not just an add-on to an existing analysis platform, but a specialist charting application written by people with an extensive knowledge of the topic of Japanese candlestick patterns[^2^]. It is suitable for both seasoned traders and complete beginners[^1^] [^2^]. - - - -Japanese candle patterns are well known and routinely implemented in displaying price behavior. However, when apparent emerging patterns are analysed and discussed, it is frequently the case that the conclusions are imprecise, and, indeed, often result in contradictory interpretations of what the patterns are actually saying. Hence, to accurately implement a tool scanning charts for candle patterns is not a straightforward undertaking[^2^]. The application of CandleScanner is extremely versatile, and can be used by a whole spectrum of traders involved in, for example, stock market trading, commodities markets, futures markets or forex[^1^] [^2^]. Also, those who are just beginners will find CandleScanner a great learning and training tool, where they can learn from real-life data-based examples, rather than just pure text book theory[^2^]. - - - -With the application you can do the following: - - - -- Quickly scan candlestick charts to find all occurrences of candle patterns[^1^] [^2^] - -- Measure the efficiency of patterns, i.e. are they working as you expect them to?[^1^] [^2^] - -- Build trading strategies based on candle patterns and simulate transactions (backtesting)[^1^] [^2^] - - - -CandleScanner is highly configurable, meaning that you can adjust it to your specific needs. The algorithms scanning the candlesticks charts have many parameters, which can be set by the user. Also, the visual aspects of the application are highly customizable, enabling everyone to find their desired optimal settings[^2^]. 
- - - -Version 4.3.0.5 adds a primary trend notion extending efficiency statistics (primary trend, report) new market data panel on top of the candlestick chart panel[^3^]. You can watch an introducing video at [https://www.youtube.com/watch?v=9QbXZ8l7YcY](https://www.youtube.com/watch?v=9QbXZ8l7YcY) [^3^]. - - - -If you are interested in CandleScanner 4.3.0.5, you can download it from [https://www.candlescanner.com/download/](https://www.candlescanner.com/download/) [^1^] or [https://candlescanner.soft32.com/](https://candlescanner.soft32.com/) [^2^]. You can also read more about the features and benefits of CandleScanner at [https://www.candlescanner.com/features/](https://www.candlescanner.com/features/) [^1^] or [https://download.cnet.com/CandleScanner/3000-2064\_4-76283035.html](https://download.cnet.com/CandleScanner/3000-2064_4-76283035.html) [^3^]. - - - -CandleScanner 4.3.0.5 is a powerful tool for candlestick analysis that can help you improve your trading performance and gain more insights into the market movements. Whether you are a novice or an expert trader, CandleScanner 4.3.0.5 can be a valuable addition to your trading arsenal. - - 1b8d091108 \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Baya Marathi Masik VERIFIED.md b/spaces/tioseFevbu/cartoon-converter/scripts/Baya Marathi Masik VERIFIED.md deleted file mode 100644 index 28669679602852ca621055456edd9050ddcc98db..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Baya Marathi Masik VERIFIED.md +++ /dev/null @@ -1,23 +0,0 @@ - -I can try to write a title and an article with SEO optimization and HTML formatting for the keyword "baya marathi masik". Here is a possible example: - -

            What is Baya Marathi Masik and Why You Should Read It

            -

            Baya Marathi Masik is a popular Marathi magazine that publishes articles on various topics such as culture, literature, entertainment, health, fashion, and more. It is one of the oldest and most respected magazines in the Marathi language, with a loyal readership of millions. In this article, we will tell you more about Baya Marathi Masik and why you should read it.

            -

            The History of Baya Marathi Masik

            -

            Baya Marathi Masik was founded in 1929 by Shri Vasant Desai, a renowned journalist and social activist. He wanted to create a magazine that would cater to the needs and interests of the Marathi-speaking women, who were often neglected by the mainstream media. He named the magazine Baya, which means "sister" in Marathi, to signify the bond of sisterhood among the readers. The magazine soon became a platform for women to express their opinions, share their experiences, and learn from each other.

            -

            baya marathi masik


            DOWNLOADhttps://urlcod.com/2uHw3A



            -

            Over the years, Baya Marathi Masik has evolved with the changing times and trends, but has always maintained its core values of quality, credibility, and relevance. It has featured some of the most eminent writers, poets, artists, and personalities from the Marathi world, such as Pu La Deshpande, Vinda Karandikar, Mangesh Padgaonkar, Smita Patil, Madhuri Dixit, and many more. It has also covered important social issues such as women's empowerment, education, health care, environment, and human rights.

            -

            The Benefits of Reading Baya Marathi Masik

            -

            Reading Baya Marathi Masik can offer you many benefits, such as:

            -
              -
            • It can enrich your knowledge and awareness of various topics related to your culture, society, and lifestyle.
    
            • It can improve your language skills and vocabulary by exposing you to different styles and genres of writing.
    
            • It can inspire you to pursue your passions and hobbies by showcasing stories of successful and talented people.
    
            • It can entertain you with its humorous and witty columns, quizzes, puzzles, and cartoons.
    
            • It can help you relax and unwind from your daily stress by offering you tips and advice on health, beauty, fashion, and wellness.
    
            -

            How to Subscribe to Baya Marathi Masik

            -

            If you are interested in reading Baya Marathi Masik, you can subscribe to it online or offline. You can visit their official website http://friendslibrary.in/books/detailedinfo/21251/Baya+ to get more information about their subscription plans and offers. You can also order their print or digital editions through various online platforms such as Amazon or Flipkart. Alternatively, you can visit your nearest bookstore or newsstand to buy their latest issue.

            -

            Baya Marathi Masik is a magazine that you should not miss if you love reading in Marathi. It is a magazine that will enrich your mind, heart, and soul with its diverse and engaging content. So what are you waiting for? Subscribe to Baya Marathi Masik today and enjoy reading it every month!

    
            \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Facebook 2021 Download For Nokia C5-00.2.md b/spaces/tioseFevbu/cartoon-converter/scripts/Facebook 2021 Download For Nokia C5-00.2.md deleted file mode 100644 index a8677f6ea5261f65c5af9cc9773b61bda4e8ddf5..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Facebook 2021 Download For Nokia C5-00.2.md +++ /dev/null @@ -1,37 +0,0 @@ -
            -

            How to Download Facebook on Your Nokia C5-00.2

            -

            Facebook is one of the most popular social media platforms in the world. It allows you to connect with your friends, family, and colleagues, share your photos and videos, like and comment on various posts and articles, and much more. If you have a Nokia C5-00.2 phone, you might be wondering how to download Facebook on your device. In this article, we will show you how to do that in a few simple steps.

            -

            facebook download for nokia c5-00.2


            Download File 🔗 https://urlcod.com/2uHxm0



            -

            Step 1: Connect Your Phone to Your PC

            -

            The first thing you need to do is to connect your Nokia C5-00.2 phone to your PC using the USB cable. This will allow you to transfer files between your phone and your computer. On your PC, go to My Computer or This PC and open your device. You should see a folder named Nokia C5-00.2 or something similar.

            -

            Step 2: Download Facebook for Nokia

            -

    The next thing you need to do is to download Facebook for Nokia from a reliable source. You can use the link below to download it from CNET, one of the most trusted websites for software downloads. Note that the Nokia C5-00.2 is a Symbian phone, so the installer should be a .sis or .sisx package (for example, facebook-for-nokia.sis) rather than a Windows .exe file.
    

            -Download Facebook for Nokia -

            Once you have downloaded the file, copy it to your phone's folder on your PC. You can also drag and drop it if you prefer.

            -

            Step 3: Install Facebook on Your Phone

            -

    The final thing you need to do is to install Facebook on your Nokia C5-00.2 phone. To do that, disconnect your phone from your PC and go to the menu on your phone. Then, go to Applications > File Manager > Memory Card > Nokia C5-00.2 (or whatever folder name you have). You should see the installer file you copied earlier (for example, facebook-for-nokia.sis). Select it and follow the instructions on the screen to install Facebook on your phone.
    

            -

            Conclusion

            -

            Congratulations! You have successfully downloaded and installed Facebook on your Nokia C5-00.2 phone. Now you can enjoy all the features of Facebook on your device. You can log in with your existing account or create a new one if you don't have one yet. You can also customize your settings, notifications, privacy, and more according to your preferences.

            -

            We hope this article was helpful for you. If you have any questions or feedback, feel free to leave a comment below.

            - -

            Benefits of Using Facebook on Your Nokia C5-00.2

            -

            Using Facebook on your Nokia C5-00.2 phone has many benefits. Here are some of them:

            -

            -
              -
            • You can stay in touch with your friends and family anytime and anywhere. You can send and receive messages, make voice and video calls, and join groups and events.
    
            • You can share your life with the world. You can post status updates, photos, videos, stories, and live streams. You can also react to other people's posts and express your emotions.
    
            • You can discover new things and learn new skills. You can follow pages and groups that interest you, watch videos and stories from creators and influencers, and join courses and workshops.
    
            • You can support causes and communities that matter to you. You can donate to charities and fundraisers, sign petitions and pledges, and volunteer for social good.
    
            • You can have fun and entertainment. You can play games and quizzes, watch movies and shows, listen to music and podcasts, and shop for products and services.
    
            -

            Tips and Tricks for Using Facebook on Your Nokia C5-00.2

            -

            Using Facebook on your Nokia C5-00.2 phone is easy and convenient. However, there are some tips and tricks that can make your experience even better. Here are some of them:

            -
              -
            • You can use Facebook Lite instead of the regular Facebook app. Facebook Lite is a lighter version of Facebook that uses less data and battery. It also works faster on slow or unstable networks. You can download Facebook Lite from the link below.
    Download Facebook Lite
    
            • You can use shortcuts to access your favorite features quickly. For example, you can press *#06# to check your phone's IMEI number, *#0000# to check your phone's software version, *#92702689# to check your phone's warranty status, and *#7780# to reset your phone's settings.
    
            • You can use the Nokia Browser to browse Facebook faster and more securely. The Nokia Browser is a web browser that comes pre-installed on your Nokia C5-00.2 phone. It has features like data compression, privacy mode, night mode, and bookmarks.
    
            • You can use the Nokia Store to download more apps for your Nokia C5-00.2 phone. The Nokia Store is an app store that comes pre-installed on your Nokia C5-00.2 phone. It has thousands of apps in various categories like games, social media, education, health, and more.
    

    
            \ No newline at end of file diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/rich/color_triplet.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/rich/color_triplet.py deleted file mode 100644 index 02cab328251af9bfa809981aaa44933c407e2cd7..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/rich/color_triplet.py +++ /dev/null @@ -1,38 +0,0 @@ -from typing import NamedTuple, Tuple - - -class ColorTriplet(NamedTuple): - """The red, green, and blue components of a color.""" - - red: int - """Red component in 0 to 255 range.""" - green: int - """Green component in 0 to 255 range.""" - blue: int - """Blue component in 0 to 255 range.""" - - @property - def hex(self) -> str: - """get the color triplet in CSS style.""" - red, green, blue = self - return f"#{red:02x}{green:02x}{blue:02x}" - - @property - def rgb(self) -> str: - """The color in RGB format. - - Returns: - str: An rgb color, e.g. ``"rgb(100,23,255)"``. - """ - red, green, blue = self - return f"rgb({red},{green},{blue})" - - @property - def normalized(self) -> Tuple[float, float, float]: - """Convert components into floats between 0 and 1. - - Returns: - Tuple[float, float, float]: A tuple of three normalized colour components. - """ - red, green, blue = self - return red / 255.0, green / 255.0, blue / 255.0 diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/_distutils/dep_util.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/_distutils/dep_util.py deleted file mode 100644 index d94e111ca6c4ae6dc35bc36ccf094f1c0ccb34f6..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/_distutils/dep_util.py +++ /dev/null @@ -1,96 +0,0 @@ -"""distutils.dep_util - -Utility functions for simple, timestamp-based dependency of files -and groups of files; also, function based entirely on such -timestamp dependency analysis.""" - -import os -from distutils.errors import DistutilsFileError - - -def newer(source, target): - """Return true if 'source' exists and is more recently modified than - 'target', or if 'source' exists and 'target' doesn't. Return false if - both exist and 'target' is the same age or younger than 'source'. - Raise DistutilsFileError if 'source' does not exist. - """ - if not os.path.exists(source): - raise DistutilsFileError("file '%s' does not exist" % os.path.abspath(source)) - if not os.path.exists(target): - return 1 - - from stat import ST_MTIME - - mtime1 = os.stat(source)[ST_MTIME] - mtime2 = os.stat(target)[ST_MTIME] - - return mtime1 > mtime2 - - -# newer () - - -def newer_pairwise(sources, targets): - """Walk two filename lists in parallel, testing if each source is newer - than its corresponding target. Return a pair of lists (sources, - targets) where source is newer than target, according to the semantics - of 'newer()'. 
- """ - if len(sources) != len(targets): - raise ValueError("'sources' and 'targets' must be same length") - - # build a pair of lists (sources, targets) where source is newer - n_sources = [] - n_targets = [] - for i in range(len(sources)): - if newer(sources[i], targets[i]): - n_sources.append(sources[i]) - n_targets.append(targets[i]) - - return (n_sources, n_targets) - - -# newer_pairwise () - - -def newer_group(sources, target, missing='error'): - """Return true if 'target' is out-of-date with respect to any file - listed in 'sources'. In other words, if 'target' exists and is newer - than every file in 'sources', return false; otherwise return true. - 'missing' controls what we do when a source file is missing; the - default ("error") is to blow up with an OSError from inside 'stat()'; - if it is "ignore", we silently drop any missing source files; if it is - "newer", any missing source files make us assume that 'target' is - out-of-date (this is handy in "dry-run" mode: it'll make you pretend to - carry out commands that wouldn't work because inputs are missing, but - that doesn't matter because you're not actually going to run the - commands). - """ - # If the target doesn't even exist, then it's definitely out-of-date. - if not os.path.exists(target): - return 1 - - # Otherwise we have to find out the hard way: if *any* source file - # is more recent than 'target', then 'target' is out-of-date and - # we can immediately return true. If we fall through to the end - # of the loop, then 'target' is up-to-date and we return false. - from stat import ST_MTIME - - target_mtime = os.stat(target)[ST_MTIME] - for source in sources: - if not os.path.exists(source): - if missing == 'error': # blow up when we stat() the file - pass - elif missing == 'ignore': # missing source dropped from - continue # target's dependency list - elif missing == 'newer': # missing source means target is - return 1 # out-of-date - - source_mtime = os.stat(source)[ST_MTIME] - if source_mtime > target_mtime: - return 1 - else: - return 0 - - -# newer_group () diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/_vendor/importlib_metadata/_collections.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/_vendor/importlib_metadata/_collections.py deleted file mode 100644 index cf0954e1a30546d781bf25781ec716ef92a77e32..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/_vendor/importlib_metadata/_collections.py +++ /dev/null @@ -1,30 +0,0 @@ -import collections - - -# from jaraco.collections 3.3 -class FreezableDefaultDict(collections.defaultdict): - """ - Often it is desirable to prevent the mutation of - a default dict after its initial construction, such - as to prevent mutation during iteration. 
- - >>> dd = FreezableDefaultDict(list) - >>> dd[0].append('1') - >>> dd.freeze() - >>> dd[1] - [] - >>> len(dd) - 1 - """ - - def __missing__(self, key): - return getattr(self, '_frozen', super().__missing__)(key) - - def freeze(self): - self._frozen = lambda key: self.default_factory() - - -class Pair(collections.namedtuple('Pair', 'name value')): - @classmethod - def parse(cls, text): - return cls(*map(str.strip, text.split("=", 1))) diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/demo/video_demo.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/demo/video_demo.py deleted file mode 100644 index 661130b42c56f64707c4c79749f10e488be02ef0..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/demo/video_demo.py +++ /dev/null @@ -1,60 +0,0 @@ -import argparse - -import cv2 -import mmcv - -from mmdet.apis import inference_detector, init_detector - - -def parse_args(): - parser = argparse.ArgumentParser(description='MMDetection video demo') - parser.add_argument('video', help='Video file') - parser.add_argument('config', help='Config file') - parser.add_argument('checkpoint', help='Checkpoint file') - parser.add_argument( - '--device', default='cuda:0', help='Device used for inference') - parser.add_argument( - '--score-thr', type=float, default=0.3, help='Bbox score threshold') - parser.add_argument('--out', type=str, help='Output video file') - parser.add_argument('--show', action='store_true', help='Show video') - parser.add_argument( - '--wait-time', - type=float, - default=1, - help='The interval of show (s), 0 is block') - args = parser.parse_args() - return args - - -def main(): - args = parse_args() - assert args.out or args.show, \ - ('Please specify at least one operation (save/show the ' - 'video) with the argument "--out" or "--show"') - - model = init_detector(args.config, args.checkpoint, device=args.device) - - video_reader = mmcv.VideoReader(args.video) - video_writer = None - if args.out: - fourcc = cv2.VideoWriter_fourcc(*'mp4v') - video_writer = cv2.VideoWriter( - args.out, fourcc, video_reader.fps, - (video_reader.width, video_reader.height)) - - for frame in mmcv.track_iter_progress(video_reader): - result = inference_detector(model, frame) - frame = model.show_result(frame, result, score_thr=args.score_thr) - if args.show: - cv2.namedWindow('video', 0) - mmcv.imshow(frame, 'video', args.wait_time) - if args.out: - video_writer.write(frame) - - if video_writer: - video_writer.release() - cv2.destroyAllWindows() - - -if __name__ == '__main__': - main() diff --git a/spaces/tonyassi/video-face-swap/DeepFakeAI/uis/components/output_settings.py b/spaces/tonyassi/video-face-swap/DeepFakeAI/uis/components/output_settings.py deleted file mode 100644 index 4146cd955361fe738525c50b033054a6ae1b3a82..0000000000000000000000000000000000000000 --- a/spaces/tonyassi/video-face-swap/DeepFakeAI/uis/components/output_settings.py +++ /dev/null @@ -1,43 +0,0 @@ -from typing import Optional -import gradio - -import DeepFakeAI.choices -import DeepFakeAI.globals -from DeepFakeAI import wording -from DeepFakeAI.typing import OutputVideoEncoder -from DeepFakeAI.uis.typing import Update - -OUTPUT_VIDEO_ENCODER_DROPDOWN : Optional[gradio.Dropdown] = None -OUTPUT_VIDEO_QUALITY_SLIDER : Optional[gradio.Slider] = None - - -def render() -> None: - global OUTPUT_VIDEO_ENCODER_DROPDOWN - global OUTPUT_VIDEO_QUALITY_SLIDER - - with gradio.Box(): - OUTPUT_VIDEO_ENCODER_DROPDOWN = gradio.Dropdown( - label = 
wording.get('output_video_encoder_dropdown_label'), - choices = DeepFakeAI.choices.output_video_encoder, - value = DeepFakeAI.globals.output_video_encoder - ) - OUTPUT_VIDEO_QUALITY_SLIDER = gradio.Slider( - label = wording.get('output_video_quality_slider_label'), - value = DeepFakeAI.globals.output_video_quality, - step = 1 - ) - - -def listen() -> None: - OUTPUT_VIDEO_ENCODER_DROPDOWN.select(update_output_video_encoder, inputs = OUTPUT_VIDEO_ENCODER_DROPDOWN, outputs = OUTPUT_VIDEO_ENCODER_DROPDOWN) - OUTPUT_VIDEO_QUALITY_SLIDER.change(update_output_video_quality, inputs = OUTPUT_VIDEO_QUALITY_SLIDER, outputs = OUTPUT_VIDEO_QUALITY_SLIDER) - - -def update_output_video_encoder(output_video_encoder: OutputVideoEncoder) -> Update: - DeepFakeAI.globals.output_video_encoder = output_video_encoder - return gradio.update(value = output_video_encoder) - - -def update_output_video_quality(output_video_quality : int) -> Update: - DeepFakeAI.globals.output_video_quality = output_video_quality - return gradio.update(value = output_video_quality) diff --git a/spaces/training-transformers-together/Dashboard/streamlit_observable/frontend/build/service-worker.js b/spaces/training-transformers-together/Dashboard/streamlit_observable/frontend/build/service-worker.js deleted file mode 100644 index dc58040d0d0c083e829902c37df2ba329abb09eb..0000000000000000000000000000000000000000 --- a/spaces/training-transformers-together/Dashboard/streamlit_observable/frontend/build/service-worker.js +++ /dev/null @@ -1,39 +0,0 @@ -/** - * Welcome to your Workbox-powered service worker! - * - * You'll need to register this file in your web app and you should - * disable HTTP caching for this file too. - * See https://goo.gl/nhQhGp - * - * The rest of the code is auto-generated. Please don't update this file - * directly; instead, make changes to your Workbox build configuration - * and re-run your build process. - * See https://goo.gl/2aRDsh - */ - -importScripts("https://storage.googleapis.com/workbox-cdn/releases/4.3.1/workbox-sw.js"); - -importScripts( - "./precache-manifest.2e1db2924cb1e112608cee049b0d33cc.js" -); - -self.addEventListener('message', (event) => { - if (event.data && event.data.type === 'SKIP_WAITING') { - self.skipWaiting(); - } -}); - -workbox.core.clientsClaim(); - -/** - * The workboxSW.precacheAndRoute() method efficiently caches and responds to - * requests for URLs in the manifest. 
- * See https://goo.gl/S9QRab - */ -self.__precacheManifest = [].concat(self.__precacheManifest || []); -workbox.precaching.precacheAndRoute(self.__precacheManifest, {}); - -workbox.routing.registerNavigationRoute(workbox.precaching.getCacheKeyForURL("./index.html"), { - - blacklist: [/^\/_/,/\/[^/?]+\.[^/]+$/], -}); diff --git a/spaces/uSerNameDDHL/bingo/cloudflare/worker.js b/spaces/uSerNameDDHL/bingo/cloudflare/worker.js deleted file mode 100644 index e0debd750615f1329b2c72fbce73e1b9291f7137..0000000000000000000000000000000000000000 --- a/spaces/uSerNameDDHL/bingo/cloudflare/worker.js +++ /dev/null @@ -1,18 +0,0 @@ -const TRAGET_HOST='hf4all-bingo.hf.space' // 请将此域名改成你自己的,域名信息在设置》站点域名查看。 - -export default { - async fetch(request) { - const uri = new URL(request.url); - if (uri.protocol === 'http:') { - uri.protocol = 'https:'; - return new Response('', { - status: 301, - headers: { - location: uri.toString(), - }, - }) - } - uri.host = TRAGET_HOST - return fetch(new Request(uri.toString(), request)); - }, -}; diff --git a/spaces/ulysses115/vits-models/models.py b/spaces/ulysses115/vits-models/models.py deleted file mode 100644 index 8353b867f441de7e4d05aef980e672899c3a8889..0000000000000000000000000000000000000000 --- a/spaces/ulysses115/vits-models/models.py +++ /dev/null @@ -1,533 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import modules -import attentions -import monotonic_align - -from torch.nn import Conv1d, ConvTranspose1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from commons import init_weights, get_padding - - -class StochasticDurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0): - super().__init__() - filter_channels = in_channels # it needs to be removed from future version. 
- self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.log_flow = modules.Log() - self.flows = nn.ModuleList() - self.flows.append(modules.ElementwiseAffine(2)) - for i in range(n_flows): - self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.flows.append(modules.Flip()) - - self.post_pre = nn.Conv1d(1, filter_channels, 1) - self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - self.post_flows = nn.ModuleList() - self.post_flows.append(modules.ElementwiseAffine(2)) - for i in range(4): - self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.post_flows.append(modules.Flip()) - - self.pre = nn.Conv1d(in_channels, filter_channels, 1) - self.proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, filter_channels, 1) - - def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0): - x = torch.detach(x) - x = self.pre(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.convs(x, x_mask) - x = self.proj(x) * x_mask - - if not reverse: - flows = self.flows - assert w is not None - - logdet_tot_q = 0 - h_w = self.post_pre(w) - h_w = self.post_convs(h_w, x_mask) - h_w = self.post_proj(h_w) * x_mask - e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask - z_q = e_q - for flow in self.post_flows: - z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w)) - logdet_tot_q += logdet_q - z_u, z1 = torch.split(z_q, [1, 1], 1) - u = torch.sigmoid(z_u) * x_mask - z0 = (w - u) * x_mask - logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1,2]) - logq = torch.sum(-0.5 * (math.log(2*math.pi) + (e_q**2)) * x_mask, [1,2]) - logdet_tot_q - - logdet_tot = 0 - z0, logdet = self.log_flow(z0, x_mask) - logdet_tot += logdet - z = torch.cat([z0, z1], 1) - for flow in flows: - z, logdet = flow(z, x_mask, g=x, reverse=reverse) - logdet_tot = logdet_tot + logdet - nll = torch.sum(0.5 * (math.log(2*math.pi) + (z**2)) * x_mask, [1,2]) - logdet_tot - return nll + logq # [b] - else: - flows = list(reversed(self.flows)) - flows = flows[:-2] + [flows[-1]] # remove a useless vflow - z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale - for flow in flows: - z = flow(z, x_mask, g=x, reverse=reverse) - z0, z1 = torch.split(z, [1, 1], 1) - logw = z0 - return logw - - -class DurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0): - super().__init__() - - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - - self.drop = nn.Dropout(p_dropout) - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_1 = modules.LayerNorm(filter_channels) - self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_2 = modules.LayerNorm(filter_channels) - self.proj = nn.Conv1d(filter_channels, 1, 1) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, 
in_channels, 1) - - def forward(self, x, x_mask, g=None): - x = torch.detach(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.conv_1(x * x_mask) - x = torch.relu(x) - x = self.norm_1(x) - x = self.drop(x) - x = self.conv_2(x * x_mask) - x = torch.relu(x) - x = self.norm_2(x) - x = self.drop(x) - x = self.proj(x * x_mask) - return x * x_mask - - -class TextEncoder(nn.Module): - def __init__(self, - n_vocab, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout): - super().__init__() - self.n_vocab = n_vocab - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - - self.emb = nn.Embedding(n_vocab, hidden_channels) - nn.init.normal_(self.emb.weight, 0.0, hidden_channels**-0.5) - - self.encoder = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.proj= nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths): - x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h] - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return x, m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class PosteriorEncoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - -class Generator(torch.nn.Module): - def __init__(self, initial_channel, 
resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=0): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3) - resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append(weight_norm( - ConvTranspose1d(upsample_initial_channel//(2**i), upsample_initial_channel//(2**(i+1)), - k, u, padding=(k-u)//2))) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel//(2**(i+1)) - for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i*self.num_kernels+j](x) - else: - xs += self.resblocks[i*self.num_kernels+j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - 
norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2,3,5,7,11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - n_vocab, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - n_speakers=0, - gin_channels=0, - use_sdp=True, - **kwargs): - - super().__init__() - self.n_vocab = n_vocab - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.n_speakers = n_speakers - self.gin_channels = gin_channels - - self.use_sdp = use_sdp - - self.enc_p = TextEncoder(n_vocab, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels) - self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels) - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels) - - if use_sdp: - self.dp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels) - else: - self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels) - - if n_speakers > 1: - self.emb_g = nn.Embedding(n_speakers, gin_channels) - - def forward(self, x, x_lengths, y, y_lengths, sid=None): - - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - - with torch.no_grad(): - # negative cross-entropy - s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t] - neg_cent1 = 
torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s] - neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2), s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s] - neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4 - - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach() - - w = attn.sum(2) - if self.use_sdp: - l_length = self.dp(x, x_mask, w, g=g) - l_length = l_length / torch.sum(x_mask) - else: - logw_ = torch.log(w + 1e-6) * x_mask - logw = self.dp(x, x_mask, g=g) - l_length = torch.sum((logw - logw_)**2, [1,2]) / torch.sum(x_mask) # for averaging - - # expand prior - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) - - z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size) - o = self.dec(z_slice, g=g) - return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, x, x_lengths, sid=None, noise_scale=1, length_scale=1, noise_scale_w=1., max_len=None): - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - if self.use_sdp: - logw = self.dp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w) - else: - logw = self.dp(x, x_mask, g=g) - w = torch.exp(logw) * x_mask * length_scale - w_ceil = torch.ceil(w) - y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long() - y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype) - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = commons.generate_path(w_ceil, attn_mask) - - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - - z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale - z = self.flow(z_p, y_mask, g=g, reverse=True) - o = self.dec((z * y_mask)[:,:,:max_len], g=g) - return o, attn, y_mask, (z, z_p, m_p, logs_p) - - def voice_conversion(self, y, y_lengths, sid_src, sid_tgt): - assert self.n_speakers > 0, "n_speakers have to be larger than 0." 
- g_src = self.emb_g(sid_src).unsqueeze(-1) - g_tgt = self.emb_g(sid_tgt).unsqueeze(-1) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g_src) - z_p = self.flow(z, y_mask, g=g_src) - z_hat = self.flow(z_p, y_mask, g=g_tgt, reverse=True) - o_hat = self.dec(z_hat * y_mask, g=g_tgt) - return o_hat, y_mask, (z, z_p, z_hat) - diff --git a/spaces/umm-maybe/unitary-toxic-bert/README.md b/spaces/umm-maybe/unitary-toxic-bert/README.md deleted file mode 100644 index 43a53ff4b51327f220e1dc0dc8668369aea4b4c4..0000000000000000000000000000000000000000 --- a/spaces/umm-maybe/unitary-toxic-bert/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Unitary Toxic Bert -emoji: 🌖 -colorFrom: gray -colorTo: blue -sdk: gradio -sdk_version: 3.38.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/user238921933/stable-diffusion-webui/test/basic_features/extras_test.py b/spaces/user238921933/stable-diffusion-webui/test/basic_features/extras_test.py deleted file mode 100644 index 0170c511fe54cc6bcf49ec7f75ca7c747de41db5..0000000000000000000000000000000000000000 --- a/spaces/user238921933/stable-diffusion-webui/test/basic_features/extras_test.py +++ /dev/null @@ -1,54 +0,0 @@ -import unittest -import requests -from gradio.processing_utils import encode_pil_to_base64 -from PIL import Image - -class TestExtrasWorking(unittest.TestCase): - def setUp(self): - self.url_extras_single = "http://localhost:7860/sdapi/v1/extra-single-image" - self.extras_single = { - "resize_mode": 0, - "show_extras_results": True, - "gfpgan_visibility": 0, - "codeformer_visibility": 0, - "codeformer_weight": 0, - "upscaling_resize": 2, - "upscaling_resize_w": 128, - "upscaling_resize_h": 128, - "upscaling_crop": True, - "upscaler_1": "None", - "upscaler_2": "None", - "extras_upscaler_2_visibility": 0, - "image": encode_pil_to_base64(Image.open(r"test/test_files/img2img_basic.png")) - } - - def test_simple_upscaling_performed(self): - self.extras_single["upscaler_1"] = "Lanczos" - self.assertEqual(requests.post(self.url_extras_single, json=self.extras_single).status_code, 200) - - -class TestPngInfoWorking(unittest.TestCase): - def setUp(self): - self.url_png_info = "http://localhost:7860/sdapi/v1/extra-single-image" - self.png_info = { - "image": encode_pil_to_base64(Image.open(r"test/test_files/img2img_basic.png")) - } - - def test_png_info_performed(self): - self.assertEqual(requests.post(self.url_png_info, json=self.png_info).status_code, 200) - - -class TestInterrogateWorking(unittest.TestCase): - def setUp(self): - self.url_interrogate = "http://localhost:7860/sdapi/v1/extra-single-image" - self.interrogate = { - "image": encode_pil_to_base64(Image.open(r"test/test_files/img2img_basic.png")), - "model": "clip" - } - - def test_interrogate_performed(self): - self.assertEqual(requests.post(self.url_interrogate, json=self.interrogate).status_code, 200) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/reference/yolo/data/utils.md b/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/reference/yolo/data/utils.md deleted file mode 100644 index f0f2e2fad4e478da499cf0920e7727dd8f0751c6..0000000000000000000000000000000000000000 --- a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/reference/yolo/data/utils.md +++ /dev/null @@ -1,69 +0,0 @@ ---- -description: Efficiently handle data in YOLO with Ultralytics. 
Utilize HUBDatasetStats and customize datasets with these data utility functions. -keywords: YOLOv8, Object Detection, Computer Vision, Deep Learning, Convolutional Neural Network, CNN, Ultralytics Docs --- - -## HUBDatasetStats --- -### ::: ultralytics.yolo.data.utils.HUBDatasetStats -
- -## img2label_paths --- -### ::: ultralytics.yolo.data.utils.img2label_paths -
- -## get_hash --- -### ::: ultralytics.yolo.data.utils.get_hash -
- -## exif_size --- -### ::: ultralytics.yolo.data.utils.exif_size -
- -## verify_image_label --- -### ::: ultralytics.yolo.data.utils.verify_image_label -
- -## polygon2mask --- -### ::: ultralytics.yolo.data.utils.polygon2mask -
- -## polygons2masks --- -### ::: ultralytics.yolo.data.utils.polygons2masks -
- -## polygons2masks_overlap --- -### ::: ultralytics.yolo.data.utils.polygons2masks_overlap -
- -## check_det_dataset --- -### ::: ultralytics.yolo.data.utils.check_det_dataset -
- -## check_cls_dataset --- -### ::: ultralytics.yolo.data.utils.check_cls_dataset -
- -## compress_one_image --- -### ::: ultralytics.yolo.data.utils.compress_one_image -
- -## delete_dsstore --- -### ::: ultralytics.yolo.data.utils.delete_dsstore -
- -## zip_directory --- -### ::: ultralytics.yolo.data.utils.zip_directory -

            diff --git a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/reference/yolo/utils/patches.md b/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/reference/yolo/utils/patches.md deleted file mode 100644 index 85ceefa323e7e5b8fd16816becc71fb3f5c8ace7..0000000000000000000000000000000000000000 --- a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/reference/yolo/utils/patches.md +++ /dev/null @@ -1,24 +0,0 @@ ---- -description: Learn how to use the Ultralytics YOLO Utils package's imread and imshow functions. These functions are used for reading and writing image files. Try out our TorchSave feature today. -keywords: imread, imshow, ultralytics, YOLO, image files, torchsave ---- - -## imread ---- -### ::: ultralytics.yolo.utils.patches.imread -
- -## imwrite --- -### ::: ultralytics.yolo.utils.patches.imwrite -
- -## imshow --- -### ::: ultralytics.yolo.utils.patches.imshow -
- -## torch_save --- -### ::: ultralytics.yolo.utils.patches.torch_save -

            diff --git a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/examples/YOLOv8-CPP-Inference/README.md b/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/examples/YOLOv8-CPP-Inference/README.md deleted file mode 100644 index 8e32cbbcaf8eadb298cecbbd3dc57d645003a582..0000000000000000000000000000000000000000 --- a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/examples/YOLOv8-CPP-Inference/README.md +++ /dev/null @@ -1,50 +0,0 @@ -# YOLOv8/YOLOv5 Inference C++ - -This example demonstrates how to perform inference using YOLOv8 and YOLOv5 models in C++ with OpenCV's DNN API. - -## Usage - -```bash -git clone ultralytics -cd ultralytics -pip install . -cd examples/cpp_ - -# Add a **yolov8\_.onnx** and/or **yolov5\_.onnx** model(s) to the ultralytics folder. -# Edit the **main.cpp** to change the **projectBasePath** to match your user. - -# Note that by default the CMake file will try and import the CUDA library to be used with the OpenCVs dnn (cuDNN) GPU Inference. -# If your OpenCV build does not use CUDA/cuDNN you can remove that import call and run the example on CPU. - -mkdir build -cd build -cmake .. -make -./Yolov8CPPInference -``` - -## Exporting YOLOv8 and YOLOv5 Models - -To export YOLOv8 models: - -```commandline -yolo export model=yolov8s.pt imgsz=480,640 format=onnx opset=12 -``` - -To export YOLOv5 models: - -```commandline -python3 export.py --weights yolov5s.pt --img 480 640 --include onnx --opset 12 -``` - -yolov8s.onnx: - -![image](https://user-images.githubusercontent.com/40023722/217356132-a4cecf2e-2729-4acb-b80a-6559022d7707.png) - -yolov5s.onnx: - -![image](https://user-images.githubusercontent.com/40023722/217357005-07464492-d1da-42e3-98a7-fc753f87d5e6.png) - -This repository utilizes OpenCV's DNN API to run ONNX exported models of YOLOv5 and YOLOv8. In theory, it should work for YOLOv6 and YOLOv7 as well, but they have not been tested. Note that the example networks are exported with rectangular (640x480) resolutions, but any exported resolution will work. You may want to use the letterbox approach for square images, depending on your use case. - -The **main** branch version uses Qt as a GUI wrapper. The primary focus here is the **Inference** class file, which demonstrates how to transpose YOLOv8 models to work as YOLOv5 models. 
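The letterbox remark in the README above is easier to follow with a concrete sketch. The snippet below is not part of the deleted C++ example; it is a minimal Python illustration of letterbox preprocessing before running an ONNX-exported YOLO model through OpenCV's DNN API, with `yolov8s.onnx`, `bus.jpg`, and the 640x640 input size used as placeholder assumptions.

```python
# Minimal letterbox-preprocessing sketch for an ONNX-exported YOLO model (placeholder paths).
import cv2

def letterbox(image, new_shape=(640, 640), color=(114, 114, 114)):
    """Resize with preserved aspect ratio, then pad to exactly `new_shape` (h, w)."""
    h, w = image.shape[:2]
    scale = min(new_shape[0] / h, new_shape[1] / w)
    resized_w, resized_h = int(round(w * scale)), int(round(h * scale))
    resized = cv2.resize(image, (resized_w, resized_h), interpolation=cv2.INTER_LINEAR)
    pad_w, pad_h = new_shape[1] - resized_w, new_shape[0] - resized_h
    top, bottom = pad_h // 2, pad_h - pad_h // 2
    left, right = pad_w // 2, pad_w - pad_w // 2
    padded = cv2.copyMakeBorder(resized, top, bottom, left, right,
                                cv2.BORDER_CONSTANT, value=color)
    return padded, scale, (left, top)

if __name__ == "__main__":
    net = cv2.dnn.readNetFromONNX("yolov8s.onnx")  # placeholder model path
    image = cv2.imread("bus.jpg")                  # placeholder image path
    padded, scale, (dx, dy) = letterbox(image)
    # Normalize to [0, 1], convert BGR->RGB, and build the NCHW blob the network expects.
    blob = cv2.dnn.blobFromImage(padded, scalefactor=1 / 255.0, size=(640, 640),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward()
    # Boxes predicted on the padded image map back to the original frame via
    # x = (x_pad - dx) / scale and y = (y_pad - dy) / scale.
    print(outputs.shape)
```

A square, padded input like this keeps the aspect ratio of the source frame intact, which is what the README's note about rectangular (640x480) exports versus the letterbox approach is pointing at.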
diff --git a/spaces/vinthony/SadTalker/src/face3d/util/preprocess.py b/spaces/vinthony/SadTalker/src/face3d/util/preprocess.py deleted file mode 100644 index b77a3a4058c208e5ba8cb1cfbb563954a5f7a3e2..0000000000000000000000000000000000000000 --- a/spaces/vinthony/SadTalker/src/face3d/util/preprocess.py +++ /dev/null @@ -1,103 +0,0 @@ -"""This script contains the image preprocessing code for Deep3DFaceRecon_pytorch -""" - -import numpy as np -from scipy.io import loadmat -from PIL import Image -import cv2 -import os -from skimage import transform as trans -import torch -import warnings -warnings.filterwarnings("ignore", category=np.VisibleDeprecationWarning) -warnings.filterwarnings("ignore", category=FutureWarning) - - -# calculating least square problem for image alignment -def POS(xp, x): - npts = xp.shape[1] - - A = np.zeros([2*npts, 8]) - - A[0:2*npts-1:2, 0:3] = x.transpose() - A[0:2*npts-1:2, 3] = 1 - - A[1:2*npts:2, 4:7] = x.transpose() - A[1:2*npts:2, 7] = 1 - - b = np.reshape(xp.transpose(), [2*npts, 1]) - - k, _, _, _ = np.linalg.lstsq(A, b) - - R1 = k[0:3] - R2 = k[4:7] - sTx = k[3] - sTy = k[7] - s = (np.linalg.norm(R1) + np.linalg.norm(R2))/2 - t = np.stack([sTx, sTy], axis=0) - - return t, s - -# resize and crop images for face reconstruction -def resize_n_crop_img(img, lm, t, s, target_size=224., mask=None): - w0, h0 = img.size - w = (w0*s).astype(np.int32) - h = (h0*s).astype(np.int32) - left = (w/2 - target_size/2 + float((t[0] - w0/2)*s)).astype(np.int32) - right = left + target_size - up = (h/2 - target_size/2 + float((h0/2 - t[1])*s)).astype(np.int32) - below = up + target_size - - img = img.resize((w, h), resample=Image.BICUBIC) - img = img.crop((left, up, right, below)) - - if mask is not None: - mask = mask.resize((w, h), resample=Image.BICUBIC) - mask = mask.crop((left, up, right, below)) - - lm = np.stack([lm[:, 0] - t[0] + w0/2, lm[:, 1] - - t[1] + h0/2], axis=1)*s - lm = lm - np.reshape( - np.array([(w/2 - target_size/2), (h/2-target_size/2)]), [1, 2]) - - return img, lm, mask - -# utils for face reconstruction -def extract_5p(lm): - lm_idx = np.array([31, 37, 40, 43, 46, 49, 55]) - 1 - lm5p = np.stack([lm[lm_idx[0], :], np.mean(lm[lm_idx[[1, 2]], :], 0), np.mean( - lm[lm_idx[[3, 4]], :], 0), lm[lm_idx[5], :], lm[lm_idx[6], :]], axis=0) - lm5p = lm5p[[1, 2, 0, 3, 4], :] - return lm5p - -# utils for face reconstruction -def align_img(img, lm, lm3D, mask=None, target_size=224., rescale_factor=102.): - """ - Return: - transparams --numpy.array (raw_W, raw_H, scale, tx, ty) - img_new --PIL.Image (target_size, target_size, 3) - lm_new --numpy.array (68, 2), y direction is opposite to v direction - mask_new --PIL.Image (target_size, target_size) - - Parameters: - img --PIL.Image (raw_H, raw_W, 3) - lm --numpy.array (68, 2), y direction is opposite to v direction - lm3D --numpy.array (5, 3) - mask --PIL.Image (raw_H, raw_W, 3) - """ - - w0, h0 = img.size - if lm.shape[0] != 5: - lm5p = extract_5p(lm) - else: - lm5p = lm - - # calculate translation and scale factors using 5 facial landmarks and standard landmarks of a 3D face - t, s = POS(lm5p.transpose(), lm3D.transpose()) - s = rescale_factor/s - - # processing the image - img_new, lm_new, mask_new = resize_n_crop_img(img, lm, t, s, target_size=target_size, mask=mask) - trans_params = np.array([w0, h0, s, t[0], t[1]]) - - return trans_params, img_new, lm_new, mask_new diff --git a/spaces/vivym/image-matting-app/ppmatting/utils/__init__.py b/spaces/vivym/image-matting-app/ppmatting/utils/__init__.py deleted file mode 
100644 index 79717c71036b5b730cce8548bc27f6fef7222c21..0000000000000000000000000000000000000000 --- a/spaces/vivym/image-matting-app/ppmatting/utils/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -from .estimate_foreground_ml import estimate_foreground_ml -from .utils import get_files, get_image_list, mkdir diff --git a/spaces/weiyuanchen/stabilityai-stable-diffusion-2-1/README.md b/spaces/weiyuanchen/stabilityai-stable-diffusion-2-1/README.md deleted file mode 100644 index dc02d518a9f8852a798a06b8c9734746283479a8..0000000000000000000000000000000000000000 --- a/spaces/weiyuanchen/stabilityai-stable-diffusion-2-1/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Stabilityai Stable Diffusion 2 1 -emoji: 👀 -colorFrom: pink -colorTo: purple -sdk: gradio -sdk_version: 3.28.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/wffcyrus/MetaGPT-v1/metagpt/provider/metagpt_llm_api.py b/spaces/wffcyrus/MetaGPT-v1/metagpt/provider/metagpt_llm_api.py deleted file mode 100644 index c27e7132da336336c608d79d606111fff7c75538..0000000000000000000000000000000000000000 --- a/spaces/wffcyrus/MetaGPT-v1/metagpt/provider/metagpt_llm_api.py +++ /dev/null @@ -1,33 +0,0 @@ -# -*- coding: utf-8 -*- -""" -@Time : 2023/8/30 -@Author : mashenquan -@File : metagpt_llm_api.py -@Desc : MetaGPT LLM related APIs -""" - -import openai - -from metagpt.config import CONFIG -from metagpt.provider import OpenAIGPTAPI -from metagpt.provider.openai_api import RateLimiter - - -class MetaGPTLLMAPI(OpenAIGPTAPI): - """MetaGPT LLM api""" - - def __init__(self): - self.__init_openai() - self.llm = openai - self.model = CONFIG.METAGPT_API_MODEL - self.auto_max_tokens = False - RateLimiter.__init__(self, rpm=self.rpm) - - def __init_openai(self, *args, **kwargs): - openai.api_key = CONFIG.METAGPT_API_KEY - if CONFIG.METAGPT_API_BASE: - openai.api_base = CONFIG.METAGPT_API_BASE - if CONFIG.METAGPT_API_TYPE: - openai.api_type = CONFIG.METAGPT_API_TYPE - openai.api_version = CONFIG.METAGPT_API_VERSION - self.rpm = int(CONFIG.RPM) if CONFIG.RPM else 10 diff --git a/spaces/wffcyrus/MetaGPT-v1/tests/metagpt/roles/test_project_manager.py b/spaces/wffcyrus/MetaGPT-v1/tests/metagpt/roles/test_project_manager.py deleted file mode 100644 index ebda5901da4163429f0f446e6b00d37571e34c49..0000000000000000000000000000000000000000 --- a/spaces/wffcyrus/MetaGPT-v1/tests/metagpt/roles/test_project_manager.py +++ /dev/null @@ -1,19 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -""" -@Time : 2023/5/12 10:23 -@Author : alexanderwu -@File : test_project_manager.py -""" -import pytest - -from metagpt.logs import logger -from metagpt.roles import ProjectManager -from tests.metagpt.roles.mock import MockMessages - - -@pytest.mark.asyncio -async def test_project_manager(): - project_manager = ProjectManager() - rsp = await project_manager.handle(MockMessages.system_design) - logger.info(rsp) diff --git a/spaces/widged/text-classification/app.py b/spaces/widged/text-classification/app.py deleted file mode 100644 index 1c15cb7bd15d51487336bf2064b2a35c10d2bd17..0000000000000000000000000000000000000000 --- a/spaces/widged/text-classification/app.py +++ /dev/null @@ -1,45 +0,0 @@ -import streamlit as st -from transformers import pipeline -import spacy -from spacy import displacy -import plotly.express as px -import numpy as np -st.set_page_config(page_title="Text Classification") -st.title("Text Classification'") -st.write("_This web application is 
intended for educational use, please do not upload any sensitive information._") -st.write("Placing a piece of text into one or more categories.") - -@st.cache(allow_output_mutation=True, show_spinner=False) -def Loading_Classifier(): - class1 = pipeline("zero-shot-classification",framework="pt") - return class1 - -def plot_result(top_topics, scores): - top_topics = np.array(top_topics) - scores = np.array(scores) - scores *= 100 - fig = px.bar(x=scores, y=top_topics, orientation='h', - labels={'x': 'Probability', 'y': 'Category'}, - text=scores, - range_x=(0,115), - title='Top Predictions', - color=np.linspace(0,1,len(scores)), - color_continuous_scale="Bluered") - fig.update(layout_coloraxis_showscale=False) - fig.update_traces(texttemplate='%{text:0.1f}%', textposition='outside') - st.plotly_chart(fig) - -with st.spinner(text="Please wait for the models to load. This could take up to 60 seconds."): - class1 = Loading_Classifier() - -cat1 = st.text_input('Enter each possible category name (separated by a comma). Maximum 5 categories.') -text = st.text_area('Enter Text Below:', height=200) -submit = st.button('Generate') -if submit: - st.subheader("Classification Results:") - labels1 = cat1.strip().split(',') - result = class1(text, candidate_labels=labels1) - cat1name = result['labels'][0] - cat1prob = result['scores'][0] - st.write('Category: {} | Probability: {:.1f}%'.format(cat1name,(cat1prob*100))) - plot_result(result['labels'][::-1][-10:], result['scores'][::-1][-10:]) diff --git a/spaces/wikidere/crying/Dockerfile b/spaces/wikidere/crying/Dockerfile deleted file mode 100644 index 6c01c09373883afcb4ea34ae2d316cd596e1737b..0000000000000000000000000000000000000000 --- a/spaces/wikidere/crying/Dockerfile +++ /dev/null @@ -1,21 +0,0 @@ -FROM node:18-bullseye-slim - -RUN apt-get update && \ - -apt-get install -y git - -RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app - -WORKDIR /app - -RUN npm install - -COPY Dockerfile greeting.md* .env* ./ - -RUN npm run build - -EXPOSE 7860 - -ENV NODE_ENV=production - -CMD [ "npm", "start" ] \ No newline at end of file diff --git a/spaces/wilson1/bingo/src/components/ui/badge.tsx b/spaces/wilson1/bingo/src/components/ui/badge.tsx deleted file mode 100644 index d9a84b394090e5b4b3bd34f6135b9a2f2ead0aa2..0000000000000000000000000000000000000000 --- a/spaces/wilson1/bingo/src/components/ui/badge.tsx +++ /dev/null @@ -1,36 +0,0 @@ -import * as React from 'react' -import { cva, type VariantProps } from 'class-variance-authority' - -import { cn } from '@/lib/utils' - -const badgeVariants = cva( - 'inline-flex items-center rounded-full border px-2.5 py-0.5 text-xs font-semibold transition-colors focus:outline-none focus:ring-2 focus:ring-ring focus:ring-offset-2', - { - variants: { - variant: { - default: - 'border-transparent bg-primary text-primary-foreground hover:bg-primary/80', - secondary: - 'border-transparent bg-secondary text-secondary-foreground hover:bg-secondary/80', - destructive: - 'border-transparent bg-destructive text-destructive-foreground hover:bg-destructive/80', - outline: 'text-foreground' - } - }, - defaultVariants: { - variant: 'default' - } - } -) - -export interface BadgeProps - extends React.HTMLAttributes, - VariantProps {} - -function Badge({ className, variant, ...props }: BadgeProps) { - return ( -
            - ) -} - -export { Badge, badgeVariants } diff --git a/spaces/wuhuik/bingo/src/components/button-scroll-to-bottom.tsx b/spaces/wuhuik/bingo/src/components/button-scroll-to-bottom.tsx deleted file mode 100644 index b68ab9c0e48320c356e51a52d11b9ca63909e6c5..0000000000000000000000000000000000000000 --- a/spaces/wuhuik/bingo/src/components/button-scroll-to-bottom.tsx +++ /dev/null @@ -1,34 +0,0 @@ -'use client' - -import * as React from 'react' - -import { cn } from '@/lib/utils' -import { useAtBottom } from '@/lib/hooks/use-at-bottom' -import { Button, type ButtonProps } from '@/components/ui/button' -import { IconArrowDown } from '@/components/ui/icons' - -export function ButtonScrollToBottom({ className, ...props }: ButtonProps) { - const isAtBottom = useAtBottom() - - return ( - - ) -} diff --git a/spaces/xangma/chat-pykg/ingest.py b/spaces/xangma/chat-pykg/ingest.py deleted file mode 100644 index 541fef7f54707a69cbb02c1552b3d780c2489147..0000000000000000000000000000000000000000 --- a/spaces/xangma/chat-pykg/ingest.py +++ /dev/null @@ -1,226 +0,0 @@ -# chat-pykg/ingest.py -import tempfile -import gradio as gr -from langchain.document_loaders import SitemapLoader, ReadTheDocsLoader, TextLoader -from langchain.embeddings import OpenAIEmbeddings, HuggingFaceEmbeddings -from langchain.text_splitter import RecursiveCharacterTextSplitter, PythonCodeTextSplitter, MarkdownTextSplitter, TextSplitter -from langchain.vectorstores.faiss import FAISS -import os -from langchain.vectorstores import Chroma -import shutil -from pathlib import Path -import subprocess -import chromadb -import magic -from typing import Any, Dict, Iterable, List, Optional, Type, TypeVar -from pydantic import Extra, Field, root_validator -import logging -logger = logging.getLogger() -from langchain.docstore.document import Document -import numpy as np - -def embedding_chooser(embedding_radio): - if embedding_radio == "Sentence Transformers": - embedding_function = HuggingFaceEmbeddings() - elif embedding_radio == "OpenAI": - embedding_function = OpenAIEmbeddings() - else: - embedding_function = HuggingFaceEmbeddings() - return embedding_function - -# Monkeypatch pending PR -def _merge_splits(self, splits: Iterable[str], separator: str) -> List[str]: - # We now want to combine these smaller pieces into medium size - # chunks to send to the LLM. 
- separator_len = self._length_function(separator) - - docs = [] - current_doc: List[str] = [] - total = 0 - for index, d in enumerate(splits): - _len = self._length_function(d) - if ( - total + _len + (separator_len if len(current_doc) > 0 else 0) - > self._chunk_size - ): - if total > self._chunk_size: - logger.warning( - f"Created a chunk of size {total}, " - f"which is longer than the specified {self._chunk_size}" - ) - if len(current_doc) > 0: - doc = self._join_docs(current_doc, separator) - if doc is not None: - docs.append(doc) - # Keep on popping if: - # - we have a larger chunk than in the chunk overlap - # - or if we still have any chunks and the length is long - while total > self._chunk_overlap or ( - total + _len + (separator_len if len(current_doc) > 0 else 0) - > self._chunk_size - and total > 0 - ): - total -= self._length_function(current_doc[0]) + ( - separator_len if len(current_doc) > 1 else 0 - ) - current_doc = current_doc[1:] - - if index > 0: - current_doc.append(separator + d) - else: - current_doc.append(d) - total += _len + (separator_len if len(current_doc) > 1 else 0) - doc = self._join_docs(current_doc, separator) - if doc is not None: - docs.append(doc) - return docs - -def get_text(content): - relevant_part = content.find("div", {"class": "markdown"}) - if relevant_part is not None: - return relevant_part.get_text(separator=" ") - else: - return "" - -def ingest_docs(all_collections_state, urls, chunk_size, chunk_overlap, vectorstore_radio, embedding_radio, debug=False): - cleared_list = urls.copy() - def sanitize_folder_name(folder_name): - if folder_name != '': - folder_name = folder_name.strip().rstrip('/') - else: - folder_name = '.' # current directory - return folder_name - - def is_hidden(path): - return os.path.basename(path).startswith('.') - - embedding_function = embedding_chooser(embedding_radio) - all_docs = [] - shutil.rmtree('downloaded/', ignore_errors=True) - known_exts = ["py", "md"] - # Initialize text splitters - py_splitter = PythonCodeTextSplitter(chunk_size=int(chunk_size), chunk_overlap=int(chunk_overlap)) - text_splitter = RecursiveCharacterTextSplitter(chunk_size=int(chunk_size), chunk_overlap=int(chunk_overlap)) - md_splitter = MarkdownTextSplitter(chunk_size=int(chunk_size), chunk_overlap=int(chunk_overlap)) - py_splitter._merge_splits = _merge_splits.__get__(py_splitter, TextSplitter) - # Process input URLs - urls = [[url.strip(), [sanitize_folder_name(folder) for folder in url_folders.split(',')]] for url, url_folders in urls] - for j in range(len(urls)): - orgrepo = urls[j][0] - repo_folders = urls[j][1] - if orgrepo == '': - continue - if orgrepo.replace('/','-') in all_collections_state: - logging.info(f"Skipping {orgrepo} as it is already in the database") - continue - documents_split = [] - documents = [] - paths = [] - paths_by_ext = {} - docs_by_ext = {} - for ext in known_exts + ["other"]: - docs_by_ext[ext] = [] - paths_by_ext[ext] = [] - - if orgrepo[0] == '/' or orgrepo[0] == '.': - # Ingest local folder - local_repo_path = sanitize_folder_name(orgrepo[1:]) - else: - # Ingest remote git repo - org = orgrepo.split('/')[0] - repo = orgrepo.split('/')[1] - repo_url = f"https://github.com/{org}/{repo}.git" - local_repo_path = os.path.join('.downloaded', orgrepo) if debug else tempfile.mkdtemp() - - # Initialize the Git repository - subprocess.run(["git", "init"], cwd=local_repo_path) - # Add the remote repository - subprocess.run(["git", "remote", "add", "-f", "origin", repo_url], cwd=local_repo_path) - # Enable 
sparse-checkout - subprocess.run(["git", "config", "core.sparseCheckout", "true"], cwd=local_repo_path) - # Specify the folder to checkout - cmd = ["git", "sparse-checkout", "set"] + [i for i in repo_folders] - subprocess.run(cmd, cwd=local_repo_path) - # Check if branch is called main or master - - # Checkout the desired branch - res = subprocess.run(["git", "checkout", 'main'], cwd=local_repo_path) - if res.returncode == 1: - res = subprocess.run(["git", "checkout", "master"], cwd=local_repo_path) - #res = subprocess.run(["cp", "-r", (Path(local_repo_path) / repo_folders[i]).as_posix(), '/'.join(destination.split('/')[:-1])])# - # Iterate through files and process them - if local_repo_path == '.': - orgrepo='chat-pykg' - for root, dirs, files in os.walk(local_repo_path): - dirs[:] = [d for d in dirs if not is_hidden(d)] # Ignore hidden directories - for file in files: - if is_hidden(file): - continue - file_path = os.path.join(root, file) - rel_file_path = os.path.relpath(file_path, local_repo_path) - try: - if '.' not in rel_file_path: - inferred_filetype = magic.from_file(file_path, mime=True) - if "python" in inferred_filetype or "text/plain" in inferred_filetype: - ext = "py" - else: - ext = "other" - else: - ext = rel_file_path.split('.')[-1] - if docs_by_ext.get(ext) is None: - ext = "other" - doc = TextLoader(os.path.join(local_repo_path, rel_file_path)).load()[0] - doc.metadata["source"] = os.path.join(orgrepo, rel_file_path) - docs_by_ext[ext].append(doc) - paths_by_ext[ext].append(rel_file_path) - except Exception as e: - continue - for ext in docs_by_ext.keys(): - if ext == "py": - documents_split += py_splitter.split_documents(docs_by_ext[ext]) - documents += docs_by_ext[ext] - if ext == "md": - documents_split += md_splitter.split_documents(docs_by_ext[ext]) - documents += docs_by_ext[ext] - # else: - # documents += text_splitter.split_documents(docs_by_ext[ext] - all_docs += documents_split - # For each document, add the metadata to the page_content - for doc in documents_split: - if local_repo_path != '.': - doc.metadata["source"] = doc.metadata["source"].replace(local_repo_path, "") - if doc.metadata["source"] == '/': - doc.metadata["source"] = doc.metadata["source"][1:] - doc.page_content = f'# source:{doc.metadata["source"]}\n{doc.page_content}' - for doc in documents: - if local_repo_path != '.': - doc.metadata["source"] = doc.metadata["source"].replace(local_repo_path, "") - if doc.metadata["source"] == '/': - doc.metadata["source"] = doc.metadata["source"][1:] - doc.page_content = f'# source:{doc.metadata["source"]}\n{doc.page_content}' - - if type(embedding_radio) == gr.Radio: - embedding_radio = embedding_radio.value - persist_directory = os.path.join(".persisted_data", embedding_radio.replace(' ','_')) - persist_directory_raw = Path('.persisted_data_raw') - persist_directory_raw.mkdir(parents=True, exist_ok=True) - collection_name = orgrepo.replace('/','-') - - if vectorstore_radio == 'Chroma': - collection = Chroma.from_documents(documents=documents_split, collection_name=collection_name, embedding=embedding_function, persist_directory=persist_directory) - collection.persist() - - if vectorstore_radio == 'raw': - # Persist the raw documents - docarr = np.array([doc.page_content for doc in documents_split]) - np.save(os.path.join(persist_directory_raw, f"{collection_name}.npy"), docarr) - # with open(os.path.join(persist_directory_raw, f"{collection_name}"), "w") as f: - # for doc in documents: - # f.write(doc.page_content) - - 
all_collections_state.append(collection_name) - cleared_list[j][0], cleared_list[j][1] = '', '' - return all_collections_state, gr.update(value=cleared_list) - -if __name__ == "__main__": - ingest_docs() diff --git "a/spaces/xwsm/gpt/crazy_functions/\350\247\243\346\236\220JupyterNotebook.py" "b/spaces/xwsm/gpt/crazy_functions/\350\247\243\346\236\220JupyterNotebook.py" deleted file mode 100644 index b4bcd56109b42d3023f24eade7c0cd5671d3c5a4..0000000000000000000000000000000000000000 --- "a/spaces/xwsm/gpt/crazy_functions/\350\247\243\346\236\220JupyterNotebook.py" +++ /dev/null @@ -1,146 +0,0 @@ -from toolbox import update_ui -from toolbox import CatchException, report_execption, write_results_to_file -fast_debug = True - - -class PaperFileGroup(): - def __init__(self): - self.file_paths = [] - self.file_contents = [] - self.sp_file_contents = [] - self.sp_file_index = [] - self.sp_file_tag = [] - - # count_token - from request_llm.bridge_all import model_info - enc = model_info["gpt-3.5-turbo"]['tokenizer'] - def get_token_num(txt): return len( - enc.encode(txt, disallowed_special=())) - self.get_token_num = get_token_num - - def run_file_split(self, max_token_limit=1900): - """ - 将长文本分离开来 - """ - for index, file_content in enumerate(self.file_contents): - if self.get_token_num(file_content) < max_token_limit: - self.sp_file_contents.append(file_content) - self.sp_file_index.append(index) - self.sp_file_tag.append(self.file_paths[index]) - else: - from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf - segments = breakdown_txt_to_satisfy_token_limit_for_pdf( - file_content, self.get_token_num, max_token_limit) - for j, segment in enumerate(segments): - self.sp_file_contents.append(segment) - self.sp_file_index.append(index) - self.sp_file_tag.append( - self.file_paths[index] + f".part-{j}.txt") - - - -def parseNotebook(filename, enable_markdown=1): - import json - - CodeBlocks = [] - with open(filename, 'r', encoding='utf-8', errors='replace') as f: - notebook = json.load(f) - for cell in notebook['cells']: - if cell['cell_type'] == 'code' and cell['source']: - # remove blank lines - cell['source'] = [line for line in cell['source'] if line.strip() - != ''] - CodeBlocks.append("".join(cell['source'])) - elif enable_markdown and cell['cell_type'] == 'markdown' and cell['source']: - cell['source'] = [line for line in cell['source'] if line.strip() - != ''] - CodeBlocks.append("Markdown:"+"".join(cell['source'])) - - Code = "" - for idx, code in enumerate(CodeBlocks): - Code += f"This is {idx+1}th code block: \n" - Code += code+"\n" - - return Code - - -def ipynb解释(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt): - from .crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency - - if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg") - enable_markdown = plugin_kwargs.get("advanced_arg", "1") - try: - enable_markdown = int(enable_markdown) - except ValueError: - enable_markdown = 1 - - pfg = PaperFileGroup() - - for fp in file_manifest: - file_content = parseNotebook(fp, enable_markdown=enable_markdown) - pfg.file_paths.append(fp) - pfg.file_contents.append(file_content) - - # <-------- 拆分过长的IPynb文件 ----------> - pfg.run_file_split(max_token_limit=1024) - n_split = len(pfg.sp_file_contents) - - inputs_array = [r"This is a Jupyter Notebook file, tell me about Each Block in Chinese. Focus Just On Code." 
+ - r"If a block starts with `Markdown` which means it's a markdown block in ipynbipynb. " + - r"Start a new line for a block and block num use Chinese." + - f"\n\n{frag}" for frag in pfg.sp_file_contents] - inputs_show_user_array = [f"{f}的分析如下" for f in pfg.sp_file_tag] - sys_prompt_array = ["You are a professional programmer."] * n_split - - gpt_response_collection = yield from request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency( - inputs_array=inputs_array, - inputs_show_user_array=inputs_show_user_array, - llm_kwargs=llm_kwargs, - chatbot=chatbot, - history_array=[[""] for _ in range(n_split)], - sys_prompt_array=sys_prompt_array, - # max_workers=5, # OpenAI所允许的最大并行过载 - scroller_max_len=80 - ) - - # <-------- 整理结果,退出 ----------> - block_result = " \n".join(gpt_response_collection) - chatbot.append(("解析的结果如下", block_result)) - history.extend(["解析的结果如下", block_result]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - # <-------- 写入文件,退出 ----------> - res = write_results_to_file(history) - chatbot.append(("完成了吗?", res)) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - -@CatchException -def 解析ipynb文件(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - chatbot.append([ - "函数插件功能?", - "对IPynb文件进行解析。Contributor: codycjy."]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - history = [] # 清空历史 - import glob - import os - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": - txt = '空空如也的输入栏' - report_execption(chatbot, history, - a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - if txt.endswith('.ipynb'): - file_manifest = [txt] - else: - file_manifest = [f for f in glob.glob( - f'{project_folder}/**/*.ipynb', recursive=True)] - if len(file_manifest) == 0: - report_execption(chatbot, history, - a=f"解析项目: {txt}", b=f"找不到任何.ipynb文件: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - yield from ipynb解释(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, ) diff --git a/spaces/yangheng/Super-Resolution-Anime-Diffusion/Waifu2x/Models.py b/spaces/yangheng/Super-Resolution-Anime-Diffusion/Waifu2x/Models.py deleted file mode 100644 index 8072a01ed3c242832a60160e60a03f025f198193..0000000000000000000000000000000000000000 --- a/spaces/yangheng/Super-Resolution-Anime-Diffusion/Waifu2x/Models.py +++ /dev/null @@ -1,450 +0,0 @@ -import json -from collections import OrderedDict -from math import exp - -from .Common import * - - -# +++++++++++++++++++++++++++++++++++++ -# FP16 Training -# ------------------------------------- -# Modified from Nvidia/Apex -# https://github.com/NVIDIA/apex/blob/master/apex/fp16_utils/fp16util.py - - -class tofp16(nn.Module): - def __init__(self): - super(tofp16, self).__init__() - - def forward(self, input): - if input.is_cuda: - return input.half() - else: # PyTorch 1.0 doesn't support fp16 in CPU - return input.float() - - -def BN_convert_float(module): - if isinstance(module, torch.nn.modules.batchnorm._BatchNorm): - module.float() - for child in module.children(): - BN_convert_float(child) - return module - - -def network_to_half(network): - return nn.Sequential(tofp16(), BN_convert_float(network.half())) - - -# warnings.simplefilter('ignore') - -# +++++++++++++++++++++++++++++++++++++ -# DCSCN -# ------------------------------------- - - -class DCSCN(BaseModule): - # https://github.com/jiny2001/dcscn-super-resolution - 
def __init__( - self, - color_channel=3, - up_scale=2, - feature_layers=12, - first_feature_filters=196, - last_feature_filters=48, - reconstruction_filters=128, - up_sampler_filters=32, - ): - super(DCSCN, self).__init__() - self.total_feature_channels = 0 - self.total_reconstruct_filters = 0 - self.upscale = up_scale - - self.act_fn = nn.SELU(inplace=False) - self.feature_block = self.make_feature_extraction_block( - color_channel, feature_layers, first_feature_filters, last_feature_filters - ) - - self.reconstruction_block = self.make_reconstruction_block( - reconstruction_filters - ) - self.up_sampler = self.make_upsampler(up_sampler_filters, color_channel) - self.selu_init_params() - - def selu_init_params(self): - for i in self.modules(): - if isinstance(i, nn.Conv2d): - i.weight.data.normal_(0.0, 1.0 / sqrt(i.weight.numel())) - if i.bias is not None: - i.bias.data.fill_(0) - - def conv_block(self, in_channel, out_channel, kernel_size): - m = OrderedDict( - [ - # ("Padding", nn.ReplicationPad2d((kernel_size - 1) // 2)), - ( - "Conv2d", - nn.Conv2d( - in_channel, - out_channel, - kernel_size=kernel_size, - padding=(kernel_size - 1) // 2, - ), - ), - ("Activation", self.act_fn), - ] - ) - - return nn.Sequential(m) - - def make_feature_extraction_block( - self, color_channel, num_layers, first_filters, last_filters - ): - # input layer - feature_block = [ - ("Feature 1", self.conv_block(color_channel, first_filters, 3)) - ] - # exponential decay - # rest layers - alpha_rate = log(first_filters / last_filters) / (num_layers - 1) - filter_nums = [ - round(first_filters * exp(-alpha_rate * i)) for i in range(num_layers) - ] - - self.total_feature_channels = sum(filter_nums) - - layer_filters = [ - [filter_nums[i], filter_nums[i + 1], 3] for i in range(num_layers - 1) - ] - - feature_block.extend( - [ - ("Feature {}".format(index + 2), self.conv_block(*x)) - for index, x in enumerate(layer_filters) - ] - ) - return nn.Sequential(OrderedDict(feature_block)) - - def make_reconstruction_block(self, num_filters): - B1 = self.conv_block(self.total_feature_channels, num_filters // 2, 1) - B2 = self.conv_block(num_filters // 2, num_filters, 3) - m = OrderedDict( - [ - ("A", self.conv_block(self.total_feature_channels, num_filters, 1)), - ("B", nn.Sequential(*[B1, B2])), - ] - ) - self.total_reconstruct_filters = num_filters * 2 - return nn.Sequential(m) - - def make_upsampler(self, out_channel, color_channel): - out = out_channel * self.upscale**2 - m = OrderedDict( - [ - ( - "Conv2d_block", - self.conv_block(self.total_reconstruct_filters, out, kernel_size=3), - ), - ("PixelShuffle", nn.PixelShuffle(self.upscale)), - ( - "Conv2d", - nn.Conv2d( - out_channel, color_channel, kernel_size=3, padding=1, bias=False - ), - ), - ] - ) - - return nn.Sequential(m) - - def forward(self, x): - # residual learning - lr, lr_up = x - feature = [] - for layer in self.feature_block.children(): - lr = layer(lr) - feature.append(lr) - feature = torch.cat(feature, dim=1) - - reconstruction = [ - layer(feature) for layer in self.reconstruction_block.children() - ] - reconstruction = torch.cat(reconstruction, dim=1) - - lr = self.up_sampler(reconstruction) - return lr + lr_up - - -# +++++++++++++++++++++++++++++++++++++ -# CARN -# ------------------------------------- - - -class CARN_Block(BaseModule): - def __init__( - self, - channels, - kernel_size=3, - padding=1, - dilation=1, - groups=1, - activation=nn.SELU(), - repeat=3, - SEBlock=False, - conv=nn.Conv2d, - single_conv_size=1, - single_conv_group=1, - ): - 
super(CARN_Block, self).__init__() - m = [] - for i in range(repeat): - m.append( - ResidualFixBlock( - channels, - channels, - kernel_size=kernel_size, - padding=padding, - dilation=dilation, - groups=groups, - activation=activation, - conv=conv, - ) - ) - if SEBlock: - m.append(SpatialChannelSqueezeExcitation(channels, reduction=channels)) - self.blocks = nn.Sequential(*m) - self.singles = nn.Sequential( - *[ - ConvBlock( - channels * (i + 2), - channels, - kernel_size=single_conv_size, - padding=(single_conv_size - 1) // 2, - groups=single_conv_group, - activation=activation, - conv=conv, - ) - for i in range(repeat) - ] - ) - - def forward(self, x): - c0 = x - for block, single in zip(self.blocks, self.singles): - b = block(x) - c0 = c = torch.cat([c0, b], dim=1) - x = single(c) - - return x - - -class CARN(BaseModule): - # Fast, Accurate, and Lightweight Super-Resolution with Cascading Residual Network - # https://github.com/nmhkahn/CARN-pytorch - def __init__( - self, - color_channels=3, - mid_channels=64, - scale=2, - activation=nn.SELU(), - num_blocks=3, - conv=nn.Conv2d, - ): - super(CARN, self).__init__() - - self.color_channels = color_channels - self.mid_channels = mid_channels - self.scale = scale - - self.entry_block = ConvBlock( - color_channels, - mid_channels, - kernel_size=3, - padding=1, - activation=activation, - conv=conv, - ) - self.blocks = nn.Sequential( - *[ - CARN_Block( - mid_channels, - kernel_size=3, - padding=1, - activation=activation, - conv=conv, - single_conv_size=1, - single_conv_group=1, - ) - for _ in range(num_blocks) - ] - ) - self.singles = nn.Sequential( - *[ - ConvBlock( - mid_channels * (i + 2), - mid_channels, - kernel_size=1, - padding=0, - activation=activation, - conv=conv, - ) - for i in range(num_blocks) - ] - ) - - self.upsampler = UpSampleBlock( - mid_channels, scale=scale, activation=activation, conv=conv - ) - self.exit_conv = conv(mid_channels, color_channels, kernel_size=3, padding=1) - - def forward(self, x): - x = self.entry_block(x) - c0 = x - for block, single in zip(self.blocks, self.singles): - b = block(x) - c0 = c = torch.cat([c0, b], dim=1) - x = single(c) - x = self.upsampler(x) - out = self.exit_conv(x) - return out - - -class CARN_V2(CARN): - def __init__( - self, - color_channels=3, - mid_channels=64, - scale=2, - activation=nn.LeakyReLU(0.1), - SEBlock=True, - conv=nn.Conv2d, - atrous=(1, 1, 1), - repeat_blocks=3, - single_conv_size=3, - single_conv_group=1, - ): - super(CARN_V2, self).__init__( - color_channels=color_channels, - mid_channels=mid_channels, - scale=scale, - activation=activation, - conv=conv, - ) - - num_blocks = len(atrous) - m = [] - for i in range(num_blocks): - m.append( - CARN_Block( - mid_channels, - kernel_size=3, - padding=1, - dilation=1, - activation=activation, - SEBlock=SEBlock, - conv=conv, - repeat=repeat_blocks, - single_conv_size=single_conv_size, - single_conv_group=single_conv_group, - ) - ) - - self.blocks = nn.Sequential(*m) - - self.singles = nn.Sequential( - *[ - ConvBlock( - mid_channels * (i + 2), - mid_channels, - kernel_size=single_conv_size, - padding=(single_conv_size - 1) // 2, - groups=single_conv_group, - activation=activation, - conv=conv, - ) - for i in range(num_blocks) - ] - ) - - def forward(self, x): - x = self.entry_block(x) - c0 = x - res = x - for block, single in zip(self.blocks, self.singles): - b = block(x) - c0 = c = torch.cat([c0, b], dim=1) - x = single(c) - x = x + res - x = self.upsampler(x) - out = self.exit_conv(x) - return out - - -# 
+++++++++++++++++++++++++++++++++++++ -# original Waifu2x model -# ------------------------------------- - - -class UpConv_7(BaseModule): - # https://github.com/nagadomi/waifu2x/blob/3c46906cb78895dbd5a25c3705994a1b2e873199/lib/srcnn.lua#L311 - def __init__(self): - super(UpConv_7, self).__init__() - self.act_fn = nn.LeakyReLU(0.1, inplace=False) - self.offset = 7 # because of 0 padding - from torch.nn import ZeroPad2d - - self.pad = ZeroPad2d(self.offset) - m = [ - nn.Conv2d(3, 16, 3, 1, 0), - self.act_fn, - nn.Conv2d(16, 32, 3, 1, 0), - self.act_fn, - nn.Conv2d(32, 64, 3, 1, 0), - self.act_fn, - nn.Conv2d(64, 128, 3, 1, 0), - self.act_fn, - nn.Conv2d(128, 128, 3, 1, 0), - self.act_fn, - nn.Conv2d(128, 256, 3, 1, 0), - self.act_fn, - # in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding= - nn.ConvTranspose2d(256, 3, kernel_size=4, stride=2, padding=3, bias=False), - ] - self.Sequential = nn.Sequential(*m) - - def load_pre_train_weights(self, json_file): - with open(json_file) as f: - weights = json.load(f) - box = [] - for i in weights: - box.append(i["weight"]) - box.append(i["bias"]) - own_state = self.state_dict() - for index, (name, param) in enumerate(own_state.items()): - own_state[name].copy_(torch.FloatTensor(box[index])) - - def forward(self, x): - x = self.pad(x) - return self.Sequential.forward(x) - - -class Vgg_7(UpConv_7): - def __init__(self): - super(Vgg_7, self).__init__() - self.act_fn = nn.LeakyReLU(0.1, inplace=False) - self.offset = 7 - m = [ - nn.Conv2d(3, 32, 3, 1, 0), - self.act_fn, - nn.Conv2d(32, 32, 3, 1, 0), - self.act_fn, - nn.Conv2d(32, 64, 3, 1, 0), - self.act_fn, - nn.Conv2d(64, 64, 3, 1, 0), - self.act_fn, - nn.Conv2d(64, 128, 3, 1, 0), - self.act_fn, - nn.Conv2d(128, 128, 3, 1, 0), - self.act_fn, - nn.Conv2d(128, 3, 3, 1, 0), - ] - self.Sequential = nn.Sequential(*m) diff --git a/spaces/yderre-aubay/midi-player-demo/src/common/song/SongFactory.ts b/spaces/yderre-aubay/midi-player-demo/src/common/song/SongFactory.ts deleted file mode 100644 index be17831a49b57abf0cc0d27e4c0501e3ed88174a..0000000000000000000000000000000000000000 --- a/spaces/yderre-aubay/midi-player-demo/src/common/song/SongFactory.ts +++ /dev/null @@ -1,11 +0,0 @@ -import { conductorTrack, emptyTrack } from "../track" -import Song from "./Song" - -export function emptySong() { - const song = new Song() - song.addTrack(conductorTrack()) - song.addTrack(emptyTrack(0)) - // Empty songs do not need to be saved. 
- song.isSaved = true - return song -} diff --git a/spaces/yeqingmei123/face-test/e4e/datasets/inference_dataset.py b/spaces/yeqingmei123/face-test/e4e/datasets/inference_dataset.py deleted file mode 100644 index fb577d7b538d634f27013c2784d2ea32143154cb..0000000000000000000000000000000000000000 --- a/spaces/yeqingmei123/face-test/e4e/datasets/inference_dataset.py +++ /dev/null @@ -1,25 +0,0 @@ -from torch.utils.data import Dataset -from PIL import Image -from utils import data_utils - - -class InferenceDataset(Dataset): - - def __init__(self, root, opts, transform=None, preprocess=None): - self.paths = sorted(data_utils.make_dataset(root)) - self.transform = transform - self.preprocess = preprocess - self.opts = opts - - def __len__(self): - return len(self.paths) - - def __getitem__(self, index): - from_path = self.paths[index] - if self.preprocess is not None: - from_im = self.preprocess(from_path) - else: - from_im = Image.open(from_path).convert('RGB') - if self.transform: - from_im = self.transform(from_im) - return from_im diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/GroundingDINO/groundingdino/datasets/__init__.py b/spaces/yizhangliu/Grounded-Segment-Anything/GroundingDINO/groundingdino/datasets/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/benchmark/benchmark_utils.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/benchmark/benchmark_utils.py deleted file mode 100644 index a71b1fb65a23efa85642a23b2f7e0ec5c9922826..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/benchmark/benchmark_utils.py +++ /dev/null @@ -1,914 +0,0 @@ -# This file is adapted from the AllenNLP library at https://github.com/allenai/allennlp - -# Copyright 2020 The HuggingFace Team and the AllenNLP authors. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" -Utilities for working with the local dataset cache. -""" - -import copy -import csv -import linecache -import os -import platform -import sys -import warnings -from abc import ABC, abstractmethod -from collections import defaultdict, namedtuple -from datetime import datetime -from multiprocessing import Pipe, Process, Queue -from multiprocessing.connection import Connection -from typing import Callable, Iterable, List, NamedTuple, Optional, Union - -from .. import AutoConfig, PretrainedConfig -from .. 
import __version__ as version -from ..utils import is_psutil_available, is_py3nvml_available, is_tf_available, is_torch_available, logging -from .benchmark_args_utils import BenchmarkArguments - - -if is_torch_available(): - from torch.cuda import empty_cache as torch_empty_cache - -if is_tf_available(): - from tensorflow.python.eager import context as tf_context - -if is_psutil_available(): - import psutil - -if is_py3nvml_available(): - import py3nvml.py3nvml as nvml - -if platform.system() == "Windows": - from signal import CTRL_C_EVENT as SIGKILL -else: - from signal import SIGKILL - - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - - -_is_memory_tracing_enabled = False - -BenchmarkOutput = namedtuple( - "BenchmarkOutput", - [ - "time_inference_result", - "memory_inference_result", - "time_train_result", - "memory_train_result", - "inference_summary", - "train_summary", - ], -) - - -def separate_process_wrapper_fn(func: Callable[[], None], do_multi_processing: bool) -> Callable[[], None]: - """ - This function wraps another function into its own separated process. In order to ensure accurate memory - measurements it is important that the function is executed in a separate process - - Args: - - `func`: (`callable`): function() -> ... generic function which will be executed in its own separate process - - `do_multi_processing`: (`bool`) Whether to run function on separate process or not - """ - - def multi_process_func(*args, **kwargs): - # run function in an individual - # process to get correct memory - def wrapper_func(queue: Queue, *args): - try: - result = func(*args) - except Exception as e: - logger.error(e) - print(e) - result = "N/A" - queue.put(result) - - queue = Queue() - p = Process(target=wrapper_func, args=[queue] + list(args)) - p.start() - result = queue.get() - p.join() - return result - - if do_multi_processing: - logger.info(f"Function {func} is executed in its own process...") - return multi_process_func - else: - return func - - -def is_memory_tracing_enabled(): - global _is_memory_tracing_enabled - return _is_memory_tracing_enabled - - -class Frame(NamedTuple): - """ - `Frame` is a NamedTuple used to gather the current frame state. 
`Frame` has the following fields: - - - 'filename' (string): Name of the file currently executed - - 'module' (string): Name of the module currently executed - - 'line_number' (int): Number of the line currently executed - - 'event' (string): Event that triggered the tracing (default will be "line") - - 'line_text' (string): Text of the line in the python script - """ - - filename: str - module: str - line_number: int - event: str - line_text: str - - -class UsedMemoryState(NamedTuple): - """ - `UsedMemoryState` are named tuples with the following fields: - - - 'frame': a `Frame` namedtuple (see below) storing information on the current tracing frame (current file, - location in current file) - - 'cpu_memory': CPU RSS memory state *before* executing the line - - 'gpu_memory': GPU used memory *before* executing the line (sum for all GPUs or for only `gpus_to_trace` if - provided) - """ - - frame: Frame - cpu_memory: int - gpu_memory: int - - -class Memory(NamedTuple): - """ - `Memory` NamedTuple have a single field `bytes` and you can get a human readable str of the number of mega bytes by - calling `__repr__` - - - `byte` (integer): number of bytes, - """ - - bytes: int - - def __repr__(self) -> str: - return str(bytes_to_mega_bytes(self.bytes)) - - -class MemoryState(NamedTuple): - """ - `MemoryState` are namedtuples listing frame + CPU/GPU memory with the following fields: - - - `frame` (`Frame`): the current frame (see above) - - `cpu`: CPU memory consumed at during the current frame as a `Memory` named tuple - - `gpu`: GPU memory consumed at during the current frame as a `Memory` named tuple - - `cpu_gpu`: CPU + GPU memory consumed at during the current frame as a `Memory` named tuple - """ - - frame: Frame - cpu: Memory - gpu: Memory - cpu_gpu: Memory - - -class MemorySummary(NamedTuple): - """ - `MemorySummary` namedtuple otherwise with the fields: - - - `sequential`: a list of `MemoryState` namedtuple (see below) computed from the provided `memory_trace` by - subtracting the memory after executing each line from the memory before executing said line. - - `cumulative`: a list of `MemoryState` namedtuple (see below) with cumulative increase in memory for each line - obtained by summing repeated memory increase for a line if it's executed several times. The list is sorted - from the frame with the largest memory consumption to the frame with the smallest (can be negative if memory - is released) - - `total`: total memory increase during the full tracing as a `Memory` named tuple (see below). Line with - memory release (negative consumption) are ignored if `ignore_released_memory` is `True` (default). - """ - - sequential: List[MemoryState] - cumulative: List[MemoryState] - current: List[MemoryState] - total: Memory - - -MemoryTrace = List[UsedMemoryState] - - -def measure_peak_memory_cpu(function: Callable[[], None], interval=0.5, device_idx=None) -> int: - """ - measures peak cpu memory consumption of a given `function` running the function for at least interval seconds and - at most 20 * interval seconds. This function is heavily inspired by: `memory_usage` of the package - `memory_profiler`: - https://github.com/pythonprofilers/memory_profiler/blob/895c4ac7a08020d66ae001e24067da6dcea42451/memory_profiler.py#L239 - - Args: - - `function`: (`callable`): function() -> ... 
function without any arguments to measure for which to measure - the peak memory - - - `interval`: (`float`, `optional`, defaults to `0.5`) interval in second for which to measure the memory usage - - - `device_idx`: (`int`, `optional`, defaults to `None`) device id for which to measure gpu usage - - Returns: - - - `max_memory`: (`int`) consumed memory peak in Bytes - """ - - def get_cpu_memory(process_id: int) -> int: - """ - measures current cpu memory usage of a given `process_id` - - Args: - - `process_id`: (`int`) process_id for which to measure memory - - Returns - - - `memory`: (`int`) consumed memory in Bytes - """ - process = psutil.Process(process_id) - try: - meminfo_attr = "memory_info" if hasattr(process, "memory_info") else "get_memory_info" - memory = getattr(process, meminfo_attr)()[0] - except psutil.AccessDenied: - raise ValueError("Error with Psutil.") - return memory - - if not is_psutil_available(): - logger.warning( - "Psutil not installed, we won't log CPU memory usage. " - "Install Psutil (pip install psutil) to use CPU memory tracing." - ) - max_memory = "N/A" - else: - - class MemoryMeasureProcess(Process): - - """ - `MemoryMeasureProcess` inherits from `Process` and overwrites its `run()` method. Used to measure the - memory usage of a process - """ - - def __init__(self, process_id: int, child_connection: Connection, interval: float): - super().__init__() - self.process_id = process_id - self.interval = interval - self.connection = child_connection - self.num_measurements = 1 - self.mem_usage = get_cpu_memory(self.process_id) - - def run(self): - self.connection.send(0) - stop = False - while True: - self.mem_usage = max(self.mem_usage, get_cpu_memory(self.process_id)) - self.num_measurements += 1 - - if stop: - break - - stop = self.connection.poll(self.interval) - - # send results to parent pipe - self.connection.send(self.mem_usage) - self.connection.send(self.num_measurements) - - while True: - # create child, parent connection - child_connection, parent_connection = Pipe() - - # instantiate process - mem_process = MemoryMeasureProcess(os.getpid(), child_connection, interval) - mem_process.start() - - # wait until we get memory - parent_connection.recv() - - try: - # execute function - function() - - # start parent connection - parent_connection.send(0) - - # receive memory and num measurements - max_memory = parent_connection.recv() - num_measurements = parent_connection.recv() - except Exception: - # kill process in a clean way - parent = psutil.Process(os.getpid()) - for child in parent.children(recursive=True): - os.kill(child.pid, SIGKILL) - mem_process.join(0) - raise RuntimeError("Process killed. Error in Process") - - # run process at least 20 * interval or until it finishes - mem_process.join(20 * interval) - - if (num_measurements > 4) or (interval < 1e-6): - break - - # reduce interval - interval /= 10 - - return max_memory - - -def start_memory_tracing( - modules_to_trace: Optional[Union[str, Iterable[str]]] = None, - modules_not_to_trace: Optional[Union[str, Iterable[str]]] = None, - events_to_trace: str = "line", - gpus_to_trace: Optional[List[int]] = None, -) -> MemoryTrace: - """ - Setup line-by-line tracing to record rss mem (RAM) at each line of a module or sub-module. See `./benchmark.py` for - usage examples. Current memory consumption is returned using psutil and in particular is the RSS memory "Resident - Set Size” (the non-swapped physical memory the process is using). 
See - https://psutil.readthedocs.io/en/latest/#psutil.Process.memory_info - - Args: - - `modules_to_trace`: (None, string, list/tuple of string) if None, all events are recorded if string or list - of strings: only events from the listed module/sub-module will be recorded (e.g. 'fairseq' or - 'transformers.models.gpt2.modeling_gpt2') - - `modules_not_to_trace`: (None, string, list/tuple of string) if None, no module is avoided if string or list - of strings: events from the listed module/sub-module will not be recorded (e.g. 'torch') - - `events_to_trace`: string or list of string of events to be recorded (see official python doc for - `sys.settrace` for the list of events) default to line - - `gpus_to_trace`: (optional list, default None) list of GPUs to trace. Default to tracing all GPUs - - Return: - - - `memory_trace` is a list of `UsedMemoryState` for each event (default each line of the traced script). - - - `UsedMemoryState` are named tuples with the following fields: - - - 'frame': a `Frame` namedtuple (see below) storing information on the current tracing frame (current - file, location in current file) - - 'cpu_memory': CPU RSS memory state *before* executing the line - - 'gpu_memory': GPU used memory *before* executing the line (sum for all GPUs or for only - `gpus_to_trace` if provided) - - `Frame` is a namedtuple used by `UsedMemoryState` to list the current frame state. `Frame` has the following - fields: - 'filename' (string): Name of the file currently executed - 'module' (string): Name of the module - currently executed - 'line_number' (int): Number of the line currently executed - 'event' (string): Event that - triggered the tracing (default will be "line") - 'line_text' (string): Text of the line in the python script - - """ - if is_psutil_available(): - process = psutil.Process(os.getpid()) - else: - logger.warning( - "Psutil not installed, we won't log CPU memory usage. " - "Install psutil (pip install psutil) to use CPU memory tracing." - ) - process = None - - if is_py3nvml_available(): - try: - nvml.nvmlInit() - devices = list(range(nvml.nvmlDeviceGetCount())) if gpus_to_trace is None else gpus_to_trace - nvml.nvmlShutdown() - except (OSError, nvml.NVMLError): - logger.warning("Error while initializing communication with GPU. We won't perform GPU memory tracing.") - log_gpu = False - else: - log_gpu = is_torch_available() or is_tf_available() - else: - logger.warning( - "py3nvml not installed, we won't log GPU memory usage. " - "Install py3nvml (pip install py3nvml) to use GPU memory tracing." 
- ) - log_gpu = False - - memory_trace = [] - - def traceit(frame, event, args): - """ - Tracing method executed before running each line in a module or sub-module Record memory allocated in a list - with debugging information - """ - global _is_memory_tracing_enabled - - if not _is_memory_tracing_enabled: - return traceit - - # Filter events - if events_to_trace is not None: - if isinstance(events_to_trace, str) and event != events_to_trace: - return traceit - elif isinstance(events_to_trace, (list, tuple)) and event not in events_to_trace: - return traceit - - if "__name__" not in frame.f_globals: - return traceit - - # Filter modules - name = frame.f_globals["__name__"] - if not isinstance(name, str): - return traceit - else: - # Filter whitelist of modules to trace - if modules_to_trace is not None: - if isinstance(modules_to_trace, str) and modules_to_trace not in name: - return traceit - elif isinstance(modules_to_trace, (list, tuple)) and all(m not in name for m in modules_to_trace): - return traceit - - # Filter blacklist of modules not to trace - if modules_not_to_trace is not None: - if isinstance(modules_not_to_trace, str) and modules_not_to_trace in name: - return traceit - elif isinstance(modules_not_to_trace, (list, tuple)) and any(m in name for m in modules_not_to_trace): - return traceit - - # Record current tracing state (file, location in file...) - lineno = frame.f_lineno - filename = frame.f_globals["__file__"] - if filename.endswith(".pyc") or filename.endswith(".pyo"): - filename = filename[:-1] - line = linecache.getline(filename, lineno).rstrip() - traced_state = Frame(filename, name, lineno, event, line) - - # Record current memory state (rss memory) and compute difference with previous memory state - cpu_mem = 0 - if process is not None: - mem = process.memory_info() - cpu_mem = mem.rss - - gpu_mem = 0 - if log_gpu: - # Clear GPU caches - if is_torch_available(): - torch_empty_cache() - if is_tf_available(): - tf_context.context()._clear_caches() # See https://github.com/tensorflow/tensorflow/issues/20218#issuecomment-416771802 - - # Sum used memory for all GPUs - nvml.nvmlInit() - - for i in devices: - handle = nvml.nvmlDeviceGetHandleByIndex(i) - meminfo = nvml.nvmlDeviceGetMemoryInfo(handle) - gpu_mem += meminfo.used - - nvml.nvmlShutdown() - - mem_state = UsedMemoryState(traced_state, cpu_mem, gpu_mem) - memory_trace.append(mem_state) - - return traceit - - sys.settrace(traceit) - - global _is_memory_tracing_enabled - _is_memory_tracing_enabled = True - - return memory_trace - - -def stop_memory_tracing( - memory_trace: Optional[MemoryTrace] = None, ignore_released_memory: bool = True -) -> Optional[MemorySummary]: - """ - Stop memory tracing cleanly and return a summary of the memory trace if a trace is given. - - Args: - `memory_trace` (optional output of start_memory_tracing, default: None): - memory trace to convert in summary - `ignore_released_memory` (boolean, default: None): - if True we only sum memory increase to compute total memory - - Return: - - - None if `memory_trace` is None - - `MemorySummary` namedtuple otherwise with the fields: - - - `sequential`: a list of `MemoryState` namedtuple (see below) computed from the provided `memory_trace` by - subtracting the memory after executing each line from the memory before executing said line. - - `cumulative`: a list of `MemoryState` namedtuple (see below) with cumulative increase in memory for each - line obtained by summing repeated memory increase for a line if it's executed several times. 
The list is - sorted from the frame with the largest memory consumption to the frame with the smallest (can be negative - if memory is released) - - `total`: total memory increase during the full tracing as a `Memory` named tuple (see below). Line with - memory release (negative consumption) are ignored if `ignore_released_memory` is `True` (default). - - `Memory` named tuple have fields - - - `byte` (integer): number of bytes, - - `string` (string): same as human readable string (ex: "3.5MB") - - `Frame` are namedtuple used to list the current frame state and have the following fields: - - - 'filename' (string): Name of the file currently executed - - 'module' (string): Name of the module currently executed - - 'line_number' (int): Number of the line currently executed - - 'event' (string): Event that triggered the tracing (default will be "line") - - 'line_text' (string): Text of the line in the python script - - `MemoryState` are namedtuples listing frame + CPU/GPU memory with the following fields: - - - `frame` (`Frame`): the current frame (see above) - - `cpu`: CPU memory consumed at during the current frame as a `Memory` named tuple - - `gpu`: GPU memory consumed at during the current frame as a `Memory` named tuple - - `cpu_gpu`: CPU + GPU memory consumed at during the current frame as a `Memory` named tuple - """ - global _is_memory_tracing_enabled - _is_memory_tracing_enabled = False - - if memory_trace is not None and len(memory_trace) > 1: - memory_diff_trace = [] - memory_curr_trace = [] - - cumulative_memory_dict = defaultdict(lambda: [0, 0, 0]) - - for ( - (frame, cpu_mem, gpu_mem), - (next_frame, next_cpu_mem, next_gpu_mem), - ) in zip(memory_trace[:-1], memory_trace[1:]): - cpu_mem_inc = next_cpu_mem - cpu_mem - gpu_mem_inc = next_gpu_mem - gpu_mem - cpu_gpu_mem_inc = cpu_mem_inc + gpu_mem_inc - memory_diff_trace.append( - MemoryState( - frame=frame, - cpu=Memory(cpu_mem_inc), - gpu=Memory(gpu_mem_inc), - cpu_gpu=Memory(cpu_gpu_mem_inc), - ) - ) - - memory_curr_trace.append( - MemoryState( - frame=frame, - cpu=Memory(next_cpu_mem), - gpu=Memory(next_gpu_mem), - cpu_gpu=Memory(next_gpu_mem + next_cpu_mem), - ) - ) - - cumulative_memory_dict[frame][0] += cpu_mem_inc - cumulative_memory_dict[frame][1] += gpu_mem_inc - cumulative_memory_dict[frame][2] += cpu_gpu_mem_inc - - cumulative_memory = sorted( - cumulative_memory_dict.items(), key=lambda x: x[1][2], reverse=True - ) # order by the total CPU + GPU memory increase - cumulative_memory = [ - MemoryState( - frame=frame, - cpu=Memory(cpu_mem_inc), - gpu=Memory(gpu_mem_inc), - cpu_gpu=Memory(cpu_gpu_mem_inc), - ) - for frame, (cpu_mem_inc, gpu_mem_inc, cpu_gpu_mem_inc) in cumulative_memory - ] - - memory_curr_trace = sorted(memory_curr_trace, key=lambda x: x.cpu_gpu.bytes, reverse=True) - - if ignore_released_memory: - total_memory = sum(max(0, step_trace.cpu_gpu.bytes) for step_trace in memory_diff_trace) - else: - total_memory = sum(step_trace.cpu_gpu.bytes for step_trace in memory_diff_trace) - - total_memory = Memory(total_memory) - - return MemorySummary( - sequential=memory_diff_trace, - cumulative=cumulative_memory, - current=memory_curr_trace, - total=total_memory, - ) - - return None - - -def bytes_to_mega_bytes(memory_amount: int) -> int: - """Utility to convert a number of bytes (int) into a number of mega bytes (int)""" - return memory_amount >> 20 - - -class Benchmark(ABC): - """ - Benchmarks is a simple but feature-complete benchmarking script to compare memory and time performance of models in - Transformers. 
- """ - - args: BenchmarkArguments - configs: PretrainedConfig - framework: str - - def __init__(self, args: BenchmarkArguments = None, configs: PretrainedConfig = None): - self.args = args - if configs is None: - self.config_dict = { - model_name: AutoConfig.from_pretrained(model_name) for model_name in self.args.model_names - } - else: - self.config_dict = dict(zip(self.args.model_names, configs)) - - warnings.warn( - f"The class {self.__class__} is deprecated. Hugging Face Benchmarking utils" - " are deprecated in general and it is advised to use external Benchmarking libraries " - " to benchmark Transformer models.", - FutureWarning, - ) - - if self.args.memory and os.getenv("TRANSFORMERS_USE_MULTIPROCESSING") == 0: - logger.warning( - "Memory consumption will not be measured accurately if `args.multi_process` is set to `False.` The" - " flag 'TRANSFORMERS_USE_MULTIPROCESSING' should only be disabled for debugging / testing." - ) - - self._print_fn = None - self._framework_version = None - self._environment_info = None - - @property - def print_fn(self): - if self._print_fn is None: - if self.args.log_print: - - def print_and_log(*args): - with open(self.args.log_filename, "a") as log_file: - log_file.write("".join(args) + "\n") - print(*args) - - self._print_fn = print_and_log - else: - self._print_fn = print - return self._print_fn - - @property - @abstractmethod - def framework_version(self): - pass - - @abstractmethod - def _inference_speed(self, model_name: str, batch_size: int, sequence_length: int) -> float: - pass - - @abstractmethod - def _train_speed(self, model_name: str, batch_size: int, sequence_length: int) -> float: - pass - - @abstractmethod - def _inference_memory( - self, model_name: str, batch_size: int, sequence_length: int - ) -> [Memory, Optional[MemorySummary]]: - pass - - @abstractmethod - def _train_memory( - self, model_name: str, batch_size: int, sequence_length: int - ) -> [Memory, Optional[MemorySummary]]: - pass - - def inference_speed(self, *args, **kwargs) -> float: - return separate_process_wrapper_fn(self._inference_speed, self.args.do_multi_processing)(*args, **kwargs) - - def train_speed(self, *args, **kwargs) -> float: - return separate_process_wrapper_fn(self._train_speed, self.args.do_multi_processing)(*args, **kwargs) - - def inference_memory(self, *args, **kwargs) -> [Memory, Optional[MemorySummary]]: - return separate_process_wrapper_fn(self._inference_memory, self.args.do_multi_processing)(*args, **kwargs) - - def train_memory(self, *args, **kwargs) -> [Memory, Optional[MemorySummary]]: - return separate_process_wrapper_fn(self._train_memory, self.args.do_multi_processing)(*args, **kwargs) - - def run(self): - result_dict = {model_name: {} for model_name in self.args.model_names} - inference_result_time = copy.deepcopy(result_dict) - inference_result_memory = copy.deepcopy(result_dict) - train_result_time = copy.deepcopy(result_dict) - train_result_memory = copy.deepcopy(result_dict) - - for c, model_name in enumerate(self.args.model_names): - self.print_fn(f"{c + 1} / {len(self.args.model_names)}") - - model_dict = { - "bs": self.args.batch_sizes, - "ss": self.args.sequence_lengths, - "result": {i: {} for i in self.args.batch_sizes}, - } - inference_result_time[model_name] = copy.deepcopy(model_dict) - inference_result_memory[model_name] = copy.deepcopy(model_dict) - train_result_time[model_name] = copy.deepcopy(model_dict) - train_result_memory[model_name] = copy.deepcopy(model_dict) - - inference_summary = train_summary = None - - for 
batch_size in self.args.batch_sizes: - for sequence_length in self.args.sequence_lengths: - if self.args.inference: - if self.args.memory: - memory, inference_summary = self.inference_memory(model_name, batch_size, sequence_length) - inference_result_memory[model_name]["result"][batch_size][sequence_length] = memory - if self.args.speed: - time = self.inference_speed(model_name, batch_size, sequence_length) - inference_result_time[model_name]["result"][batch_size][sequence_length] = time - - if self.args.training: - if self.args.memory: - memory, train_summary = self.train_memory(model_name, batch_size, sequence_length) - train_result_memory[model_name]["result"][batch_size][sequence_length] = memory - if self.args.speed: - time = self.train_speed(model_name, batch_size, sequence_length) - train_result_time[model_name]["result"][batch_size][sequence_length] = time - - if self.args.inference: - if self.args.speed: - self.print_fn("\n" + 20 * "=" + ("INFERENCE - SPEED - RESULT").center(40) + 20 * "=") - self.print_results(inference_result_time, type_label="Time in s") - self.save_to_csv(inference_result_time, self.args.inference_time_csv_file) - if self.args.is_tpu: - self.print_fn( - "TPU was used for inference. Note that the time after compilation stabilized (after ~10" - " inferences model.forward(..) calls) was measured." - ) - - if self.args.memory: - self.print_fn("\n" + 20 * "=" + ("INFERENCE - MEMORY - RESULT").center(40) + 20 * "=") - self.print_results(inference_result_memory, type_label="Memory in MB") - self.save_to_csv(inference_result_memory, self.args.inference_memory_csv_file) - - if self.args.trace_memory_line_by_line: - self.print_fn("\n" + 20 * "=" + ("INFERENCE - MEMOMRY - LINE BY LINE - SUMMARY").center(40) + 20 * "=") - self.print_memory_trace_statistics(inference_summary) - - if self.args.training: - if self.args.speed: - self.print_fn("\n" + 20 * "=" + ("TRAIN - SPEED - RESULTS").center(40) + 20 * "=") - self.print_results(train_result_time, "Time in s") - self.save_to_csv(train_result_time, self.args.train_time_csv_file) - if self.args.is_tpu: - self.print_fn( - "TPU was used for training. Note that the time after compilation stabilized (after ~10 train" - " loss=model.forward(...) + loss.backward() calls) was measured." 
- ) - - if self.args.memory: - self.print_fn("\n" + 20 * "=" + ("TRAIN - MEMORY - RESULTS").center(40) + 20 * "=") - self.print_results(train_result_memory, type_label="Memory in MB") - self.save_to_csv(train_result_memory, self.args.train_memory_csv_file) - - if self.args.trace_memory_line_by_line: - self.print_fn("\n" + 20 * "=" + ("TRAIN - MEMOMRY - LINE BY LINE - SUMMARY").center(40) + 20 * "=") - self.print_memory_trace_statistics(train_summary) - - if self.args.env_print: - self.print_fn("\n" + 20 * "=" + ("ENVIRONMENT INFORMATION").center(40) + 20 * "=") - self.print_fn("\n".join([f"- {prop}: {val}" for prop, val in self.environment_info.items()]) + "\n") - - if self.args.save_to_csv: - with open(self.args.env_info_csv_file, mode="w", newline="") as csv_file: - writer = csv.writer(csv_file) - for key, value in self.environment_info.items(): - writer.writerow([key, value]) - - return BenchmarkOutput( - inference_result_time, - inference_result_memory, - train_result_time, - train_result_memory, - inference_summary, - train_summary, - ) - - @property - def environment_info(self): - if self._environment_info is None: - info = {} - info["transformers_version"] = version - info["framework"] = self.framework - if self.framework == "PyTorch": - info["use_torchscript"] = self.args.torchscript - if self.framework == "TensorFlow": - info["eager_mode"] = self.args.eager_mode - info["use_xla"] = self.args.use_xla - info["framework_version"] = self.framework_version - info["python_version"] = platform.python_version() - info["system"] = platform.system() - info["cpu"] = platform.processor() - info["architecture"] = platform.architecture()[0] - info["date"] = datetime.date(datetime.now()) - info["time"] = datetime.time(datetime.now()) - info["fp16"] = self.args.fp16 - info["use_multiprocessing"] = self.args.do_multi_processing - info["only_pretrain_model"] = self.args.only_pretrain_model - - if is_psutil_available(): - info["cpu_ram_mb"] = bytes_to_mega_bytes(psutil.virtual_memory().total) - else: - logger.warning( - "Psutil not installed, we won't log available CPU memory. " - "Install psutil (pip install psutil) to log available CPU memory." - ) - info["cpu_ram_mb"] = "N/A" - - info["use_gpu"] = self.args.is_gpu - if self.args.is_gpu: - info["num_gpus"] = 1 # TODO(PVP) Currently only single GPU is supported - if is_py3nvml_available(): - nvml.nvmlInit() - handle = nvml.nvmlDeviceGetHandleByIndex(self.args.device_idx) - info["gpu"] = nvml.nvmlDeviceGetName(handle) - info["gpu_ram_mb"] = bytes_to_mega_bytes(nvml.nvmlDeviceGetMemoryInfo(handle).total) - info["gpu_power_watts"] = nvml.nvmlDeviceGetPowerManagementLimit(handle) / 1000 - info["gpu_performance_state"] = nvml.nvmlDeviceGetPerformanceState(handle) - nvml.nvmlShutdown() - else: - logger.warning( - "py3nvml not installed, we won't log GPU memory usage. " - "Install py3nvml (pip install py3nvml) to log information about GPU." 
- ) - info["gpu"] = "N/A" - info["gpu_ram_mb"] = "N/A" - info["gpu_power_watts"] = "N/A" - info["gpu_performance_state"] = "N/A" - - info["use_tpu"] = self.args.is_tpu - # TODO(PVP): See if we can add more information about TPU - # see: https://github.com/pytorch/xla/issues/2180 - - self._environment_info = info - return self._environment_info - - def print_results(self, result_dict, type_label): - self.print_fn(80 * "-") - self.print_fn( - "Model Name".center(30) + "Batch Size".center(15) + "Seq Length".center(15) + type_label.center(15) - ) - self.print_fn(80 * "-") - for model_name in self.args.model_names: - for batch_size in result_dict[model_name]["bs"]: - for sequence_length in result_dict[model_name]["ss"]: - result = result_dict[model_name]["result"][batch_size][sequence_length] - if isinstance(result, float): - result = round(1000 * result) / 1000 - result = "< 0.001" if result == 0.0 else str(result) - else: - result = str(result) - self.print_fn( - model_name[:30].center(30) + str(batch_size).center(15), - str(sequence_length).center(15), - result.center(15), - ) - self.print_fn(80 * "-") - - def print_memory_trace_statistics(self, summary: MemorySummary): - self.print_fn( - "\nLine by line memory consumption:\n" - + "\n".join( - f"{state.frame.filename}:{state.frame.line_number}: mem {state.cpu_gpu}: {state.frame.line_text}" - for state in summary.sequential - ) - ) - self.print_fn( - "\nLines with top memory consumption:\n" - + "\n".join( - f"=> {state.frame.filename}:{state.frame.line_number}: mem {state.cpu_gpu}: {state.frame.line_text}" - for state in summary.cumulative[:6] - ) - ) - self.print_fn( - "\nLines with lowest memory consumption:\n" - + "\n".join( - f"=> {state.frame.filename}:{state.frame.line_number}: mem {state.cpu_gpu}: {state.frame.line_text}" - for state in summary.cumulative[-6:] - ) - ) - self.print_fn(f"\nTotal memory increase: {summary.total}") - - def save_to_csv(self, result_dict, filename): - if not self.args.save_to_csv: - return - self.print_fn("Saving results to csv.") - with open(filename, mode="w") as csv_file: - if len(self.args.model_names) <= 0: - raise ValueError(f"At least 1 model should be defined, but got {self.model_names}") - - fieldnames = ["model", "batch_size", "sequence_length"] - writer = csv.DictWriter(csv_file, fieldnames=fieldnames + ["result"]) - writer.writeheader() - - for model_name in self.args.model_names: - result_dict_model = result_dict[model_name]["result"] - for bs in result_dict_model: - for ss in result_dict_model[bs]: - result_model = result_dict_model[bs][ss] - writer.writerow( - { - "model": model_name, - "batch_size": bs, - "sequence_length": ss, - "result": ("{}" if not isinstance(result_model, float) else "{:.4f}").format( - result_model - ), - } - ) diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/pop2piano/processing_pop2piano.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/pop2piano/processing_pop2piano.py deleted file mode 100644 index 5ea579111ddbcd226820a34a45c7bdd3276202a2..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/pop2piano/processing_pop2piano.py +++ /dev/null @@ -1,138 +0,0 @@ -# coding=utf-8 -# Copyright 2023 The HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" Processor class for Pop2Piano.""" - -import os -from typing import List, Optional, Union - -import numpy as np - -from ...feature_extraction_utils import BatchFeature -from ...processing_utils import ProcessorMixin -from ...tokenization_utils import BatchEncoding, PaddingStrategy, TruncationStrategy -from ...utils import TensorType - - -class Pop2PianoProcessor(ProcessorMixin): - r""" - Constructs an Pop2Piano processor which wraps a Pop2Piano Feature Extractor and Pop2Piano Tokenizer into a single - processor. - - [`Pop2PianoProcessor`] offers all the functionalities of [`Pop2PianoFeatureExtractor`] and [`Pop2PianoTokenizer`]. - See the docstring of [`~Pop2PianoProcessor.__call__`] and [`~Pop2PianoProcessor.decode`] for more information. - - Args: - feature_extractor (`Pop2PianoFeatureExtractor`): - An instance of [`Pop2PianoFeatureExtractor`]. The feature extractor is a required input. - tokenizer (`Pop2PianoTokenizer`): - An instance of ['Pop2PianoTokenizer`]. The tokenizer is a required input. - """ - attributes = ["feature_extractor", "tokenizer"] - feature_extractor_class = "Pop2PianoFeatureExtractor" - tokenizer_class = "Pop2PianoTokenizer" - - def __init__(self, feature_extractor, tokenizer): - super().__init__(feature_extractor, tokenizer) - - def __call__( - self, - audio: Union[np.ndarray, List[float], List[np.ndarray]] = None, - sampling_rate: Union[int, List[int]] = None, - steps_per_beat: int = 2, - resample: Optional[bool] = True, - notes: Union[List, TensorType] = None, - padding: Union[bool, str, PaddingStrategy] = False, - truncation: Union[bool, str, TruncationStrategy] = None, - max_length: Optional[int] = None, - pad_to_multiple_of: Optional[int] = None, - verbose: bool = True, - **kwargs, - ) -> Union[BatchFeature, BatchEncoding]: - """ - This method uses [`Pop2PianoFeatureExtractor.__call__`] method to prepare log-mel-spectrograms for the model, - and [`Pop2PianoTokenizer.__call__`] to prepare token_ids from notes. - - Please refer to the docstring of the above two methods for more information. - """ - - # Since Feature Extractor needs both audio and sampling_rate and tokenizer needs both token_ids and - # feature_extractor_output, we must check for both. - if (audio is None and sampling_rate is None) and (notes is None): - raise ValueError( - "You have to specify at least audios and sampling_rate in order to use feature extractor or " - "notes to use the tokenizer part." 
- ) - - if audio is not None and sampling_rate is not None: - inputs = self.feature_extractor( - audio=audio, - sampling_rate=sampling_rate, - steps_per_beat=steps_per_beat, - resample=resample, - **kwargs, - ) - if notes is not None: - encoded_token_ids = self.tokenizer( - notes=notes, - padding=padding, - truncation=truncation, - max_length=max_length, - pad_to_multiple_of=pad_to_multiple_of, - verbose=verbose, - **kwargs, - ) - - if notes is None: - return inputs - - elif audio is None or sampling_rate is None: - return encoded_token_ids - - else: - inputs["token_ids"] = encoded_token_ids["token_ids"] - return inputs - - def batch_decode( - self, - token_ids, - feature_extractor_output: BatchFeature, - return_midi: bool = True, - ) -> BatchEncoding: - """ - This method uses [`Pop2PianoTokenizer.batch_decode`] method to convert model generated token_ids to midi_notes. - - Please refer to the docstring of the above two methods for more information. - """ - - return self.tokenizer.batch_decode( - token_ids=token_ids, feature_extractor_output=feature_extractor_output, return_midi=return_midi - ) - - @property - def model_input_names(self): - tokenizer_input_names = self.tokenizer.model_input_names - feature_extractor_input_names = self.feature_extractor.model_input_names - return list(dict.fromkeys(tokenizer_input_names + feature_extractor_input_names)) - - def save_pretrained(self, save_directory, **kwargs): - if os.path.isfile(save_directory): - raise ValueError(f"Provided path ({save_directory}) should be a directory, not a file") - os.makedirs(save_directory, exist_ok=True) - return super().save_pretrained(save_directory, **kwargs) - - @classmethod - def from_pretrained(cls, pretrained_model_name_or_path, **kwargs): - args = cls._get_arguments_from_pretrained(pretrained_model_name_or_path, **kwargs) - return cls(*args) diff --git a/spaces/yl12053/so-vits-4.1-Kitasan-Black/spkmix.py b/spaces/yl12053/so-vits-4.1-Kitasan-Black/spkmix.py deleted file mode 100644 index 1d266e017859aca3c48727c5acbef9c8da8c1411..0000000000000000000000000000000000000000 --- a/spaces/yl12053/so-vits-4.1-Kitasan-Black/spkmix.py +++ /dev/null @@ -1,11 +0,0 @@ -# 角色混合轨道 编写规则: -# 角色ID : [[起始时间1, 终止时间1, 起始数值1, 起始数值1], [起始时间2, 终止时间2, 起始数值2, 起始数值2]] -# 起始时间和前一个的终止时间必须相同,第一个起始时间必须为0,最后一个终止时间必须为1 (时间的范围为0-1) -# 全部角色必须填写,不使用的角色填[[0., 1., 0., 0.]]即可 -# 融合数值可以随便填,在指定的时间段内从起始数值线性变化为终止数值,内部会自动确保线性组合为1,可以放心使用 - -spk_mix_map = { - 0 : [[0., 0.5, 1, 0.5], [0.5, 1, 0.5, 1]], - 1 : [[0., 0.35, 1, 0.5], [0.35, 0.75, 0.75, 1], [0.75, 1, 0.45, 1]], - 2 : [[0., 0.35, 1, 0.5], [0.35, 0.75, 0.75, 1], [0.75, 1, 0.45, 1]] -} \ No newline at end of file diff --git a/spaces/younus93/pdfgpt/README.md b/spaces/younus93/pdfgpt/README.md deleted file mode 100644 index de5994b2fa8a0bab99c9e026eca5a237c9e1e3a1..0000000000000000000000000000000000000000 --- a/spaces/younus93/pdfgpt/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Pdfgpt -emoji: ⚡ -colorFrom: purple -colorTo: pink -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false -duplicated_from: ns2001/pdfgpt ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ysharma/xtts/README.md b/spaces/ysharma/xtts/README.md deleted file mode 100644 index 20f45a2647f190ae09c44c61614c8e61ee4dea6f..0000000000000000000000000000000000000000 --- a/spaces/ysharma/xtts/README.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -title: XTTS -emoji: 🐸 -colorFrom: green -colorTo: red -sdk: gradio -sdk_version: 3.44.2 -app_file: app.py 
-pinned: false -models: -- coqui/XTTS-v1 -duplicated_from: coqui/xtts ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/yueranseo/mygpt/assets/custom.js b/spaces/yueranseo/mygpt/assets/custom.js deleted file mode 100644 index f013209931218fd054979e290706f1945de76856..0000000000000000000000000000000000000000 --- a/spaces/yueranseo/mygpt/assets/custom.js +++ /dev/null @@ -1,502 +0,0 @@ - -// custom javascript here - -const MAX_HISTORY_LENGTH = 32; - -var key_down_history = []; -var currentIndex = -1; -var user_input_ta; - -var gradioContainer = null; -var user_input_ta = null; -var user_input_tb = null; -var userInfoDiv = null; -var appTitleDiv = null; -var chatbot = null; -var chatbotWrap = null; -var apSwitch = null; -var empty_botton = null; -var messageBotDivs = null; -var loginUserForm = null; -var logginUser = null; - -var userLogged = false; -var usernameGotten = false; -var historyLoaded = false; - -var ga = document.getElementsByTagName("gradio-app"); -var targetNode = ga[0]; -var isInIframe = (window.self !== window.top); -var language = navigator.language.slice(0,2); - -var forView_i18n = { - 'zh': "仅供查看", - 'en': "For viewing only", - 'ja': "閲覧専用", - 'fr': "Pour consultation seulement", - 'es': "Solo para visualización", -}; - -// gradio 页面加载好了么??? 我能动你的元素了么?? -function gradioLoaded(mutations) { - for (var i = 0; i < mutations.length; i++) { - if (mutations[i].addedNodes.length) { - loginUserForm = document.querySelector(".gradio-container > .main > .wrap > .panel > .form") - gradioContainer = document.querySelector(".gradio-container"); - user_input_tb = document.getElementById('user_input_tb'); - userInfoDiv = document.getElementById("user_info"); - appTitleDiv = document.getElementById("app_title"); - chatbot = document.querySelector('#chuanhu_chatbot'); - chatbotWrap = document.querySelector('#chuanhu_chatbot > .wrap'); - apSwitch = document.querySelector('.apSwitch input[type="checkbox"]'); - empty_botton = document.getElementById("empty_btn") - - if (loginUserForm) { - localStorage.setItem("userLogged", true); - userLogged = true; - } - - if (gradioContainer && apSwitch) { // gradioCainter 加载出来了没? - adjustDarkMode(); - } - if (user_input_tb) { // user_input_tb 加载出来了没? - selectHistory(); - } - if (userInfoDiv && appTitleDiv) { // userInfoDiv 和 appTitleDiv 加载出来了没? - if (!usernameGotten) { - getUserInfo(); - } - setTimeout(showOrHideUserInfo(), 2000); - } - if (chatbot) { // chatbot 加载出来了没? 
- setChatbotHeight(); - } - if (chatbotWrap) { - if (!historyLoaded) { - loadHistoryHtml(); - } - setChatbotScroll(); - } - if (empty_botton) { - emptyHistory(); - } - } - } -} - -function webLocale() { - console.log("webLocale", language); - if (forView_i18n.hasOwnProperty(language)) { - var forView = forView_i18n[language]; - var forViewStyle = document.createElement('style'); - forViewStyle.innerHTML = '.wrap>.history-message>:last-child::after { content: "' + forView + '"!important; }'; - document.head.appendChild(forViewStyle); - // console.log("added forViewStyle", forView); - } -} - -function selectHistory() { - user_input_ta = user_input_tb.querySelector("textarea"); - if (user_input_ta) { - observer.disconnect(); // 停止监听 - // 在 textarea 上监听 keydown 事件 - user_input_ta.addEventListener("keydown", function (event) { - var value = user_input_ta.value.trim(); - // 判断按下的是否为方向键 - if (event.code === 'ArrowUp' || event.code === 'ArrowDown') { - // 如果按下的是方向键,且输入框中有内容,且历史记录中没有该内容,则不执行操作 - if (value && key_down_history.indexOf(value) === -1) - return; - // 对于需要响应的动作,阻止默认行为。 - event.preventDefault(); - var length = key_down_history.length; - if (length === 0) { - currentIndex = -1; // 如果历史记录为空,直接将当前选中的记录重置 - return; - } - if (currentIndex === -1) { - currentIndex = length; - } - if (event.code === 'ArrowUp' && currentIndex > 0) { - currentIndex--; - user_input_ta.value = key_down_history[currentIndex]; - } else if (event.code === 'ArrowDown' && currentIndex < length - 1) { - currentIndex++; - user_input_ta.value = key_down_history[currentIndex]; - } - user_input_ta.selectionStart = user_input_ta.value.length; - user_input_ta.selectionEnd = user_input_ta.value.length; - const input_event = new InputEvent("input", { bubbles: true, cancelable: true }); - user_input_ta.dispatchEvent(input_event); - } else if (event.code === "Enter") { - if (value) { - currentIndex = -1; - if (key_down_history.indexOf(value) === -1) { - key_down_history.push(value); - if (key_down_history.length > MAX_HISTORY_LENGTH) { - key_down_history.shift(); - } - } - } - } - }); - } -} - -var username = null; -function getUserInfo() { - if (usernameGotten) { - return; - } - userLogged = localStorage.getItem('userLogged'); - if (userLogged) { - username = userInfoDiv.innerText; - if (username) { - if (username.includes("getting user info…")) { - setTimeout(getUserInfo, 500); - return; - } else if (username === " ") { - localStorage.removeItem("username"); - localStorage.removeItem("userLogged") - userLogged = false; - usernameGotten = true; - return; - } else { - username = username.match(/User:\s*(.*)/)[1] || username; - localStorage.setItem("username", username); - usernameGotten = true; - clearHistoryHtml(); - } - } - } -} - -function toggleUserInfoVisibility(shouldHide) { - if (userInfoDiv) { - if (shouldHide) { - userInfoDiv.classList.add("hideK"); - } else { - userInfoDiv.classList.remove("hideK"); - } - } -} -function showOrHideUserInfo() { - var sendBtn = document.getElementById("submit_btn"); - - // Bind mouse/touch events to show/hide user info - appTitleDiv.addEventListener("mouseenter", function () { - toggleUserInfoVisibility(false); - }); - userInfoDiv.addEventListener("mouseenter", function () { - toggleUserInfoVisibility(false); - }); - sendBtn.addEventListener("mouseenter", function () { - toggleUserInfoVisibility(false); - }); - - appTitleDiv.addEventListener("mouseleave", function () { - toggleUserInfoVisibility(true); - }); - userInfoDiv.addEventListener("mouseleave", function () { - 
toggleUserInfoVisibility(true); - }); - sendBtn.addEventListener("mouseleave", function () { - toggleUserInfoVisibility(true); - }); - - appTitleDiv.ontouchstart = function () { - toggleUserInfoVisibility(false); - }; - userInfoDiv.ontouchstart = function () { - toggleUserInfoVisibility(false); - }; - sendBtn.ontouchstart = function () { - toggleUserInfoVisibility(false); - }; - - appTitleDiv.ontouchend = function () { - setTimeout(function () { - toggleUserInfoVisibility(true); - }, 3000); - }; - userInfoDiv.ontouchend = function () { - setTimeout(function () { - toggleUserInfoVisibility(true); - }, 3000); - }; - sendBtn.ontouchend = function () { - setTimeout(function () { - toggleUserInfoVisibility(true); - }, 3000); // Delay 1 second to hide user info - }; - - // Hide user info after 2 second - setTimeout(function () { - toggleUserInfoVisibility(true); - }, 2000); -} - -function toggleDarkMode(isEnabled) { - if (isEnabled) { - document.body.classList.add("dark"); - document.body.style.setProperty("background-color", "var(--neutral-950)", "important"); - } else { - document.body.classList.remove("dark"); - document.body.style.backgroundColor = ""; - } -} -function adjustDarkMode() { - const darkModeQuery = window.matchMedia("(prefers-color-scheme: dark)"); - - // 根据当前颜色模式设置初始状态 - apSwitch.checked = darkModeQuery.matches; - toggleDarkMode(darkModeQuery.matches); - // 监听颜色模式变化 - darkModeQuery.addEventListener("change", (e) => { - apSwitch.checked = e.matches; - toggleDarkMode(e.matches); - }); - // apSwitch = document.querySelector('.apSwitch input[type="checkbox"]'); - apSwitch.addEventListener("change", (e) => { - toggleDarkMode(e.target.checked); - }); -} - -function setChatbotHeight() { - const screenWidth = window.innerWidth; - const statusDisplay = document.querySelector('#status_display'); - const statusDisplayHeight = statusDisplay ? 
statusDisplay.offsetHeight : 0; - const wrap = chatbot.querySelector('.wrap'); - const vh = window.innerHeight * 0.01; - document.documentElement.style.setProperty('--vh', `${vh}px`); - if (isInIframe) { - chatbot.style.height = `700px`; - wrap.style.maxHeight = `calc(700px - var(--line-sm) * 1rem - 2 * var(--block-label-margin))` - } else { - if (screenWidth <= 320) { - chatbot.style.height = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 150}px)`; - wrap.style.maxHeight = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 150}px - var(--line-sm) * 1rem - 2 * var(--block-label-margin))`; - } else if (screenWidth <= 499) { - chatbot.style.height = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 100}px)`; - wrap.style.maxHeight = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 100}px - var(--line-sm) * 1rem - 2 * var(--block-label-margin))`; - } else { - chatbot.style.height = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 160}px)`; - wrap.style.maxHeight = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 160}px - var(--line-sm) * 1rem - 2 * var(--block-label-margin))`; - } - } -} -function setChatbotScroll() { - var scrollHeight = chatbotWrap.scrollHeight; - chatbotWrap.scrollTo(0,scrollHeight) -} -var rangeInputs = null; -var numberInputs = null; -function setSlider() { - rangeInputs = document.querySelectorAll('input[type="range"]'); - numberInputs = document.querySelectorAll('input[type="number"]') - setSliderRange(); - rangeInputs.forEach(rangeInput => { - rangeInput.addEventListener('input', setSliderRange); - }); - numberInputs.forEach(numberInput => { - numberInput.addEventListener('input', setSliderRange); - }) -} -function setSliderRange() { - var range = document.querySelectorAll('input[type="range"]'); - range.forEach(range => { - range.style.backgroundSize = (range.value - range.min) / (range.max - range.min) * 100 + '% 100%'; - }); -} - -function addChuanhuButton(botElement) { - var rawMessage = null; - var mdMessage = null; - rawMessage = botElement.querySelector('.raw-message'); - mdMessage = botElement.querySelector('.md-message'); - if (!rawMessage) { - var buttons = botElement.querySelectorAll('button.chuanhu-btn'); - for (var i = 0; i < buttons.length; i++) { - buttons[i].parentNode.removeChild(buttons[i]); - } - return; - } - var copyButton = null; - var toggleButton = null; - copyButton = botElement.querySelector('button.copy-bot-btn'); - toggleButton = botElement.querySelector('button.toggle-md-btn'); - if (copyButton) copyButton.remove(); - if (toggleButton) toggleButton.remove(); - - // Copy bot button - var copyButton = document.createElement('button'); - copyButton.classList.add('chuanhu-btn'); - copyButton.classList.add('copy-bot-btn'); - copyButton.setAttribute('aria-label', 'Copy'); - copyButton.innerHTML = copyIcon; - copyButton.addEventListener('click', () => { - const textToCopy = rawMessage.innerText; - navigator.clipboard - .writeText(textToCopy) - .then(() => { - copyButton.innerHTML = copiedIcon; - setTimeout(() => { - copyButton.innerHTML = copyIcon; - }, 1500); - }) - .catch(() => { - console.error("copy failed"); - }); - }); - botElement.appendChild(copyButton); - - // Toggle button - var toggleButton = document.createElement('button'); - toggleButton.classList.add('chuanhu-btn'); - toggleButton.classList.add('toggle-md-btn'); - toggleButton.setAttribute('aria-label', 'Toggle'); - var renderMarkdown = mdMessage.classList.contains('hideM'); - toggleButton.innerHTML = renderMarkdown ? 
mdIcon : rawIcon; - toggleButton.addEventListener('click', () => { - renderMarkdown = mdMessage.classList.contains('hideM'); - if (renderMarkdown){ - renderMarkdownText(botElement); - toggleButton.innerHTML=rawIcon; - } else { - removeMarkdownText(botElement); - toggleButton.innerHTML=mdIcon; - } - }); - botElement.insertBefore(toggleButton, copyButton); -} - -function renderMarkdownText(message) { - var mdDiv = message.querySelector('.md-message'); - if (mdDiv) mdDiv.classList.remove('hideM'); - var rawDiv = message.querySelector('.raw-message'); - if (rawDiv) rawDiv.classList.add('hideM'); -} -function removeMarkdownText(message) { - var rawDiv = message.querySelector('.raw-message'); - if (rawDiv) rawDiv.classList.remove('hideM'); - var mdDiv = message.querySelector('.md-message'); - if (mdDiv) mdDiv.classList.add('hideM'); -} - -let timeoutId; -let isThrottled = false; -var mmutation -// 监听所有元素中 bot message 的变化,为 bot 消息添加复制按钮。 -var mObserver = new MutationObserver(function (mutationsList) { - for (mmutation of mutationsList) { - if (mmutation.type === 'childList') { - for (var node of mmutation.addedNodes) { - if (node.nodeType === 1 && node.classList.contains('message') && node.getAttribute('data-testid') === 'bot') { - saveHistoryHtml(); - document.querySelectorAll('#chuanhu_chatbot>.wrap>.message-wrap .message.bot').forEach(addChuanhuButton); - } - if (node.tagName === 'INPUT' && node.getAttribute('type') === 'range') { - setSlider(); - } - } - for (var node of mmutation.removedNodes) { - if (node.nodeType === 1 && node.classList.contains('message') && node.getAttribute('data-testid') === 'bot') { - saveHistoryHtml(); - document.querySelectorAll('#chuanhu_chatbot>.wrap>.message-wrap .message.bot').forEach(addChuanhuButton); - } - } - } else if (mmutation.type === 'attributes') { - if (mmutation.target.nodeType === 1 && mmutation.target.classList.contains('message') && mmutation.target.getAttribute('data-testid') === 'bot') { - if (isThrottled) break; // 为了防止重复不断疯狂渲染,加上等待_(:з」∠)_ - isThrottled = true; - clearTimeout(timeoutId); - timeoutId = setTimeout(() => { - isThrottled = false; - document.querySelectorAll('#chuanhu_chatbot>.wrap>.message-wrap .message.bot').forEach(addChuanhuButton); - saveHistoryHtml(); - }, 500); - } - } - } -}); -mObserver.observe(document.documentElement, { attributes: true, childList: true, subtree: true }); - -var loadhistorytime = 0; // for debugging -function saveHistoryHtml() { - var historyHtml = document.querySelector('#chuanhu_chatbot > .wrap'); - localStorage.setItem('chatHistory', historyHtml.innerHTML); - // console.log("History Saved") - historyLoaded = false; -} -function loadHistoryHtml() { - var historyHtml = localStorage.getItem('chatHistory'); - if (!historyHtml) { - historyLoaded = true; - return; // no history, do nothing - } - userLogged = localStorage.getItem('userLogged'); - if (userLogged){ - historyLoaded = true; - return; // logged in, do nothing - } - if (!historyLoaded) { - var tempDiv = document.createElement('div'); - tempDiv.innerHTML = historyHtml; - var buttons = tempDiv.querySelectorAll('button.chuanhu-btn'); - var gradioCopyButtons = tempDiv.querySelectorAll('button.copy_code_button'); - for (var i = 0; i < buttons.length; i++) { - buttons[i].parentNode.removeChild(buttons[i]); - } - for (var i = 0; i < gradioCopyButtons.length; i++) { - gradioCopyButtons[i].parentNode.removeChild(gradioCopyButtons[i]); - } - var fakeHistory = document.createElement('div'); - fakeHistory.classList.add('history-message'); - 
fakeHistory.innerHTML = tempDiv.innerHTML; - webLocale(); - chatbotWrap.insertBefore(fakeHistory, chatbotWrap.firstChild); - // var fakeHistory = document.createElement('div'); - // fakeHistory.classList.add('history-message'); - // fakeHistory.innerHTML = historyHtml; - // chatbotWrap.insertBefore(fakeHistory, chatbotWrap.firstChild); - historyLoaded = true; - console.log("History Loaded"); - loadhistorytime += 1; // for debugging - } else { - historyLoaded = false; - } -} -function clearHistoryHtml() { - localStorage.removeItem("chatHistory"); - historyMessages = chatbotWrap.querySelector('.history-message'); - if (historyMessages) { - chatbotWrap.removeChild(historyMessages); - console.log("History Cleared"); - } -} -function emptyHistory() { - empty_botton.addEventListener("click", function () { - clearHistoryHtml(); - }); -} - -// 监视页面内部 DOM 变动 -var observer = new MutationObserver(function (mutations) { - gradioLoaded(mutations); -}); -observer.observe(targetNode, { childList: true, subtree: true }); - -// 监视页面变化 -window.addEventListener("DOMContentLoaded", function () { - isInIframe = (window.self !== window.top); - historyLoaded = false; -}); -window.addEventListener('resize', setChatbotHeight); -window.addEventListener('scroll', setChatbotHeight); -window.matchMedia("(prefers-color-scheme: dark)").addEventListener("change", adjustDarkMode); - -// button svg code -const copyIcon = ''; -const copiedIcon = ''; -const mdIcon = ''; -const rawIcon = ''; diff --git a/spaces/zadkiel04/rvc-yoshino/infer_pack/modules.py b/spaces/zadkiel04/rvc-yoshino/infer_pack/modules.py deleted file mode 100644 index 960481cedad9a6106f2bf0b9e86e82b120f7b33f..0000000000000000000000000000000000000000 --- a/spaces/zadkiel04/rvc-yoshino/infer_pack/modules.py +++ /dev/null @@ -1,522 +0,0 @@ -import copy -import math -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -from infer_pack import commons -from infer_pack.commons import init_weights, get_padding -from infer_pack.transforms import piecewise_rational_quadratic_transform - - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__( - self, - in_channels, - hidden_channels, - out_channels, - kernel_size, - n_layers, - p_dropout, - ): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." 
- - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append( - nn.Conv1d( - in_channels, hidden_channels, kernel_size, padding=kernel_size // 2 - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential(nn.ReLU(), nn.Dropout(p_dropout)) - for _ in range(n_layers - 1): - self.conv_layers.append( - nn.Conv1d( - hidden_channels, - hidden_channels, - kernel_size, - padding=kernel_size // 2, - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.0): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size**i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append( - nn.Conv1d( - channels, - channels, - kernel_size, - groups=channels, - dilation=dilation, - padding=padding, - ) - ) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__( - self, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - p_dropout=0, - ): - super(WN, self).__init__() - assert kernel_size % 2 == 1 - self.hidden_channels = hidden_channels - self.kernel_size = (kernel_size,) - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d( - gin_channels, 2 * hidden_channels * n_layers, 1 - ) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name="weight") - - for i in range(n_layers): - dilation = dilation_rate**i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d( - hidden_channels, - 2 * hidden_channels, - kernel_size, - dilation=dilation, - padding=padding, - ) - in_layer = torch.nn.utils.weight_norm(in_layer, name="weight") - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name="weight") - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - 
n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:, cond_offset : cond_offset + 2 * self.hidden_channels, :] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply(x_in, g_l, n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:, : self.hidden_channels, :] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:, self.hidden_channels :, :] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]), - ) - ), - ] - ) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - ] - ) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - ] - ) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = 
torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels, 1)) - self.logs = nn.Parameter(torch.zeros(channels, 1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1, 2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False, - ): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=p_dropout, - gin_channels=gin_channels, - ) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels] * 2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1, 2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class ConvFlow(nn.Module): - def __init__( - self, - in_channels, - filter_channels, - kernel_size, - n_layers, - num_bins=10, - tail_bound=5.0, - ): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.0) - self.proj = nn.Conv1d( - filter_channels, self.half_channels * (num_bins * 3 - 1), 1 - ) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] 
- - unnormalized_widths = h[..., : self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins : 2 * self.num_bins] / math.sqrt( - self.filter_channels - ) - unnormalized_derivatives = h[..., 2 * self.num_bins :] - - x1, logabsdet = piecewise_rational_quadratic_transform( - x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails="linear", - tail_bound=self.tail_bound, - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1, 2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/zej97/AI-Research-Assistant/agent/toolkits.py b/spaces/zej97/AI-Research-Assistant/agent/toolkits.py deleted file mode 100644 index 29e7ee8ad5d83e2b5a6ca1aba9c689eb8f319641..0000000000000000000000000000000000000000 --- a/spaces/zej97/AI-Research-Assistant/agent/toolkits.py +++ /dev/null @@ -1,15 +0,0 @@ -from agent import prompts, llm_utils -from config import Config - -CFG = Config() - -def english_polishing(content): - prompt = prompts.generate_english_polishing_prompt(content) - messages = [{ - "role": "user", - "content": prompt, - }] - - yield from llm_utils.llm_stream_response( - model=CFG.fast_llm_model, - messages=messages) diff --git a/spaces/zhang-wei-jian/docker/node_modules/pstree.remy/lib/tree.js b/spaces/zhang-wei-jian/docker/node_modules/pstree.remy/lib/tree.js deleted file mode 100644 index bac7cce65cda9e39a45af6675853cba8c4414c0d..0000000000000000000000000000000000000000 --- a/spaces/zhang-wei-jian/docker/node_modules/pstree.remy/lib/tree.js +++ /dev/null @@ -1,37 +0,0 @@ -const spawn = require('child_process').spawn; - -module.exports = function (rootPid, callback) { - const pidsOfInterest = new Set([parseInt(rootPid, 10)]); - var output = ''; - - // *nix - const ps = spawn('ps', ['-A', '-o', 'ppid,pid']); - ps.stdout.on('data', (data) => { - output += data.toString('ascii'); - }); - - ps.on('close', () => { - try { - const res = output - .split('\n') - .slice(1) - .map((_) => _.trim()) - .reduce((acc, line) => { - const pids = line.split(/\s+/); - const ppid = parseInt(pids[0], 10); - - if (pidsOfInterest.has(ppid)) { - const pid = parseInt(pids[1], 10); - acc.push(pid); - pidsOfInterest.add(pid); - } - - return acc; - }, []); - - callback(null, res); - } catch (e) { - callback(e, null); - } - }); -}; diff --git a/spaces/zhang-wei-jian/docker/node_modules/tsscmp/lib/index.js b/spaces/zhang-wei-jian/docker/node_modules/tsscmp/lib/index.js deleted file mode 100644 index d52e5b542ba8730758244421c2c9378b1a7bd5f0..0000000000000000000000000000000000000000 --- a/spaces/zhang-wei-jian/docker/node_modules/tsscmp/lib/index.js +++ /dev/null @@ -1,38 +0,0 @@ -'use strict'; - -// Implements Brad Hill's Double HMAC pattern from -// https://www.nccgroup.trust/us/about-us/newsroom-and-events/blog/2011/february/double-hmac-verification/. -// The approach is similar to the node's native implementation of timing safe buffer comparison that will be available on v6+. 
-// https://github.com/nodejs/node/issues/3043 -// https://github.com/nodejs/node/pull/3073 - -var crypto = require('crypto'); - -function bufferEqual(a, b) { - if (a.length !== b.length) { - return false; - } - // `crypto.timingSafeEqual` was introduced in Node v6.6.0 - // - if (crypto.timingSafeEqual) { - return crypto.timingSafeEqual(a, b); - } - for (var i = 0; i < a.length; i++) { - if (a[i] !== b[i]) { - return false; - } - } - return true; -} - -function timeSafeCompare(a, b) { - var sa = String(a); - var sb = String(b); - var key = crypto.pseudoRandomBytes(32); - var ah = crypto.createHmac('sha256', key).update(sa).digest(); - var bh = crypto.createHmac('sha256', key).update(sb).digest(); - - return bufferEqual(ah, bh) && a === b; -} - -module.exports = timeSafeCompare; diff --git a/spaces/zhangguofen/Real-CUGAN/app.py b/spaces/zhangguofen/Real-CUGAN/app.py deleted file mode 100644 index 2439c5cec6b61e8a517f957daf710cbb6b5c3cf6..0000000000000000000000000000000000000000 --- a/spaces/zhangguofen/Real-CUGAN/app.py +++ /dev/null @@ -1,62 +0,0 @@ -from upcunet_v3 import RealWaifuUpScaler -import gradio as gr -import time -import logging -import os -from PIL import ImageOps -import numpy as np -import math - - -def greet(input_img, input_model_name, input_tile_mode): - # if input_img.size[0] * input_img.size[1] > 256 * 256: - # y = int(math.sqrt(256*256/input_img.size[0]*input_img.size[1])) - # x = int(input_img.size[0]/input_img.size[1]*y) - # input_img = ImageOps.fit(input_img, (x, y)) - input_img = np.array(input_img) - if input_model_name not in model_cache: - t1 = time.time() - upscaler = RealWaifuUpScaler(input_model_name[2], ModelPath + input_model_name, half=False, device="cpu") - t2 = time.time() - logger.info(f'load model time, {t2 - t1}') - model_cache[input_model_name] = upscaler - else: - upscaler = model_cache[input_model_name] - logger.info(f'load model from cache') - - start = time.time() - result = upscaler(input_img, tile_mode=input_tile_mode) - end = time.time() - logger.info(f'input_model_name, {input_model_name}') - logger.info(f'input_tile_mode, {input_tile_mode}') - logger.info(f'input shape, {input_img.shape}') - logger.info(f'output shape, {result.shape}') - logger.info(f'speed time, {end - start}') - return result - - -if __name__ == '__main__': - logging.basicConfig(level=logging.INFO, format="[%(asctime)s] [%(process)d] [%(levelname)s] %(message)s") - logger = logging.getLogger() - - ModelPath = "weights_v3/" - model_cache = {} - - input_model_name = gr.inputs.Dropdown(os.listdir(ModelPath), default="up2x-latest-denoise2x.pth", label='选择model') - input_tile_mode = gr.inputs.Dropdown([0, 1, 2, 3, 4], default=2, label='选择tile_mode') - input_img = gr.inputs.Image(label='image', type='pil') - - inputs = [input_img, input_model_name, input_tile_mode] - outputs = "image" - iface = gr.Interface(fn=greet, - inputs=inputs, - outputs=outputs, - allow_screenshot=False, - allow_flagging='never', - examples=[['test-img.jpg', "up2x-latest-denoise2x.pth", 2]], - article='[https://github.com/bilibili/ailab/tree/main/Real-CUGAN](https://github.com/bilibili/ailab/tree/main/Real-CUGAN)
             ' - 'Thanks to bilibili for open-sourcing this project. An oversized image will run out of memory, so I crop the image smaller; to try the effect on large images, please follow the link above.
             ' - 'Modified bbb' - 'A large image will exceed the memory limit, so I crop and resize the image. ' - 'If you want to try the full-size image, please go to the link above.') - iface.launch()
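The `greet` function in the Real-CUGAN app above comments out a resizing step, and the interface text explains why: oversized inputs exhaust memory, so images are cropped and downscaled before upscaling. Below is a minimal, hypothetical sketch of that pre-shrinking idea; the 256*256 pixel budget mirrors the commented-out check in `app.py`, but the `fit_within_budget` helper and the usage lines are illustrative assumptions, not the Space's actual code.

```python
import math

from PIL import Image

MAX_PIXELS = 256 * 256  # same pixel budget as the commented-out check in app.py


def fit_within_budget(img: Image.Image, max_pixels: int = MAX_PIXELS) -> Image.Image:
    """Downscale img so that width * height <= max_pixels, keeping the aspect ratio."""
    w, h = img.size
    if w * h <= max_pixels:
        return img
    scale = math.sqrt(max_pixels / (w * h))  # shrink both sides by the same factor
    new_size = (max(1, int(w * scale)), max(1, int(h * scale)))
    return img.resize(new_size)


# Hypothetical usage, mirroring greet() above (model path and scale are assumptions):
# upscaler = RealWaifuUpScaler("2", "weights_v3/up2x-latest-denoise2x.pth", half=False, device="cpu")
# small = fit_within_budget(Image.open("test-img.jpg"))
# result = upscaler(np.array(small), tile_mode=2)
```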